STE WILLIAMS

Killer SecOps Skills: Soft Is the New Hard

The sooner we give mindsets and tool sets equal bearing, the better. We must put SOC team members through rigorous training for emergency situations.

I spend a lot of time with security operations center (SOC) and incident response teams — functions that have been hit particularly hard by the cybersecurity talent shortage. As I witness my colleagues struggling to fill open SOC positions, I can’t help but notice their tendency to value technical skills and specific product knowledge over all other criteria. Now that breaches are the new normal, so-called “soft skills” — such as communication and teamwork skills — are just as important as technical skills but are almost always overlooked when hiring.

Don’t get me wrong — technical skills and product knowledge are essential, but when a breach is discovered, SOC staff flip from being the last line of defense against an attack to the first ones responding to it. SOC analysts have evolved into cybersecurity first responders, but they’re not evaluated and trained the way first responders in other domains are, and they should be. Think about it — when a cyberattack occurs, an analyst with 10 years of experience with Windows Sysinternals and Wireshark won’t be much of an asset if he or she doesn’t perform well under pressure.

No reputable EMT provider would hire paramedics only because of their experience with a certain kind of defibrillator, yet that’s how we hire in cybersecurity. Even in SOC analyst job descriptions where soft skills are given lip service, rarely are those traits vetted with any rigor during the interview process.

● Excellent communication skills: At just about every customer site, we are asked to help train SOC managers to do a better job of communicating technical information to non-technical executives. This is hard enough to do when you have time to prepare what you want to say, so imagine how stressful it can be to explain the nuances of a ransomware situation to a CFO or CEO when a decision on whether or not to pay the ransom needs to be made in a matter of minutes.

● Teamwork skills: SOC teams must be able to collect and disseminate information and tasks across multiple teams. For example, when correlating information about a new attack, clues usually come from multiple sources: network and endpoint experts, malware analysts, operations teams, and others. Incident responders must not only communicate effectively and succinctly, they must be able to delegate to and project-manage multiple teams that may have a limited understanding of cybersecurity, and do so under accelerated timelines where broken communication channels can have irreversible negative consequences.

● Creative thinking/problem solving: Out-of-the-box thinking is an asset valued by the hacker community for good reason. The same qualities that enable attackers to get into a network are just as useful for defense. For example, one of our customers simulated an attacker blocking the Windows Task Manager process. One clever incident responder renamed and ran the file, which revealed the attacker’s actions and enabled the team to squash the attack.

● Functions well under pressure: This is a signature quality of any first responder in any field. Currently, few cyber incidents qualify as life-and-death situations, but as attacks against industrial control system targets such as electric and nuclear power plants and Internet of Things systems such as car computers and medical equipment become mainstream, that is likely to change. Incident responders who are unable to function under extreme pressure should be identified and transferred into other roles now, before cyberattacks have the capacity to become fatal.

People in any profession that requires performing well in high-stakes, high-pressure situations (doctors, pilots, paramedics, soldiers, professional athletes) are evaluated and trained the same way: through realistic, experiential drills that simulate or otherwise recreate real-world conditions.

When preparing for the Battle of Normandy, General Eisenhower created a replica of the enemy’s coastal defenses and, to get soldiers used to the pressure, had the troops repeatedly practice landing on the shore amid simulated gunfire and explosions. Professionals in other fields, including ER doctors and pilots, undergo equally elaborate simulated training. It’s time we did the same with SOC analysts.

But we can’t just look for these traits when hiring and then move on once the analyst has been hired. To make sure cybersecurity first responders are prepared for an attack, especially given how quickly SOC tools and attack tactics, techniques, and procedures evolve, incident responders should be drilled regularly — at least quarterly.

It will be a culture and process shift to adapt our hiring and training processes accordingly, but it’s an entirely doable proposition — we just need to make it happen. The skills shortage won’t go away overnight, but when it comes to hiring and training incident responders, the sooner we give mindsets and tool sets equal bearing, the better.


Join Dark Reading LIVE for two cybersecurity summits at Interop 2019. Learn from the industry’s most knowledgeable IT security experts. Check out the Interop agenda here.

Edy Almer leads Cyberbit’s product strategy. Prior to joining Cyberbit, Almer served as vice president of product for Algosec. During this period the company’s sales grew by over four times in five years. Before Algosec, Almer served as vice president of marketing and … View Full Bio

Article source: https://www.darkreading.com/careers-and-people/killer-secops-skills-soft-is-the-new-hard/a/d-id/1334661?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Financial Sector Under Siege

The old take-the-money-and-run approach has been replaced by siege tactics such as DDOS attacks and land-and-expand campaigns with multiple points of persistence and increased dwell time.

Banks have long been the favorite targets of thieves and gangsters like John Dillinger, back in the 1930s. But increasingly, modern banking isn’t conducted on Main Street. It’s online and global, and the transactions never stop. But just as banking is changing, so is crime. Today’s bank robbers have swapped their Tommy guns and masks for expertise in coding — and there are more of them than ever. As a result, cybersecurity has never been a more critical worry for banks, their clients, and the broader financial sector.

Over the past year, 67% of the banks, credit unions, and other financial firms cited in a recent cybersecurity report by Carbon Black said they’d experienced a greater number of attempted cyberattacks and hacks. Further, 79% reported that the bad guys are getting smarter. Unfortunately, cybersecurity is having trouble keeping up with the sheer volume of hackers who are focusing their efforts on banks; according to the Carbon Black report, many attacks were successful in stealing data or money or otherwise disrupting a bank’s business.

Money-Hungry Attackers and State Actors Are Seeking Prey
The financial sector is facing a wide variety of sophisticated threats. Obviously, the treasure trove of customer data and financial assets held by banks makes greedy cybercriminals drool. But although banks typically are better at safeguarding their assets than other industries, they’re also up against the world’s best hackers, organized crime syndicates, and highly motivated nation-states with deep pockets that want to throw a wrench into the work of their competitors or enemies.

The impact of successful attacks extends beyond the immediate losses of money or customer data. Clients and the markets can lose faith and trust in the company, and mending fences and restoring trust are expensive and sometimes long, drawn-out processes. Also, because of the millions of interconnections between banks, a crisis at one can affect all the others to a greater or lesser degree. This increases risk all around.

The fact is, global rivalries no longer unfold on the battlefield; they happen in cyberspace. A few rogue states — notably North Korea — have managed to sidestep economic sanctions by launching attacks on the Society for Worldwide Interbank Financial Telecommunication (SWIFT) and other payment networks. The Hidden Cobra hacking group, from North Korea, is notorious in this regard.

In this environment, it’s little surprise that 70% of the surveyed financial institutions reported that financially motivated attackers are their biggest concern. Another 30% said that hostile nation-state activity is a major worry as well.

Attacks Are More Damaging
One of the realities that keeps bank IT security people up at night is the fact that some online attackers are choosing not to simply extort sums of money but to cripple the bank by destroying infrastructure, disabling websites and networks, or taking down entire business units.

In fact, over a quarter (26%) of the surveyed financial institutions said they were victims of attacks in 2018 that were intended to interrupt banking services or erase financial data. This type of attack has skyrocketed over the past year, rising by an astonishing 160%.

And attacks are not only becoming more frequent, they’re also changing in nature. The old take-the-money-and-run approach has been replaced by long-lasting tactics more akin to a siege. The preferred methods for such attacks include distributed denial-of-service (DDoS) attacks, land-and-expand attacks that set up multiple points of persistence, and increased dwell time within a firm.

Roughly one-third (32%) of the financial institutions surveyed by Carbon Black said they had run into “counter-incident responses.” In other words, the bad guys pushed back. Rather than just slipping through the fingers of IT, they took counter-measures to thwart corporate IT teams and stay on the network.

Homegrown Problems
Just as there are commonalities among attacks, there are similarities in their timing. Many attacks happen shortly after a bank introduces a new platform — online or mobile banking, say — which can unwittingly introduce cybersecurity vulnerabilities that attackers are ready and willing to exploit.

Launching new online services without thoroughly vetting them for security gaps is simply asking for trouble, yet it continues to happen. It is a systemic mistake, and a potentially fatal one — and banks know it. Some 79% of Carbon Black’s surveyed financial institutions say they are aware that cybercriminals are more sophisticated than ever. Still, IT security teams find themselves teetering at the edge of crisis mode, or neck-deep in it, instead of proactively finding and fixing the vulnerabilities in their systems.

But we can’t blame the IT teams for always being on the run. Sixty-two percent (62%) of the CISOs of the surveyed financial institutions still report to their CIO. This is a governance problem. CISOs must be entrusted with greater authority and their own budgets to maintain security and soundness in the financial sector. They should also report to CEOs or chief risk officers, because the way they think is often at odds with the CIO’s primary goals of uptime and availability.

On the bright side, 69% of the surveyed financial institutions’ CISOs plan to bump up their cybersecurity expenditures by 10% or more. Roughly seven out of 10 (68%) of these CISOs say they’ll use some of that money to recruit more security professionals — although this may be hard to do, given the current global shortage of that sort of expertise.


Marc Wilczek is a columnist and recognized thought leader, geared toward helping organizations drive their digital agenda and achieve higher levels of innovation and productivity through technology. Over the past 20 years, he has held various senior leadership roles across … View Full Bio

Article source: https://www.darkreading.com/cloud/financial-sector-under-siege/a/d-id/1334725?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

97% of Americans Can’t Ace a Basic Security Test

Still, a new Google study uncovers a bit of good news, too.

The majority of people believe they are more proficient in online security than they actually are.

According to a March study of more than 2,000 US adults conducted by the Harris Poll for Google, while 55% of Americans ages 16 and above give themselves an A or a B in online security, 97% got at least one question wrong on a basic, six-question security test. The test asked people to identify whether links without https were OK and to spot links containing suspicious characters.
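As a rough sketch of the kind of check the quiz covered (the function name and heuristics below are mine, not Google’s actual quiz logic), a link can be screened for a missing https scheme or a punycode-encoded lookalike hostname:

```python
from urllib.parse import urlparse

def looks_risky(url: str) -> bool:
    """Flag links that lack HTTPS, or whose hostname uses
    punycode ('xn--'), a common sign of lookalike characters."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return True
    host = parsed.hostname or ""
    return any(label.startswith("xn--") for label in host.split("."))

print(looks_risky("http://example.com"))         # True: no HTTPS
print(looks_risky("https://xn--ggle-0nda.com"))  # True: punycode host
print(looks_risky("https://example.com"))        # False
```

A real phishing filter weighs many more signals, of course, but even this toy version covers the two quiz topics the study mentions.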

“On a more positive note, website growth is on the rise, with 48% saying they plan to create a website in the future,” says Stephanie Duchesneau, program manager of the Google Registry. “That’s a doubling of the number of online creators, which is a good sign.”

Frank Dickson, research vice president at IDC who focuses on security, says while there’s certainly a disconnect, he was surprised by the growth in new creators.

“I think the fact that 20% have created a website and 48% plan to create one bodes well,” he says. “After all, people don’t just create websites for work. I’ve created websites for my son’s baseball team, so people create websites for all kinds of activities outside of work.”

Also on the plus side: Eighty-nine out of the top 100 websites default to https. In addition, 93% of Chrome traffic on Macs is encrypted, while 90% of Windows traffic on Chrome is encrypted. As far as how people plan to use websites in the future, 45% say for a business, 43% for a hobby, and 40% for personal reasons.

Still, there’s more to do, given that 42% didn’t realize there’s a difference in security between a website that uses https and one that doesn’t. In fact, 29% of Americans ages 16 or older don’t check to see whether there’s an “s” in a site’s URL, even after being told that it means a secure connection. In addition, 64% didn’t realize they could be redirected to a website without their knowledge, even if the website has an https address.

Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology. Steve is based in Columbia, Md. View Full Bio

Article source: https://www.darkreading.com/cloud/97--of-americans-cant-ace-a-basic-security-test-/d/d-id/1334763?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Bots rigged Russian finale of ‘The Voice Kids’ talent show

Sure, bots might be all over the US electorate, but this is serious. This is The Voice. Think of the children!

That’s what Russian bots were doing, in fact: robo-thinking of the children. Make that one child in particular – the daughter of pop singer Alsou and wealthy businessman Yan Abramov, whom they robo-voted in by a suspiciously large margin to win the sixth season of Russia’s popular TV talent show “The Voice Kids.”

Mikella Abramova, 10, won with 56.5% of the phone-in vote.

The state-owned channel that broadcasts the show, Channel One TV, announced on Thursday that it had decided to cancel the results of the vote.

Channel One said it’s working on boosting the safety of the voting system – before the start of the next season – so this never happens again.

What happened in the 6th season of “Voice of the Child” should be the first and the last case when someone tried to control the audience choice.

Channel One came to its decision after calling on Group-IB to investigate the vote. Group-IB, an infosec firm that analyzes threats originating in Russia and Eastern Europe and an official partner of Interpol and Europol, released the initial results of that investigation on Thursday and said the investigation is ongoing.

Massive text and call spamming

What it’s found so far: analysis of text and voice messages shows “massive automated SMS spamming” in favor of one of The Voice Kids participants. In other words, bots placed a slew of robocalls and robo-SMS messages.

More than 8,000 text messages were sent from about 300 phone numbers during the vote. Whoever spammed the voting system used sequential phone numbers to send automated votes.

Group-IB’s analysis found that whoever pulled off the “massive vote manipulation” ran into a technical glitch: a piece of code designed to automate the sending of messages wound up in the text messages themselves. One number in that stray string of code indicated the participant’s phone number – a clue that enabled Group-IB to determine that the 8,000 text messages from all 300 phone numbers were sent by one person, using the same rate plan.

As far as calls go, Group-IB ranked voting regions and found one, in particular, that was unusually active after the start of voting. Sequential calls placed by bots in that region accounted for more than 30,000 calls for one participant.
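The tell-tale pattern Group-IB describes, long runs of consecutive phone numbers, is easy to sketch as a simple scan; the numbers below are invented for illustration and not from the actual investigation:

```python
def sequential_runs(numbers, min_len=5):
    """Group sorted numeric phone IDs into consecutive runs;
    a long run suggests an auto-allocated bot number block."""
    nums = sorted(int(n) for n in numbers)
    runs, current = [], [nums[0]]
    for n in nums[1:]:
        if n == current[-1] + 1:
            current.append(n)
        else:
            if len(current) >= min_len:
                runs.append((current[0], current[-1]))
            current = [n]
    if len(current) >= min_len:
        runs.append((current[0], current[-1]))
    return runs

# 300 consecutive numbers plus one organic voter
votes = [79161234500 + i for i in range(300)] + [79169999999]
print(sequential_runs(votes))  # [(79161234500, 79161234799)]
```

Organic phone-in votes arrive from essentially random numbers, so a 300-number consecutive block stands out immediately.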

According to the BBC, Russia’s Kommersant Daily reported that other competitors received less than 3,000 votes each.

More than unfair

Robocalls and robo-texts aren’t just unfair to kids competing in a talent competition. They do more than skew what should be fair elections, and they’re more than just illegal and aggravating – they’re also dangerous. Reports indicate that out of the four billion illegal robocalls made in August 2018 alone, 1.8 billion were associated with a scam. Analysis by global communications platform First Orion done at that time predicted that by this year, half of all mobile calls would be scams.

In the US, state attorneys general have been pleading with the Federal Communications Commission (FCC) to pull the plug on robocalls. A huge part of the problem is that it’s cheap and easy, the AGs said:

Virtually anyone can send millions of illegal robocalls and frustrate law enforcement with just a computer, inexpensive software (i.e., auto-dialer and spoofing programs), and an internet connection.

That’s backed up by the testimony of the “robocaller king” himself, Adrian Abramovich. A year ago, the FCC fined Abramovich $120 million for the nearly 97 million spoofed calls his marketing companies made to sell vacations at resorts that, surprise surprise, turned out to be so not the Marriott, Expedia, Hilton and TripAdvisor vacations initially mentioned.

In April 2018, the Senate Commerce, Science, and Transportation Committee had subpoenaed Abramovich to explain exactly how easy it is to download automated phone-calling technology and to spoof numbers to make it look like calls are coming from a local neighbor.

What he told senators:

There is available open source software, totally customizable to your needs, that can be misused by someone to make thousands of automated calls with the click of a button.

Don’t blame the kids

Poor Mikella. We assume that she didn’t want to win by somebody flipping the switch on a bot voting onslaught, and now that invalidated vote has been taken away from her.

The Voice Kids, a worldwide franchise, was spun off from the hugely popular talent show The Voice. Channel One plans to celebrate “all the kids’ remarkable talent” with a special one-off show on 24 May, “in which all the season finalists and the semi-finalists will perform”, it said.

Don’t blame the children for any of this, Channel One said. It’s not their fault. Each participant becomes a member of the big Voice family, the station said, and “in a difficult moment, families become even more united.”

Hopefully, Mikella will be there at the second finale, getting a second try, belting it out on a more level playing field.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/d5MDySr9Wz4/

Facebook bans accounts of fake news firm

Facebook has shut down 265 fake accounts, many linked to an Israel-based social media company, that were being used to spread fake news and influence political discourse in a number of nations – mostly in Africa, but also in Latin America and Southeast Asia.

The company announced on Thursday that the accounts, which were on both Facebook and Instagram, had engaged in what Facebook dubbed “coordinated inauthentic behavior.”

In the ongoing back-and-forth over the use of social media as a platform from which to launch political meddling, companies such as Facebook and Twitter have been wrestling with the way their platforms have been used to spread disinformation. Singling out a company, as Facebook did with Archimedes Group, is a new twist, though.

Archimedes Group promises its clients that it can bend reality for them. Based in Tel Aviv, the company calls itself a leader in large-scale, worldwide “campaigns” and promises to “use every tool and take every advantage available in order to change reality according to our client’s wishes.”

…at least, the site was promising that when the Washington Post wrote up the news. Its site is strange to navigate, so either I can’t find that text, or perhaps Archimedes Group has yet again warped reality… and tweaked its site to remove the “by any means necessary” message.

Nathaniel Gleicher, Facebook’s head of global cybersecurity policy, said in Thursday’s post that the Pages and accounts weren’t taken down because of their content. Rather, it was their coordinated behavior that set off red flags:

As in other cases involving coordinated inauthentic behavior, the individuals behind this activity coordinated with one another to mislead others about who they were and what they were doing, and that was the basis for our action.

Gleicher said that the people behind the network used fake accounts to run Pages, disseminate content and artificially pump up engagement. They also lied about being locals – including local news organizations – and published what was allegedly leaked information about politicians.

Facebook’s investigation showed that some of the activity was linked to Archimedes Group, which it banned from both its main platform and its Instagram service. Facebook also sent the company a cease and desist letter.

Before the ban, Archimedes Group was running 65 Facebook accounts, 161 Pages, 23 Groups, 12 events and four Instagram accounts. The Pages and accounts frequently posted about politics, including elections, candidate views and criticism of political opponents, focusing mainly on the African nations of Nigeria, Senegal, Togo, Angola, Niger and Tunisia, along with some activity in Latin America and Southeast Asia.

The Pages and accounts had about 2.8 million followers, and about 5,500 accounts joined at least one of the Groups. About 920 people followed one or more of the Instagram accounts.

Facebook says that the accounts paid it around $812,000 for Facebook ads, paid for in Brazilian reals, Israeli shekels, and US dollars. The accounts took out their first ad in December 2012, while the most recent ad ran last month.

The Pages hosted nine events between October 2017 and May 2019, with up to 2,900 people having expressed interest in at least one of the events. Facebook couldn’t determine whether any of those events actually took place.

Who’s behind it?

While Facebook traced much of the coordinated, “inauthentic” behavior to Archimedes Group, it’s unclear who paid the Israeli firm for the disinformation campaign(s). Graham Brookie, the director of the Digital Forensic Research Lab at the Atlantic Council, told the Washington Post that it’s easy enough to follow the ad-buying money trail to Archimedes, but it gets hazy after that:

The useful thing about the ads is it gives us high confidence it was Archimedes, but it doesn’t give us high confidence who was paying Archimedes.

The lack of transparency into who’s behind the first hop in the money trail points to a vulnerability in Facebook’s transparency tools, he noted. What we do know is that somebody doesn’t mind paying for bogus news:

It is disinformation for money. It’s the convergence of ideological disinformation, and disinformation for economic gain.

The use of coordinated accounts in disinformation campaigns was one of the techniques used by the Russian government-linked propaganda factory known as the Internet Research Agency (IRA) during the disinformation campaign around the 2016 US presidential election.

Using both Facebook and Instagram was another similarity between the Archimedes Group and the IRA. In reports prepared for the Senate Intelligence Committee and released in December, researchers concluded that while Facebook, Twitter or Google reached the most people, Instagram was where the action was: that’s where the disinformation and political meddling posts got far more play.

In a years-long propaganda campaign that preceded the election and didn’t stop after it, Facebook’s photo-sharing subsidiary generated responses that dwarfed those of other platforms: researchers counted 187 million Instagram comments, likes and other user reactions – more than Twitter and Facebook combined.

But the fact that Archimedes Group used tactics similar to the IRA’s doesn’t suggest anything more than that Archimedes, like others around the globe, can easily take a page from Russia’s playbook. As it is, disinformation campaigns, and the tactics used therein, are now being widely deployed around the world, experts told the Post – including in the US.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/9TUXUy6jEgs/

Brave browser concerned that Client Hints could be abused for tracking

The people behind the privacy-focused Brave browser have criticised an industry proposal they say would make it easier for websites to identify a browser using a passive, cookie-less technique called fingerprinting.

Called HTTP Client Hints, the proposal provides a standard way for a web server to ask a browser for information about itself. It comes from the Internet Engineering Task Force (IETF). This organization works with industry members to create voluntary standards for internet protocols, and it has a lot of power. It standardized TCP and HTTP, two of the internet’s foundational protocols. 

HTTP already offers a technique called proactive negotiation, which lets a server ask a browser about itself. This technique makes the browser describe its capabilities every time it sends a request, though. That takes too much bandwidth, says the IETF.

Client Hints makes things easier. It defines a new response header that servers can send whenever they like, asking the browser for information about things like its display width and height in pixels, the amount of memory it has, and its colour depth. 
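The exchange is simple to sketch. The toy simulation below uses plain dictionaries rather than a real HTTP stack; the hint names (Device-Memory, Viewport-Width, DPR) come from the Client Hints proposal and its companion specs. The server opts in via an Accept-CH response header, and the browser attaches only the requested hints to subsequent requests:

```python
# Hints this hypothetical browser is willing to reveal.
BROWSER_HINTS = {
    "Device-Memory": "8",      # approximate RAM in GiB
    "Viewport-Width": "1280",  # layout width in CSS pixels
    "DPR": "2",                # device pixel ratio
}

def server_response():
    """Server opts in to specific hints via Accept-CH."""
    return {"Accept-CH": "Device-Memory, Viewport-Width"}

def next_request_headers(response):
    """Browser adds only the hints the server asked for."""
    requested = [h.strip()
                 for h in response.get("Accept-CH", "").split(",")
                 if h.strip()]
    return {h: BROWSER_HINTS[h] for h in requested if h in BROWSER_HINTS}

print(next_request_headers(server_response()))
# {'Device-Memory': '8', 'Viewport-Width': '1280'}
```

Brave’s worry, in these terms, is that each extra key-value pair narrows down which user is behind the request, even though no single hint identifies anyone on its own.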

The IETF says that Client Hints would make it easier for servers to deliver the right content for a browser. You wouldn’t want a massive picture delivered if you’re viewing on a mobile device, for example.

So Client Hints doesn’t seem to ask the browser for information that a server couldn’t already find by other means. And, in fact, in its security guidelines for those implementing the proposed standard, the IETF urges them not to request any information that isn’t already available to the server via other means (such as HTML, CSS, or JavaScript).

This doesn’t mollify the team at Brave, though. It views Client Hints as yet another tracking method providing a way for browsers to serve up information about users. It says:

Brave is working on preventing websites from learning many of these values using JavaScript, while at the same time not breaking websites; adding Client-Hints into the browser platform would expose an additional tracking method to block and potentially make it even more difficult to maintain a usable, private Web.

Third-party delegation

Brave also dislikes another part of Client Hints: It lets a server instruct a browser to send its information to third parties (a process it calls third-party delegation). These other websites could include advertising networks serving up ads on a page.   

The Client Hints proposal also makes it easier for companies in between your browser and the website you’re visiting to know more about your device, warns Brave. It’s referring here to content distribution networks (CDNs). These are services that cache website content around the world so it’s closer to the people that read it, improving website performance. 

The IETF proposal urges developers to only deliver Client Hints to the website they’re viewing (the origin), rather than to third-party sites that may interact with it. But these security guidelines are just that: guidelines. The technology itself won’t stop unscrupulous sites from contravening them.

Brave points out that it is the server that opts to serve these requests, and that users don’t get to choose:

The browser won’t send the values unless the server requests them, but should provide them when the server does request them.

Opt-in mechanisms for users themselves aren’t mandatory, apparently because the privacy trade-off is hard to explain. The IETF proposal says:

Implementers MAY provide user choice mechanisms so that users may balance privacy concerns with bandwidth limitations. However, implementers should also be aware that explaining the privacy implications of passive fingerprinting to users may be challenging.

Ultimately, browser vendors will have the right to implement the standard or not, and Brave can do as it sees fit. Even if major browsers do opt to implement it, most have shown a willingness to hobble the standards if they’re abused for fingerprinting instead of the intended purpose.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/E6o-DYPn48c/

CEO told to hand back 757,000 fraudulently obtained IP addresses

A company accused of fraudulently obtaining 757,000 IPv4 addresses has been ordered to hand them back after the American Registry for Internet Numbers (ARIN) won a landmark judgment against it.

The dispute began in late 2018 when ARIN, which allocates IPv4 addresses in the US, Canada and parts of the Caribbean on a non-profit basis, discovered that a company called Micfo and its owner Amir Golestan had fraudulently tricked it into handing over the IP blocks.

IPv4 addresses are in incredibly short supply (see below), which means that getting hold of them involves waiting lists. Scarcity also makes them valuable on resale – between $13 and $19 each. That would make the IP addresses Micfo obtained worth between $9.8 million and $14.3 million.

Not surprisingly, cases of pocket-lining IP address fraud have risen, as ARIN’s senior director of global registry knowledge warned in a conference presentation in 2016.

Second-hand addresses

How do the fraudsters get hold of the addresses? By using the simple technique ARIN accused Micfo of deploying.

The key is that a lot of IPv4 addresses were handed out in the past when nobody worried about shortages – a surprising proportion of which fell into disuse.

Criminals attempt to detect these dormant ranges using public data from ARIN and Whois, checking which ones are still being used (i.e. routed).

If they’re not, and no longer have an active admin, they attempt to take them over using re-registration, claiming rights to them from ARIN.
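That takeover-spotting process can be sketched as a simple filter. The records below are entirely hypothetical (built on documentation prefixes); real detection would pull registration data from ARIN and Whois and routing data from BGP feeds:

```python
from datetime import date

# Hypothetical registry records: (prefix, last day the prefix
# was seen routed, whether the admin contact still responds)
RECORDS = [
    ("192.0.2.0/24",    date(2013, 6, 1), False),
    ("198.51.100.0/24", date(2019, 4, 2), True),
    ("203.0.113.0/24",  date(2011, 1, 9), False),
]

def dormant(records, today=date(2019, 5, 1), years=3):
    """Flag ranges unrouted for `years` with no live admin --
    the kind of block a fraudster would try to re-register."""
    cutoff = today.replace(year=today.year - years)
    return [prefix for prefix, last_routed, has_admin in records
            if last_routed < cutoff and not has_admin]

print(dormant(RECORDS))  # ['192.0.2.0/24', '203.0.113.0/24']
```

The same scan is useful defensively: a registry, or the legitimate holder of an old allocation, can run it to spot blocks at risk of hijack before a fraudster does.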

According to ARIN, from 2014 onwards Golestan and Micfo used 11 ‘shelf’ companies across the US as fronts to obtain the 757,760 IP addresses, backing this up with faked notarised affidavits from staff who turned out not to exist.

Even when ARIN detected the fraud, Micfo continued to resist, seeking a restraining court order against the organisation. It also filed for arbitration, the first time this has happened in such a case.

On 1 May, Micfo lost this arbitration and was ordered to hand back the addresses and pay ARIN $350,000 to cover legal fees. Golestan now faces charges of wire fraud carrying a possible 20-year sentence.

Some of the addresses are being used by bona fide buyers and probably won’t be returned. Nevertheless, the case has highlighted the growing problem of IP address fraud. Said ARIN president and CEO, John Curran:

We are stepping up our efforts to actively investigate suspected cases of fraud against ARIN and will revoke resources and report unlawful activity to law enforcement whenever appropriate.

Why the shortage?

As a 32-bit addressing scheme, IPv4 is limited to a maximum of 2^32, or 4,294,967,296, possibilities. When it was defined decades ago, that seemed plenty.

Not every device needs one of these addresses – Network Address Translation (NAT) at routers and ISPs hides lots of networks and devices behind a single IP – but that trick won’t work for publicly routable servers that must receive incoming traffic.

Warnings about the imminent exhaustion of these IPv4 addresses go back years, with IANA announcing that it was running out in 2011, followed by Europe’s RIPE in 2012 and North America’s ARIN in 2015.

What they meant by ‘running out’ is that as time passes they are managing scarcity by handing out smaller and smaller blocks of addresses to organisations requesting them.

Ironically, a lot of already allocated IPv4 addresses are still out there and have merely fallen into disuse, which is where address recycling comes in.

The long-term solution is supposed to be IPv6, finalised in 1998, which increases the address space to 128 bits and the number of possible IP addresses to a very large number (2^128).
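The two address-space sizes are easy to verify with Python’s standard ipaddress module:

```python
import ipaddress

# The whole IPv4 and IPv6 address spaces, expressed as single networks.
ipv4_space = ipaddress.ip_network("0.0.0.0/0").num_addresses
ipv6_space = ipaddress.ip_network("::/0").num_addresses

assert ipv4_space == 2**32    # 4,294,967,296
assert ipv6_space == 2**128   # roughly 3.4 x 10**38

# How many complete IPv4 internets fit inside IPv6: 2**96 of them.
print(ipv6_space // ipv4_space)
```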

The problem with moving to IPv6 is that because it requires operating systems, websites and routing hardware to support it, migration is happening very slowly.

If you already have a website registered at an IPv4 address, why bother firing up an IPv6 equivalent? Having an internet with two separate address spaces is like driving on the left but being told that it might be a good idea to drive on the right too – people understandably stick to what they know.

What might eventually drive people to IPv6 is economics. As soon as the cost of IPv4 addresses crosses a threshold, IPv6 will suddenly look more attractive.

Unfortunately, exactly the same thing will draw criminals to second-hand IPv4 addresses. ARIN’s latest case is unlikely to be its last.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/u_V5vmHdUrM/

‘Software delivered to Boeing’ now blamed for 737 Max warning fiasco

As the 737 Max scandal rolls on, “software delivered to Boeing” has been blamed by the company for the malfunctioning of a safety display.

In a statement issued over the weekend, the American airliner manufacturer admitted that its software was not properly displaying fleet-standard warning captions to pilots. This admission comes after sustained media reporting over cockpit angle-of-attack (AOA) displays and warnings, one of which was sold by Boeing to airlines as an optional extra for their aircraft.

Warning captions (wording that flashes up on the pilot’s display screen) on the 737 Max included one, AOA Disagree, which alerted the pilots if the 737 Max’s two AOA sensors were delivering different readings from each other. If the two go out of sync, the logic goes, one must therefore be faulty.

Worse, Boeing engineers knew about the problem in 2017 – months before the fatal Lion Air and Ethiopian Airways crashes. The company only revealed this to US Federal Aviation Administration (FAA) regulators after Lion Air flight JT610 crashed in October 2018, claiming in this week’s statement that “the issue did not adversely impact airplane safety or operation”.

“Senior company leadership was not involved in the review and first became aware of this issue in the aftermath of the Lion Air accident,” added Boeing.

The AOA sensors feed the controversial MCAS trim system, another software feature that did not work properly. Improper MCAS activations seemingly caused by faulty AOA readings are suspected to have contributed to two fatal Boeing 737 Max crashes within the last year, costing hundreds of lives.

Boeing said the 737 Max’s “display system software did not correctly meet the AOA Disagree alert requirements”, adding that “software delivered to Boeing linked the AOA Disagree alert to the AOA indicator, which is an optional feature on the Max” and earlier versions of the 737.

“Accordingly,” continued Boeing, “the software activated the AOA Disagree alert only if an airline opted for the AOA indicator.”

This was not what should have happened. Even if an airline didn’t pay extra for the AOA indicator display gauge (pictured here on a schematic for earlier 737 versions than the Max), if the sensors went out of sync, a warning should have been shown to the pilots.
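The gating fault Boeing describes can be pictured in a few lines. This is purely illustrative pseudologic; the function names, threshold and structure are invented for the sketch and are not Boeing’s actual code:

```python
# Hypothetical sketch of the fault described above. All names and the
# threshold are invented for illustration; this is not Boeing code.
DISAGREE_THRESHOLD_DEG = 10.0  # assumed value for the sketch

def aoa_disagree(left_deg: float, right_deg: float) -> bool:
    """True when the two vanes differ enough that one must be faulty."""
    return abs(left_deg - right_deg) > DISAGREE_THRESHOLD_DEG

def alert_as_delivered(left: float, right: float, has_indicator: bool) -> bool:
    # The fault: the alert was wired through the optional AOA gauge,
    # so airlines that skipped the option never saw the warning.
    return has_indicator and aoa_disagree(left, right)

def alert_as_required(left: float, right: float, has_indicator: bool) -> bool:
    # Intended behaviour: the disagree warning is standard on every airframe.
    return aoa_disagree(left, right)
```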

AOA gauges have been offered as a feature on Boeing 737s since the mid-1990s 737-600 model, known in marketing terms as the first of the 737 Next Generation (NG). The NG series, comprising the 737-600, -700 and -800 models, preceded the controversial Max series.

Boeing is now issuing a display system software update to correct this fault, it said. This is on top of a promised software update to MCAS to stop it from attempting to push the 737 Max’s nose towards the ground. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/05/07/boeing_blames_software_737_max_aoa_warning_captions/

Let adware be treated as malware, Canuck boffins declare after breaking open Wajam ad injector

Analysis The technology industry has numerous terms for sneaky software, including malware, adware, spyware, ransomware, and the ever adorable PUPs – potentially unwanted programs. But there isn’t always a clear difference between malware and less threatening descriptors.

In a research paper distributed this month through pre-print server ArXiv, a pair of researchers from Concordia University in Montreal, Canada – Xavier de Carné de Carnavalet and Mohammad Mannan – show that in the case of software known as Wajam, these categorical distinctions obscure how adware relies on the same untrustworthy techniques as malicious code.

“Adware applications are generally not considered as much of a threat as malware,” the researchers say, pointing to anti-virus applications that label the code as not-a-virus, riskware, unwanted program or PUP. “After all, displaying ads is not considered a malicious activity. Consequently, adware has received less scrutiny from the malware research community.”

The Canadian boffins argue that needs to change because Wajam, which injects ads into browser traffic, uses techniques employed by malware: browser process injection attacks (man-in-the-browser) seen in the Zeus banking Trojan, anti-analysis and evasion techniques, anti-detection features seen in rootkits, security policy downgrading and data leakage.

Also, over the past four years, the code has contained flaws that expose people using it to arbitrary content injection, man-in-the-middle (MITM) attacks, and remote code execution (RCE). Yet security companies remain reticent to apply the term malware too liberally because companies making dubious software have a history of suing. Recall in 2005 how spyware biz Zango, now defunct, sued Zone Labs for calling its software what it was.

“The line between adware and malware is a gray area,” said de Carné de Carnavalet, a doctoral candidate in information and systems engineering at Concordia University, in an email to The Register on Friday.

“Actually, the terminology has evolved in the past 15 years. Invasive adware was also considered as spyware, because of all the personal and sensitive data they collect. This was not the taste of adware vendors who filed lawsuits against antivirus companies. Those companies now simply use the terms ‘adware’ or ‘potentially unwanted application.'”

“As a result, both antivirus companies and researchers rank the adware problem as a lower priority than, let’s say, ransomware, and even tend to leave it out. We hope to bring back the focus on this issue. It is still there, and it now has even more impact than before.”

He adds that his paper also touches on vulnerabilities in adware. “It can have serious vulnerabilities, and nobody has incentives to report or fix them,” he said.

Waja doin’ with that sample?

Working with professor Mohammad Mannan, de Carné de Carnavalet collected 52 samples of the ad injector Wajam – which has gone by different names over the years – spanning from 2013 through 2018 in order to study its chronological evolution. The samples contain anti-analysis and rootkit-like features more sophisticated than those typically found in even advanced malware.

Wajam, created by Montreal-based Wajam Internet Technologies, was first released in October 2011, the paper explains, and was rebranded as Social2Search in May 2016, then renamed SearchAwesome in August 2017.

In 2016 and 2017, the Office of the Privacy Commissioner (OPC) of Canada investigated the company and its software and found multiple violations of the Canadian Personal Information Protection and Electronic Documents Act (PIPEDA). It made a series of recommendations to remediate violations, only to have the company sell its assets to Hong Kong-based Iron Mountain Technology Limited.

In a statement emailed to The Register, a spokesperson for Canada’s OPC said the agency is aware of the Wajam research paper and its analysis of the software.

“Our investigation looked at the matter through a more narrow privacy lens,” the OPC spokesperson said. “During our investigation, we found the functionality had more to do with adware than enabling social media searching. In other words, the intent of the software was to serve ads to the user which is not, in itself, contrary to PIPEDA provided it is done in accordance with certain legal principles.”

“On the other hand, we generally view malware as malicious software that can be harmful to computer users and their devices,” the OPC spokesperson added. “It can include various types of program which may install computer viruses, spyware, ransomware, can recruit computers into botnets or lead to crypto-currency mining (to name a few examples).”

The OPC spokesperson said several of Wajam’s privacy practices contravene PIPEDA, such as the company’s failure to obtain meaningful consent to the installation of the software, which resulted in the collection and use of personal information. The OPC also found the company prevented users from withdrawing their consent by making the software difficult to uninstall and by failing to take measures to safeguard users’ personal information.

It’s not going away

Despite these findings, eight years on, Wajam lives on, under an assumed name and a different legal jurisdiction. The Register emailed Iron Mountain Technology in the hope of discussing the software but we’ve not heard back.

“Advertising is not inherently bad, nor malicious,” said de Carné de Carnavalet. “The ads displayed by Wajam are not known to be malicious either. However, Wajam could be considered as malicious due to the personal data it collects, insecurely, from users, including their browsing and download histories, and all search queries that the user makes.”

He notes that it’s doubtful users of Wajam, Social2Search or SearchAwesome would allow the software to operate as it does if they understood how it works and how it collects information.

In a phone interview with The Register, Andrew Crocker, senior staff attorney at the Electronic Frontier Foundation, said some of his colleagues have been arguing that sneaky software – now commonly employed by governments in addition to marketers and cyber criminals – should be assessed by its common behavior rather than separated by labels like adware or ransomware.

“If you install software against the user’s wishes or without the user’s knowledge, that’s the behavior of malware,” he said, pointing to the Computer Fraud and Abuse Act, the Wiretap Act, and the Electronic Communications Privacy Act as potential avenues for legal challenges.

There have been a few high-profile cases involving adware, most recently Lenovo’s $7.3m settlement last year over claims that it distributed Superfish adware on its PCs. But law enforcement authorities don’t go after browser history thieves with the same passion as credit card thieves or raiders of government databases.

To mitigate the threat posed by adware, de Carné de Carnavalet argues more effort should be made to warn people attempting to install adware and that desktop platforms should adopt some of the same permission disclosures presented to mobile device users.

“You can’t stop someone from writing an ‘unwanted program,'” he said. “But such programs can be more seriously considered and better detected.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/05/20/wajam_malware_claims/

Black Hat Q&A: Bruce Schneier Calls For Public-Interest Technologists

Ahead of his 2019 Black Hat USA talk, cybersecurity luminary Bruce Schneier explains why it’s so important for tech experts to be actively involved in setting public policy.

Veteran security researcher, cryptographer, and author Bruce Schneier is one of the many cybersecurity experts who will be speaking at Black Hat USA in Las Vegas this August.

He’s presenting Information Security in the Public Interest, a 50-minute Briefing about why it’s so important for public policy discussions to include technologists with practical understanding of how today’s tech can be used and abused.

Schneier has become a vocal advocate for more public-minded technologists, noting in a recent interview with Dark Reading that “in a major law firm, you are expected to do some percentage of pro bono work. I’d love to have the same thing happen in technology.”

He recently took time to chat with us via email about what he’s hoping to accomplish at Black Hat USA this year, and why he thinks Black Hat attendees are well-suited to serving the greater good as public-interest technologists.

Q. Hey Bruce, thanks for taking the time to chat. Can you tell us a bit about your recent work? 

A. I’m a security technologist. I write, speak, work, and teach at the intersection of security, technology, and people. My latest book is about the security implications of physically capable computers, with the arresting title of Click Here to Kill Everybody. It’s a book about technology, but it’s also a book about public policy; the last two-thirds discusses policy solutions to the technical problems of an Internet-connected world.

I’m not optimistic about the solutions, though. I spend four chapters laying out the different government interventions that can improve cybersecurity in the face of some pretty severe market failures. They’re complex, and involve laws, regulations, international agreements, and judicial action. The subsequent chapter is titled “Plan B,” because I know that nothing in those four chapters will happen anytime soon. And I don’t even think my Plan B ideas will come to pass.

There are a lot of reasons for this, but I think the primary one is that technologists and policy makers don’t understand each other. For the most part, they can’t understand each other. They speak different languages. They make different assumptions. They approach problem solving differently. Give technologists a problem, and they’ll try the best solution they can think of with the idea that if it doesn’t work they’ll try another — failure is how you learn. Explain that to a policy maker, and they’ll freak. Failure is how you never get to try again.

Solving this requires a fundamental change in how we view tech policy. It requires public-interest technologists. So that’s what I have been evangelizing. I wrote about it for IEEE Security & Privacy magazine. I spoke about it at the RSA Conference in March, and I also hosted a one-day mini-track where I invited eighteen other public-interest technologists to talk about what they do. I maintain a public-interest tech resources page that lists what other people are writing — and doing — in this space.

Q. You’ve written that having a computer science degree is not a requirement to be an effective public-interest technologist, so what is?

A. Public-interest tech is the intersection of technology and public policy. It’s technologists working in public policy, either in or outside government. It’s technologists working on projects that serve the public interest: working at an NGO, or working on socially minded tech tools. And while it requires an understanding of both tech and public policy, not everyone needs to have the same balance of those two disciplines — and certainly not everyone needs a CS degree. What’s required is an ability to bridge the two worlds: to understand the policy implications of technology, and the technological implications of policy.

I’ve met public-interest technologists who are hard-core hackers, either degreed or not. But I’ve also met public-interest technologists who come from a public policy background, or from a social science background. Since effectiveness requires blending expertise from different areas, it matters less which one came first.

Q. Why is Black Hat a place you’ve chosen to speak about this, and what do you hope to accomplish?

A. One place where public-interest technologists are needed is security. Networked computers are pervasive in our lives, and the security implications of that are profound. The problems that result require public policy solutions. And just as we can’t expect the government to effectively regulate social media when it can’t even understand how Facebook makes money, we can’t expect the government to effectively navigate the complex socio-technical problems resulting from poor cybersecurity.

The Black Hat community is uniquely qualified to learn, understand, and then advocate for effective cybersecurity policy. They’re cybersecurity experts, but they have a hacker mindset. My goal is to show people that they are not only qualified to do this, but that there are paths for them to do it effectively.

Q. Power in the tech industry appears to concentrate along lines of money and privilege, as it does in politics. If we do see more people working as public-interest technologists in some capacity, what should be done to ensure they advocate for policies and solutions which benefit the public at large, without overlooking vulnerable or marginalized groups?

A. Ha — welcome to politics. Preventing the already wealthy and powerful from accreting even more wealth and power is one of the oldest problems we have, and it’s one of those foundational problems that underlies everything else. Technology actually seems to exacerbate this sort of inequality, allowing corporations to amass extraordinary wealth and power at the expense of everyone else. I don’t have a solution, but I know that society needs to figure out a solution. And that the solution will involve understanding the technologies involved, and how they can be shaped to decrease inequity across a wide variety of dimensions.

Take algorithmic decision making as an example. Here is a technology that, if deployed correctly, can result in systems that are fair and equitable. But deployed incorrectly, it can both magnify existing bias and create new ones. There has been an enormous amount written about this, both in understanding current harms and in preventing future ones. Figuring out proper government policy around these technologies will require people who understand those technologies.

Q. Can you share a recent example of how public-interest technologists might be able to help with a policy problem?

A. Right now, I’m thinking a lot about social media and propaganda. It’s clear that the same technologies that enable free expression and the rapid exchange of ideas can be weaponized in ways that harm democracy.

I think there is value in thinking of democracy as an information system, and using information-security techniques to model attacks and defenses. It doesn’t lead to an obvious solution — that would be too easy — but it’s a new way to conceptualize the problem and create a taxonomy of countermeasures. Clearly we can’t let surveillance capitalism destroy democracy — and it’s up to people who understand both technology and public policy to figure out a way forward.

It’s like that across the board. All the major problems of the 21st century are technological at their core, and will require solutions that blend technology and public policy: climate change, synthetic biology, artificial intelligence and robotics, the future of work. These are our problems to solve; we need to get on with it.

For more information about Schneier’s Briefing and other talks, see the Black Hat USA Briefings page, which is regularly updated with new content. Black Hat USA returns to the Mandalay Bay in Las Vegas August 3-8, 2019. For more information on what’s happening at the event and how to register, visit the Black Hat website.

Article source: https://www.darkreading.com/careers-and-people/black-hat-qanda-bruce-schneier-calls-for-public-interest-technologists/d/d-id/1334758?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple