
FBI: Reported Internet Crimes Topped $1.4 Billion Last Year

Business email compromise (BEC) campaigns outnumbered ransomware cases.

The FBI’s Internet Crime Complaint Center (IC3) logged more than 301,000 complaints of Internet crimes last year that resulted in losses of more than $1.4 billion — and that’s just from consumers and businesses that reported incidents to the bureau.

Victims who reported attacks to the IC3 were most often hit with so-called non-payment/non-delivery scams, personal data breaches, and phishing attacks. The most costly types of Internet crime reported to the FBI were business email compromise (BEC), confidence/romance fraud, and non-payment/non-delivery scams.

Ransomware, the darling of the cybercrime and nation-state hacker scenes, actually declined in 2017, according to the FBI IC3 data, from 2,673 complaints in 2016 to 1,783 last year. Victims of ransomware lost more than $2.3 million last year, versus $2.4 million in 2016.

But that data may well be skewed by shifts in ransomware targets: the drop in reports to the FBI could reflect attackers moving on from consumers to more lucrative targets, namely larger organizations. It could also reflect larger organizations simply not reporting their attacks to IC3.

“The FBI IC3 only reports on what is given to them. In the case of ransomware specifically, victims seem even less inclined to report — to the point where it hampers our ability to go after the bad guy,” says John Bambenek, vice president of security research and intelligence at ThreatSTOP.

Bambenek says many large enterprises may have the in-house resources to handle an incident without law enforcement, and some corporate counsels may advise against reporting a ransomware attack if it’s not required. In addition, he notes, cryptojacking’s recent rise has also contributed to lower ransomware numbers as attackers have found a simpler and more lucrative way to make money than relying on ransom payments.

BEC
Meanwhile, reported BEC and email account compromise (EAC) attacks caused the largest losses to victims last year, with 15,690 BEC complaints and losses of more than $675 million. These attacks dupe business users into transferring money, purportedly to their supplier or another business partner. In EAC, an attacker wrests control of a legitimate user’s business email account rather than using a phony account.

So-called confidence fraud/romance schemes, where a fraudster poses online as a love interest or other trusted person to squeeze money from the target, netted attackers more than $211 million last year, according to the IC3’s report. Scams where victims paid for an item or service online that never arrived or they didn’t receive payment for an item (non-payment/non-delivery) resulted in more than $141 million in losses to victims.

Another popular scam last year was tech support fraud, which increased by 90% over 2016. The IC3 logged nearly 11,000 complaints of attacks that resulted in losses of $15 million.

The top 10 US states with the most victims of Internet crime were California (41,974), Florida (21,887), Texas (21,852), New York (17,622), Pennsylvania (11,348), Virginia (9,436), Illinois (9,381), Ohio (8,157), Colorado (7,909), and New Jersey (7,657).

“We want to encourage everyone who suspects they have been victimized by online fraudsters to report it to us,” says Donna Gregory, chief of the IC3. “The more data we have, the more effective we can be in raising public awareness, reducing the number of victims who fall prey to these schemes, and increasing the number of criminals who are identified and brought to justice.”

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise …

Article source: https://www.darkreading.com/endpoint/privacy/fbi-reported-internet-crimes-topped-$14-billion-last-year/d/d-id/1331751?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Compliance Complexity: The (Avoidable) Risks of Not Playing by the Rules

Achieving compliance is a challenging process, but with the right systems and customized data management policy, your organization can stay ahead of the next data breach — and the regulators.

Data protection and privacy regulations affect organizations of every stripe. Whatever your business, if you have customers or employees, you have data that requires protection under some state or federal mandate. Such regulations are intended to ensure that proper precautions have been taken to protect potential victims of digital crimes such as fraud or identity theft stemming from malicious actors gaining access to data through hacking, technical malfunction, or human error.

Alphabet Soup of Laws and Standards
It’s important to note before any discussion of regulatory compliance begins that following the rules doesn’t guarantee your systems and data will remain secure. As the saying goes, “compliance is a floor, not a ceiling,” and so meeting the minimum standards under the law should be regarded as a starting point. Where you take your information security program from there depends on your industry, the kinds of data your organization deals with, and its appetite for risk.

Data security and privacy regulations form an expanding landscape: a long, overlapping, and often confusing alphabet soup of laws and standards such as HIPAA, SOX, FCRA, GLBA, PCI DSS, GDPR, and PIPEDA. Security and risk management decision makers must understand the nature of these laws and set security strategies accordingly, or suffer the consequences of falling short of their demands. It’s not an easy task, but it is a manageable one when broken into its parts.

The first step of that process involves recognizing the ways (apart from blatantly ignoring the regulations) an organization might inadvertently fall outside the bounds of compliance.

Common Conditions That Can Compromise Compliance
The three most common conditions that can compromise a compliance program are the use and proliferation of so-called shadow IT (technologies that operate within the enterprise outside the purview of IT management); a failure to document compliance processes or enforce existing processes; and a lack of visibility into the means of collecting, managing, and storing data.

Certainly, there will likely be gaps in even the most rigorous of compliance programs, especially since compliance is a dynamic, ever-evolving endeavor. Laws change, technology changes, and the threat environment changes, so processes must change in response. Data management that includes security policies, training and awareness programs, technology maintenance, and regular systems and response testing is required. “Set it and forget it” is not a real option.

Consequences of Non-Compliance
Believe it or not, compliance saves money! According to a recent study from Ponemon and Globalscape, “The True Cost of Compliance with Data Protection Regulations,” non-compliance now costs businesses an average of $14.8 million annually, a 45% increase since 2011. The cost of compliance, on the other hand, averaged $5.5 million, up 43% from 2011. Non-compliance also puts your organization at greater risk of a data breach, and a data breach is certain to come with a steep financial cost, as evidenced by the rash of well-publicized breaches since 2017 alone. Here are six ways a non-compliant organization might suffer in the event of a data breach:

Lawsuits
A data breach doesn’t only affect the breached organization but may also put at risk the associated employees, consumers, customers, partners, and service providers — any of which may decide to take legal action seeking justice and protection. Win or lose, a lawsuit can be an expensive proposition.

Bank Fines
If credit card data is affected, banks may end up reissuing new cards to their customers. When that happens and the banks incur associated costs, they will likely seek to recoup those costs from the organization whose breach prompted the action by levying fines or added fees.

Governmental Audits
Any egregious breach of consumer data risks action by the Federal Trade Commission (FTC) acting on behalf of US consumers. If the organization is found to be out of compliance and negligent, the FTC may not only fine the company but also require expensive annual compliance audits for years following the negligent behavior. In April of this year, the Securities and Exchange Commission slapped Yahoo with a $35 million fine for waiting two years to disclose its massive 2014 data breach, in which Russian hackers stole personal information from approximately 500 million user accounts.

Compensation and Remediation Costs
Among the many costs involved with a security failure are those associated with forensic investigations to determine the source and cause of the breach, fix the gaps that were exploited, and address any residual risk to consumers and others. Someone has to pay for free credit monitoring services, after all.

When Nothing Is Safe
A data breach may cause consumers to lose trust in the affected organization, and when that happens there’s a good chance they will take their business elsewhere. Consider the string of retail security breaches disclosed in 2017 and early 2018, online or in stores, including Sears, Kmart (twice), Delta, Best Buy, Saks Fifth Avenue and Lord & Taylor (parent company Hudson’s Bay), Under Armour, Panera Bread, Forever 21, Sonic, Whole Foods, GameStop, and Arby’s. And who can forget when cybercriminals hacked Equifax and stole the personal data of 145 million people, including Social Security numbers, not to mention Shadow Brokers, WannaCry, NotPetya, Bad Rabbit, and more?

Lost Reputation
When word of a data breach gets out, loss of reputation soon follows. To mend fences with all affected parties, organizations will incur costs associated with increased marketing, communications, and public relations campaigns. As the saying goes, a good reputation takes years to gain — but a moment to lose.

Data Management Matters
Given the risk of failure, it’s important to implement a strong data management program as a part of an organization’s security and compliance strategy. If you don’t know what data you have, where it’s stored, who has access, and how it is used, it’s impossible to keep it secure — and to prove compliance. Data management provides a framework for understanding how information moves through the enterprise. It helps with security and compliance in three primary ways:

1. Workflow and Process Automation
Human error continues to be one of the weakest links in the security chain. Workflow and process automation remove the human factor from many tasks that might otherwise be vulnerable. Automating processes associated with vital applications and services, and doing so while the organization’s security and compliance functions operate in the background, lets users focus on their jobs while giving management greater peace of mind.

2. Centralized Control and Visibility
Not knowing what’s happening in your network is unsettling — and can mean the enterprise is at risk of a breach. As networks grow more complex and as perimeters expand to include mobile devices, the cloud, and more, IT administrators need even greater levels of transparency into the network in order to gain a top-down view of the infrastructure that’s required to achieve compliance and mitigate other security and performance risks.

3. Custom Compliance Profiles and Reporting
Every organization has its own set of regulatory expectations and challenges based on industry, size, risk appetite, and a thousand other factors. One-size-fits-all doesn’t apply; specialized compliance tools offering customized data workflows and configurations can ensure that, whether you face PCI DSS, HIPAA, SOX, or some combination of these and other regulations, a tailored profile and reporting structure is in place.

Peter Merkulov serves as chief technology officer at Globalscape. He is responsible for leading product strategy, product management, product marketing, technology alliances, engineering and quality assurance teams. Merkulov has more than 16 years of experience in the IT …

Article source: https://www.darkreading.com/cloud/compliance-complexity-the-(avoidable)-risks-of-not-playing-by-the-rules/a/d-id/1331704?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Calculating Cloud Cost: 8 Factors to Watch

If you’re not careful and don’t regularly assess the impact of your usage, moving to the cloud could have a negative impact on your bottom line.

(Image: Tanatat via Shutterstock)

The decision to embrace the cloud should not be based purely on cost. After all, there are many factors driving cloud adoption, and most are based not on financial savings but on agility, says JP Morgenthal, CTO of Application Services at DXC Technology.

“It’s more of an operational concern than a cost concern,” he explains, noting that, as you adopt cloud-based infrastructure and applications, what should be top-of-mind is not the price tag but the myriad ways in which the change will affect business processes. People are moving to cloud for improved agility, productivity, and quick access to sophisticated tools, he says, not to save money.

“A few years ago, I would have said most are enabling their business to get to the cloud, trying to save money,” adds Robert LaMagna-Reiter, director of information security at First National Technology Solutions. “I think they quickly learned that’s not the case.”

Moving to the cloud is a process that affects the entire enterprise, so while cost should not be the sole factor driving adoption, it’s something all technology and business leaders will want to keep in mind. Cloud pricing is also complicated, with several factors driving the overall price. Even if cost reduction is no longer a top driver for adoption, as you prepare to embrace the cloud you’ll want to know how the change will affect your overall budget, and why.

Let’s take a close look at eight common and often unforeseen costs that businesses face as they adopt, and operate in, cloud-based environments. Were there any unexpected costs you encountered as your company shifted to the cloud? Feel free to share in the comments.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/cloud/calculating-cloud-cost-8-factors-to-watch-/d/d-id/1331735?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Millennials, Women May Bridge Cyber Talent Gap

Younger generations, particularly women, could be the answer to a cybersecurity skill shortage expected to reach 1.8 million unfilled roles by 2020.

A survey of millennials and post-millennials in the US gives some optimism about the cybersecurity talent gap, which seems doomed to worsen due to perception challenges about industry careers, poor access to early training, and unrealistic job requirements.

Enterprise Strategy Group (ESG) polled 524 millennials and post-millennials in the US to learn their perspectives on the skill shortage. Data shows 68% consider themselves either a tech innovator (27%) or early adopter (41%). Technology drives this generation’s education choices: 48% were part of a STEM program during their K-12 years and 82% plan to attend college after high school. Of the college hopefuls, 23% plan to study computer science and technology.

Part of the challenge in getting young people into cybersecurity is making them aware of the field. Nearly 70% have never taken a security class in school, and 65% said their school never offered one. “Don’t know enough about this field/career path” was the most common reason cited by those not interested in cybersecurity; other reasons included concerns about technical aptitude and the level of education a security career requires.

Researchers wanted to learn how women will play a role in cybersecurity’s future, so they parsed the data by respondents’ gender. What they found at first seems discouraging: twice as many men as women plan to study engineering in college, twice as many plan to pursue computer science, and twice as many are considering IT careers.

However, some key nuggets indicate women could change the game in security. Female respondents reported quicker and higher rates of adoption of new technologies, with 52% of women saying so compared with 42% of men. More women have advanced tech such as virtual reality in their households, and more women indicated they have spent time using (and would spend more time using) these technologies. Ten percent more women than men plan to enroll in college.

It’s worth noting that two tech-related career fields interest men and women alike: video game development and, yes, cybersecurity. If anything, women are more excited about security than men, with 57% of female millennials expressing excitement compared with 40% of male millennials.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/careers-and-people/millennials-women-may-bridge-cyber-talent-gap/d/d-id/1331754?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

20 Signs You Are Heading for a Retention Problem

If you don’t invest in your best security talent, they will look to burnish their resumes elsewhere. Here’s why.

Anyone who has tried to recruit information security professionals in recent years knows how hard it can be to find qualified people. Unfortunately, while there has been quite a bit of dialogue around recruiting, there has been far too little around retention, even though retention is arguably the more important of the two.

Over the course of my career, I’ve seen organizations do a variety of things that cost them their best security talent. There are some circumstances that are simply unavoidable. But in many cases, talent leaves for reasons that are all too preventable. Isn’t a valuable resource that you’ve invested time and money in worth more to you than one that you haven’t yet invested in?

It is in this spirit that I present to you 20 signs you are heading for a retention problem.

Problem 1: No board support: Retention success starts at the top. Talented security professionals have lots of choices when it comes to where they work. Who wants to work in an environment whose value is constantly questioned, that is constantly underfunded, and where one’s existence needs to be constantly justified?

Problem 2: No executive support: If senior leadership doesn’t believe that security is important to the organization, how can those working in the security organization be expected to see a future for themselves there?

Problem 3: Not enough funding: Security is hard enough when adequately resourced, but when it is inadequately resourced it becomes an unwinnable battle. Good people want to work, not wage war.

Problem 4: Lack of vision: The most successful security programs have a clear and concise vision. The best security professionals like to know in which direction they’re headed. It helps them focus and perform to their full potential.

Problem 5: Bad boss: Studies have shown repeatedly that the boss is the most important factor when it comes to retention. Have an idiot or a jerk in charge of things? Kiss that security talent goodbye.

Problem 6: Lack of qualified team members: No one enjoys pulling five times the weight of everyone else. The more team members there are that aren’t up to par, the harder it becomes to retain the top performers.

Problem 7: Failing technology: There are few things more frustrating than fighting with inadequate technology. Knowing exactly what needs to be done and how to do it only to find yourself held back by technology can quickly put top talent in a foul mood.

Problem 8: No collaboration between operations and engineering: The best security solutions are those that meet the needs of the operators. If there is no communication between those who deploy and those who operate, what hope is there for long-term success? The impact of this point on retention is greater than most people realize.

Problem 9: Micromanaging: As management, you are expected to communicate what you need from your staff. That’s your job. But don’t try to tell highly skilled professionals how to do what you need them to do. That’s their job.

Problem 10: Not approaching security operations strategically: There is a limit to how much of a “Wild West” approach to security operations top performers can take. After a while, if there isn’t some order to the chaos, they will lose their patience.

Problem 11: Failure to take incident response seriously: Sooner or later, every organization will face a serious or critical incident. Seasoned security pros know this, and thus each day that goes by without a serious approach to incident response makes their blood boil a bit more. At some point, they may conclude that the organization will never get serious about incident response and run for the hills.

Problem 12: Unpreparedness: No one likes getting caught with their pants down professionally. Concern about this is a big reason people move on to greener pastures.

Problem 13: More PowerPoint than PowerShell: Well-run security programs allow their staff to spend more time working and less time explaining what they’re doing to others. If your best people end up spending more than half of their time explaining what they do to others, I think it’s safe to say that their days with you are numbered.

Problem 14: Butts in seats: If you measure productivity by time spent in the office rather than by output, say goodbye to your best employees.

Problem 15: Warm bodies: Sometimes, employees need certain accommodations to allow them to balance work and life. For example, family commitments in another geographic area may prohibit them from being physically present all of the time. If you’re not open to alternative arrangements, retention becomes that much harder.

Problem 16: Say one thing, do another: I have seen time and time again that people seek genuineness first and foremost. If a security organization preaches one thing and practices another, it hurts retention.

Problem 17: Lack of respect on the inside: If the security organization does not have the respect of other areas of the business, it can have a big impact on the morale of each employee. This, in turn, hurts retention.

Problem 18: Lack of respect on the outside: Security is an industry built on trust and respect. If an organization does not have the respect of its peer organizations, that matters to many security professionals.

Problem 19: Penny wise, dollar foolish: “How is there budget to fly management around the world 25 times, but I can’t get a few days of training each year?” This line of thinking is all too common among security professionals with one foot out of the door.

Problem 20: Failure to invest in human resources: It is true that when you invest in your people, you allow them to improve their resumes. But, perhaps ironically, when people are in a constructive environment that allows them to grow professionally and sharpen their skills, they don’t look to leave. Conversely, if you don’t invest in them, they will look to improve their resumes elsewhere.

Josh (Twitter: @ananalytical) is an experienced information security leader with broad experience building and running Security Operations Centers (SOCs). Josh is currently co-founder and chief product officer at IDRRA and also serves as security advisor to ExtraHop. Prior to …

Article source: https://www.darkreading.com/careers-and-people/20-signs-you-are-heading-for-a-retention-problem/a/d-id/1331749?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Could this be the end of password re-use?

It’s password security’s Achilles heel: too many people make life easy for cybercriminals by re-using the same passwords over and over.

The traditional solution is to implore users to set unique ones, preferably using a password manager. However, only a small minority pay any attention.

But what if there were a way for websites to compare notes on whether a password (or similar password) has been set by a user elsewhere?

According to two University of North Carolina researchers, it could be, thanks to a framework specially designed to let websites check password similarity without compromising privacy, security, or performance.

Their suggestion is a ‘private set-membership-test’ protocol, based on the seeming magic of homomorphic encryption, which IBM demonstrated a decade ago as a way to process encrypted cloud data without needing to decrypt it first.

It sounds simple enough: the user would select a password at a site (the requester), which would be checked against the passwords selected by the same user at other sites (the responders).

If the password was the same as, or similar to, the one being entered, the user would be asked to make a different choice.

Of course, to be useful it would need to be used by lots of sites, the very thing that might reduce performance. There would also need to be a reliable way of identifying users across numerous websites.

Cleverly, the researchers bypass the performance issue by pointing out that the protocol would only need to be used by a core of up to 20 big providers (Google, Facebook, Yahoo, et al) to eliminate most of the password reuse problem.

For identification, they reckon (probably correctly) that the vast majority of users rely on email addresses tied to a single domain from within this select group.

As for security and privacy (the problem of querying sites without creating the potential for leakage), the principles of homomorphic encryption would take care of this, they say.

If that sounds like a bit of an assumption, the research description goes into plenty of depth about the immense challenges of preserving security and why this kind of encryption is up to the job.
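To get a feel for how a private set-membership test could work, here’s a minimal sketch in Python built on the Paillier cryptosystem, which is additively homomorphic: anyone holding the public key can combine ciphertexts so that the underlying plaintexts add, without decrypting anything. To be clear, this is an illustration of the general idea, not the researchers’ construction; the tiny primes, the truncated hash, the helper names (pw_hash, respond) and the exact-match-only comparison are all our simplifications.

```python
import hashlib
import secrets
from math import gcd

# Toy Paillier keypair. Real deployments need primes of 1024+ bits;
# these small (but genuine) primes just keep the demo instant.
P, Q = 1000003, 1000033
N, N2, G = P * Q, (P * Q) ** 2, P * Q + 1
LAM = (P - 1) * (Q - 1) // gcd(P - 1, Q - 1)      # lcm(p-1, q-1)
MU = pow((pow(G, LAM, N2) - 1) // N, -1, N)       # decryption constant

def encrypt(m):
    while True:
        r = secrets.randbelow(N - 2) + 2
        if gcd(r, N) == 1:
            break
    return pow(G, m % N, N2) * pow(r, N, N2) % N2

def decrypt(c):
    return (pow(c, LAM, N2) - 1) // N * MU % N

def pw_hash(pw):
    # Truncated so the value fits below N -- acceptable for a demo only.
    return int.from_bytes(hashlib.sha256(pw.encode()).digest()[:4], "big")

# Requester: send the encrypted hash of the candidate password.
c_query = encrypt(pw_hash("hunter2"))

# Responder: homomorphically compute Enc(r_i * (h - h_i)) for each
# stored hash h_i, never decrypting the query. A match yields Enc(0);
# everything else decrypts to a random-looking value.
def respond(c_query, stored_hashes):
    replies = []
    for h_i in stored_hashes:
        c_diff = c_query * encrypt(-h_i) % N2     # Enc(h - h_i)
        r_i = secrets.randbelow(N - 2) + 2
        replies.append(pow(c_diff, r_i, N2))      # Enc(r_i * (h - h_i))
    secrets.SystemRandom().shuffle(replies)       # hide which entry matched
    return replies

# Requester: a zero plaintext in any reply means the password is reused.
replies = respond(c_query, [pw_hash(p) for p in ("hunter2", "letmein")])
print("reuse detected:", any(decrypt(c) == 0 for c in replies))
```

A production system would also need to protect the stored hashes themselves and, as the paper sets out, catch similar rather than only identical passwords, which is where most of the real difficulty lies.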

The authors are at least realistic about how users might react:

We are under no illusions that our design, were it deployed, will be met with anything but contempt (at least temporarily) by the many users who currently reuse passwords at multiple websites.

As solutions go, this one seems like it might be a bit tricky to implement – getting together a core of big providers to implement a homomorphic encryption protocol might take years.

Would the problem be better addressed either by better integrating password managers or simply abandoning the password as a primary means of authentication?

If the password reuse problem is really about large numbers of users deploying the same password across a small number of core sites, then the sooner they change that architecture the better.

But perhaps what the researchers have really come up with is a brilliant way to detect not re-used passwords but credential-stuffing attacks themselves, in which criminals replay credentials stolen from one site against many others.

Various schemes have been suggested for doing this, but none has yet made it past the research stage. The application may be different but the problem of detecting the similarity of entered data in different places is fundamentally the same thing in another guise.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ixMsmCRD1xs/

Google cracks down on election meddling advertisers

The political shake-up season is fast approaching: in November, the mid-term elections will determine the fate of hundreds of US politicians.

Thinking about taking out an election ad on Google?

If so, get ready to prove you’re “one of us” and not some outsider tinkering with US elections, let alone one commanding the kind of bots that swarmed the internet during the 2016 US presidential election.

On Friday, Kent Walker, a Google senior vice president, said in a post that in order to make political advertising more transparent, the company’s introducing new policies for US election ads across its platforms.

First step: advertisers have to prove they’re US citizens or lawful permanent residents, as required by law. That means providing a government-issued ID and “other key information,” Walker said. Advertisers will also be required to disclose who’s paying for election ads.

Over the coming months, Google will also release a new transparency report specifically focused on election ads. The report will describe who’s buying election-related ads on Google’s platforms and how much money they’re spending. The company is also building a publicly available, searchable library so that people can find election ads purchased on Google and see who paid for them.

The ID requirements only pertain to US elections, at this point. Walker said that Google wants to expand the new, more stringent control to a wider range of elections, though.

With this move, Google’s in step with its internet-giant brethren: both Facebook and Twitter have been working on boosting transparency around who buys electoral and political issue-based ads.

There’s also legislation in the works that would strip away the mystery of who’s behind all the jarring, divisive political and social content that’s been clogging up the interweb and which has often proved to be sponsored by foreign parties, via bots.

Last month, a California bill was introduced that would force Twitter and Facebook, et al., to identify bots, while New York put forth a bill that would require transparency on who pays for political ads on social media.

Proposed legislation at the Federal level includes the bipartisan-supported Honest Ads Act, a proposal to regulate online political ads the same way as television, radio and print, with disclaimers from sponsors.

Will Google’s plan to require identity documentation help to keep the mid-terms from being tinkered with by foreign concerns?

We’ll have to wait and see. Ditto for Facebook’s plan: In February, it said it was going to verify election ad buyers by snail mail.

Nothing like nice, flat, analog paper to try to keep Russians from meddling in the 2018 election, eh? That’s what state election officials around the country are hoping, at any rate: they’re planning to use paper ballots for voting, to evade the Russian hacking attacks aimed at the US e-voting system.

Unfortunately, the election tinkering hasn’t been confined to political ads. As the BBC reports, Facebook has said that the roughly 3,000 Kremlin-linked ads it found didn’t support a particular candidate; instead, they focused on sensitive topics such as immigration.

There’s no snail-mail postcard or show-us-your-papers requirement that can keep out divisive, non-political ads, unless platforms such as Google, Facebook and Twitter decide that transparency about inflammatory posts is part of protecting elections.

That, in fact, is what Facebook has done: in October, it announced that only authorized advertisers could run electoral ads on Facebook or Instagram. In April, it extended that requirement to anyone that wants to show “issue ads”: those hot-button topics being argued around the country. The company said it’s working with third parties to develop a list of key issues.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/e6EfBDfuOIU/

Uber car software detected woman before fatal crash but failed to stop

In March, 49-year-old Elaine Herzberg became what’s believed to be the first pedestrian killed by a self-driving car.

It was one of Uber’s prototypes that struck Herzberg as she walked her bicycle across a street in Tempe, Arizona on a Saturday night. There was a human test driver behind the wheel, but video from the car’s dash cam published by SF Chronicle shows that they were looking down, not at the road, in the seconds leading up to the crash.

Police say that the car didn’t try to avoid hitting the woman.

The SF Chronicle reports that Uber’s self-driving car was equipped with sensors including video cameras, radar, and lidar, a laser-based cousin of radar. Given that Herzberg was dressed in dark clothes at night, the video cameras might have had a tough time: they work better with more light. But the other sensors should have functioned well during the nighttime test.

But now, Uber has reportedly discovered that the fatal crash was likely caused by a software bug in its self-driving car technology, according to what two anonymous sources told The Information.

Uber’s autonomous software detects objects in the road, and its sensitivity can be fine-tuned so that the car responds only to true threats and ignores the rest. A plastic bag blowing across the road, for example, should be dismissed as a false positive, not something to slow down or brake for.

The sources who talked to The Information said that Uber’s sensors did, in fact, detect Herzberg, but the software incorrectly identified her as a “false positive” and concluded that the car did not need to stop for her.

The Information’s Amir Efrati reported on Monday that self-driving car technologies have to make a trade-off: either the car rides slow and jerky as it brakes or swerves for objects that aren’t real threats, or it delivers a smoother ride that runs the risk of the software dismissing genuine obstacles, potentially leading to the catastrophic conclusion that a pedestrian isn’t a real object.
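To make the trade-off concrete, here is a purely hypothetical sketch: a braking gate driven by a single detector-confidence threshold. Nothing below reflects Uber’s actual software; the labels and scores are invented for illustration.

```python
# Hypothetical detections with invented confidence scores.
detections = [
    {"label": "plastic bag",        "confidence": 0.35},
    {"label": "lane-divider pole",  "confidence": 0.45},
    {"label": "pedestrian w/ bike", "confidence": 0.55},
]

def braking_plan(threshold):
    # Brake only for detections scoring at or above the threshold.
    return [(d["label"], d["confidence"] >= threshold) for d in detections]

for threshold in (0.30, 0.60):
    print(f"threshold={threshold}:")
    for label, brake in braking_plan(threshold):
        print(f"  {'BRAKE for' if brake else 'ignore   '} {label}")

# threshold=0.30 brakes for everything: a slow, jerky ride.
# threshold=0.60 ignores the bag and the pole -- and the pedestrian.
```

The uncomfortable point the sketch makes is that a single dial trades ride comfort directly against the chance of waving a real obstacle through.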

Efrati pointed to GM’s Cruise self-driving cars as being prone to falling on the overly cautious end of the spectrum, as they “frequently swerve and hesitate.”

[Cruise cars] sometimes slow down or stop if they see a bush on the side of a street or a lane-dividing pole, mistaking it for an object in their path.

In March, Uber settled with Herzberg’s family, avoiding a civil suit and thereby sidestepping questions about liability in the case of self-driving cars, particularly after they’re out of the test phase and operated by private citizens.

Arizona halted all of Uber’s self-driving tests following the crash. Other companies, including Toyota and Nvidia, voluntarily suspended autonomous vehicle tests in the wake of Herzberg’s death, while Boston asked local self-driving car companies to halt ongoing testing in the Seaport District.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Jmu_csFZKAM/

Critical bug in 7-Zip – make sure you’re up to date!

Two months ago, a cybersecurity researcher who calls himself LANDAVE, or just Dave for short, found a security vulnerability in the handy, popular, free utility 7-Zip.

7-Zip is a sort-of Swiss Army Officer’s Knife of file decompression tools that many users install as one of their main add-on Windows apps.

It not only supports its own brand of mega-compressed archive files with the extension .7z, but also knows how to extract data from most other archive formats, too.

It handles conventional ZIPs, gzip and bzip2 files, Unix tar and cpio archives, Windows CAB and MSI files, Macintosh DMG files, and CD images (ISOs), among many others, and offers an optional two-pane file management interface that’s perfect for old-school fans of Midnight Commander.

7-Zip also includes support for RAR files, and that’s where the vulnerability came from, apparently inherited from open source code from the standalone UnRAR utility.

Now that 7-Zip has been patched against this bug, dubbed CVE-2018-10115, LANDAVE has published the details of how he found it, and what was involved in figuring out how severe the bug might be.

According to Dave, the problem arose from an all-too-common conflict between complexity and security.

The UnRAR code is complex because it supports many different varieties of compression level and format, including a special sort of compression system that strings multiple files together before compressing them, which often squeezes more bytes out of the compressed data than squashing each file independently.

The RAR file format includes this so-called solid option because it can improve compression by allowing repeated strings of characters to be matched even if they’re in two or more different files, instead of restricting all repeated data fragments to one file. When you have many small but similar files in an archive, for example, this often results in many more repeated string matches being found, thus boosting the compression ratio.
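RAR’s compressor isn’t DEFLATE, but the payoff of solid archiving is easy to demonstrate with Python’s standard zlib module: one compression stream over several similar files beats compressing each file on its own. The sample “files” here are invented.

```python
import zlib

# Three small, similar "files", e.g. near-identical config files.
files = [b"user=alice\nhost=example.com\ntimeout=30\nretries=5\n",
         b"user=bob\nhost=example.com\ntimeout=30\nretries=5\n",
         b"user=carol\nhost=example.com\ntimeout=30\nretries=5\n"]

# Per-file compression: each stream starts cold, so strings repeated
# across files can never be matched.
separate = sum(len(zlib.compress(f, 9)) for f in files)

# "Solid"-style compression: one stream over the concatenation, so a
# string seen in the first file helps compress the later ones too.
solid = len(zlib.compress(b"".join(files), 9))

print(f"separate: {separate} bytes, solid: {solid} bytes")
```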

What Dave discovered is that the UnRAR decompression code, as used by 7-Zip, didn’t bother to configure itself safely when you started using it – in other words, your software could innocently lead to a catastrophic failure in the RAR code itself.

That’s a bit like a car rental company hiring out its vehicles in a completely unknown condition each time, relying instead on every driver doing a full and complete check of their own, and correctly fixing any faults, before driving off.

Simply put, a raft of uninitialised variables in the UnRAR code opened the door to the possibility of creating a booby-trapped archive file that would trick the UnRAR code into executing code hidden in the data part of the booby-trapped file.

Remote code execution

Code that sneaks in by masquerading as data is known as shellcode.

Bugs that allow shellcode to be executed are known as remote code execution vulnerabilities (RCEs), because a crook can use a malicious file, sent in from outside, to run malware on your computer even if all you do is to open the booby-trapped file and look at it.

No download dialogs, no pop-up warnings, no “Are you sure?” prompts.

To cut a long story short, Dave didn’t just figure out a vulnerability that was theoretically exploitable; he also created a proof-of-concept (PoC) exploit showing how to craft a RAR file that, when opened, would sneakily and unexpectedly launch the Calculator app.

Generally speaking, if a PoC can pop up CALC.EXE without asking, it could be modified to run any other command, including malware, invisibly to the user.

ASLR would have helped

Dave’s task of building a working exploit was made much easier because the apps that ship with the 7-Zip software had been created without support for address space layout randomisation (ASLR).

That means the 7-Zip tools would always load into the same memory addresses, simplifying exploits because attackers could predict in advance which handy fragments of executable code would already be loaded, and where, every time you ran the software.

The good news is that Dave managed to persuade the creator of 7-Zip not only to patch the uninitialised variable vulnerability (CVE-2018-10115) in the product, but also to build the updated version with ASLR enabled.

Those changes came out about a week ago in 7-Zip version 18.05.

What to do?

  • If you’re a 7-Zip user, make sure you have the latest version installed.
  • If you’re a Windows programmer, don’t ship any software that doesn’t support ASLR. (Using Visual Studio, compile with the /DYNAMICBASE option; a quick way to audit binaries for the flag is sketched after this list.)
  • If you’re a programmer of any sort, don’t leave anything to chance when creating new objects – initialise all data fields with safe and sensible values.
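On the second point, here’s one way you might audit the binaries you ship for the ASLR flag, sketched in Python with the third-party pefile module (pip install pefile), which is an assumption on our part rather than anything Dave used. The constant is IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE from the PE format specification; the paths you pass in are your own.

```python
import sys
import pefile  # third-party: pip install pefile

# PE optional-header flag set by linking with /DYNAMICBASE.
IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040

def has_aslr(path):
    pe = pefile.PE(path, fast_load=True)
    return bool(pe.OPTIONAL_HEADER.DllCharacteristics
                & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE)

for path in sys.argv[1:]:
    print(f"{path}: ASLR {'enabled' if has_aslr(path) else 'MISSING'}")
```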

For advice on what mitigations to compile into your Windows C and C++ code, take a look at Microsoft’s own Security best practices for C++ (Visual Studio 2015 and later).

Building all the recommended safeguards into your code won’t help you avoid bugs and security vulnerabilities in the first place.

You’ll still need to make every effort to program securely, of course – but the safeguards will help to reduce the exploitability of any vulnerabilities you might overlook.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/gxtSyLS95Gc/

Second wave of Spectre-like CPU security flaws won’t be fixed for a while

The new bunch of Spectre-like flaws revealed last week won’t be patched for at least 12 days.

German outlet Heise, which broke news of the eight Spectre-like vulnerabilities last week, has now reported that Intel wants disclosure of the flaws delayed until at least May 21.

“Intel is now planning a coordinated release on May 21, 2018. New microcode updates are due to be released on this date”, Jürgen Schmidt reported on May 7.

Last week, Heise noted that one participant in the planned coordinated release would include a Google Project Zero disclosure, which as far as The Register can discern has not yet happened.

Heise added that the bug affects any Core-i (and their Xeon derivatives) processors using microcode written since 2010; and Atom-based processors (including Pentium and Celeron) since 2013.

If disclosure and patches arrive in May, they won’t complete Intel’s response to the bugs, Schmidt reported. Further patches, tentatively scheduled for the third quarter, will be needed to protect VM hosts from attacks launched from guests.

In addition to microcode fixes from Intel, operating system-level patches will also be necessary.

Ever since the original Meltdown and Spectre bugs were confirmed in January, it’s become clear that speculative execution has been of interest to researchers for some time.

We noted in January 2018 that researcher Anders Fogh had written on abusing speculative execution in July 2017, and shortly after the Spectre/Meltdown story blew up in January, researchers Giorgi Maisuradze and Christian Rossow from German research group CISPA published a broad analysis of speculative execution based on 2017 work separate to the Meltdown/Spectre research.

In April, Intel said some Spectre bugs were not fixable in some older architectures.

Vulture South asked Intel to comment on the Heise report, and received a non-response saying that the company takes security very, very seriously and is working with anyone who can or should help to fix things. “We believe strongly in the value of coordinated disclosure and will share additional details on any potential issues as we finalize mitigations,” the company said. “As a best practice, we continue to encourage everyone to keep their systems up-to-date.”

Thanks for that last bit of advice, Intel. We can’t imagine anyone thought of it before. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/09/spectr_ng_fix_delayed/