
2018: The Year Machine Intelligence Arrived in Cybersecurity

Machine intelligence, in its many forms, began having a significant impact on cybersecurity this year – setting the stage for growing intelligence in security automation for 2019.

Machine intelligence has become a technology player in fields from medical research to financial services. This year it began to make its presence felt in cybersecurity. The initial inroads have been tightly targeted, but some experts say more substantial uses are almost inevitable.

“Intelligence” is a word heavily freighted with meaning in cybersecurity technology because it covers a wide variety of techniques and products. Expert systems, machine learning, deep learning, and artificial intelligence are all represented in the whole, with each being used and promoted by different vendors and service organizations.

Antivirus protection is one of the tasks to which companies are applying intelligence. “Intelligent AV is all about catching more malware, and it really starts with the history of malware detection,” says Corey Nachreiner, CTO of WatchGuard Technologies. He describes a series of techniques that look not at code patterns or signatures but at behavioral markers for code that is run in a protected environment. “They can change the way the binary could look, but they can’t change what they have to do on your computer to do their bad thing,” he says.

In looking for behavioral characteristics and matching them with code and other patterns, machine intelligence can discover patterns involving many more factors than a human could reasonably consider. And in doing so, it also finds related vulnerabilities faster. “What machine learning has really given us is the ability to predict patterns before they actually happen,” Nachreiner says.
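
To make the pattern-matching idea concrete, here is a heavily simplified sketch: a perceptron trained on a handful of sandbox-observed behaviors. Every feature name and training sample below is invented for illustration; real intelligent AV engines use far richer telemetry and far more sophisticated models.

```python
# Toy perceptron that flags binaries by runtime behavior rather than code
# signatures. Feature names and training data are illustrative inventions,
# not real malware telemetry.

FEATURES = ["writes_startup_key", "disables_backups", "opens_c2_socket", "encrypts_user_files"]

# 1 = behavior observed in the sandbox; label 1 = malicious, 0 = benign
TRAIN = [
    ([1, 1, 1, 0], 1),
    ([1, 0, 1, 1], 1),
    ([0, 1, 1, 1], 1),
    ([0, 0, 1, 0], 0),   # a chat client also opens sockets
    ([0, 0, 0, 1], 0),   # a backup tool also touches user files
    ([1, 0, 0, 0], 0),
]

def train(samples, epochs=100):
    """Standard perceptron rule: nudge weights on every misclassification."""
    w = [0.0] * len(FEATURES)
    b = 0.0
    for _ in range(epochs):
        updated = False
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if pred != y:
                for i, xi in enumerate(x):
                    w[i] += (y - pred) * xi
                b += (y - pred)
                updated = True
        if not updated:   # converged: every sample classified correctly
            break
    return w, b

def is_malicious(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

w, b = train(TRAIN)
verdict = is_malicious(w, b, [1, 0, 1, 1])   # ransomware-like behavior combo
```

The point of the sketch is the one the article makes: the classifier keys on what the code *does* (the feature vector), so repacking the binary changes nothing it sees.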

Intelligence is not only being applied to antivirus products; it is also finding its way into security services. “The best use of AI is to give security admins the ability to deconflict tasks – to know which, out of scores of possibilities, are critical and will have the greatest impact,” says Ann Johnson, corporate vice president in the Microsoft Cybersecurity Solutions Group. She points out the critical requirement for this that comes from the sheer volume of security incidents. “Microsoft sees 6.5 trillion security signals a day. AI helps rationalize them down to a quantity that humans can deal with,” she says.
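
That rationalization step can be sketched as a toy triage function that deduplicates raw signals into incidents and ranks them by a severity-weighted score. The field names and severity weights are invented for illustration, not Microsoft's actual pipeline.

```python
# Hypothetical signal triage: collapse raw security signals into deduplicated
# incidents and rank them so analysts see the critical few first.
from collections import Counter

SEVERITY = {"failed_login": 1, "malware_detected": 8, "credential_theft": 10}

def triage(signals, top_n=2):
    # Deduplicate: one incident per (host, kind) pair, counting occurrences.
    counts = Counter((s["host"], s["kind"]) for s in signals)
    incidents = [
        {"host": host, "kind": kind, "count": n, "score": n * SEVERITY[kind]}
        for (host, kind), n in counts.items()
    ]
    # Rank by severity-weighted volume and keep only the top few.
    incidents.sort(key=lambda i: i["score"], reverse=True)
    return incidents[:top_n]

signals = (
    [{"host": "web01", "kind": "failed_login"}] * 500
    + [{"host": "db02", "kind": "credential_theft"}] * 3
    + [{"host": "pc07", "kind": "malware_detected"}] * 2
)
worst = triage(signals)
```

Even this crude version turns 505 raw signals into two ranked incidents; the machine-learning systems the article describes do the same compression with learned rather than hand-set weights.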

As for the effectiveness of intelligence in dealing with these threats, Johnson points to the emergence of the Smoke Loader credential stealer. “It was blocked on Azure within milliseconds because the AI saw and recognized the pattern,” she says.

That effectiveness in recognizing and acting on patterns will be used in more products and services in the future, many experts say. “Machines are really good at looking at vast amounts of data and making sense of it all in a statistical way, and humans are not,” says Clarence Chio, CTO and co-founder of Unit21 and co-author of “Machine Learning and Security.”

He points out that the vast majority of intelligence being used in security is “machine learning” rather than “artificial intelligence.” That’s because a defining characteristic of artificial intelligence is that it can produce an output developers never considered, rather than always creating a conclusion within a known range of responses.

“I think the real challenge in industry is not really the maturity in developing such systems, but to really hone the expectations of people using such things,” Chio says.

That expectation will evolve and develop in the coming year, according to many experts. “What it’s good at right now is kind of removing all the noise and the grunt work that security analysts or professionals have to deal with,” Nachreiner says. “[Still], we’re a long way from totally automating out the need for some type of security professional that occasionally has to make a decision.”


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/network-and-perimeter-security/2018-the-year-machine-intelligence-arrived-in-cybersecurity/d/d-id/1333556?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Toxic Data: How ‘Deepfakes’ Threaten Cybersecurity

The joining of ‘deep learning’ and ‘fake news’ makes it possible to create audio and video of real people saying words they never spoke or things they never did.

“Fake news” is one of the most widely used phrases of our times. Never has there been such focus on the importance of being able to trust and validate the authenticity of shared information. But its lesser-understood counterpart, “deepfake,” poses a much more insidious threat to the cybersecurity landscape — far more dangerous than a simple hack or data breach.

Deepfake activity was mostly limited to the artificial intelligence (AI) research community until late 2017, when a Reddit user who went by “Deepfakes” — a portmanteau of “deep learning” and “fake” — started posting digitally altered pornographic videos. This machine learning technique makes it possible to create audio and video of real people saying and doing things they never said or did. But Buzzfeed brought more visibility to Deepfakes and the ability to digitally manipulate content when it created a video that supposedly showed President Barack Obama mocking Donald Trump. In reality, deepfake technology had been used to superimpose President Obama’s face onto footage of Jordan Peele, the Hollywood filmmaker.  

This is just one example of a new wave of attacks that are growing quickly. They have the potential to cause significant harm to society overall and to organizations within the private and public sectors because they are hard to detect and equally hard to disprove.

The ability to manipulate content in such unprecedented ways generates a fundamental trust problem for consumers and brands, for decision makers and politicians, and for all media as information providers. The emerging era of AI and deep learning technologies will make the creation of deepfakes easier and more “realistic,” to an extent where a new perceived reality is created. As a result, the potential to undermine trust and spread misinformation increases like never before.

To date, the industry has been focused on the unauthorized access of data. But the motivation behind and the anatomy of an attack have changed. Instead of stealing information or holding it ransom, a new breed of hackers now attempts to modify data while leaving it in place.

One study from Sonatype, a provider of DevOps-native tools, predicts that, by 2020, 50% of organizations will have suffered damage caused by fraudulent data and software. Companies today must safeguard the chain of custody for every digital asset in order to detect and deter data tampering.

The True Cost of Data Manipulation
There are many scenarios in which altered data can serve cybercriminals better than stolen information. One is financial gain: A competitor could tamper with financial account databases, using a simple attack to multiply all of the company’s accounts receivable by a small random number. Such a seemingly small variability in the data could go unnoticed by a casual observer, yet it could completely sabotage earnings reporting, which would ruin the company’s relationships with its customers, partners, and investors.

Another motivation is changing perception. Nation-states could intercept news reports that are coming from an event and change those reports before they reach their destination. Intrusions that undercut data integrity have the potential to be a powerful arm of propaganda and misinformation by foreign governments.

Data tampering can also have a very real effect on the lives of individuals, especially within the healthcare and pharmaceutical industries. Attackers could alter information about the medications that patients are prescribed, instructions on how and when to take them, or records detailing allergies.

What do organizations need to consider to ensure that their digital assets remain safe from tampering? First, software developers must focus on building trust into every product, process, and transaction by looking more deeply into the enterprise systems and processes that store and exchange data. In the same way that data is backed up, mirrored, or encrypted, it continually needs to be validated to ensure its authenticity. This is especially critical if that data is being used by AI or machine learning applications to run simulations, to interact with consumers or partners, or for mission-critical decision-making and business operations.
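
A minimal sketch of that continuous-validation idea, using a detached SHA-256 digest as the "seal" (the vendor quoted in this article anchors seals to a blockchain; a plain in-memory dictionary stands in for that tamper-evident registry here):

```python
# Minimal sketch of continuous data validation: record a SHA-256 digest
# ("seal") when an asset is written, then re-verify it before the data is
# used. A production system would anchor seals somewhere tamper-evident;
# here the registry is just a dict.
import hashlib

seals = {}

def seal(asset_id: str, data: bytes) -> str:
    digest = hashlib.sha256(data).hexdigest()
    seals[asset_id] = digest
    return digest

def verify(asset_id: str, data: bytes) -> bool:
    return seals.get(asset_id) == hashlib.sha256(data).hexdigest()

record = b'{"account": "ACME", "receivable": 1200}'
seal("invoice-17", record)

tampered = b'{"account": "ACME", "receivable": 1212}'   # quiet 1% inflation
```

The attack scenario described earlier, nudging receivables by a small factor, is invisible to a casual reader of the record but flips the digest entirely, which is exactly why verification has to happen on every read, not just at backup time.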

The consequences of deepfake attacks are too large to ignore. It’s no longer enough to install and maintain security systems in order to know that digital assets have been hacked and potentially stolen. The recent hacks of Marriott and Quora are the latest additions to the growing list of companies that have had their consumer data exposed. Now, companies also need to be able to validate the authenticity of their data, processes, and transactions.

If they can’t, it’s toxic.


Dirk Kanngiesser is the co-founder and CEO of Cryptowerk, a provider of data integrity solutions that make it easy to seal digital assets and prove their authenticity at scale using blockchain technology. With more than 25 years of technology leadership experience, Dirk has … View Full Bio

Article source: https://www.darkreading.com/application-security/toxic-data-how-deepfakes-threaten-cybersecurity-/a/d-id/1333538?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

IoT Bug Grants Access to Home Video Surveillance

Due to a shared Amazon S3 credential, all users of a certain model of the Guardzilla All-In-One Video Security System can view each other’s videos.

A vulnerability in the Guardzilla All-In-One Video Security System, an IoT-enabled home video surveillance system, lets all users view one another’s saved surveillance footage due to the design and implementation of Amazon S3 credentials inside the camera’s firmware.

Security researchers found the bug (CVE-2018-5560) during an event held by 0DayAllDay and reported it to Rapid7 for coordinated disclosure. Rapid7 published the flaw today, 60 days after it first attempted to contact the vendor. Multiple coordination efforts received no response.

This vulnerability is an issue of CWE-798: Use of Hard-coded Credentials, 0DayAllDay researchers report. Guardzilla’s system uses a shared Amazon S3 credential for storing users’ saved videos. When they investigated the access rights given to the embedded S3 credentials, researchers found they provide unlimited access to all S3 buckets provisioned for the account.

As a result, all people who use Guardzilla’s system for home surveillance can view one another’s video data in the cloud. Once the password is known, any unauthenticated person can access and download stored files and videos in buckets linked to the account.
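
The shape of the flaw can be illustrated with a toy model: one shared credential that grants access to every bucket, versus per-device tokens scoped to a single prefix. The bucket paths, key, and token scheme below are all invented for illustration; the real system involved actual Amazon S3 credentials embedded in the camera firmware.

```python
# Illustrative model of CWE-798 (hard-coded credentials) in a cloud-camera
# backend. All names and values are invented stand-ins.
BUCKETS = {
    "camera-videos/device-001/clip1.mp4": b"<video bytes>",
    "camera-videos/device-002/clip1.mp4": b"<video bytes>",
}

# Anti-pattern: a single key baked into every unit's firmware.
SHARED_KEY = "AKIA-SHARED-FIRMWARE-KEY"

def fetch_shared(key, path):
    # Any holder of the shared key can read ANY device's footage.
    if key == SHARED_KEY:
        return BUCKETS.get(path)
    return None

# Safer shape: each device is provisioned a token valid only for its prefix.
TOKENS = {"tok-001": "camera-videos/device-001/"}

def fetch_scoped(token, path):
    prefix = TOKENS.get(token)
    if prefix and path.startswith(prefix):
        return BUCKETS.get(path)
    return None
```

With the shared key, extracting one camera's firmware compromises every customer; with scoped tokens, the blast radius of a leaked credential is a single device.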

Researchers tested only Model #GZ521W of the Guardzilla Security Video System and do not know whether other models are affected by the same bug, Rapid7 reports. Until a patch is available, users should ensure that the device’s cloud-based data storage functions are turned off.

Read more details in Rapid7’s blog here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/iot-bug-grants-access-to-home-video-surveillance/d/d-id/1333557?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Attackers Use Google Cloud to Target US, UK Banks

Employees at financial services firms hit with an email attack campaign abusing a Google Cloud storage service.

A malicious email campaign has been found abusing a Google Cloud Storage service to host a payload sent to employees of financial services organizations, Menlo Labs researchers report.

The threat appears to have been active in the US and UK since August 2018. Victims receive emails containing links to archive files; researchers say all instances in this particular campaign have been .zip or .gz files. In every case the payload is hosted on storage.googleapis.com, a legitimate Google Cloud Storage domain, so the link appears trustworthy even though it leads to malware.

Attackers often use this domain to host payloads because it’s trusted and likely to bypass security controls in commercial threat detection products. These actors may have chosen bad links in lieu of malicious attachments because many email security products are designed to detect files and only pick up on malicious URLs if they’re already in their threat repositories.

The use of a link resembling Google’s cloud storage service is a form of “reputation jacking,” a tactic in which attackers abuse well-known hosting services to evade detection. It’s a growing trend, researchers say: In its annual analysis of the top 100,000 domains as ranked by Alexa, Menlo Labs found 4,600 phishing sites that used legitimate hosting services.
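
One plausible gateway heuristic against this pattern is sketched below: flag links whose host is a public object-storage domain and whose path ends in an archive extension. The domain and extensions mirror the campaign described above; a production mail filter would combine many more signals, and this check alone would produce false positives on legitimate file-sharing.

```python
# Hedged sketch of a single URL heuristic, not a complete email filter.
from urllib.parse import urlparse

STORAGE_HOSTS = {"storage.googleapis.com"}   # trusted hosting abused for payloads
ARCHIVE_EXTS = (".zip", ".gz")               # delivery formats seen in the campaign

def suspicious_link(url: str) -> bool:
    parts = urlparse(url)
    return parts.netloc in STORAGE_HOSTS and parts.path.lower().endswith(ARCHIVE_EXTS)
```

The design point matches the researchers' observation: because the domain itself is reputable, detection has to key on the combination of host and payload type rather than on domain reputation alone.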

Read more details here.


Article source: https://www.darkreading.com/threat-intelligence/attackers-use-google-cloud-to-target-us-uk-banks/d/d-id/1333555?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

6 Ways to Anger Attackers on Your Network

Because you can’t hack back without breaking the law, these tactics will frustrate, deceive, and annoy intruders instead.

(Image: ls_design - stock.adobe.com)

When you see an attacker on your network, it’s understandable to want to give them a taste of their own medicine. But how can you effectively anger intruders when “hacking back” is illegal?

In fact, the biggest legal risks are violations of the Computer Fraud and Abuse Act (CFAA), says Jason Straight, senior vice president and chief privacy officer at UnitedLex. And while some businesses are dabbling in this kind of illegal activity, he advises against it.

“Make no mistake: It is happening. Companies are hacking back,” he explains, and much of their activity is arguably in violation of the CFAA. That said, he isn’t aware of any prosecutions under CFAA against organizations engaged in what is often called “active defense activities.”

Legal trouble aside, getting into a back-and-forth with attackers is dangerous, Straight cautions. “Even if you’re really, really good and know what you’re doing, the best in the business … will tell you it’s very hard to avoid causing collateral damage,” he explains. Chances are good your adversaries will see your “hack back” and launch a more dangerous attack in response.

The worst thing you can do is go after the wrong party, the wrong network, or the wrong machines, he continues. Most hackers aren’t using their own equipment when they attack.

“There are times when I have really wanted to strike back, but you can’t and you don’t,” says Gene Fredriksen, chief information security strategist for PCSU. You can shut them off, blacklist their IP addresses, and do things to slow them down if your team uses a SIEM system. There are several steps you can take to anger attackers without actively targeting them in response.

The idea is to get the bad guy to think twice, he explains, and let them know you’re serious.
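
The blocklisting tactic mentioned above can be sketched in a few lines: check each inbound address against a list of CIDR ranges before serving the request. The ranges here are documentation-reserved placeholders, not real attacker infrastructure.

```python
# Tiny sketch of an IP blocklist check using only the standard library.
import ipaddress

BLOCKLIST = [ipaddress.ip_network(n) for n in ("203.0.113.0/24", "198.51.100.17/32")]

def is_blocked(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BLOCKLIST)
```

As the experts in this piece note, this slows attackers down rather than stops them, since most are working from disposable or compromised machines, which is also why retaliating against those addresses is so likely to hit the wrong party.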

Here, security experts cite the most effective ways they’ve found to frustrate, deceive, and annoy attackers without risking legal consequences. If you have a tactic they didn’t list, please share it in the comments.

 

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/perimeter/6-ways-to-anger-attackers-on-your-network/d/d-id/1333550?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

3 Steps for Cybersecurity Leaders to Bridge the Gender Equality Gap

By encouraging female participation through education and retaining this interest through an inclusive culture and visible role models, we can begin to close the skill and gender gap in cybersecurity.

Security is increasingly being seen as a business enabler. This has created a wealth of jobs in the cybersecurity industry for technical-minded problem solvers. With the threat and IT landscape changing constantly, a diversity of opinions, perspectives, and skills are crucial to stay ahead of the curve in adopting new productivity tools and developing new cybersecurity strategies.

New Perspectives in Cybersecurity
There are three major trends at play in cybersecurity that are creating jobs and driving the need for personnel with a diverse set of skills.

First, cybersecurity is becoming a leadership priority. Organizations finally understand how detrimental a cyberattack can be for business and how likely these attacks are to occur despite current defense strategies. Organizations are realizing that effective cybersecurity must stem from the C-suite and leadership teams rather than only IT.

Next is the rapid pace of digital transformation, which requires amplified security. To achieve this, security must be repositioned as a business enabler. Consequently, security-focused positions, such as the CISO, have evolved beyond security and into the realm of business enablement. CISO responsibilities now include security, compliance, and the integration of innovative digital systems. CISOs also work with business initiatives to ensure objectives can be achieved with minimal risk and zero hindrance from security requirements. The next generation of cybersecurity leaders must have not only technical skills but also soft skills, such as strategy, communication, and leadership.

Finally, there is the expansion of Internet of Things and operational technology devices in corporate networks, which greatly increases the attack surface while introducing new vulnerabilities and risks. The volume of data brought in through the applications on these devices alone can be overwhelming, and this is compounded by the fact that much of it is encrypted, straining inspection performance even more. Many of these devices are inherently insecure, requiring additional security measures. Organizations need problem solvers who can enable the use of these devices without creating security bottlenecks.

The Cybersecurity Skills Gap
As security teams aim to fill the positions required by these trends, they face the cybersecurity skills gap. In 2018, 51% of cybersecurity and IT professionals stated their organization had a problematic shortage of cybersecurity skills. This shortage is especially pronounced when it comes to hiring women into the field. Today, women account for just 11% of the cybersecurity workforce and are five times less likely to hold leadership positions than men.

To combat the skills gap, organizations need to focus on promoting the cybersecurity field to underrepresented groups, such as women, who can offer a new perspective on how to solve problems.

As organizations train the next generation of cybersecurity professionals, they have an opportunity to narrow the industry’s gender gap by expanding their training offerings, opportunities, and strategies; creating cultures that value women in IT; and encouraging women already in the IT field to stay.

There are several steps organizations can take to promote the cybersecurity industry across various demographics and train the next generation of talent.

Engage and teach students: Organizations should support, fund, and offer STEM (science, technology, engineering, and math) training programs to students, starting at a young age, with inclusive language geared to attract both young men and women. Making these programs available in primary and secondary schools, and educating school counselors and advisers on the need for women in this field, can alert female students to STEM opportunities.

These programs and initiatives should continue throughout higher education. Cybersecurity professionals, especially female leaders, should give career talks and presentations that encourage women to participate in IT. Organizations can also create internship opportunities for students. These opportunities should not target just computer science and IT majors but also business majors and other programs where women represent a higher proportion of students. The analytical, strategic, and soft skills they learn in these programs are applicable in the IT and cybersecurity fields.

Professional mentorship: For women already in the IT and business space, organizations should offer mentorships that allow women to work with other women enjoying success in their careers. Major industry conferences must likewise make a concerted effort to include women in panels and keynotes, create opportunities for women to network with male and female executives and industry leaders, and support women-led sessions that provide the examples and advice that help women navigate and succeed in the cybersecurity industry.

Enable career changes: Organizations should also offer opportunities for women looking to switch careers. For example, there are thousands of women who have served in the military, often in technical or leadership roles, who are now looking for new careers as civilians.

Final Thoughts
As organizations set out to train the next generation of security leaders to fill the skills gap and enable digital transformation, they also have the opportunity to tap into a largely under-represented talent pool that already has many of the skills required of today’s IT and cybersecurity leaders.

By encouraging female participation through education and retaining this interest through an inclusive culture and visible role models, we can begin to close the skill and gender gap in cybersecurity.


Renee Tarun is the Vice President of Information Security at Fortinet. Previously, she served for nine years as a manager of the National Security Agency (NSA). She received her master’s degree in computer/information technology administration and management from the … View Full Bio

Article source: https://www.darkreading.com/careers-and-people/3-steps-for-cybersecurity-leaders-to-bridge-the-gender-equality-gap/a/d-id/1333525?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Spending Spree: What’s on Security Investors’ Minds for 2019

Cybersecurity threats, technology, and investment trends that are poised to dictate venture capital funding in 2019.

The new year will bring waves of consolidation and innovation to the cybersecurity market as investors decide which startups will provide the strongest defenses to businesses in need of them.

Global spending on security products and services will close out the year in excess of $114 billion, marking a 12.4% increase from 2017, Gartner research indicates. Next year, the security market is expected to grow 8.7% and hit $124 billion as security leaders aim to use technology to help organizations become more competitive, addressing a broad landscape of risks and varying corporate needs.

As we look to 2019, investors are weighing these risks and needs as they allocate funds toward the companies and technologies holding the most promise for next year. But before we think about the year ahead, let’s first recap the year we’re leaving behind.

A Look Back: 2018 in Hindsight
According to Hank Thomas, CEO and partner at Strategic Cyber Ventures (SCV), 2018 “was really about people playing catch-up with the attack surface that had gotten out of control.” 

The top questions companies were asking this past year: “Where is my data?” “What is my most important data?” “Where does my network begin and end?” “What do I need to protect?” “What does my rapidly expanding attack surface look like, and how do I protect it?”

Security was top-of-mind for private equity firms, which spent 2018 building out their infosec portfolios. Thoma Bravo, for example, in May took a majority stake in LogRhythm, a security information and event management (SIEM) company. It later bought security firm Imperva for $2.1 billion in October, which was followed by a $950 million acquisition of Veracode the next month.

The trend affected both large and early-stage companies as private equity players were willing to consider startups in their B or C funding rounds and bring them into the fold, explains Jeff Pollard, Forrester vice president and principal analyst serving security and risk professionals.

“It definitely appears the private equity firms … they’ve figured out a way to make money off cybersecurity,” he explains. While their end game is still “a bit up in the air,” he also expects the trend of private equity cybersecurity investment to continue into 2019.

This year also saw security startups exit as bigger firms snapped them up. Automation and analytics were hot technologies for giants including Microsoft and Amazon, neither of which is a traditional security firm but both of which are interested in integrating analytics into their feature sets. Traditional firms, meanwhile, invested to address weak spots like identity, says Pollard: Cisco’s purchase of Duo Security for $2.35 billion was one of the giant’s largest security deals to date.

Investors will be watching as larger firms aim to shore up defenses. Cloud security, for example, is a top priority for Palo Alto Networks, which in March acquired Evident.io for $300 million to strengthen its cloud offerings. Later in the year, it doubled down with a $173 million purchase of RedLock.

Future Funding: What’s Coming in 2019
Thinking about next year, Pollard expects “a wave of innovation and consolidation” as startups founded to build specific solutions see their technologies integrated into broader platforms.

“Whenever you have a flurry of startup activity, what you find is a lot of vendors trying to solve very similar problems,” he explains. What happens in the enterprise is these capabilities make more sense as features of bigger products. The endpoint space, for example, has a wealth of advanced technology and has experienced much consolidation as firms aim to offer a suite instead of a single tool.

Which technologies are investors thinking about in 2019? Unlike in years past, artificial intelligence (AI) and machine learning will not set startups apart, Pollard says. In 2018 we saw “a bit of a swerve,” and much of the allure of AI and machine learning disappeared as both became expected features in other technologies. They’re not nice-to-have, but must-have, additions.

“It’s not that machine learning and artificial intelligence will go away – it’s just a default expectation,” he explains. “You’re not going to be funded because you do cool artificial intelligence and machine learning for security. The people who make more sophisticated use of that and show how it makes a solution will be the organizations that can power forward.”

SCV’s Thomas foresees the rise of different up-and-coming security products that aren’t specifically built for security but have many applications in the space. Computer vision technology, a form of AI, is one example and has varying use cases, from facial recognition to collaboration tools. It can also be used to identify “deep fake” videos that can be used to spread disinformation.

This is an area SCV has been closely considering, Thomas says. Deep fake videos are realistic videos that circulate online and can prompt corporations to ask security teams to react. He describes it as similar to fake news but in the form of an incident that could affect a major organization’s security posture. A hacker group that wanted to add a layer of obfuscation and hide their activity could use a deep fake video to distract security teams from their work.

Threats are “potentially catastrophic” and could have major security implications, Thomas adds. SCV has been looking at tech that can confirm with high probability whether content is fake and untangle the “spiderweb of disinformation” online. Corporate America might have to get into the business of identifying fake news as it pertains to network threat activity, he explains.

“A Fortune 100 company could save a lot of money on a threat that’s not real,” Thomas says. “It’s going to be important they have a capability to confirm or deny these threats if it’s gonna be in the public domain.”

He also expects identity and access management (IAM) will reach a new level in 2019, with different forms of multifactor authentication. The single sign-on password “is mostly dead” in the business world, Thomas continues, and new forms of authentication will surface. A number of companies have started to use computer vision for facial recognition on-premise, he adds.
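
One of the multifactor building blocks alluded to above is the time-based one-time password (TOTP, standardized in RFC 6238): an HMAC-SHA1 over a 30-second time counter, dynamically truncated to a short numeric code. The secret below is a demo value, and real deployments layer this on top of a password or biometric factor rather than replacing them.

```python
# Minimal TOTP (RFC 6238) sketch: HMAC-SHA1 over the number of 30-second
# steps since the Unix epoch, truncated to a six-digit code.
import hmac
import hashlib
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    counter = struct.pack(">Q", unix_time // step)        # 8-byte big-endian counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # RFC 4226 dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Both ends compute the same code as long as their clocks agree to within the time step, which is why verifiers typically accept a small window of adjacent steps.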

Pollard anticipates investment in tools designed to bridge the gap between security and business teams. New solutions will emerge to provide security leaders with metrics, dashboards, and visualizations so they can better present security-related data to stakeholders and help enterprise employees view security in a different way. He also expects a growth in services, which he says used to be less attractive to investors but have since seen positive growth.

“It definitely looks like security budgets, and people buying security technologies, are definitely going up,” he says. “That’s also leading to the investment side going up as well.”

New Solutions for New (and Old) Problems
As security budgets rise, so will investments, Thomas says. Many companies still don’t know what they need to defend, and their networks are expanding as a result of new trends such as the Internet of Things. Reality will set in during the upcoming year, he adds.

“They have been forced to expand in areas they didn’t want to go into, [and] now they’re forced to defend more territory than they ever planned on defending,” Thomas explains.

Still, the security industry continues to deal with the same problems it dealt with a decade ago, says Pollard, and big security players haven’t sufficiently done their jobs to solve them.

“We need innovation,” he admits. The market needs new people and talent, he continues, and there is both ample funding and investor interest to bring new ideas to fruition. “If you have an idea for security, start it,” Pollard emphasizes. “There’s an appetite for this.”



Article source: https://www.darkreading.com/threat-intelligence/spending-spree-whats-on-security-investors-minds-for-2019/d/d-id/1333554?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Could you speak up a bit? I didn’t catch your password

Something for the Weekend, Sir? I want to be your backdoor man. Or so asserted Robert Plant at the end of Whole Lotta Love. Hey ho.


As a blissfully unaware child, I would sing along to these lyrics – emerging from behind the sofa once track 1’s “scary bit” was over – and never bothered to consider the full import of Bob’s proposition until I was much older. Much later, and perhaps similarly pondering the career implications of doing so, Leona Lewis failed to lay claim to any such backdoor rights during the closing ceremony of the Beijing Olympics in 2008.

No matter. The original spirit of 1969 Led Zeppelin has just been revived by antipodean lawmakers. It seems the Australian government wants to be your backdoor man too.

While the rest of the free world (oh, you know what I mean) has managed to beat back repeated looming legislative threats to end-to-end encryption, Oz parliamentarians on both sides of the political divide rushed it through, deliberately without deliberation, in time for Christmas. Hurrah for the western forces of good, globally renowned for their honest politicians, restrained security services and incorruptible police!

It is surely unnecessary for me to outline the problems with such a shortsighted law to readers such as yourselves. Come to think of it, the very fact that you can read at all suggests that you are already smart enough to understand that doors are structurally and intrinsically less secure than solid walls.

Even a politician with the IQ of a preschooler who believes in preposterous computer fairy tales such as ‘artificial intelligence’ will be familiar with Ali Baba’s method of gaining entrance to the thieves’ den, i.e. entering the poetic equivalent of password1234. Building a backdoor into encryption means anyone can be a backdoor man, not just Australian civil servants with CompTIA Security+.

Earlier this year, Outpost24 surveyed 155 IT professionals during the RSA Conference in San Francisco and found that 71 per cent were quite sure they could successfully hack any organisation via social engineering, insecure web apps, mobile devices or a public cloud. The other 29 per cent correctly pretended that they couldn’t.

Top secret door code

All this comes as the result of self-conflict. We keep thinking up better methods of keeping data secure. But security makes the data awkward to access so we invent ways of circumventing it. Then we have to brick up the circumventions with more layers of security. This eventually gets so annoying to sidestep that we give up on the workarounds altogether and demand instant backdoors instead.

And if you thought fighting off security breaches was already a pain in the arse, just wait until a government opens your backdoor.

We’ve all heard the arguments about anti-terrorism and how the security services want to enhance our secure systems by building insecurity into them. Yes, I know the cliché that justifies making private data easier to hack by claiming “if it saves just one life, it’ll be worth it”.

Great, so let’s ban all cars and lorries and planes and trains and boats. Let’s remove stairs from buildings and forbid the use of ladders. In fact, any type of construction work or operating heavy machinery looks risky. Ban factories. Ban farming. Ban fishing. Ban sport. Ban going outside when it’s sunny. If this saves just one life, it’ll be worth it, right?

I am being reductive, of course. The Australian government isn’t banning anything: it is merely breaking encryption in the name of anti-terrorism. So let’s confine ourselves to breaking stuff. Perhaps the US authorities could consider breaking blockchain in order to trace IDs behind this month’s bomb-scare Bitcoin accounts. Or Britain’s MI6 might want to break its own secret communications systems so that anyone can listen in, just in case they suspect a sleeper agent is misusing them.

Better still, let’s go full circle and dispense with security altogether. All that matters is that something looks secure to the general public and legislators despite being about as robust as rice paper in a typhoon.

Shame about yer face

My favourite example of this mad approach put into real-world practice is Android’s facial recognition, which eventually found its way into my smartphone handset this year. I haven’t seen the official specification that the Android developers were asked to work on but I assume it must have said: “Create a half-arsed piece of crap with as little effort as possible in order to please our stupider Huawei customers.”

Unlike Apple’s Face ID, the facial recognition security on my phone works if I wave an enlarged passport photo in front of the camera, and that photo doesn’t even have to be of me. In fact, I thought I’d have some fun trying photos of various famous people through the history of British entertainment to see if the software still thought it recognised yours truly. This went flatteringly well at first (Daniel Craig) but I lost interest when it started getting stupid (Norman Wisdom).

It makes you wonder how the facial recognition works at international arrivals at certain UK airports, since there’s no Face ID-like depth analysis data on file with which to compare your real-life mug. My assumption is that, yet again, it’s not really a security system at all but is simply designed to look like one. In fact, it’s a delaying system that takes a mugshot, checks your passport isn’t fake and alerts plod, all while you’re trapped between electronic gates and blinded by 10000W lamps.

I suggest these so-called biometric gates could also be used as dementia detectors. Given that you have to watch the same how-to-use-the-biometric-gates video tutorial looped about 50 times while you’re queueing at the gates for half an hour or more, it’s amazing how many loopy fellow passengers have forgotten what to do when it’s their turn. They will inevitably place their passports face-up on the face-down scanner, attempt to scan a blank visas page, stand behind / in front of / next to the bootprints painted on the floor, and face absolutely everywhere except towards the screen that prompts them to look in that specific direction.

Nah, I’m being silly. All that’s happening there is that it catches out anyone who finds themselves utterly baffled by technology.

Evidently, they are politician detectors.


Alistair Dabbs
Alistair Dabbs is a freelance technology tart, juggling tech journalism, training and digital publishing. He says he can’t let 2018 go without a tip of the beanie to Pete Shelley who died in early December. For most, Shelley will be remembered for founding 1970s punk band The Buzzcocks. For Dabbsy, he remains the only artist to feature a Commodore PET in a charting pop video (1981’s Homosapien) and the only one to include Sinclair ZX Spectrum program code on an album (1983’s XL-1). Oh, and for managing to get a song about gay love into Shrek 2. Merry Christmas, everybody. @alidabbs

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/25/could_you_speak_up_a_bit_i_didnt_catch_your_password/

Your two-minute infosec roundup: Drone arrests, Alexa bot hack, Windows zero-day, and more

Roundup If you’re reading this while on-call for IT support, network security, or what have you, then we salute you. If you’re reading this to avoid Christmas present wrapping or hobnobbing with awkward relatives, or similar, then, well, let us shake your hand.

For you, here’s a rapid-fire roundup of infosec-related news to close out this week, in no particular order.

Gatwick drone arrests: Two people have been arrested by cops probing the “criminal use of drones” that caused chaos at Gatwick Airport for 100,000-plus air travelers this week.

No Russian vote hack: The US Director of National Intelligence, Dan Coats, concluded on Friday that no hackers “prevented voting, changed vote counts, or disrupted the ability to tally votes,” although Russia, China and Iran “conducted influence activities and messaging campaigns … to promote their strategic interests.”

Amazon Alexa all bot and bothered: Since 2016, Amazon has dangled hundreds of thousands of dollars in prizes in front of computer-science students to encourage them to develop conversational bots, accessed via its voice-controlled Alexa personal assistant. When people want to talk to one of these experimental chat bots, they ask Alexa to put them in touch with the software, the bot is loaded up, Alexa takes a back seat, and the chat code starts nattering with the user.

Well, it’s emerged those bots have been caught telling folks to “kill your foster parents,” discussed sex acts, and so on, after the software went awry. At one point, Amazon CEO Jeff Bezos ordered one of the errant bots to be shut down because it was too embarrassing, it is claimed.

Worse, hackers in China were able to break into one of the students’ bots and extract transcripts of people’s conversations sans usernames, according to Reuters.

“These instances are quite rare especially given the fact that millions of customers have interacted with the socialbots,” an Amazon spokesperson said of the chat gaffes.

Windows zero-day drop: A bug-hunter this week popped online a proof-of-concept exploit for another Windows zero-day, meaning there is no patch available for this hole. This one allows any user to read the contents of a file that only a system administrator should be able to access, and abuses a bug in the operating system’s ReadFile API call. It is confirmed to be a legit exploit.

The FBI is also apparently sniffing around the researcher, requesting her Google account records, perhaps in relation to earlier zero-day disclosures, or perhaps due to a tweeted threat against the US President.

Huawei, ZTE? Czech, mate: Software and hardware from Chinese tech giants Huawei and ZTE are a security threat, the Czech government has warned. “Issue of this warning concludes our findings and those of our allies and security partners,” the Euro nation’s cyber-agency added.

Mass phone snooping – EFF’ing hell: Following a long-running freedom-of-information lawsuit, the EFF has succeeded in obtaining documents detailing project Hemisphere: a system America’s drug cops used to “tap into trillions of phone records going back decades,” with the help of AT&T.


Denial-of-service-for-hire denied: On Thursday, US prosecutors seized and shut down 15 websites that offered to launch distributed-denial-of-service attacks against companies and networks in exchange for dosh. These so-called booter sites would be hired by miscreants to take down stuff like gaming platforms and online retailers. Clobbering these booters means there’s less chance scumbags will use them to cause mischief or chaos over Christmas.

In relation to this, charges have also been brought against Matthew Gatrel, 30, of St Charles, Illinois, and Juan Martinez, 25, of Pasadena, California, for allegedly conspiring to violate the Computer Fraud and Abuse Act, and against David Bukoski, 23, of Hanover Township, Pennsylvania, for allegedly aiding and abetting computer intrusions.

Mac spyware goes undetected: A strain of document-stealing macOS malware, dubbed Windshift, was detected by just two antivirus packages – Kaspersky and ZoneAlarm – four months after the lid was blown off the software nasty, according to Objective-See’s Patrick Wardle. The spyware is being thrown at targets in the Middle East, but mind how you go, Apple fans elsewhere.

Hey Uncle Sam, reveal your device snoop rules: Rights warriors the ACLU and Privacy International, and friends, are suing the US government to discover the rules and procedures in place for agents authorized to hack into targets’ computers, phones, and other devices.

“The lawsuit demands that the agencies disclose which hacking tools and methods they use, how often they use them, the legal basis for employing these methods, and any internal rules that govern them,” law student Alex Betschen explained. “We are also seeking any internal audits or investigations related to their use.”

‘White hat’ hacker’s Nest beg: A bloke in Arizona, USA, says a hacker broke into his Nest security camera from afar and spoke to him, through the device, warning the fella his equipment was insecure. It appears the chap, Gregg, had secured his Nest using a password that he had reused on another website or service that had been hacked or spilled its credentials. Therefore, miscreants could reuse the leaked password to break into his Nest.

In short, use a unique password for your IoT gear, and activate two-step authentication where possible.
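One practical way to check whether a reused password has already leaked is Have I Been Pwned’s Pwned Passwords range API, which is designed so the service only ever sees the first five characters of the password’s SHA-1 hash. A minimal sketch of the client-side hashing step (the actual network lookup against the API is deliberately left out):

```python
import hashlib

def pwned_range_query(password: str):
    """Split a password's SHA-1 hash into the 5-char prefix sent to the
    Pwned Passwords range API and the suffix checked locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # A real client would GET https://api.pwnedpasswords.com/range/<prefix>
    # and search the response for <suffix>; the full password never leaves
    # the machine.
    return prefix, suffix
```

For the notoriously overused string “password”, the prefix sent over the wire is just “5BAA6” – the server learns nothing else.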

Hands off, St Jules: The UK government should let Wikileaks supremo Julian Assange walk free from Ecuador’s London embassy without arrest or extradition, UN human rights gurus urged on Friday. The UN is essentially reiterating its 2016 declaration that Assange is being effectively detained unlawfully in Blighty.

Big trouble in big China: Watch out, if you’re using ThinkPHP. Roughly 50,000 Chinese websites have been attacked after a proof-of-concept remote-code execution exploit for ThinkPHP versions 5.0.23 and 5.1.31 was made public this month.

And finally… Be aware that it is possible to phish your multi-factor authentication tokens, as phishing targets in the Middle East and North Africa found out. Make sure you’re visiting the real website of your email provider, bank, and so on, rather than something dodgy like protonemail dot-com that will pretend to be a legit site, automatically snaffling your username, password, and token, as you enter them.

Keybase has fixed a local privilege escalation bug in its Linux software. And the Go programming language maintainers have patched a bunch of vulnerabilities – one allowed remote-code execution via go get -u.

Take care out there, and merry Christmas. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/22/security_roundup/

More phishing attacks on Yahoo and Gmail SMS 2FA

The second report in a week has analysed phishing attacks that are attempting to bypass – and probably succeeding in bypassing – older forms of two-factor authentication (2FA).

The latest is from campaign group Amnesty International, which said it had detected two campaigns sending bogus account alerts targeting around 1,000 human rights defenders in and around the Middle East and Africa.

The organisation has its theories about who is behind the attacks but what will matter most to Naked Security readers are the methods being employed to defeat authentication.

Only days ago, researchers at Certfa reported on what they believed were targeted attacks against influential people with US connections which were able to beat 2FA.

Those targeted Gmail and Yahoo accounts secured using either SMS-based 2FA (where a one-time code is sent to a user’s mobile device) or codes generated by an authenticator app using an OTP-based protocol.

Likewise, the attacks detected by Amnesty also targeted Google and Yahoo’s 2FA, although this probably reflects their popularity rather than any specific weakness in implementation.

Phishing 2FA

As with Certfa, Amnesty’s evidence comes from analysis of a server used by the attackers to store credentials from stolen accounts.

This appears to include references to phished OTP 2FA codes but with an interesting twist – once they’d gained access to the account, the attackers also set up a third-party app password to maintain persistence.

This would mean that even if a phished individual realised they’d been hacked and regained access to their account, the attackers would have created a sneaky backdoor that wouldn’t be immediately obvious to many users.

Says the report:

App passwords are perfect for an attacker to maintain persistent access to the victim’s account, as they will not be further required to perform any additional two-factor authentication when accessing it.

In a second technique, the attackers appeared to have connected hacked accounts to migration services such as Shuttlecloud as a way of quietly monitoring activity in a clone account.

ProtonMail and Tutanota

Interestingly, the campaigns also targeted more specialised email services such as ProtonMail and Tutanota which are marketed as offering a higher level of security and privacy by default.

For example, even without 2FA turned on, ProtonMail users must enter not only a username and password but an encryption code to decrypt the contents of their inbox. All messages sent between users of the service are end-to-end encrypted and users can see logs of all account accesses.

And, of course, users can turn on OTP-based 2FA which, given that ProtonMail is intended to raise the bar for attackers, one would imagine the majority of users would do.

But encryption keys and OTP codes are no different from usernames and passwords – in principle they can be phished if the attackers are able to jump through a few extra hoops.

According to Amnesty, in the case of Tutanota the phishing campaign was able to use a similar-looking domain, tutanota.org (the correct domain being tutanota.com).

To boost verisimilitude, the attacks added baubles such as an HTTPS connection/padlock, and a carefully-cloned replica of the real site.
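The tutanota.org/tutanota.com switch shows how little a lookalike domain has to change. A naive illustration of the kind of check a mail filter or browser extension might apply – purely a sketch against a hypothetical allow-list, not any real product’s logic:

```python
def registrable_parts(domain: str):
    """Naively split a domain into its second-level label and TLD."""
    labels = domain.lower().rstrip(".").split(".")
    return labels[-2], labels[-1]

def is_lookalike(candidate: str, legit_domains: set) -> bool:
    """Flag domains that reuse a trusted name under a different TLD,
    e.g. tutanota.org impersonating tutanota.com."""
    name, tld = registrable_parts(candidate)
    for legit in legit_domains:
        if candidate.lower() == legit.lower():
            return False  # exact match: this is the real site
        lname, ltld = registrable_parts(legit)
        if name == lname and tld != ltld:
            return True  # same name, swapped TLD
    return False
```

This catches only the crudest TLD swap; real phishing domains also use typos and homoglyphs, which need fuzzier matching.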

Did the attacks succeed?

The evidence isn’t conclusive, but it appears that Yahoo and perhaps Gmail SMS 2FA was successfully targeted on some occasions. No evidence is presented regarding any compromise of ProtonMail or Tutanota accounts.

The question is where this leaves 2FA authentication that’s based on sending or generating codes.

It’s worth stressing that while man-in-the-middle attacks on this form of authentication have been possible for years, it is not as easy as phishing a username and password.

To succeed, the attacker must grab the code within the 30-second window before it is replaced by a new code, which under real-world conditions must probably be done in less than half that time. This might explain why SIM swap fraud (where attackers receive SMS codes direct) has become another popular technique.
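For context, the codes in question are typically TOTP values as standardised in RFC 6238: an HMAC over a 30-second time counter, truncated to a few digits. A minimal sketch, assuming the common SHA-1, 30-second, six-digit parameters:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: float, step: int = 30, digits: int = 6) -> str:
    """Generate an RFC 6238 TOTP code for the given Unix time."""
    counter = int(for_time) // step          # 30-second time window
    msg = struct.pack(">Q", counter)          # counter as big-endian 64-bit
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)
```

Because the counter is the Unix time divided by 30, every timestamp in the same window yields the same code – which is exactly the window a relaying phisher has to exploit before the code rolls over.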

To be convincing, they might also have to know the target’s phone number because SMS authentication pages often list the last two digits as an authenticity check.

The message here is that while code-based 2FA is better than a plain old password, phishing attackers are now going after it with gusto. Rather than fall back on assumptions and probabilities, anyone who feels they might be a high-value target should consider moving to something more secure.

At some point we’ll all have to do the same. For the tech industry – and its users – the warning lights are flashing red.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/w44flcAS798/