
Raising Security Awareness: Why Tools Can’t Replace People

I’ve worked in information security for over two decades, and I can tell you firsthand that instilling a culture that puts security first in all organizations, not just the ones that traditionally have a role in security, is a challenge. There are, however, a number of leadership techniques that will raise security awareness in any organization of any size. Here are four tried and true strategies.

Strategy 1: Team Building
Your immediate priority is getting to know your current security team and scaling it quickly but pragmatically. One of the biggest dangers in any new job is to move too quickly and “make a big entrance.” Not me. By looking at the superb internal talent, my team has been able to swiftly and strategically build out our security organization while doubling down on the best practices that were already working.

Part of my philosophy is that it’s imperative to show value in a security team quickly. We are not a revenue-generating team in the usual sense, but we do provide a valuable service to our customers, both internal and external. At the end of the day, the most important asset in any company is the employee base. It’s critical that everyone in your company understands their role in security. Employees are the strongest link, and also the weakest. Clear, concise communication is vital to making your security program successful.

Strategy 2: Extend a Security Mindset
Whether you are a developer, an HR expert or a lawyer, it’s important that each employee understands their role in the security world. Trying to force change by lecturing and shaming people about their security practices, or lack thereof, will rarely elicit the results you want. Instead, make security a shared focus by inviting all departments into the security organization.

At MongoDB, I am building a security champion program. We have volunteers from many teams, globally, who are willing to become the “security champion” for their group. This includes the opportunity to meet directly with security leadership on best practices and to incorporate those security practices within their particular business unit. These volunteers already have an interest in security and their outside perspective helps diversify the security organization. They can act as a conduit between internal teams to help break down silos while shifting security to a shared goal.

Strategy 3: Learn — Continuously!
It’s important to maintain a sense of curiosity as a security leader. Everyone on your security team should attend at least one training class a year, either internal or external. My team currently attends seminars throughout the year taught by third-party experts on topics such as cloud security, authentication, and container security. Our less-experienced security personnel have the opportunity to learn a new skill and grow in their role. To help with this, we offer an outstanding program called New Hire Technical Training: a week-long intensive training class attended by all engineering staff, preceded by a pre-program containing approximately 100 hours of homework.

I am also working with our team behind MongoDB University, a free online training platform on MongoDB best practices, to enhance the existing security content for the class as well. To get an entire organization prioritizing security it is critical to provide a number of low-friction channels to educate and train your employees.

It’s also important to recognize that many people from nontraditional backgrounds have the critical thinking skills to be successful security practitioners. It’s our job, as CISOs, to identify those with a natural aptitude for security work and give them the opportunity to expand on that skillset with formal and internal, peer-to-peer training. Stepping outside of your infosec bubble to listen to and understand underrepresented perspectives will help raise the bar for security in your organization.

Strategy 4: Measure Success
To give our customers peace of mind that our technology is built securely from the ground up, we communicate through third-party validation. We’ve prioritized documenting our internal processes and preparing for audits to attain certifications such as SOC 2, ISO 27001, and PCI DSS. Following months of hard work, I found we already had strong processes in place for many security and compliance issues that we could clearly demonstrate and communicate to partners and other third parties.

For example, the NIST CSF is helpful for measurements around people, processes, and technology. So, if I were to roll out a phishing exercise in January and 30% of people “click the link,” then I know I have security awareness training shortfalls. That would give me solid data to launch security awareness training focused on phishing. Two months is a decent reset before trying a new phishing exercise to see whether that click rate is down significantly. I know this isn’t an exact science, and it is very dependent on the phishing topic, but you get my point.
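The arithmetic behind this kind of measurement is simple enough to sketch. The numbers below are hypothetical illustrations of the scenario described above, not real campaign data:

```python
# Minimal sketch: comparing click rates across two phishing exercises.
# All figures are hypothetical illustrations, not real campaign results.

def click_rate(clicked: int, delivered: int) -> float:
    """Fraction of recipients who clicked the simulated phishing link."""
    return clicked / delivered

january = click_rate(clicked=150, delivered=500)   # 30% -> training needed
march = click_rate(clicked=60, delivered=500)      # after awareness training

# Relative reduction in click rate between the two campaigns.
improvement = (january - march) / january
print(f"January: {january:.0%}, March: {march:.0%}, reduction: {improvement:.0%}")
```

Tracking the relative reduction, rather than the raw rate alone, makes it easier to compare campaigns that reach different numbers of employees.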

Bottom line: Investing in training your people and continuously building relationships outside of the security world provides a greater impact than any other investment a security organization can make.

Article source: https://www.darkreading.com/operations/raising-security-awareness-why-tools-cant-replace-people/a/d-id/1336189?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

40 million emoji-addicted keyboard app users left with $18m bill – after malware sneaks into Play Store yet again

Malicious code slipped into a popular Android keyboard app racked up millions of dollars in fraudulent charges for unlucky punters.

The Secure-D research team with mobile security specialist Upstream Systems reports this week that as much as $18m in bogus fees were run up by ai.type, an on-screen keyboard replacement that has an estimated 40 million downloads through the official Android Play Store – where it has since been removed – and other third-party stores.

Secure-D claims the app, which pitches itself as a customizable emoji keyboard, contains hidden code that covertly makes premium content purchases without any user notification or permission. In addition to the bill cramming, the app engages in ad and click fraud, we’re told, in some cases disguising its traffic as coming from other legitimate Android applications.

“The app has been delivering millions of invisible ads and fake clicks, while delivering genuine user data about real views, clicks and purchases to ad networks,” Secure-D says of the rogue app. “Ai.type carries out some of its activity hiding under other identities, including disguising itself to spoof popular apps such as Soundcloud.”

The Register has reached out to ai.type’s developers for comment, and has yet to hear back.


Interestingly, Secure-D says that most of the fraudulent charges occurred in July after the app was removed from the Play Store in June – though at the time it remained in third-party souks and installed on millions of devices – suggesting the people behind the malware decided to cash in while they still could.

According to the researchers, the components responsible for the bogus charges are not part of the keyboard itself, but rather are in software development frameworks bundled into the app. Those kits activate and click on ads to sign users up for the premium services and generate fake traffic with the aim of collecting commissions.

“These SDKs [software development kits] navigate to the ads via a series of redirections and automatically perform clicks to trigger the subscriptions. This is committed in the background so that normal users will not realize it is taking place,” explained Secure-D head Dimitris Maniatis.

“In addition, the SDKs obfuscate the relevant links and download additional code from external sources to complicate detection even from sophisticated analysis techniques.”

Anyone who is using the ai.type keyboard would be well advised to delete it ASAP. As it is no longer in the Play Store there is no risk of new infections there, but anyone using third-party services should avoid downloading the keyboard if they see it. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/11/01/aitype_keyboard_malware_alert/

Hunt or be hunted: Get top advice and training from SANS on how to track’n’thwart hackers

Promo No matter how thorough your security preparations, chances are that hidden threats already lurk inside your organisation’s networks. Even the most advanced security and monitoring tools can’t be relied upon solely to keep persistent adversaries out of your systems.

SANS Institute’s Threat Hunting and Incident Response Summit event taking place in London, UK, provides advice and in-depth training on how to track down attackers and prevent them from targeting your networks.

The one-day summit on 13 January brings together industry leaders and security experts to talk about successful threat hunting techniques and tools, plus present illustrative case histories.

The summit is followed by six days of immersive security training courses and workshops starting on 14 January. All attendees are promised they will return to work fully armed with effective defensive skills ready to combat real-world threats.

Choose between these courses:

Advanced incident response, threat hunting and digital forensics

A new course focusing on detecting attacks that get past security systems. The key is to catch intrusions in progress, identify compromised systems, perform damage assessments, and determine what was stolen. Building up threat intelligence helps stop future intrusions.

Hacker tools, techniques, exploits, and incident handling

Cyber attacks are increasing in viciousness and stealth. Learn the criminals’ tactics, and gain hands-on experience in finding vulnerabilities. Legal issues include employee monitoring, working with law enforcement, and handling evidence.

Defeating advanced adversaries: purple team tactics and kill chain defences

Enterprises of all sizes are at risk of ransomware attacks. Learn how to defend against them from real-world examples and hands-on practice in more than 20 labs. Finish with a full-day Defend-the-Flag exercise.

Advanced network forensics: threat hunting, analysis, and incident response

Network evidence often provides the best view of a security incident. The focus is on the skills needed to examine network communications in investigative work, with numerous use cases.

Cyber threat intelligence

Proper analysis of an adversary’s intent and opportunity to do harm is key to cyber threat intelligence. Learn how to collect and classify adversaries’ methods and increase your preparedness with each intrusion.

Reverse-engineering malware: malware analysis tools and techniques

Turn malware inside out and acquire the practical skills to examine malicious programs that target Windows systems. The course uses various monitoring utilities, a disassembler, a debugger, and other freely available tools.

Full details on the summit event, and how to register, are right here.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/11/01/sans_threat_hunting/

Radio nerd who sipped NHS pager messages then streamed them via webcam may have committed a crime

A radio electronics geek has been caught eavesdropping on NHS medics’ pager messages, translating the signals into text while broadcasting them on the internet via a publicly available webcam stream – possibly committing a crime in the process.

Security researcher Daley Borda said he found the video stream by chance. The webcam was pointed at a computer monitor displaying decoded pager messages containing “details of calls” made by NHS and ambulance service dispatchers to on-call medics.

“You can see details of calls coming in — their name, address, and injury,” Borda told Techcrunch on Wednesday.

The radio eavesdropper had set up what is believed to be a software-defined radio rig to receive and display the unnamed NHS trust’s pager messages, exploiting the fact that the antiquated technology behind the UK’s remaining pager deployments sends messages without any encryption at all.

We’re told the nerd’s ISP had alerted him to his unsecured internet-facing webcam, accessible via a public IP address, which he then shut down.

Regardless of whether he was broadcasting it online or not, what the radio snooper was doing appears to be illegal. Airband scanners are not in themselves illegal, and listening to published frequencies (like Radio 4 or Classic FM, or other light entertainment stations) is perfectly legal. Using tech to turn machine-readable messages into human-readable ones is a grey area depending on who you’re listening to. It is, however, a criminal offence under both the Wireless Telegraphy Act 2006 and the Snoopers’ Charter (aka the Investigatory Powers Act 2016, or IPA) to eavesdrop on messages that are not intended either for the public or for you personally.

Tech lawyer Neil Brown of decoded.legal told The Register: “It seems unlikely that the person who did this has the right to control the operation or use of the system, or had the consent of the person who had that right, so the defence under section 3(2) [of the IPA] would not apply.”

As for illegally intercepting messages, Brown told us the criminal offence “includes ‘monitoring transmissions made by wireless telegraphy to or from apparatus that is part of the system’ while the communication is being transmitted, to make the content of the communication available to someone who is neither sender nor recipient”, summarising by saying: “From the screenshots in the TC article, it looks like content is being made available to anyone viewing the webcam stream.”

Just to reassure those unfortunate members of society who do not listen to Classic FM or Radio 4, Brown added: “Accidentally stumbling across the frequency on which the BBC is broadcasting The Zoe Ball Breakfast Show is not a criminal offence, even if, as a listener, you feel like you are being punished for something.”

Ofcom, despite gentle prodding, refused to comment, saying only that nobody had complained about this particular act when The Register rang up to ask. Despite its curious silence, the spectrum regulator admits on its website that it is responsible for this area of law and policy and even states that “using radio equipment to listen in is an offence, regardless of whether the information is passed on.”

The Radio Society of Great Britain, which represents amateur radio hobbyists, did not respond to a request for comment. The society encourages all its members to abide by the International Amateur Radio Union’s ethics and operating procedure document. Among many other things, that states: “In most countries the authorities do not care in detail how hams behave on their [radio] bands, providing that they operate according to the rules laid down by the authorities.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/10/31/nhs_pagers_eavesdropping_law/

A stranger’s TV went on spending spree with my Amazon account – and web giant did nothing about it for months

A fraudster exploited a bizarre weakness in Amazon’s handling of customer devices to hijack a netizen’s account and go on multiple spending sprees with their bank cards, we’re told.

If you have weird fraudulent activity on your Amazon account, this may be why.

In short, it is possible to add a non-Amazon device to your Amazon customer account and it won’t show up in the list of gadgets associated with the profile. This device can quietly use the account even if the password is changed, or two-factor authentication is enabled.

Thus if someone can get into your account and add their own gizmo to your profile, they can potentially retain persistent access and continue ordering stuff using your payment cards, even if you seemingly remove all devices from your account and change your login credentials.
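To see how such persistence can happen in principle, here is a minimal, hypothetical sketch — not Amazon’s actual implementation — of an account model that issues long-lived device tokens but fails to revoke them when the password changes:

```python
# Hypothetical sketch of the flaw: a service that issues long-lived device
# tokens and forgets to revoke them on password change. Illustrative only;
# this is NOT Amazon's actual implementation.
import secrets

class Account:
    def __init__(self, password):
        self.password = password
        self.device_tokens = {}   # device name -> long-lived token

    def link_device(self, name):
        token = secrets.token_hex(16)
        self.device_tokens[name] = token
        return token

    def change_password(self, new_password):
        # The flaw: the password rotates, but device tokens stay valid.
        self.password = new_password

    def authorize(self, name, token):
        return self.device_tokens.get(name) == token

acct = Account("hunter2")
tv_token = acct.link_device("rogue-smart-tv")
acct.change_password("correct horse battery staple")
print(acct.authorize("rogue-smart-tv", tv_token))  # True: token still valid
```

A robust design would invalidate (or at least surface for review) every device token whenever the credentials change, which is exactly what the victim expected the lockdown to do.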

Theft

Redditor fidelisoris this week shared their experience of this security hole, and how it appeared to be exploited by a crook to buy gift cards using their account’s payment information. The Reg got in touch with the netizen and Amazon to dig into the fraud.

Rewind a few months, and our protagonist discovered unauthorized purchases on their account. They swiftly protected the profile: removed computers and other devices from the account, changed passwords, refreshed the multi-factor login, and so on. They also got the charges on their card reversed.

“I immediately did what any professional IT/IS guy does: I began the lockdown. All associated devices get removed from the account,” fidelisoris, who asked us to use their internet handle, recounted.

“All active sessions get killed. I wipe browser cache. I do a full security scan of the system. I change my email password. I change my Amazon password. I even swapped my 2FA authenticator service. Then, out of increasing paranoia, I change the password on every associated site and service I can think of, including my banks and credit cards.”

Normally, this would be more than enough to stop the fraudulent activity dead. Unfortunately, fidelisoris found the fraud continuing over the next few months, with the mystery thief getting back in each time to make more purchases.

Here is where the hardware comes in. Amazon allows customers to link their Android gadgets and gizmos to accounts, allowing them to make purchases, view content, and so on. So, in this case, it’s easy enough to fix, right? Just go into the online account settings, unlink the offending unauthorized device, and stop the fraud.

Unfortunately, our protagonist claimed, it wasn’t that easy. It seems that while the website lists Amazon-made connected products, other devices, such as TVs, games consoles, and set-top boxes, may not be visible in the account online settings nor to much of Amazon’s tech support staff.

In fact, according to fidelisoris, it took repeated calls to the support desk before they could finally find a staffer, on the Kindle team, who could use some specific internal software that allowed them to spot the mystery device – a rogue smart TV – that was being used to make the bogus purchases.

Here’s how the netizen put it on Reddit on Wednesday:

I contact Amazon. I get the first representative on the phone, and I try to explain through my frustration what happened, and the history I mentioned. This time was odd; she seemed to hesitate when reviewing the account, placing me on hold to “talk to her resources”, and then mumbling about policy and what she can and can’t say.

Ultimately, she forwards me over to the “Kindle technical department” (I don’t own a Kindle, mind you…) and I speak to another offshore gentleman. After another round of codes and account verification, I tell the tale again. However, this time, this guy pulls out a magic tool and tells me where the purchases were made — I could jump for joy with some actual evidence being presented — and he tells me it came from a smart TV called a “Samsung Huawei.”

It wasn’t my TV. In fact, I’ve never owned an Android device, or anything made by Huawei.

And then the penny dropped, revealing the crucial point – more people may be bitten by this security oversight:

How many people have rogue devices fraudulently attached to their account without their knowledge, waiting to be exploited? How did they get there in the first place? Old exploit? Unknown backdoor in a smart device app? Who’s to say? And if they were added before OTP enhanced security made its way to that particular platform, they can circumvent all 2FA requirements perpetually until removed and re-added. That alone is a serious security problem at Amazon.

It is not clear how the scumbag got into fidelisoris’ account in the first place – possibly by stolen credentials, or a bug in an application, or similar. For now, though, we’re told Amazon tech support removed the malicious telly from their account. It’s hoped that will staunch the fraud, though Amazon can’t even confirm the equipment was the conduit for the fraudulent purchases in the first place.

The Register asked the cyber-souk for some clarification on the matter. “We take information security seriously and are investigating these claims,” an Amazon spinner said.

fidelisoris told The Register the tech titan gave them similarly mealy-mouthed answers.


For now, it certainly looks as though there is a glaring shortcoming in Amazon’s customer service and its platform security that leaves punters potentially open to sustained fraud without any easy means of stopping it.

Meanwhile, fidelisoris says they have gone from victim to detective in this matter, and are leaving the account open for now in hope of uncovering an even greater issue: that there may be a hole through which crooks can add unauthorized devices to strangers’ accounts without the need for any credentials.

“For those who suggested that the account should be abandoned and a new one created, I agree that is certainly the best move for security purposes. But now my inner-sleuth has come out,” they said.

“Logic would assume that, now that all devices have been deactivated and no longer have the authority to access or purchase on my account… if another incident occurs, can we then suggest there is a greater possibility that a loophole exploit is still uncaught on one of these ‘non-Amazon’ device apps’ code?”

If you or someone you know has experienced similar frustrations with Amazon or another retailer, please let us know. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/10/31/amazon_account_hacking/

Slow Retreat from Python 2 Threatens Code Security

The end of life is near for Python 2, and there will be no rising from the grave this time. So why are some companies and developers risking a lack of security patches to stay with the old version of the programming language?

At least one in 10 Python developers and data scientists continue to use the legacy version of the popular programming language as their primary development tool, despite a looming deadline of Jan. 1, 2020, the official “end of life” for Python 2.

The death of the programming language means companies that continue to use the technology — often to support legacy programs — will be at risk, experts say. While vulnerabilities in the core methods of the programming language are uncommon, many Python 2 packages will also be left with no — or dramatically less — support, likely leaving legacy programs unpatched.

The result is that maintainers will no longer step in to fix even serious vulnerabilities, says Jeff Rouse, vice president of product at ActiveState, a software tools maker. 

“The primary security risk is that vulnerabilities will arise and then there is not a core team to get those things fixed in a timely fashion,” he says. “And that is not just talking about code language, but the package and ecosystem as well.”

As the clock counts down on Python 2, some security professionals have warned that companies that fail to move off the older version of the programming language will put their software in the crosshairs of hackers when a vulnerability is found. As of May, 13% of Python programmers still used version 2 as their primary development language. While still high, that share is half of the 25% who were using it at the end of 2017, according to a survey by JetBrains, a developer tools maker.

In August, the UK’s watchdog for cybersecurity concerns, the National Cyber Security Centre, warned Python programmers that they should move to the latest version of Python.

“[I]f you’re still using 2.x, it’s time to port your code to Python 3,” the NCSC wrote. “If you continue to use unsupported modules, you are risking the security of your organisation and data, as vulnerabilities will sooner or later appear which nobody is fixing.”

The Python Software Foundation has made it clear that Python 2 users will find themselves without patches starting in January.

“If people find catastrophic security problems in Python 2, or in software written in Python 2, then most volunteers will not help fix them,” the group wrote in an alert on the sunsetting of Python 2.

The move from Python 2 has been more than a decade in the works. Python 2 was released in 2000 and, realizing there were many improvements the core maintainers could add to the programming language, the Python Software Foundation released Python 3 in 2008. Yet developers did not move from Python 2, so a couple of years later, the Python Software Foundation announced its volunteers would stop supporting the previous major version of its increasingly popular programming language, beginning in 2015.

Developers very slowly — half were still using Python 2 in 2013 — started moving off it, but too many remained. The year before the deadline, the project leadership recognized that programmers were not cooperating, so they pushed back the deadline to Jan. 1, 2020.

Now it’s time, the group said. Python 2 has been sapping too many resources for too long, the group chastised.

“If you need help with Python 2 software, then many volunteers will not help you, and over time fewer and fewer volunteers will be able to help you,” the group said in a blog post. “You will lose chances to use good tools because they will only run on Python 3, and you will slow down people who depend on you and work with you.”

The main problem for companies is that Python 3 is not backward-compatible with Python 2. Too many changes were made to the language. Because of those issues, it took Dropbox — a company whose services run widely on Python and that had employed the creator of Python until he retired this month — three years to convert all of its software and infrastructure from Python 2.
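A few of the well-known incompatibilities illustrate why porting takes real effort. The sketch below is written as Python 3, with the old Python 2 behaviour noted in comments (illustrative, not exhaustive):

```python
# A few well-known Python 2 -> 3 incompatibilities, written as Python 3
# with the old behaviour noted in comments. Illustrative, not exhaustive.

# 1. print is a function, not a statement.
print("hello")             # Python 2 also allowed: print "hello"

# 2. Integer division returns a float; use // for floor division.
assert 7 / 2 == 3.5        # Python 2: 7 / 2 == 3
assert 7 // 2 == 3         # floor division, same in both versions

# 3. Text (str) and bytes are distinct types and no longer compare equal.
assert "abc" != b"abc"     # Python 2: 'abc' == b'abc' was True

# 4. Several builtins return lazy iterators instead of lists.
squares = map(lambda x: x * x, [1, 2, 3])
assert list(squares) == [1, 4, 9]   # Python 2: map() returned a list directly
```

Each change is small on its own, but across a large codebase they interact with string handling, I/O, and third-party dependencies, which is why migrations like Dropbox’s took years.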

“Python 3 adoption has long been a subject of debate in the Python community,” Dropbox  said in 2018. “This is still somewhat true, though it’s now reached widespread support, with some very popular projects such as Django dropping Python 2 support entirely.”

The number of packages downloaded for Python 2 continues to be in the millions per month, with almost two-thirds of downloads of the URL resource module urllib and half of the downloads of the web library requests continuing to be for Python 2.

It does not help that Python 2.7 continues to be the default version installed on macOS, even on Catalina, the latest version of Apple’s operating system. Other operating systems have fallen into line, however. In 2018, Ubuntu upgraded to Python 3.6 as the default in 18.04 LTS, also known as Bionic Beaver, and Red Hat dropped support for Python 2 in Red Hat Enterprise Linux 8. Python does not ship by default with Windows.

In addition, many major open source libraries have committed to dropping Python 2 in favor of Python 3 by 2020.

For companies that will not make the deadline, some software firms, such as ActiveState, are offering to extend support for security patches for Python 2.

“It is amazing that even with the amount of notice that the core team and [the Python Software Foundation] has given that enterprises have very large codebases, and they don’t have the time or inclination to get off those applications when they still provide value to them,” Rouse says. “Some of them are migrating but have not gotten around to it yet, while others don’t plan to migrate, but they want someone to have their back. It is a situation where a lot of companies knew it was coming.”



Article source: https://www.darkreading.com/application-security/slow-retreat-from-python-2-threatens-code-security/d/d-id/1336236?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

32,000+ WiFi Routers Potentially Exposed to New Gafgyt Variant

Researchers detect an updated Gafgyt variant that targets flaws in small office and home wireless routers from Zyxel, Huawei, and Realtek.

A newly discovered variant of the Gafgyt Internet of Things (IoT) botnet is attempting to infect connected devices, specifically small office and home wireless routers from brands that include Zyxel, Huawei, and Realtek.

Gafgyt was first detected in 2014. Since then, it has become known for large-scale distributed denial-of-service attacks, and its many variants have grown to target a range of businesses across industries. Since 2016, researchers with Unit 42 (formerly Zingbox security research) have observed that wireless routers are among the most common IoT devices in organizations and prime targets for IoT botnets.

When a botnet strikes, it can degrade a company’s production network and damage the reputation of its IP addresses. Botnets gain access to connected devices by using exploits instead of attempting to log in via unsecured services. As a result, a botnet can more easily spread through IoT devices even if a business’s admins have disabled unsecured services and use strong login credentials.

The new Gafgyt variant, detected in September, is a competitor of the JenX botnet. JenX also leverages remote code execution exploits to gain access to devices and recruit them into botnets that attack gaming servers, especially those running the Valve Source engine, launching denial-of-service (DoS) attacks. This Gafgyt variant targets vulnerabilities in three wireless router models, two of which it has in common with JenX. The two share CVE-2017-17215 (in the Huawei HG532) and CVE-2014-8361 (in Realtek’s RTL81XX chipset). CVE-2017-18368 (in the Zyxel P660HN-T1A) is a new addition to Gafgyt.

“Gafgyt was developed off JenX botnet code, which just highlights how much interest there is when it comes to building botnets within that community,” says Jen Miller-Osborn, deputy director of threat intelligence at Unit 42. This evolution of Gafgyt indicates a dedicated group of people is working to update these botnets and make them more dangerous, she notes. Most of the time when a botnet is updated, it typically means a new CVE has been added to its lineup.

“The difference with this one is the developers added a new vulnerability to it that wasn’t present in the previous one,” Miller-Osborn says. “That added to its potential reach.” Shodan scans indicate at least 32,000 Wi-Fi routers are potentially vulnerable to these exploits.

Gafgyt uses three “scanners” in an attempt to exploit known remote code execution bugs in the aforementioned routers. These scanners replace the “dictionary” attacks typically employed by other IoT botnets, which aim to breach connected devices through unsecured services.

The exploits are designed to work as binary droppers, which pull a corresponding binary from a malicious server depending on the type of device they are trying to infect. The new Gafgyt variant is capable of conducting different types of DoS attacks at the same time, depending on the commands it receives from the command-and-control server, Unit 42 researchers say in a blog post on the findings.

Gafgyt Sets Sights on Gamers
One of the DoS attacks this Gafgyt variant can perform is VSE, which contains a payload to attack game servers running the Valve Source Engine. This is the engine that runs games like Half-Life, Team Fortress 2, and others. Researchers emphasize this isn’t an attack on Valve, as anyone can run a server for the games on their own network. This attack targets the servers. 

With the rest of the DoS attack methods, operators are targeting other servers hosting popular games such as Fortnite, Unit 42 found. Miller-Osborn says the purpose in targeting gaming servers is mostly to be an annoyance. “They’re not going to make a lot of money doing it,” she adds.

While gaming servers have become popular victims, the diversity of IoT devices targeted in these attacks has grown, researchers say. There is nothing about these routers that makes them more likely to be owned by gamers; home users and small businesses are also at risk.

“Once they’re compromised, they’re used to do malicious activity,” Miller-Osborn explains. “The routers themselves could be owned by anyone. The biggest thing, especially with all these IoT malware families, is for people to keep in mind this is probably just going to get worse.”

An attack on gaming servers is one thing, she says. It’s typically a DoS incident and people aren’t getting hurt. However, if an attacker can effectively compromise a router, they can also move into the network and conduct more nefarious activity — for example, data theft.

These attacks highlight the fact that there are a lot of devices, especially routers, active on the Internet and vulnerable to a number of CVEs. The new Gafgyt variant, for example, targets two router vulnerabilities from 2017 and one from 2014, Miller-Osborn points out. “When it comes to routers, you don’t necessarily see them getting patched,” she notes. Outside the security community, few people will know when they should update their routers or if they’ve been hit by a botnet — unless, of course, their Internet service provider tells them.

Instagram: New Botnet Market
Cybercriminals are also finding new ways to sell botnets, researchers report. Once an activity limited to the Dark Web, the buying and selling of malware has surfaced on social networks.

In one attack analyzed, the new Gafgyt variant looks for competing botnets on the same device and tries to kill them. It does this by looking for certain keywords and binary names present in other IoT botnet variants. Researchers noticed some strings related to other IoT botnets (Mirai, Hakai, Miori, Satori), and some corresponded to Instagram usernames. The team built some fake profiles and reached out, only to find the account owners selling botnets through their Instagram profiles.

(Image: Unit 42)

Attackers offered the researchers source code for botnets. Unit 42 has contacted Instagram to report these profiles; it also reported malicious sites being used to handle botnet subscriptions. It’s “pretty common” for these sales to happen on social media, says Miller-Osborn, and social networks face a constant fight to take down malicious accounts.

“People want to market their devices and services, and one of the easiest ways to do that is on social media,” she explains. While it makes things simple for attackers, removing the accounts is “a constant game of whack-a-mole” for social media companies.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/iot/32000+-wifi-routers-potentially-exposed-to-new-gafgyt-variant/d/d-id/1336238?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Radio ham who sipped NHS pager messages then streamed them via webcam may have committed a crime

A radio electronics geek has been caught eavesdropping on NHS medics’ pager messages, translating the signals into text while broadcasting them on the internet via a publicly available webcam stream – possibly committing a crime in the process.

Security researcher Daley Borda said he found the video stream by chance. The webcam was pointed at a computer monitor displaying decoded pager messages containing “details of calls” made by NHS and ambulance service dispatchers to on-call medics.


“You can see details of calls coming in — their name, address, and injury,” Borda told TechCrunch on Wednesday.

The radio eavesdropper had set up what’s believed to be a software-defined radio rig to receive and display the unnamed NHS trust’s pager messages, exploiting the fact that the antiquated technology behind the UK’s remaining pager deployments sends messages without any encryption at all.

We’re told the nerd’s ISP had alerted him to his unsecured internet-facing webcam, accessible via a public IP address, which he then shut down.

Regardless of whether he was broadcasting it online, what the radio snooper was doing was illegal. Airband scanners are not in themselves illegal, and listening to published frequencies (like Radio 4 or Classic FM, or other light entertainment stations) is perfectly legal. Using tech to turn machine-readable messages into human-readable messages is a grey area depending on who you’re listening to. It is, however, a criminal offence under both the Wireless Telegraphy Act 2006 and the Snoopers’ Charter (aka the Investigatory Powers Act 2016, or IPA) to eavesdrop on messages that are not intended either for the public or for you personally.

Tech lawyer Neil Brown of decoded.legal told The Register: “It seems unlikely that the person who did this has the right to control the operation or use of the system, or had the consent of the person who had that right, so the defence under section 3(2) [of the IPA] would not apply.”

As for illegally intercepting messages, Brown told us the criminal offence “includes ‘monitoring transmissions made by wireless telegraphy to or from apparatus that is part of the system’ while the communication is being transmitted, to make the content of the communication available to someone who is neither sender nor recipient”, summarising by saying: “From the screenshots in the TC article, it looks like content is being made available to anyone viewing the webcam stream.”

Just to reassure those unfortunate members of society who do not listen to Classic FM or Radio 4, Brown added: “Accidentally stumbling across the frequency on which the BBC is broadcasting The Zoe Ball Breakfast Show is not a criminal offence, even if, as a listener, you feel like you are being punished for something.”

Ofcom, despite gentle prodding, refused to comment, saying only that nobody had complained about this particular act when The Register rang up to ask. Despite its curious silence, the spectrum regulator admits on its website that it is responsible for this area of law and policy and even states that “using radio equipment to listen in is an offence, regardless of whether the information is passed on.”

The Radio Society of Great Britain, which represents amateur radio hobbyists, did not respond to a request for comment. The society encourages all its members to abide by the International Amateur Radio Union’s ethics and operating procedure document. Among many other things, that states: “In most countries the authorities do not care in detail how hams behave on their [radio] bands, providing that they operate according to the rules laid down by the authorities.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/10/31/nhs_pagers_eavesdropping_law/

ProtonMail shoves its iOS app’s source code on GitHub for world+dog to rummage around in

Encrypted email biz ProtonMail has open-sourced the code for its iOS app, having paid for a code audit that says there’s nothing wrong with it.

Having touted itself for years as the choice of political activists, journalists, dissidents and all the other types of people who make the world a better place, ProtonMail is throwing some of its virtual doors open to convince a largely sceptical world to get with the programme.

This is in no way related to its denials back in May that it was providing voluntary real-time surveillance access to state agencies.

“Most apps,” the firm intoned in a statement today, “do not protect data in situations where the device or phone itself has been infected,” going on to assert that it can protect one’s emails even when the device has been compromised by malware, which is a bold claim to make.

Andy Yen, founder and chief exec, grandly declared in a canned quote: “We have a responsibility to protect our users and we constantly improve our protections to keep them safe from the latest malware developments. We hope that through documenting and open sourcing our iOS code, the techniques to defend against attacks can be more widely known and utilized, contributing to a safer mobile ecosystem.”

ProtonMail said the code dump, visible on GitHub, has been pre-audited by Austrian infosec bods SEC Consult.

The company added that its “Appkey” tech is the secret sauce that encrypts iOS users’ emails. This and the open-sourcing were said to be inspired by the so-called Poison Carp malware, which targeted Tibetan dissidents in a similar manner to how Chinese state authorities had used malware to steal data from the devices of the Xinjiang region’s persecuted Uyghur ethnic minority.

Whether or not you trust ProtonMail’s tech, the firm doesn’t shy away from pissing off state authorities in countries that see freedom as a threat. Earlier this year Russia shut off access to the service from its shores, alleging it was being used by “terrorists” whose main aim was to send each other disparaging messages about a Russian university sports competition.

Last year the current Turkish regime also blocked ProtonMail, ineptly enough for locals to get around it by simply using a VPN. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/10/31/protonmail_ios_app_open_source/

9 Ways Data Vampires Are Bleeding Your Sensitive Information

Pull a Van Helsing on those sucking the lifeblood from your data and intellectual property.

Vampires do exist — in the workplace, that is. They bleed your company of customer data, confidential information, and intellectual property (IP) — the lifeblood of any organization. These shadowy figures exist in every enterprise and take the form of malicious insiders looking to benefit from the theft of company information, as well as negligent insiders who inadvertently put data, IP, and the entire enterprise at risk.

While this sounds like a good old-fashioned horror movie, data vampires pose a serious threat. History shows that IP theft has affected even the biggest and most sophisticated companies, along with government agencies and the private sector at large. No one is safe: Every industry has a horrific example to share of falling victim to nation-state espionage.

It’s important to note that the impact from mistakes is just as dangerous, with human error accounting for 90% of data breaches in the cloud, according to some reports. Collaboration tools designed to empower data sharing also have the unintended consequence of making data theft and accidental loss (or sharing) widespread.

Here are nine of the most common data loss scenarios keeping organizations up at night:

Vampires on the Hunt

  • A once dedicated employee has accepted a job offer with a competitor that pays better and has a shorter commute. Before flying off into the moonlight, he plans to download copies of all client contacts, internal communications on planned product improvements, and anything else that will help him succeed at the new company.
  • One of your employees with access to customer personally identifiable information and payment data conjures a scheme to use that information for personal profit and downloads/copies it to carry out the crime.
  • Your senior developer steals research and code on your latest innovation and then leaves to start her own company and launch your product before you do.
  • An employee is bribed by a third party, maybe a competitor or even a nation-state, to download and steal your IP. The other party plans to market it as their own technology, perhaps in another country with less stringent copyright and trade protection laws.

Inadvertent Bloodsuckers

  • An employee accidentally shares a sensitive file with the wrong individual or group, such as sharing all company salaries with the entire staff instead of just the executive team.
  • “Oops! That wasn’t meant for you.” — that’s the co-worker who mistakenly shares the wrong file either by Dropbox or email attachment.
  • The individual who shares a sensitive file with a setting that is too open, mistakenly allowing recipients to then share it with others (e.g., the “anyone with the link can view” setting).
  • “I’m not supposed to send you this but…” Someone who has shared sensitive data with another who should not have access to it (but, well, you know how that goes).
  • Allowing end users, instead of IT, to create file shares, resulting in wrongly configured sharing settings (e.g., files accidentally open to the public Internet or to all internal users).

Put a Stake in Data Vampires with Four Critical Steps
If you think these culprits are works of fiction, think again: Many are ripped right from recent headlines. However, you can stop data vampires and protect business-critical information without hampering collaboration, or losing too much sleep, by channeling your inner Van Helsing to act:

  1. Clean out the skeletons in your closets. Research shows that 60% of companies admit more than half of their organizations’ data is dark (they don’t know where their sensitive data lives). To start, scan all of your content collaboration systems to identify sensitive information, then classify and secure it before it comes back to haunt you. 
  2. Swap your wooden stake for a data-centric approach. Traditional security tools weren’t built to protect today’s diverse collaboration channels and all the data that comes with them. Instead, look for data-centric solutions that use both file content and user context, weighing parameters such as document sensitivity, user role, time of day, location, and device, to determine whether a user can access content and what can be done with it. 
  3. Ward off mistakes with automation. The best way to protect against unknowing offenders and accidental breaches is to stop bogging users down with complex rules for data sharing and security that are easy to forget or circumvent using shadow IT. Take advantage of technology that can apply restrictions, such as preventing the emailing, sharing or downloading of sensitive content based on document sensitivity, to prevent unwanted actions and consequences. 
  4. Sharpen your tracking skills. Track the life cycle of sensitive data so you can see who has accessed it and how it has been used or shared to provide a full audit trail. Be sure to have a process in place to notify managers and stakeholders of potential violations. 
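The discovery pass in step 1 can be sketched in miniature. The snippet below is a toy illustration, not a classification product: the regex patterns, labels, and sample file contents are all hypothetical, and a real deployment would use a purpose-built scanning tool rather than a handful of regexes.

```python
import re

# Toy classifier for step 1: flag documents whose content matches simple
# patterns for sensitive data. Patterns and sample files are illustrative.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "salary": re.compile(r"\bsalary\b", re.IGNORECASE),
}

def classify(text):
    """Return the set of sensitive-data labels found in a blob of text."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

# Hypothetical scan over a couple of in-memory "files".
files = {
    "roadmap.md": "Q3 product roadmap and launch plan.",
    "hr_export.csv": "name,ssn\nJane Doe,123-45-6789",
}

for name, body in files.items():
    labels = classify(body)
    if labels:
        print(f"{name}: sensitive ({', '.join(sorted(labels))})")
```

Once files are labeled this way, the automation in step 3 has something to act on: sharing and email restrictions can key off the labels rather than off user judgment.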

This Halloween and year-round, remember that technology is your friend when hunting down data vampires — no matter what form they take, malicious or negligent.


Steve Marsh is a Vice President at Nucleus Cyber and brings more than 20 years of product experience from Microsoft, Metalogix, startups, and academia. He drives product management/marketing to deliver first-class customer experiences and strategic product road maps. Steve … View Full Bio

Article source: https://www.darkreading.com/operations/9-ways-data-vampires-are-bleeding-your-sensitive-information/a/d-id/1336154?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple