
Nigerian National Convicted for Phishing US Universities

Olayinka Olaniyi and his co-conspirator targeted the University of Virginia, Georgia Tech, and other educational institutions.

Nigerian citizen Olayinka Olaniyi has been convicted for targeting colleges and universities in the United States with phishing attacks, the US Department of Justice reported late last week.

Olaniyi, along with co-conspirator Damilola Solomon Ibiwoye, sent phishing emails to employees at the Georgia Institute of Technology, University of Virginia, and other US institutions while living in Kuala Lumpur, Malaysia. Once they accessed employee credentials, they changed the destination for payroll deposits so money was directly sent to them. They also filed fake tax returns with data from employee W-2 forms, the DoJ says.

All in all, their attempted theft totaled over $6 million. Olaniyi was charged with conspiracy to commit wire fraud, computer fraud, and aggravated identity theft. Ibiwoye pleaded guilty to similar charges and was sentenced to 3 years and 3 months in prison on January 31, 2018.

Olaniyi will be sentenced on October 22, 2018. Read more details here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/attacks-breaches/nigerian-national-convicted-for-phishing-us-universities/d/d-id/1332539?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

‘Hack the Marine Corps’ Bug Bounty Event Held in Vegas

$80K in payouts went to handpicked hackers in nine-hour event during DEF CON in Las Vegas.

The US Marine Corps yesterday in Las Vegas held a live hacking event focused on its public-facing websites and enterprise services, and it paid out $80,000 in total to researchers for 75 new vulnerabilities that they found.

Hack the Marine Corps, part of the US Department of Defense’s Hack the Pentagon program, operated as a hackathon of sorts, with a limited-time bounty payout; researchers also can report any flaws they find through the HackerOne-managed Marine Corps vulnerability disclosure program until August 26, 2018, but without earning a bounty.

This represents the sixth bug bounty sponsored by the DoD and managed by HackerOne, following the flagship Hack the Pentagon program in 2016, and bug bounties for the Army, Air Force, and the DoD’s travel system.

Around 100 researchers selected by HackerOne and the Marines competed in the bug bounty event, which ran for nine hours on Sunday, August 12. HackerOne and the Marines would not divulge details on the newly found vulnerabilities, but the bugs included the usual website suspects, such as authentication flaws and cross-site scripting, according to Mårten Mickos, CEO of HackerOne.

The Marine Corps Cyberspace Command’s red and blue teams were on hand as well to observe and interact with the hacker competitors and to decide on the winning bounties. “The key goal of these live hacking events is to have this collegial and social [atmosphere], although it’s also a competition,” Mickos says. “They may give advice … ‘don’t go there, look here'” to the competitors, while the hackers can give the military feedback in return, he says.

“Hack the Marine Corps allows us to leverage the talents of the global ethical hacker community to take an honest, hard look at our current cybersecurity posture,” said Maj. Gen. Matthew Glavy, Commander, US Marine Corps Forces Cyberspace Command in a statement. “What we learn from this program will assist the Marine Corps in improving our warfighting platform, the Marine Corps Enterprise Network. Working with the ethical hacker community provides us with a large return on investment to identify and mitigate current critical vulnerabilities, reduce attack surfaces, and minimize future vulnerabilities. It will make us more combat ready.”

In all, the Hack the Pentagon program itself has resulted in over 5,000 discovered vulnerabilities by researchers.


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise …

Article source: https://www.darkreading.com/vulnerabilities---threats/hack-the-marine-corps-bug-bounty-event-held-in-vegas-/d/d-id/1332541?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Vulnerability Disclosures in 2018 So Far Outpacing Previous Years’

Nearly 17% of the 10,644 vulnerabilities disclosed so far this year have been critical, according to a new report from Risk Based Security.

There appears to be little relief in sight for organizations hoping for some respite from patching. A new report from Risk Based Security released today reveals that the number of vulnerabilities discovered in software products shows no signs of abating.

Between January 1 and June 30 of this year, a total of 10,644 vulnerabilities were published compared to 9,690 in the same period in 2017. The trend so far this year suggests that the total number of disclosed vulnerabilities in 2018 will comfortably exceed the 20,832 vulnerabilities that Risk Based Security published during 2017 — which itself represented a 31% increase over 2016.

About 17% of the reported flaws this year were deemed critical and had a severity rating of between 9.0 and 10.0 on the CVSS rating scale. That number is smaller than the 21.1% of flaws overall that garnered the same rating in Risk Based Security’s report for the first half of 2017.

Somewhat expectedly, a plurality of 2018 vulnerabilities – 46.3% – were Web-related flaws, and half of all reported vulns were remotely exploitable. Nearly one-third of the vulnerabilities so far this year in Risk Based Security’s database have public exploits, but 73% have a documented solution.

A majority of the vulnerabilities stem from software processing user- or attacker-supplied input without properly sanitizing it, says Brian Martin, vice president of vulnerability intelligence for Risk Based Security. “We classify them as input manipulation issues that impact the integrity of the software,” he notes.

Not All in the CVE NVD

Significantly, Risk Based Security’s vulnerability database contained more than 3,275 vulnerabilities that were not published in MITRE’s CVE and the National Vulnerability Database (NVD) in the first half of 2018. Of these, more than 23% had a CVSS score between 9.0 and 10.0.

In other words, organizations relying purely on the CVE/NVD vulnerability data would likely not have been aware of more than 750 other critical vulnerabilities that were published elsewhere.
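The arithmetic behind that estimate is easy to reproduce. The short script below simply re-derives the rounded totals from the percentages quoted above; the only inputs are the figures already cited in this article, and the rounding is approximate.

```python
# Back-of-the-envelope reproduction of the figures cited above. All
# inputs come from the Risk Based Security numbers quoted in this
# article; the outputs are approximate, rounded derivations.

total_h1_2018 = 10_644         # vulnerabilities published Jan 1 - Jun 30, 2018
critical_share = 0.17          # roughly 17% rated CVSS 9.0 - 10.0
print(f"Critical flaws in H1 2018: ~{int(total_h1_2018 * critical_share)}")

missing_from_nvd = 3_275       # entries not found in MITRE CVE / NVD
critical_share_missing = 0.23  # more than 23% of those scored 9.0 - 10.0
print(f"Critical flaws absent from CVE/NVD: ~{int(missing_from_nvd * critical_share_missing)}")
# Prints roughly 1,809 and 753, i.e. the "more than 750" critical
# vulnerabilities invisible to anyone relying solely on CVE/NVD data.
```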

“The biggest takeaway is that the number of vulnerabilities being disclosed continues to rise, and will continue to do so for the foreseeable future,” Martin says.

More importantly, the data shows that organizations cannot rely solely on the CVE database for their vulnerability data, he says.

His firm uses over 2,000 sources for its vulnerability data, including mailing lists such as Bugtraq and Full Disclosure, exploit websites such as ExploitDB and Packetstorm, and vendor resources such as customer forums. Other sources include formal advisories and knowledge base articles, and developer resources such as changelogs, bug-tracking systems, and code commits, Martin says.

Risk Based Security typically aggregates and processes newly disclosed vulnerabilities in less than 24 hours, depending on the disclosure and whether additional analysis is needed. For some security vulnerabilities, the vendor discloses at roughly the same time as CVE; for others, it is weeks or even months ahead of them, he notes.

Not So Fast

While Risk Based Security’s statistics might suggest that software is becoming increasingly buggy, the reality appears a little more nuanced. According to Martin, there are likely many reasons why more flaws are being discovered in software products these days despite the heightened awareness and attention being paid to application security.

Among them is the fact that there are a lot more security researchers looking for and reporting security vulnerabilities these days compared to a few years ago. Tools for finding security vulnerabilities have also improved, becoming faster and more reliable than before.

And organizations that monitor and aggregate vulnerabilities are also improving their processes, while software vendors themselves have become better at disclosing vulnerabilities reported to them, Martin notes.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/vulnerabilities---threats/vulnerability-disclosures-in-2018-so-far-outpacing-previous-years/d/d-id/1332545?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Social Engineers Show Off Their Tricks

Experts in deception shared tricks of the trade and showed their skills at Black Hat and DEF CON 2018.

It’s not every day you hear or see social engineers in action – well, knowingly, anyway – but that’s exactly what the crowd did at Black Hat and DEF CON 2018 held last week in Las Vegas.

Traditional methods of social engineering and phishing attacks are mostly well-understood and remain successful, explained Matt Wixey, technical research leader for PwC’s UK cybersecurity practice. Still, attackers are finding new and more advanced ways to manipulate their victims.

Wixey detailed their efforts in a Black Hat presentation on Remote Online Social Engineering (ROSE), his name for long-term campaigns in which actors leverage false personae and highly detailed reconnaissance to compromise target networks. By building a relationship with their targets, attackers can persuade employees to send data and assist in corporate hacking.

Why go to the trouble of social engineering when simple phishing attacks are just as effective?

“A big reason would be to bypass technical controls, and bypass the effects of user education and awareness,” Wixey explained. Social engineers want to do more than slip past firewalls. They must also get past each person’s own sense of which behavior is suspicious and which isn’t.

“Because [an attack] is designed to target a specific individual, it can be designed specifically to bypass that person’s filters,” he continued. We all have different standards for what constitutes phishy behavior, all of which vary depending on personality, upbringing, and other factors.

Getting to Know the Victim

A ROSE attack starts with an in-depth analysis of the target: their online activity, how they communicate, responses to good and bad news, linguistic styles, and their motivations for taking particular actions. Attackers learn where targets went to school, where they previously worked and which roles they held, their interests and hobbies, and the names of family members and friends.

The attacker can use this information to craft a profile before reaching out to the target. The fake profile may include similar interests, a shared educational background, or another trait to facilitate an opening for conversation. Its photo may not be a stolen image; it may instead be altered, or sourced from a private, paywalled site, to conceal the attacker’s identity, he said.

They may keep up this charade for a while to build credibility and, over time, they may automatically post content and/or alter their fake profile to reflect changes in employment, interests, styles, and politics. When working toward direct contact, the attacker may “like” content from their target’s friends or related to their interests to make themselves known.

Finally, they go in for the hook. An attacker can ping their victim with a request for help or proposal for a business relationship. All the while, they’ll use their earlier research to inform their conversation and pursue more frequent contact to build rapport and trust.

Social engineers rely on several techniques to make their interactions more believable, said Wixey. Lies often include more negative emotions and fewer sensory details. Liars often use cognitive details and keep things simple so there are fewer details to recall in the future.

“Liars may ask more questions, perhaps in an attempt to shift the focus from them onto the person they’re trying to deceive,” Wixey added.

Dial-in Deception: Capture the Flag 2018

In his presentation, Wixey referenced a study stating that people lie in 14% of emails, 27% of face-to-face interactions, and 37% of calls. We saw the final stat live during DEF CON’s Social Engineering Capture the Flag competition, in which competitors call corporate targets and use social engineering tactics to get employees to hand over different pieces of data (“flags”).

Participants are assigned target organizations a few weeks before DEF CON and prepare by collecting open-source intelligence on the company, its employees, and other characteristics. They prepare a game plan: who their fake persona is, why they’re calling, and how they might leverage social engineering techniques to persuade the target to hand over information.

This year’s winner, Whitney Maxwell, directly called employees at service centers for the company she was assigned to target. She was doing an audit, she explained, and she just needed the answers to a couple of questions. By using techniques to establish legitimacy with the employee – saying they have the same name for example – she got some key data.

One conversation yielded information including the company’s version of Windows (XP), whether they used wireless Internet, building security, type of computer and desk phone, and whether they used Outlook and Adobe. She confirmed the center’s location and, in one instance, was able to convince an employee to enter a bit.ly URL into the browser.

“If you can do that over the phone, you can compromise a whole network,” said Chris Hadnagy, president and CEO of Social-Engineer, Inc. and organizer of the DEF CON event.

Challenges in Defense

Much of the time it’s difficult to tell when the person on the other end of a phone call, email, or social media message is malicious. Wixey pointed to a few techniques businesses can use to stay safe as cybercriminals get stealthier.

To limit the amount of available information online, he advises setting a Google Alert for your full name so you’re notified when new content mentioning you appears. Conduct reverse image searches on new contact requests and research the people who want to join your network. If you’re unsure about someone, check their account for early auto-posting and inconsistencies.

If a stranger pings you with a question or collaboration opportunity, second-guess their motives. Why might they ask you to do this, and how might they benefit? If they contact your corporate email address, how did they find it? Do they avoid face-to-face or video interaction?

“We lie all the time,” said Wixey. “Everyone lies to each other, all day, every day.” The challenge for businesses is determining where the malicious intent is.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/risk/social-engineers-show-off-their-tricks/d/d-id/1332544?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

In-flight satellite comms vulnerable to remote attack, researcher finds

IOActive’s researcher Ruben Santamarta is the sort of person anyone interested in computer security would probably enjoy sitting next to on a long flight.

Take the journey he made last November between Madrid and Copenhagen on Norwegian during which (naturally) he decided to use Wireshark to study the aircraft’s in-flight Wi-Fi.

As well as finding that Telnet, FTP and web were available for certain IPs, it turned out that an interface page for a Hughes aircraft satellite communication (SATCOM) router could also be accessed without authentication.
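Santamarta’s paper doesn’t detail his tooling beyond Wireshark, so the snippet below is only a generic illustration of the kind of service discovery he describes: checking whether Telnet, FTP, and web services answer on a given address. The target address and port list are placeholder assumptions, and probes like this should only ever be run against networks you are authorized to test.

```python
# Minimal sketch of checking which common services answer on a host.
# The target address is a documentation-range placeholder, not any real
# aircraft or SATCOM equipment; only probe networks you are authorized
# to test.
import socket

COMMON_PORTS = {21: "FTP", 23: "Telnet", 80: "HTTP", 443: "HTTPS"}

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    target = "192.0.2.1"  # TEST-NET-1 placeholder address
    for port, name in COMMON_PORTS.items():
        state = "open" if port_is_open(target, port) else "closed/filtered"
        print(f"{name:6} ({port}): {state}")
```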

This is the system used by Norwegian that connects a plane to the ground to provide internet connectivity. (Icelandair and Southwest are customers too.)

In a Black Hat show paper last week, Last call for SATCOM Security, Santamarta and his colleagues published details of how this simple discovery put them on the trail of a string of larger security flaws that build on IOActive SATCOM vulnerability research dating back to 2014.

His pre-show claim was a startling one – he was, he believed, the first researcher to figure out how, in some cases, to access in-plane systems without having to be on a plane at the time.

The vulnerabilities have not been explained in detail for security reasons but included a disturbing mix of backdoors, the interception and manipulation of data traffic to and from aircraft (i.e. monitoring passenger web visits), using Telnet to execute code, and potentially interfering with firmware.

It might even be possible to launch attacks against individual devices belonging to passengers or crew connected via the SATCOM router.

Extraordinarily, the team discovered that an IoT botnet had attempted brute-force attacks against SATCOM equipment, without deliberately targeting aircraft systems:

The astonishing fact is that this botnet was, inadvertently, performing brute-force attacks against SATCOM modems located onboard an in-flight aircraft.

Because SATCOM systems are used on maritime vessels, as well as by the military and space industry, they too might be vulnerable to some of the issues, said Santamarta.

None of the vulnerabilities researched would have given an attacker access to the avionics systems used by pilots, but celebrating this would be to miss the point: the state of SATCOM router security is not what it should be.

All the flaws have been passed on to the manufacturers concerned as well as aviation security body, the Aviation Information Sharing and Analysis Center (A-ISAC), although Santamarta said that the level of collaboration hadn’t been what might have been expected in some cases given the security implications.

The in-plane flaws had, however, been closed:

We can confirm that the affected airlines are no longer exposing their fleets to the internet.

This will be reassuring news for anyone who plans to take a flight on an airline using one of these SATCOM systems and might find themselves using the onboard Wi-Fi.

Luckily – this time at least – a researcher boarded one of those planes last year and decided for all our sakes not to take the communications security on offer at face value.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ecdX216fc-E/

Feds indict 12 for allegedly buying iPhones on other people’s dimes

The Feds have indicted a dozen people for allegedly using hacked cell phone accounts to “upgrade” to nice, shiny new iPhones and other pricey gadgets, waltzing into stores to pay the small upgrade fees, sticking victims with the rest of the costs, selling the loot for full purchase price, and pocketing the profit.

The US Department of Justice (DOJ) announced the indictments on Thursday.

Geoffrey S. Berman, the US Attorney for the Southern District of New York, and Angel M. Melendez, a special agent with the New York office of the Immigration and Customs Enforcement’s (ICE’s) Homeland Security Investigations (HSI), said they’ve got seven suspects – six were arrested in southern New York, and one in Ohio – while another five are still on the loose.

They stand accused of improperly accessing more than 3,300 customers’ cellphone accounts and defrauding those accounts of the cost of more than 1,200 cellphones, causing losses of more than $1 million.

Berman said that the fraud ring pulled off the heists, which were carried out nationwide, by first allegedly buying their victims’ account details off the dark web, then allegedly hacking into their accounts.

Melendez said that the fraud network was operating out of New York – most particularly in the Bronx, which is where they sold many of the iPhones, iPads, tablets and watches they bilked people out of. It was also operating out of the Dominican Republic; from other, unspecified places; and on the dark web, he said.

According to the indictment, defendants allegedly traveled to 30 states to get the phones, then often brought them back to the Bronx to sell through fencing operations. The cellphone carriers absorbed the financial losses, but the victims suffered the theft of their identities and/or had their accounts accessed without authorization.

Besides charging the vast majority of the devices’ fees to existing customers’ accounts, the fraudsters sometimes created new, bogus accounts, the indictment says. Over time, they changed tactics to stay ahead of law enforcement.

Their techniques included:

  • Using Bitcoin to buy cellphone customers’ personally identifiable information (PII) on the dark web, then using that information to convince stores that sell cellphones that they were legitimate account holders.
  • Phishing account details out of victims with emails that were laced with rigged links.
  • Using bogus IDs to convince store owners that they were someone else.
  • Buying phones with their real names and fake Social Security numbers that appeared to (and sometimes did) match the spelling of their real names. The taxpayer IDs really belonged to other people, and those people had their credit damaged as a result of the fraud.

HSI officer George Murphy Whalen said in the indictment that at the time of a police raid carried out on 15 August 2017, investigators believed the hub of the operation to be in Mt. Vernon, New York. That’s where they traced two IP addresses used to get into at least 3,300 victims’ cellphone accounts.

During that raid, they arrested six of the defendants: Mario Diaz, Tomas Guillen, Jose Argelis Diaz, Jonathan Diaz, Eddy Morrobel, and Rayniel Robles; a seventh, Ronnie De Leon, was arrested that December. The five suspects who remain at large are Isaac Concepcion Aquino, Joel Pena, Ruddy Sanchez, Michael Roque, and Joandra Tejada Gonzalez.

All of the suspects have been charged with conspiracy to commit wire fraud and aggravated identity theft.

According to the criminal complaint, a former gang member who’d previously been convicted of a felony cooperated with investigators to get a lighter sentence. As part of the deal, he ratted out the fraud ring by giving the investigators details about the Mt. Vernon residence.

During the raid, police seized 12 computers, five iPads, receipts from Western Union and MoneyGram transactions, evidence of Bitcoin and bank transactions, and several SIM cards.

As we noted earlier this month, when an alleged SIM-swap scammer was nabbed for allegedly stealing $5m in Bitcoin and other cryptocurrencies, SIM cards are at the heart of some serious, big-buck rip offs. Just one of the scammer’s alleged victims, a cryptocurrency investor, allegedly lost nearly $1.5m that he had crowdfunded in an Initial Coin Offering (ICO). It was one of at least three attacks during Coindesk’s Consensus conference.

On one of the computers, they found a 15-minute video, in Spanish, on how to commit cellphone fraud, the Feds allege. They say that investigators also found Google searches that reveal an interest in phone fraud on the part of whoever was using the seized devices.

They used a license plate reader to track one of the defendants, Ronnie De Leon, on a 2 December 2017 trip from Wisconsin to Bloomington, Minnesota. According to the complaint, investigators recorded him as being within a 20-minute drive from Roseville, Minnesota, where a fraudulent iPhone purchase was made that same day under De Leon’s name and a fraud victim’s mobile account.

When police arrested De Leon on 5 December, they saw, without needing to unlock his phone, mobile account change notifications linked to a fraud victim’s account on its screen, the complaint alleges.

The suspects are facing a maximum penalty of 20 years for conspiracy to commit wire fraud and two years for aggravated identity theft, though maximum sentences are rarely handed out.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/KAdCmGOVXA4/

Siri is listening to you, but she’s NOT spying, says Apple

Are our iPhones eavesdropping on us? How else would Siri hear us say “Hey, Siri” other than if she were constantly listening?

That’s what Congress wondered, and it wanted Apple to explain. It also wanted to know about how much location data iPhones are storing and handing over about us.

So the US House of Representatives Energy and Commerce Committee sent a letter to Apple CEO Tim Cook on the matter of Apple having recently cracked down on developers whose apps share location data in violation of its policies.

The letter posed a slew of questions about how Apple has represented all this third-party access to consumer data, about its collection and use of audio recording data, and about location data that comes from iPhones.

On Tuesday, Apple responded.

Much of the response letter translates into “We Are Not Google! We Are Not Facebook!” As in, Apple’s business model is different from those of other data-hoovering Silicon Valley companies that rely on selling consumer information to advertisers:

The customer is not our product, and our business model does not depend on collecting vast amounts of personally identifiable information to enrich targeted profiles marketed to advertisers.

Timothy Powderly, Apple’s director of federal government affairs, emphasized in the letter that Apple minimizes collection of data and anonymizes what it does collect:

We believe privacy is a fundamental human right and purposely design our products and services to minimize our collection of customer data. When we do collect data, we’re transparent about it and work to disassociate it from the user.

And no, Siri is not eavesdropping. The letter went into specifics about how iPhones can respond to voice commands without actually eavesdropping. It has to do with locally stored, short buffers that only wake up Siri if there’s a high probability that what it hears is the “Hey, Siri” cue.

A buffer is a chunk of audio that’s continually recorded over and thus, by definition, isn’t archived. In short, “always listening” is pretty restricted: an iPhone has only a short amount of recorded audio at any time. That audio is only used to identify the trigger phrase “Hey Siri,” and it’s only stored locally.
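Apple’s letter describes this behaviour but not its implementation, so the sketch below is purely conceptual: a fixed-length buffer that continually overwrites its oldest audio and is only ever inspected locally for the trigger phrase. The buffer size and the detector function are invented stand-ins, not Apple’s code.

```python
# Conceptual sketch of a short, continually overwritten audio buffer of
# the kind described above: nothing is archived, and the buffered audio
# is only inspected locally for the wake phrase. Buffer length, frame
# format, and the detector are illustrative stand-ins, not Apple's design.
from collections import deque


def looks_like_wake_phrase(audio: bytes) -> bool:
    # Placeholder for an on-device detector that scores the audio for
    # the trigger phrase; always False in this sketch.
    return False


class WakePhraseBuffer:
    def __init__(self, max_frames: int = 50):
        # deque(maxlen=...) silently discards the oldest frame whenever a
        # new one arrives -- "continually recorded over", never archived.
        self.frames = deque(maxlen=max_frames)

    def push(self, frame: bytes) -> None:
        self.frames.append(frame)

    def wake_phrase_detected(self) -> bool:
        # No audio leaves the device here; the check is entirely local.
        return looks_like_wake_phrase(b"".join(self.frames))
```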

Once actual recording takes place after the “Hey, Siri” phrase is uttered, the recording that’s sent to Apple is attached to an anonymous identification number that isn’t tied to an individual’s Apple ID. Users can reset that identification number at any time.

Similar services store voice recordings in ways that are associated with an individual user, Apple said. In other words, in ways that can be linked to an individual who can then be target-marketed.

Third-party apps

When Siri’s listening, an iOS device gives the user a visual indicator. Apple’s Developer Guidelines require that developers display that visual indicator when their apps are recording audio information. Third-party apps are required to obtain explicit user consent when collecting microphone data, as well.

iOS conditions state that third-party apps have to get user permission before accessing the microphone, camera, or location data. They also have to tell users what they’re going to do with that access or information. iOS apps also have to show the visual cue that they’re listening, just as they’re required to do with Siri.

Users can change the settings at any time, Apple said.

Consistent with Apple’s view that privacy is a fundamental human right, we impose significant privacy-related restrictions on apps. Notwithstanding the developer’s responsibilities and direct relationship with customers, Apple requires developers to adhere to privacy principles.

The upshot: if an app is compliant with Apple’s terms, it has to give a visual cue that it’s got access to the microphone, even after a user has granted permission to do so.

But the fact of the matter is that Apple doesn’t constantly monitor apps to make sure they’re always compliant. All apps go through the App Review Process for privacy compliance before getting approved, but that doesn’t equate to Apple keeping an eagle eye on them to make sure they don’t misbehave down the line. At a certain point, what happens to user data comes down to whatever a user has signed off on when agreeing to an app’s terms. From the letter:

Apple does not and cannot monitor what developers do with the customer data they have collected, or prevent the onward transfer of that data, nor do we have the ability to ensure a developer’s compliance with their own privacy policies or local law.

When we have credible information that a developer is not acting in accordance with the PLA or App Store Review Guidelines or otherwise violates privacy laws, we will investigate to the extent possible.

In other words, Apple does its damnedest to make sure iPhones aren’t eavesdropping on us, including through privacy policies, short buffer windows, local storage, and app review.

Does any of this ease your worries about eavesdropping iPhones, if you had any such worries to begin with? Please do let us know if you’re still looking at Siri with a hairy eyeball, and if so, why?


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/VZL5OPqYoEY/

How a cryptocurrency-destroying bug almost didn’t get reported

A researcher recently revealed how he found a bug that could have brought the fourth largest cryptocurrency to its knees – and how he struggled to report it.

Cory Fields, who works as a developer at the MIT Media Lab’s Digital Currency Initiative, found the bug in Bitcoin Cash, which is an alternative cryptocurrency to Bitcoin based on software called Bitcoin ABC. A group of activists in the Bitcoin community introduced the software after becoming unhappy with the direction that the developers of the original Bitcoin software (known as Bitcoin Core) were taking.

When people began using Bitcoin ABC, they created a hard fork of the Bitcoin blockchain. This is a separate blockchain – a new ledger of transactions that split off from the original Bitcoin blockchain and is incompatible with it. It’s akin to one community in a town leaving and setting up their own town with its own rules.

Since then, the Bitcoin Cash blockchain has existed as an alternative to the original, and various members of its community have proclaimed it as the ‘real’ Bitcoin. At the time of writing, it had the fourth biggest market capitalization of any cryptocurrency at almost $10bn.

Fields, who is a Bitcoin Core developer, discovered a bug in Bitcoin Cash that could have allowed attackers to create their own involuntary split in the Bitcoin Cash blockchain. According to his Medium post, someone in the Bitcoin Cash developer community updated the rules in the software that verifies Bitcoin Cash transactions before including them on the blockchain.

A flaw in the code made it possible for an attacker to introduce transactions that the buggy version of the software would accept, but all previous versions of the software would reject. If the attacker timed the introduction of this ‘poison pill’ transaction properly, releasing it when around half of the community had updated to the new software, it would effectively cause the blockchain to split in two. This would introduce two incompatible types of Bitcoin Cash that would be very difficult to reconcile, given that the split would be pretty much 50:50.
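Neither Fields nor the Bitcoin ABC team published exploit details, so the snippet below is only a toy model of the failure mode he describes: two nodes that disagree about a single validation rule will permanently diverge once a transaction exercising that rule appears. The rule itself is invented for illustration and is not the actual bug.

```python
# Toy model of a consensus split: two nodes share a ledger until a
# transaction arrives that one set of validation rules accepts and the
# other rejects. The "new_opcode" rule is invented for illustration;
# the real Bitcoin ABC flaw was never published in detail.

def old_rules_valid(tx: dict) -> bool:
    return tx["amount"] > 0 and not tx.get("new_opcode", False)

def buggy_new_rules_valid(tx: dict) -> bool:
    return tx["amount"] > 0   # mistakenly accepts the new opcode too

class Node:
    def __init__(self, name, validator):
        self.name, self.validator, self.chain = name, validator, []

    def receive(self, tx):
        if self.validator(tx):
            self.chain.append(tx)

old_node = Node("pre-upgrade", old_rules_valid)
new_node = Node("buggy upgrade", buggy_new_rules_valid)

for node in (old_node, new_node):
    node.receive({"amount": 5})                      # both accept
    node.receive({"amount": 1, "new_opcode": True})  # the poison pill

# The two ledgers now differ (1 vs. 2 transactions): a chain split.
print(len(old_node.chain), len(new_node.chain))
```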

The bug itself doesn’t pose a risk now, because Fields disclosed it privately to the Bitcoin Cash developers in April, and it was fixed and then publicly disclosed in May.

But there are nevertheless two lessons to be learned from the story, pointed out by Fields in his Medium post and in a follow-up by Neha Narula at the MIT Digital Currency Initiative.

The first lesson is that the cryptocurrency community needs to get better at developing and maintaining the code that operates a blockchain, thus avoiding bugs in the first place.

The fact that the Bitcoin Cash community allowed such important changes into its codebase after just two reviewers looked at it briefly shows the dangers of open source software without effective governance.

The second lesson is that Fields couldn’t easily find out how to tell the Bitcoin Cash team about the problem he found, thus making it hard to get the bug fixed at all.

Disclosing the bug

When Fields found the bug, the only guidance from the Bitcoin Cash team was to “contact people privately”. He not only had to jump through many hoops to report the bug but also to figure out which hoops to jump through in the first place.

In the end, he tracked down a member of the Bitcoin Cash development team, got hold of an encryption key to disclose the bug anonymously and safely, and then checked to see that they had actioned it.

Fortunately, Fields had the willpower to do all of this rather than just leaving the bug for someone malicious to find later – or, worse still, going public with it in frustration.

Fields isn’t the only researcher to hit a brick wall when trying to inform people about a bug.

Natalie Silvanovich, a security researcher at Project Zero, recently complained about disclosure problems, citing issues with Samsung’s bug reporting procedures in which she had to wade through a sea of legalese, half of which was in Korean.

One of the clauses that was in English worried her, because it prevented her from disclosing the bug at all, ever, without Samsung’s approval. In other words, if Samsung ended up doing nothing, the bug would neither get fixed nor reported, and would simply be left around indefinitely. (Samsung has since resolved the problems, she said.)

What to do?

Other cryptocurrency efforts are cutting through the whole tangled mess and issuing bug bounties.

EOS, for example, the recently released blockchain application platform designed to take on Ethereum, has paid out $417,000 since May thanks to an active bug reporting program conducted with HackerOne, and there are other cryptocurrency companies taking the same approach.

That’s a laudable effort – one that both encourages and rewards responsible bug hunting from the start.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/yjKW3Frii3I/

US voting systems: Full of holes, loaded with pop music, and hacked by an 11-year-old

DEF CON Hackers of all ages have been investigating America’s voting machine tech and the results aren’t good. One enterprising 11-year-old named Emmet managed to hack a simulated Secretary of State election results page in 10 minutes.

The Vote Hacking Village, one of the most packed-out locations at this year’s DEF CON hacking conference in Las Vegas, saw many of the most commonly used US voting machines easily hijacked using a variety of wireless and wired attacks, and election websites proved so poorly constructed that they were thought too boring for adults and left to youngsters to infiltrate.

The first day saw 39 of these smart kids, ranging in age from six to 17, try to crack into replicas of government election results websites, developed by former White House technology advisor Brian Markus. All but four managed to get an exploit running within the allotted three-hour contest.


The children were able to change vote tallies so that they numbered 12 billion, and rewrite party names as well as the names of candidates. Kids being kids, these latter changes included “Bob Da Builder” or “Richard Nixon’s Head” – we spotted the Futurama fan there.

On the adult side, Premier/Diebold’s* TSX voting machines were found to be using SSL certificates that were five years old, and one person managed to upload a Linux operating system to the device and use it to play music, although that hack took a little more time than you’d get while voting.

Diebold’s Express Poll 5000 machines were even easier to crack, thanks to having an easily accessible memory card, which you could swap out while voting, containing supervisor passwords in plain text. An attacker could physically access and tamper with these cards, which also hold the unencoded personal records for all voters including the last four digits of their social security numbers, addresses, and driver’s license numbers.

Hackers found that by inserting specially programmed memory cards when no election official is looking, they could change voting tallies and voter registration information. And take a guess what the root password was? Yes, “Password” – again stored in plain text.
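By way of contrast with the plaintext storage described above, the standard-library sketch below shows what salted, iterated password hashing looks like, so that a stolen memory card would not hand an attacker the supervisor password directly. It is a general illustration of common practice, not a description of how any particular voting machine is or should be built.

```python
# Contrast with plaintext credential storage: store a salt and an
# iterated hash instead of the password itself. Standard-library-only
# illustration of general practice, not a fix for any specific device.
import hashlib
import hmac
import os

ITERATIONS = 200_000

def store_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest        # persist these, never the password

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_password("Password")        # the infamous default
print(check_password("Password", salt, digest))  # True
print(check_password("guess123", salt, digest))  # False
```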

More bizarrely, voting machine manufacturer WinVote’s VoteActive device was found to contain pop music. The machine, which was running Windows XP, could be hacked wirelessly in seconds, and had a music player and CD ripper program built in. It is believed this music stuff was left lying around in unused and unallocated space on the disk.

The village also hosted a mock election between George Washington and Benedict Arnold, which was predictably hacked. Of the 133 ballots cast, America’s first POTUS scored 26 votes, as did infamous traitor Arnold, but the winner was an unplanned candidate: DEF CON’s founder Dark Tangent, aka Jeff Moss, with 61 votes.

The machine’s software had been tampered with to insert Moss into the running, and make him win with fake votes. This could be done by infecting an election official’s PC so that when the ballot box is programmed from that computer, the voting software is silently altered to later change votes.

It’s the second year DEF CON has hosted the village, and once again voting machines didn’t make the grade. There just isn’t enough built-in security to stop people physically meddling with machines at the booths, or before and after polling day. There is little or no verification of the authenticity and legitimacy of the code running on the boxes. Anti-tamper seals on the cases have been shown to be ineffective, too.
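As a rough illustration of the code-authenticity checks the paragraph above says are missing, the sketch below verifies a digital signature over a firmware image before accepting it. It assumes the third-party cryptography package and an Ed25519 signing scheme; the key handling and file names are hypothetical, and this is not a description of any vendor’s actual update process.

```python
# Illustrative sketch of verifying that a firmware image was signed by a
# trusted key before it is loaded -- the kind of authenticity check the
# article says these machines lack. Assumes the third-party
# "cryptography" package; key handling and file names are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def firmware_is_authentic(image: bytes, signature: bytes,
                          trusted_pubkey_bytes: bytes) -> bool:
    public_key = Ed25519PublicKey.from_public_bytes(trusted_pubkey_bytes)
    try:
        public_key.verify(signature, image)   # raises if image was tampered with
        return True
    except InvalidSignature:
        return False

# Hypothetical usage:
#   image = open("ballot_firmware.bin", "rb").read()
#   sig = open("ballot_firmware.sig", "rb").read()
#   if not firmware_is_authentic(image, sig, VENDOR_PUBLIC_KEY):
#       refuse_to_boot()
```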

And the final numbers on government websites may not be accurate, either. An error on the US state of Georgia’s website regarding the number of registered voters, which suggested more people had voted than were allowed to, sparked confusion this month.

With the November elections due, it looks as though, once more, American voters will just have to hope no one is hacking their vote. But some in government have taken an interest.

“It’s been incredible the response we’ve received,” said village cofounder and University of Pennsylvania professor Matt Blaze. “We’ve had over 100 election officials come through here and they expressed over and over again how much they have appreciated learning from this opportunity.”

Fresh from his keynote, former NSA top hacker and White House cyber czar Rob Joyce popped in to chat as well. He praised the work done by those involved, which had been criticised indignantly by some manufacturers before and during the show.


“Believe me, there are people who are going to attempt to find flaws in those [election] machines whether we do it here publicly or not,” he said. “So, I think it’s much more important that we get out, look at those things, and pull on it.”

Incidentally, on Wednesday, US Republican senators shot down $250m in emergency election security funding proposed by Senator Patrick Leahy (D-VT) – a figure that Hacking Village cofounder Jake Braun told The Register was too small by a factor of 10 if the November elections were to be anywhere close to secure. Cost concerns were cited by the ruling party as a key factor in that decision.

A few days later the President of the Senate, Mike Pence, announced plans for a new super-duper Space Force for orbital warfighting, something the Air Force Space Command already has a firm grip on. The up-in-the-air scheme has an estimated cost of $8bn. ®

* Diebold Nixdorf sold off Premier, its US election systems division, several years ago.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/13/defcon_election_vote_hacking/

The Data Security Landscape Is Shifting: Is Your Company Prepared?

New ways to steal your data (and profits) keep cropping up. These best practices can help keep your organization safer.

The world around online data is changing, and with it the landscape of business is facing an irreversible shift. Not only in terms of regulations — with the European Union’s General Data Protection Regulation enacted — but in the way businesses actually use and have access to data. An increasing number of businesses are moving their data to the cloud, which brings a different set of security issues.

Through the cloud, hackers can shut down your business for weeks — or longer. They can steal not only your data but your resources. Companies often don’t take this threat seriously enough. The cloud feels invisible, so we make the incorrect assumption that it’s inaccessible. I’ve seen many powerful companies neglect simple steps — like properly training their staff, having a backup in place, or limiting employees’ access — which can make the difference between powerful profits and destruction due to cybercrime.

Thankfully, there are steps you can take to protect your company, and they aren’t that complicated. Implement these five best practices to keep your data (and profits) out of the hands of hackers.

1. Make sure your IT staff is highly trained and available. Too many companies relegate their IT crew to a dusty office in the back, with no supervision and little training. They’re treated as outsiders, usually because they have a different skill set and a different goal than the rest of the team — they’re there to support the company, rather than expand the company like sales or product development. Because of this, IT faces the same kinds of problems people in other supportive departments, like HR, face: they’re taken for granted, denied the resources they need, and sometimes even unfairly blamed when something related to their department goes wrong.

But these supportive departments are vital to your company’s functioning, and you neglect them at your peril. The IT world is constantly shifting, so the training for your employees needs to be updated constantly if you want them to succeed. Provide them with the right resources and ever-updated training, and you’ll find yourself with an IT team ready and able to support your employees and protect your data. Proper training means your IT staff will see the incoming problems before they arrive — and, for the problems that do come, your IT team will be well equipped to handle them.

2. Educate your regular staff and develop a company-wide policy. Once your IT staffers are highly trained, you can rely on them to develop a company-wide security policy that you can implement and enforce. Many details will depend on your company and industry, but there are a few basic practices that every company should employ.

  • Create a process everyone knows how to follow, including a two-factor authentication system, strong passwords, and access to a private network.
  • Don’t let your employees use their personal phones for company work; if you must allow it, make sure the right certificates are installed on their devices.
  • Put up every protective barrier you can against hacking, and make those safeguards a standard part of your company policy — one that your employees understand, are trained in, and can implement. Having the best security tech in the world will mean nothing if your staff isn’t taking it seriously.

3. Everything is on a need-to-know basis. Your employees do not need access to everything — they only need access to what’s relevant to them. Policy comes into play here again: Make sure each level of access is protected by two-factor authentication and strong passwords, and work with your IT team to see that everyone has the access they need — but no more. (A minimal sketch of this approach appears after this list.)

4. Back everything up. Yes, this step may seem basic, but I’ve seen plenty of otherwise savvy executives avoid it. They don’t want to deal with the work of creating a backup to all their data, all their code, all their important financial info — it feels like a hassle, but it’s absolutely essential. You’re at risk of losing an incredible amount of work if you don’t have a backup to turn to in case of an emergency. What’s more, make sure you back it up somewhere private. Don’t simply rely on Google Drive! On top of all this, charge your IT team with having their own backups and regularly taking snapshots of their work. There are plenty of tools available to make this simple; just make sure your IT staff really is using them. Then, of course, make sure all those backups are private and insulated extensively from attack.

5. Prepare for the worst. No matter how much you prepare, there will always be risk involved with any kind of online data storage. Inherent risk means you must be inherently prepared. Have a disaster recovery plan in place, ready to go if a hacker destroys your data. Recovery from this kind of attack is all about speed — the longer your company is down, the greater the damage will be long-term. No insurance or reparations can make up for the potential business your company loses when it’s out of commission — which means a speedy recovery, more than anything else, determines whether a company will be able to bounce back. Use the backup you developed to rebuild and relaunch your programs; make it automated if possible. Set a plan in place so that, when the worst happens, you can turn it around quickly, efficiently, and effectively.
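To make the authentication and least-privilege advice in items 2 and 3 concrete, here is a minimal sketch that combines a TOTP second factor with a simple role-to-permission lookup. It assumes the third-party pyotp package, and the role and permission names are invented examples, not a complete access-control design.

```python
# Minimal sketch of items 2 and 3: verify a TOTP second factor, then
# grant only the permissions mapped to the employee's role. Assumes the
# third-party "pyotp" package; roles and permissions are invented.
import pyotp

ROLE_PERMISSIONS = {
    "payroll_clerk": {"read_payroll"},
    "it_admin": {"read_payroll", "manage_servers"},
}

def second_factor_ok(user_secret: str, submitted_code: str) -> bool:
    # user_secret is a per-user base32 seed (e.g. pyotp.random_base32())
    # provisioned into the employee's authenticator app.
    return pyotp.TOTP(user_secret).verify(submitted_code)

def authorize(role: str, permission: str, user_secret: str, code: str) -> bool:
    if not second_factor_ok(user_secret, code):
        return False
    return permission in ROLE_PERMISSIONS.get(role, set())

# Example: a payroll clerk can read payroll data but not manage servers.
secret = pyotp.random_base32()
code = pyotp.TOTP(secret).now()
print(authorize("payroll_clerk", "read_payroll", secret, code))    # True
print(authorize("payroll_clerk", "manage_servers", secret, code))  # False
```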

If hackers have your company as a target, they’ll either want to attack you and bring your network down, or they’re simply after financial gain. Many hackers will try to gain access to your Amazon cloud account, steal the key, and run up a prolific amount of CPU usage just to mine digital currency. Every day they develop more strategies, and every day they find new incentives. But whatever the goal, a cyberattack can mean the loss of profits or, worse, a permanent shutdown of your entire company. As our data and business landscape shifts and changes with developing technology (and corresponding government policies), make sure you and your team are prepared to ride any wave that comes your way.


Francis Dinha is the CEO and co-founder of OpenVPN, a security-focused open source VPN protocol. With more than 50 million downloads, OpenVPN has been in the open source networking space since its founding in 2004. Its Private Tunnel service provides “last mile” security to …

Article source: https://www.darkreading.com/vulnerabilities---threats/the-data-security-landscape-is-shifting-is-your-company-prepared/a/d-id/1332474?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple