STE WILLIAMS

Student charged by FBI for hacking his grades more than 90 times

In college, you can use your time to study. Or then again, you could perhaps rely on the Hand of God.

And when I say “Hand of God,” what I really mean is “keylogger.”

Think of it like the “Nimble Fingers of God.”

“Hand of God” (that makes sense) and “pineapple” (???) are two of the nicknames allegedly given to keyloggers used by a former University of Iowa wrestler and student, who was arrested last week on federal computer-hacking charges over a high-tech cheating scheme.

According to the New York Times, Trevor Graves, 22, is accused in an FBI affidavit of working with an unnamed accomplice to secretly plug keyloggers into university computers in classrooms and in labs. The FBI says keyloggers allowed Graves to record whatever his professors typed, including credentials to log into university grading and email systems.

Court documents allege that Graves intercepted exams and test questions in advance and repeatedly changed grades on tests, quizzes and homework assignments. This went on for 21 months – between March 2015 and December 2016. The scheme was discovered when a professor noticed that a number of Graves’ grades had been changed without her authorization. She reported it to campus IT security officials.

The FBI affidavit claims that Graves changed his grades more than 90 times during those 21 months. He also allegedly changed grades on numerous occasions for at least five of his classmates.

Grades were allegedly tweaked in a wide range of classes, including in business, engineering and chemistry.

The FBI said it spoke with one student who told them that Graves shared copies of about a dozen exams before students sat down to take them. According to the FBI, the student said that he/she accepted the stolen exam, given that everybody else was doing it and they didn’t want their grade to suffer in comparison:

He/she knew Graves was providing the copies to other students and did not want the grading curve to negatively impact his/her scores.

When investigators searched Graves’ off-campus apartment in Iowa City in January, they seized keyloggers, cellphones and thumb drives that allegedly contained copies of intercepted exams. The FBI says one of the phones contained a screenshot showing Graves logged into a professor’s email account, with an attachment entitled “exam” highlighted, according to the FBI affidavit.

Some of the alleged discussions found in text messages on Graves’ phone:

  • Graves instructing a classmate to go to a microeconomics class to confirm that the teacher logged into her account and “that we acquired the info.”
  • Graves and an associate referring to a keylogger as a large tropical fruit. “Pineapple hunter is currently laying in wait in a classroom already,” Graves allegedly wrote.
  • A student identified as A.B. in court documents urged Graves to use the keylogger to steal an upcoming test, saying “I need 100 on final just to get B- at this point.” Graves’ reply: “Or we could use the time to study?”
  • A student identified as Z.B. asked Graves whether he had told a classmate “about the Hand of God on that test.” Graves’ reply: “No. The less people know the better.”

The university told the FBI that it cost the school $68,000 to investigate the breach and to beef up its IT security. Earlier this year, the university warned students that those involved could face expulsion or suspension. Investigators searched the homes and devices of at least two other students, but they haven’t been charged.

I don’t know how much of an altitude boost Graves gave his grades. Not that it matters. Criminal behavior is criminal behavior, whether you’re popping your A up to an A+ or dragging your Fs up to straight As – as did a former Purdue University student who was sentenced to 90 days in jail plus 100 hours of community service for his part in a keylogger scheme.

Is it child’s play to plug in a keylogger? Yes. Literally.

Eleven Southern California kids got kicked out of school for grade hacking with the devices back in 2014.

Keyloggers are cheap, they’re easy, and the targets – schools and universities – too often have paltry budgets for equipment, software and skilled administrators.

You would imagine that it would make sense to use multifactor authentication to protect at least the most grade-hacking-targeted areas of a school’s network – the grading and testing parts of the system. But somehow, even a technology powerhouse like Purdue has been preyed on by keylogger-bearing, ethics-bare students.
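One common form of multifactor authentication is a time-based one-time password (TOTP, RFC 6238): even a keylogged password is useless without the current six-digit code from the user’s phone. Below is a minimal, stdlib-only Python sketch of the server-side check – illustrative only, not any particular vendor’s implementation; function names and parameters are our own.

```python
import hashlib
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


def totp_valid(secret: bytes, submitted: str, step: int = 30, skew: int = 1) -> bool:
    """RFC 6238 TOTP check: accept adjacent time steps to tolerate clock drift."""
    now = int(time.time()) // step
    return any(hmac.compare_digest(hotp(secret, now + d), submitted)
               for d in range(-skew, skew + 1))
```

A keylogger can capture the six digits, but they expire within a time step or two, so a replayed grade-portal login fails.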

Readers, do you have insights into what’s keeping schools from securing themselves? Please do share them in the comments section below.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/7498xwICoRI/

US government wants “keys under doormat” approach to encryption

No, US Deputy Attorney General Rod Rosenstein did not call for tech giants like Apple, Google and Microsoft to keep plaintext copies of all your communications lying around just in case the FBI or other law enforcement agencies come calling with a warrant.

Unfortunately, that is the way some reports spun Rosenstein’s latest salvo in the encryption wars, which came at the end of a speech delivered Monday at the North American International Cyber Summit in Detroit.

Unfortunate, because Rosenstein can now complain that his comments were distorted – a distraction from the fact that what he did say improved on neither the practicality nor the privacy implications of the remarks he made several weeks earlier at a cybersecurity conference in Boston and at the Naval Academy.

The Deputy AG, echoing former FBI director James Comey, had argued then that unbreakable encryption is allowing criminals to “go dark,” preventing law enforcement from doing its job by denying it the ability to detect, prevent and collect evidence of crimes.

This week the message was slightly modified. Rosenstein said encryption serves “a valuable purpose.” He called it “a foundational element of data security and essential to safeguarding data against cyber-attacks.”

And he said he supports “strong and responsible encryption,” which to him means “effective, secure encryption, coupled with access capabilities.”

I simply maintain that companies should retain the capability to provide the government unencrypted copies of communications and data stored on devices, when a court orders them to do so… When a court issues a search warrant or wiretap order to collect evidence of crime, the company should be able to help.  The government does not need to hold the key.

According to The Register, this meant Rosenstein was proposing to “let people send stuff encrypted as normal, but a plaintext copy of everything – from communications to files on devices – must be retained in an unencrypted form for investigators to delve into as needed.”

Not exactly. Companies don’t have to store everything in plaintext. They just need to have a key – which they can keep – to render data in plaintext when law enforcement comes bearing a warrant. Tufts University professor Susan Landau, writing on the Lawfare blog last week, called this the “Keys Under Doormat” concept.

Two years ago, Landau was among more than a dozen coauthors of a lengthy paper in the Oxford Academic Journal of Cybersecurity by that title, which presents multiple reasons why such a concept won’t work. The very short version is that “there is no safe way to do this; any system that provides a way in for law enforcement will inevitably be subverted by hackers.”
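The structural objection can be seen even in a toy model of key escrow. The sketch below is illustrative only – the “cipher” is a toy XOR keystream built from SHA-256, not real cryptography, and every name in it is hypothetical. Each message’s data key is wrapped under both the user’s key and a provider-held escrow key; that escrow copy is precisely the extra attack surface the paper warns about, because anyone who obtains the escrow key can decrypt without the user’s cooperation.

```python
import hashlib
import os


def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256 counter-mode keystream.
    Illustrative only -- do not use for real encryption."""
    out = bytearray()
    for block in range(0, len(data), 32):
        ks = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[block:block + 32], ks))
    return bytes(out)


def encrypt_with_escrow(message: bytes, user_key: bytes, escrow_key: bytes) -> dict:
    """Wrap a fresh data key for the user AND for the escrow holder."""
    data_key = os.urandom(32)
    return {
        "ciphertext": keystream_xor(data_key, message),
        "wrapped_for_user": keystream_xor(user_key, data_key),
        "wrapped_for_escrow": keystream_xor(escrow_key, data_key),  # the "doormat"
    }


def decrypt(record: dict, key: bytes, wrapped_field: str) -> bytes:
    """Unwrap the data key with either key, then recover the message."""
    data_key = keystream_xor(key, record[wrapped_field])
    return keystream_xor(data_key, record["ciphertext"])
```

The point of the paper is that “wrapped_for_escrow” must be protected, at scale, for every user, forever – and whoever steals that one key (or compels its use) defeats the encryption for everyone.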

Still, Rosenstein also argued that tech companies already circumvent encryption to provide access to data for other reasons. He mentioned, as he has in previous speeches:

… systems that include central management of security keys and operating system updates; scanning of content, like your e-mails, for advertising purposes; simulcast of messages to multiple destinations at once; and key recovery when a user forgets the password to decrypt a laptop.  No one calls any of those functions a “backdoor.” In fact, those very capabilities are marketed and sought out.

Bruce Schneier, CTO of IBM Resilient Systems and another coauthor with Landau of the “Keys Under Doormat” paper, mocked that logic several weeks ago. He noted that it is absurd to think that encryption can be made to “work well unless there is a certain piece of paper (a warrant) sitting nearby, in which case it should not work.”

Mathematically, of course, this is ridiculous. You don’t get an option where the FBI can break encryption but organized crime can’t. It’s not available technologically.

Schneier said this week that Rosenstein’s latest speech doesn’t improve on that absurdity.

He can get a warrant for any of those less secure services already. He doesn’t want that. He wants access to the more secure services that don’t have that corporate back door.

Why do we care if he modifies his rhetoric in order to more successfully hoodwink his listeners?

Rosenstein also sought to persuade with a collection of alarming statistics (reproduced here so that you know what he said, not as an endorsement of their veracity by Naked Security):

  • DDoS attacks can amount to 18% of a country’s total internet traffic.
  • The global cost of cybercrime is expected to spike from $3 trillion in 2015 to $6 trillion in 2021.
  • Ransomware attacks are up from 1,000 per day in 2015 to 4,000 per day since the start of 2016. The FBI says ransomware infects more than 100,000 computers per day worldwide.
  • The “WannaCry” ransomware, besides infecting hundreds of thousands of computers, paralyzed Britain’s National Health Service.

Privacy advocates have heard all that and more, and say they take seriously the need for law enforcement to have the tools it needs to bring criminals to justice. But they contend that law enforcement already has vast surveillance capabilities – what Schneier has more than once called, “a golden age of surveillance” – and complying with government demands to defeat encryption even in allegedly selective circumstances will damage public safety rather than improve it.

Gary McGraw, vice president of security technology at Synopsys, scoffs at Rosenstein’s claim that everybody’s privacy will be protected as long as companies, not government, hold the key to unlock encrypted data (key escrow). That simply demonstrates that he doesn’t understand encryption. “He’s an idiot,” McGraw said.

Kurt Opsahl, general counsel of the Electronic Frontier Foundation, in a blog post earlier this month, said Rosenstein has it wrong from the start when he contends that society has never before had a system where, “evidence of criminal wrongdoing was totally impervious to detection.”

Rosenstein is apparently unaware of in-person conversations and, until a couple of decades ago, pay phones, he said.

And he is as scornful as McGraw about what he calls Rosenstein’s “magical dream of secure golden keys.”

First, perfect security is an unsolved problem. No one, not even the NSA, knows how to protect information with zero chance of leaks. Second, the security challenge of protecting a signing key, used only to sign software updates, is much less than the challenge of protecting a system which needs access to the keys for communications at the push of a button, for millions of users around the globe.

Opsahl notes that the Department of Justice has called for an “adult conversation” about encryption.

“This is not it,” he said. “The DoJ needs to understand that secure, end-to-end encryption is a responsible security measure that helps protect people.”

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/UjxaO5KOwLQ/

Hackers abusing digital certs smuggle malware past security scanners

Malware writers are widely abusing stolen digital code-signing certificates, according to new research.

Malware that is signed with compromised certificates creates a means for hackers to bypass system protection mechanisms based on code signing. The tactic extends far beyond high profile cyber-spying ops, such as the Stuxnet attack against Iranian nuclear processing facilities or the recent CCleaner-tainted downloads infection.

Security researchers at the University of Maryland found 72 compromised certificates after analysing field data collected by Symantec on 11 million hosts worldwide. “Most of these cases were not previously known, and two thirds of the malware samples signed with these 72 certificates are still valid, the signature check does not produce any errors,” Tudor Dumitras, one of the researchers, told El Reg.

“Certificate compromise appears to have been common in the wild before Stuxnet, and not restricted to advanced threats developed by nation-states. We also found 27 certificates issued to malicious actors impersonating legitimate companies that do not develop software and have no need for code-signing certificates, like a Korean delivery service.”

Malware creators may not even need to control a code-signing certificate. The Maryland Cybersecurity Centre team found that simply copying an Authenticode signature from a legitimate file to a known malware sample – which results in an invalid signature – can cause antivirus products to stop detecting it.

“This flaw affects 34 antivirus products, to varying degrees, and malware samples taking advantage of this are also common in the wild,” Dumitras said.
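The flaw can be modelled in a few lines: a scanner that merely checks for the *presence* of a signature blob behaves very differently from one that actually *verifies* it against the file’s contents. The toy model below is hypothetical – it bears no resemblance to the real Authenticode format, and all names in it are our own – but it captures the distinction the researchers describe.

```python
import hashlib


def sign(payload: bytes, signer: str) -> dict:
    """Toy model: a 'signature' binds the signer to a hash of the payload."""
    return {"payload": payload,
            "sig": {"signer": signer,
                    "digest": hashlib.sha256(payload).hexdigest()}}


def naive_scanner_trusts(file: dict) -> bool:
    """Mimics the flawed behaviour: a signature blob is present, so skip detection."""
    return "sig" in file


def verify(file: dict) -> bool:
    """Proper check: the signed digest must match the actual payload."""
    return file["sig"]["digest"] == hashlib.sha256(file["payload"]).hexdigest()


legit = sign(b"calc.exe bytes", "Reputable Corp")
# Attacker copies the signature blob onto different bytes -- invalid, but present.
malware = {"payload": b"dropper bytes", "sig": legit["sig"]}
```

A naive scanner trusts `malware` because a signature exists; actual verification rejects it because the digest no longer matches the payload.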

A paper on the topic, Certified Malware: Measuring Breaches of Trust in the Windows Code-Signing PKI (PDF), is due to be presented at the CCS conference in Dallas, TX, on Wednesday. The researchers plan to release a list of the abusive certificates at signedmalware.org.

A separate study by the Cyber Security Research Institute (CSRI), out this week, uncovered code-signing certificates readily available for purchase on the dark web for up to $1,200 (£902).

Code-signing certificates are used to verify the authenticity and integrity of computer applications and software. Cyber criminals can take advantage of compromised code-signing certificates to install malware on enterprise networks and consumer devices.

“We’ve known for a number of years that cyber criminals actively seek code-signing certificates to distribute malware through computers,” said Peter Warren, chairman of the CSRI. “The proof that there is now a significant criminal market for certificates throws our whole authentication system for the internet into doubt and points to an urgent need for the deployment of technology systems to counter the misuse of digital certificates.”

Code-signing certificates can be sold many times over, according to Venafi, a security firm that specialises in machine-to-machine identity protection. ®

Sponsored:
The Joy and Pain of Buying IT – Have Your Say

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/01/digital_cert_abuse/

America’s 2020 Census systems are a $15bn cyber-security tire fire

Analysis In 2020, America will run its once-a-decade national census, but the results may not reflect reality if hackers manage to have their way.

On Tuesday, the US Senate Homeland Security and Governmental Affairs Committee heard that the 2020 census will be the first to make extensive use of electronic equipment. For example, census workers will be given tablets to interview people who can’t be bothered filling in and sending back their forms.

Crucially, the US Census Bureau must patch vulnerabilities and install strong defenses in the computer systems it has set up to find and tabulate American citizens. With less than three years to go, a little more hustle in that department is needed, it seems.


“The bureau has not addressed several security risks and challenges to secure its systems and data, including making certain that security assessments are completed in a timely manner and that risks are at an acceptable level,” Eugene Dodaro, the US Comptroller General, said in a statement read out during the Senate hearing.

“It is important that the bureau quickly address these challenges.”

Previously, the census was recorded by mailing paper forms to every household in the country, and then dispatching data collectors to quiz citizens who don’t return their completed paperwork. Dodaro reported that “because the nation’s population is growing larger, more diverse, and more reluctant to participate,” response rates were at an historic low: just 63 per cent of households replied by mail in 2010 compared to 78 per cent in 1970.

As a result, the bureau had to recruit a load of temporary workers to manually obtain people’s details. After the 2010 census, someone had the bright idea to make the process more electronic, with workers using fondleslabs to input data.

This was billed as a cost-saving measure but those familiar with large IT projects can see where this is going. The 2010 census cost $12.3bn to carry out, up 31 per cent on the 2000 poll, and the 2020 exercise is expected to cost $15.6bn, and costs may yet rise higher.

Part of the reason for the massive cost increase is the aforementioned use of handheld electronic devices by workers. The data collectors were supposed to use their own phones when out in the field, but this was scrapped once someone, thankfully, thought through the security and compatibility implications.

High tech doesn’t equal secure

Dodaro said the US Government Accountability Office has identified 43 electronic systems that are to be used in the 2020 census. None have undergone the required security certification – and one, the code used to tabulate all the data, won’t even finish development until March 2019 at the earliest. Any assessment and debugging of this software will be rather last minute.

As a result, the GAO has declared the Census Bureau a “high-risk agency,” and wants to conduct a thorough test of all its systems next year. However, some of the electronic systems won’t be ready by then; those that are ready are already showing problems, and most haven’t undergone penetration testing.

For example, testing this year in three regions of the country revealed problems in transmitting addresses and maps to workers’ devices, and, in seven cases, information was accidentally deleted from the slabs. Census collectors also found problems with cellphone coverage that meant they had to drive to the nearest town to file their results.

The situation is complicated further by staffing turmoil within the bureau. The head of the agency resigned shortly after Trump was elected, and Dodaro reported that as of October last year 60 per cent of positions at the bureau were unfilled.

Add into this the experience in online censuses from Down Under. Last year the Australian census, which was also trialing an IT-heavy approach to gathering data, was taken offline when a distributed-denial-of-service attack knocked down servers.

The other concern is that the data could be manipulated by hackers unknown. And the US census data is vital not only for government planners but also for the politics of the republic itself.

Census data is used to determine congressional districts for voting by assessing how many people live in a certain area. It is also used to devise education and public sector funding so that the needs of the population can be met.

Dodaro said the GAO is keeping a close eye on the systems and will be conducting further security testing – if the code is ready for it. He said he was hopeful the system could be made secure, but we’ve all heard that before. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/01/us_2020_census_insecure/

Virtually everyone in Malaysia pwned in telco, govt data hack spree

The personal data of millions of Malaysians has been swiped by hackers who raided government servers and databases at a dozen telcos in the southeast Asia nation.

Information on 46.2 million cellphone accounts was slurped from Malaysian telecoms providers. To put that in context, the population of Malaysia is 31.2 million; obviously, some people have more than one number.

The stolen telco records include people’s mobile phone numbers, SIM card details, device serial numbers, and home addresses, all of which are useful to identity thieves and scammers. Some 80,000 medical records were also accessed during the hacking spree, and government websites as well as Jobstreet.com were attacked and infiltrated, too, we’re told.

The Malaysian Communications and Multimedia Commission, along with the police, are probing the computer security breaches. DiGi.Com and Celcom Axiata are among the dozen compromised telcos assisting investigators.

The intrusions were first reported by Malaysian news site lowyat.net, which spotted, in the middle of last month, a mystery scumbag trying to flog the stolen data for Bitcoins.

Malaysian officials confirmed this week that nearly 50 million mobile phone account records were accessed by hackers unknown. The authorities also warned that people’s private data was stolen from the Malaysian Medical Council, the Malaysian Medical Association, the Academy of Medicine, the Malaysian Housing Loan Applications body, the Malaysian Dental Association, and the National Specialist Register of Malaysia.

It’s believed the systems were actually hacked as far back as 2014, The Star reported.

Incredible as it may seem, there are at least a couple of precedents for a huge chunk of an entire country’s population getting caught up in a database security breach. The personal records of every pensioner in South Africa spilled online only last month. Almost everyone who had a credit card in South Korea was pwned back in 2014 in another unedifying security cockup. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/01/malaysia_telco_government_hack/

iPhone 7, Samsung Galaxy S8, Others Hacked in Pwn2Own

Researchers participating in the Mobile Pwn2Own 2017 competition developed exploits for the iPhone 7, Samsung Galaxy S8, and others.

Participants in the Mobile Pwn2Own 2017 competition successfully hacked into Apple’s iPhone 7, Samsung’s Galaxy S8, and Huawei’s Mate 9 Pro during the first day of competition, according to event organizer Trend Micro’s Zero Day Initiative (ZDI).

The two-day event offers prize money in excess of $500,000, of which $345,000 was earned during the first day, according to a SecurityWeek report. All vulnerabilities exploited during the competition will be disclosed to the vendors, which will have 90 days to issue a fix before ZDI publishes a limited advisory with mitigation suggestions, according to ZDI.

A team from Tencent Keen Security Lab discovered four vulnerabilities in the Apple iPhone 7 running iOS 11.1 that could lead to remote code execution through a Wi-Fi bug and privilege escalation to persist through a reboot, ZDI says. The Tencent team earned $110,000 for the four bugs.

360 Security, meanwhile, found a bug in the Samsung Internet browser, in which privileges could be escalated in a Samsung app to also persist through a reboot, notes ZDI. 360 Security earned $70,000 with their demonstration.

Learn more about the Mobile Pwn2Own 2017 competition here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/mobile/iphone-7-samsung-galaxy-s8-others-hacked-in-pwn2own/d/d-id/1330296?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Two drones, two crashes in two months: MoD still won’t say why

A damning Ministry of Defence report into the department’s safety oversight systems has revealed when two unmanned aerial vehicles crashed into the sea off Wales.

The Watchkeeper WK450-series drone fleet, built and partially operated by French defence contractor Thales, has been marred by a number of crashes in British service over the past few years.

The MoD has tried to keep the crashes hidden from the public, not admitting they had happened until a chance remark made by an admiral in September disclosed this year’s incidents.

The two drones, tail numbers WK042 and WK043, crashed within seven weeks of each other, in February and March this year. Both “remotely piloted aerial systems” (RPAS) were lost in Cardigan Bay, immediately west of West Wales Airport, Aberporth, causing the remaining 52 drones to be grounded for four months.

The Watchkeeper is undergoing lengthy flight testing with the Army. Initially proposed as a surveillance drone, the programme has achieved relatively little for the 12 years and £1.2bn+ spent on it: the Bureau of Investigative Journalism found two years ago that the drones had seen a total of 146 hours of active duty – equating to two days’ operational flying each.

WK042 was lost in the sea on February 3, according to the Defence Safety Authority’s latest biannual report published this week. The drone was being flown from its ground station by a combined crew of Thales and UAV Tactical Systems (UAVTS) operators who were testing de-icing equipment. Historical weather data for Aberystwyth, around 20 miles north of Aberporth and also on the edge of Cardigan Bay, shows that temperatures on the day averaged about 9°C.

WK043 was lost on March 24 in the same area while being flown by a combination of Army, Thales and UAVTS operators, while a soldier was being trained to pilot the drone. Watchkeepers cannot be flown in the “stick and rudder” sense; instead, operators select waypoints on a map display and the drone flies itself towards them.

Though the MoD also operates other drones, including Predators bought from America, these seem to crash less often. However, an MQ-9 Reaper, serial number ZZ205, is now the subject of a formal service inquiry following an unspecified incident in August 2016.

We asked the MoD to comment on what type of investigation is being undertaken into the latest Watchkeeper crashes and why it did not come clean about the crashes when they first occurred. A spokesman for the department told us that no injury or loss of life had occurred and added: “We paused Watchkeeper flying for a short period whilst conducting initial investigations, but resumed flight trials in early July. Service inquiries into the specific incidents are ongoing as we look to learn all we can from the events.”

A Watchkeeper operational field trial exercise has been taking place from West Wales Airport since mid-September and is due to end this Sunday, November 5. A company-sized unit from 47 Regiment, Royal Artillery, has been flying its drones from the Aberporth airfield.

A safety and reputational risk in the making

Watchkeeper is a relatively old programme (as this authoritative history by top blogger Think Defence explains) that has absorbed large sums of public money for a return that is largely invisible to the taxpayer.

Previous crashes highlighted Watchkeeper’s poor controllability in marginal weather conditions and poor onboard software, in combination with its use of laser altimeters as a primary height sensor – a strange choice for aircraft operating in the wet and cloudy skies of Wales. While El Reg has sympathy for trainees cocking up a landing, this does not excuse the MoD for hiding behind self-regulation to pretend that all is well.

As Watchkeepers do make flights over the UK mainland as well as the sea, the MoD must be open and transparent about when its unmanned aircraft crash and why. Civil aviation has long grasped this essential point and the Air Accidents Investigation Branch is proactive with publishing interim reports into civil aircraft crashes and malfunctions. In contrast, the Military Aviation Authority says very little on matters of public interest.

There is no justification for the MoD keeping quiet and hoping nobody notices just because these latest crashes were into the sea and didn’t involve human victims. For all the public knows, the next Watchkeeper crash could result in one of these million-pound airframes landing in their back gardens – or even a built-up area. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/01/thales_watchkeeper_crashes/

Punctual as ever, Equifax starts snail-mailing affected Brits about mega-breach

UK financial service regulators only learned of the Equifax mega-breach through media reports.

The admission comes in correspondence from the Financial Conduct Authority (FCA) released by Treasury Committee on Tuesday. A letter from the FCA to Nicky Morgan MP, chair of the Treasury Committee, confirms that the regulator is looking into the credit reference agency’s much-criticised handling of the breach.

Equifax initially said a breach that affected 145 million US consumers also hit 400,000 Brits. That admission came on September 15 but a month later the firm admitted that it had underestimated the impact of the breach on UK customers.

Equifax admitted that a file containing 15.2 million UK records dated between 2011 and 2016 had been exposed as a result of the snafu. Most of these were duplicates or test data, meaning the private details of almost 700,000 people had actually been exposed. Equifax said it would be contacting affected UK consumers by post.

Regulatory response

The FCA letter clarifies that these 693,665 customers had their driving licence numbers and email addresses associated with an Equifax.co.uk account exposed in 2014. The FCA is content to accept Equifax’s figures for the number and details of records exposed but still has questions about how long it took Equifax to come up with these figures.

Regulators back Equifax’s plan to notify affected parties by letter rather than email because of the risk of copycat phishing campaigns.

The breach, which stemmed from a missed Apache Struts patch, was open from May 2017 until it was discovered in July. Equifax then sat on the news for weeks before going public in September, and mishandled the breach notification process at almost every turn, as extensively covered in previous Register stories. Issues have included a bespoke breach notification site so shaky that security scanners thought it was a phishing site, attempts by senior management to blame the whole sorry mess on a single unnamed techie, and more.

Much more.

Equifax has come under fierce criticism from both consumer rights advocates and infosec experts, not least because it sells identity protection services. Consumers are stuck with Equifax whether they like it or not because its services are used by businesses to check individuals’ creditworthiness.

In its letter to Morgan, the FCA touches on this point without giving away any details of its ongoing investigation.

Credit reference agency firms are subject to the high level principles of the FCA regulatory regime, which include requirements on treating customers fairly and on ensuring adequate risk management, systems and controls. They are also subject to relevant data protection legislation which is enforced by the Information Commissioner’s Office (ICO).

While our investigation is under way, it would be inappropriate at this stage for us to comment publicly on what rules might potentially have been engaged.

The FCA adds that it is working in lockstep with the Information Commissioner’s Office, which is also running a related but separate investigation into Equifax.

Equifax is really, really sorry

The Treasury Committee also released an 11-page letter (PDF) put together by Equifax’s European president, Patricio Remon, in response to questions put to it by Morgan. The chair of the Treasury Committee asked Equifax about the scale of the breach and what compensation it intended to provide, at the same time as she wrote to the FCA. Equifax’s UK business is authorised by the FCA, which has the power to fine Equifax, order it to take remedial action or (in extremis) revoke its right to operate in the UK.

Equifax said it began notifying the customers most exposed by letters posted on October 13. “Consumers who have potentially had their driving licence numbers or Equifax.co.uk membership information impacted have been offered a free comprehensive ID protection service, that will enable them to monitor their personal data,” the company told Morgan.

It went on to officially confirm that it had hired cybersecurity firm Mandiant to handle computer forensics and incident response. Based on this work, Equifax has “established that the UK core consumer credit data (such as balances and debts owed) or credit referencing systems were not impacted” by the breach. The data of UK customers exposed “related to an historic system that was used by some UK business customers to validate consumers’ identities”.

One of the exposed files contained 96,275 records, relating to around 27,000 UK subscribers to Equifax.co.uk services.

Equifax apologised for the breach in the response, which provides a detailed timeline of events from its perspective. Investigators will doubtless be looking closely at this timeline in assessing the credit reference agency’s breach response and considering whether the notification process was delayed without sufficient reason.

Reg reader David kindly forwarded a copy of one such letter, which he received on Tuesday (October 31), Halloween. An excerpt from this four-page letter is published below.

Equifax breach letter to UK customers

In its letter to Morgan, Equifax said it had trouble verifying the addresses of consumers. Even given that Equifax is dealing with a leak of historic data that is some years old, this is still quite an admission from a credit reference agency. The phrase “you had one job” comes to mind…

You had one job...

Equifax address verification uncertainty

Cold comfort

Equifax is keen to portray the mess as increasing the risk from cold-calling scammers and perhaps other types of phishing. “For the majority of impacted UK consumers we have notified, the main risk is unwanted cold calling,” it said.

The UK data leak did not include the information crooks are likely to need in order to pull off successful frauds, the company reasons.

In the US at least, fraud has already been recorded in relation to the breach. One woman from Seattle told local media that her identity has been stolen 15 times. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/01/equifax_breach_uk_notification/

How Wireless Intruders Can Bypass NAC Controls

A researcher at this month’s SecTor conference will demonstrate the dangers of not employing EAP-TLS wireless security.

Organizations using port-based network access control (NAC) devices to contain wireless intruders may be less secure than they assume.

Unless an organization is using the most secure WPA2-EAP authentication, an attacker with an initial foothold on the enterprise wireless network can bypass the protections enabled by NAC appliances and pivot deeper into the enterprise.

That’s according to Gabriel Ryan, security engineer at Gotham Digital Science, who will present a paper on the topic at the upcoming SecTor security conference in Toronto this month.

Ryan’s presentation on the “Black Art of Wireless Post-Exploitation” examines the implications of the common practice of using NAC appliances to contain attackers who may have breached the wireless network.

Often, companies employ this method to compensate for the relatively weak perimeter security provided by EAP-TTLS and EAP-PEAP authentication mechanisms, says Ryan. Both protocols have long been susceptible to so-called evil twin attacks for harvesting usernames and passwords. But many enterprises still continue to use TTLS and PEAP because the more secure certificate-based, two-way authentication provided by EAP-TLS is much harder to implement.
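The client-side difference between the two approaches can be sketched in a wpa_supplicant configuration. This is a rough illustration only: the SSID, identity, and certificate paths are placeholders, not values from any real deployment.

```
# Password-based PEAP: the client proves itself with a reusable password,
# which an evil-twin access point can attempt to harvest.
network={
    ssid="CorpWiFi"                         # placeholder SSID
    key_mgmt=WPA-EAP
    eap=PEAP
    identity="alice"
    password="s3cret"
    phase2="auth=MSCHAPV2"
    ca_cert="/etc/ssl/certs/radius-ca.pem"  # placeholder path
}

# Certificate-based EAP-TLS: mutual authentication, no reusable
# password for a rogue access point to capture.
network={
    ssid="CorpWiFi"
    key_mgmt=WPA-EAP
    eap=TLS
    identity="alice@example.com"
    ca_cert="/etc/ssl/certs/radius-ca.pem"
    client_cert="/etc/ssl/certs/alice.pem"
    private_key="/etc/ssl/private/alice.key"
}
```

The operational burden Ryan alludes to is visible here: the TLS variant requires issuing and distributing a client certificate and key to every device, which is why many shops settle for PEAP or TTLS.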

Rather than using EAP-TLS to try and prevent wireless breaches from happening, many organizations instead rely on NAC appliances to identify and quarantine any devices that might manage to breach their wireless network protections.

The problem with this approach is that it assumes a wireless device quarantined in a VLAN is truly isolated and cannot communicate with other devices on the network, when in reality it can.

“On a wired network if you violate a rule imposed by the NAC, the NAC will see you and quarantine you,” Ryan says. The model works because it banks on the assumption that the physical layer is secure.

“In wireless, you cannot keep two radio receivers from working with each other,” Ryan says. “Client isolation is a logical control, not a physical control.”

In a wireless network, WPA2-EAP provides the physical layer of protection. If weak forms of WPA2-EAP are used, an attacker can take control of the physical layer via rogue access point attacks and bypass NAC protections, he says.

At SecTor, Ryan will demonstrate two attacks. One is a so-called hostile portal attack to steal Active Directory credentials from a WPA2-EAP network without network access. The other is what Ryan describes as indirect wireless pivots, in which rogue wireless access points are used to bypass port-based access control completely.

Ryan’s hostile portal attack involves the use of a rogue wireless access point to force a client device that is trying to access an enterprise wireless network to connect with the attacker’s device instead so authentication credentials can be obtained. The hostile attack then leverages previously demonstrated techniques to crack the RADIUS passwords needed for the attacker’s device to fully associate with the victim client device.

The indirect wireless pivots method leverages the same technique to get an attacker device that is in a quarantined VLAN to communicate with a victim device in a restricted VLAN segment. The pivot involves forcing the victim device to associate with the attacker’s network via a rogue access point and then relaying traffic from the victim to an SMB share on the attacker’s system in the quarantine VLAN.

Attackers can use the technique to grab the NT LAN Manager hash from the victim device, crack it using previously demonstrated techniques, and eventually associate the victim device to the attacker in the quarantine VLAN segment.

“The takeaway here is that you cannot rely on NAC appliances as a means of compensating for the risk” of not using EAP-TLS, Ryan says. When designing security mechanisms for your network, take into account the way the underlying physical layer works, he notes. “Security controls that work on a wired network do not work the same on a wireless network.”

Join Dark Reading LIVE for two days of practical cyber defense discussions. Learn from the industry’s most knowledgeable IT security experts. Check out the INsecurity agenda here.

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.

Article source: https://www.darkreading.com/attacks-breaches/how-wireless-intruders-can-bypass-nac-controls/d/d-id/1330289?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How AI Can Help Prevent Data Breaches in 2018 and Beyond

Artificial intelligence startups are tackling four key areas that will help companies avoid becoming the next Equifax.

Equifax’s stunning data breach is a major headache for some 145 million Americans who could face identity theft for the rest of their lives. The breach has forever tarnished Equifax’s business and brand, and it has prompted the company to replace its CEO, CIO, and CSO. However, as we look at the coming year and as new technologies continue to evolve, it’s clear that artificial intelligence (AI) can have a powerful role in helping prevent future data breaches.

If we look at how the Equifax breach occurred, there’s a lot to learn — and even cause for optimism. As we know, it all came down to patching. Patching vulnerable, out-of-date software should be straightforward, but in reality, it never is. Although Equifax could have prevented this disaster, it’s hardly the first company to neglect a critical software patch. A 2016 survey found that 80% of companies that suffered a breach could have avoided it if they had used an available update. So why don’t organizations apply patches?

Sometimes the delay in patching is simply due to inadequate resources, or a lack of solid internal processes for immediately identifying the vulnerable software, testing the new patch, and deploying the fix. Often, firms delay so they can test a patch before applying it, to make sure they aren’t fixing one problem while creating others. Sometimes companies don’t even realize they are running software that is vulnerable: because of the complexity of sprawling applications, popular vulnerability scanning products miss important pieces of the puzzle, leaving holes for attackers to exploit. It appears that a combination of these factors played a role in Equifax’s situation.
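The first step in that process, identifying vulnerable software, can be illustrated with a minimal sketch: compare an inventory of installed components against a feed of known-fixed versions and flag anything still running an older release. The component names and version numbers below are illustrative examples, not a real advisory feed.

```python
# Minimal sketch: flag inventory entries older than the first fixed release.
# Component names and versions are illustrative examples only.

def parse_version(v):
    """Turn a dotted version string like '2.3.31' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def find_unpatched(inventory, fixed_versions):
    """Return components running a release older than the known fix."""
    flagged = []
    for name, installed in inventory.items():
        fixed = fixed_versions.get(name)
        if fixed and parse_version(installed) < parse_version(fixed):
            flagged.append(name)
    return flagged

inventory = {"struts": "2.3.31", "openssl": "1.1.1"}
fixed_versions = {"struts": "2.3.32"}  # hypothetical advisory feed

print(find_unpatched(inventory, fixed_versions))  # ['struts']
```

Real vulnerability scanners do far more than this, of course; the point of the sketch is that the hard part is not the version comparison but building a complete, accurate inventory in the first place, which is exactly where sprawling applications defeat the tools.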

The bright side? AI is driving exciting advancements in information security. Security professionals must get plugged into new technologies rather than relying only on old-school methods, because traditional solutions such as antivirus software won’t cut it. AI will fuel next-generation solutions, whether they’re focused on endpoints, analytics, or behavioral analysis. With the amount and velocity of data, and the sheer number of connections to monitor and manage, accelerating at an exponential rate, AI will be a critical component in preventing breaches like the one at Equifax.

How AI Could Help
Problems largely caused by human error specifically lend themselves to AI. Here are four areas that AI startups are investigating — and in some cases, are in the early stages of development:

1. Code development: Whether the software is from open source communities or from companies like Apple or Microsoft, one could ask why these vulnerabilities aren’t being found while the code is being put into production. Why would the Apache Foundation distribute software that has an obvious vulnerability? The reason is, when you’re talking about millions of lines of code and lots of new functionality, sometimes things get lost in the shuffle. There probably was rigorous testing in Equifax’s case, but people tend to look for things they’ve seen before. The existing tools to check for such vulnerabilities are also hard-wired by humans. AI would allow you to think of things a human couldn’t.

2. In-market testing: Once software is released to the market, there are products and service providers that find vulnerabilities in public-facing applications. Clearly, someone caught Equifax’s problem, but it took a long time and the damage was already done. AI would make testing and vulnerability-scanning tools more useful and close the gap between putting something into production that’s unsafe and knowing it’s unsafe.

3. Checking the patches: One reason that organizations (and people) are reluctant to download patches is that they often render old apps inoperable or cause them to lose functionality. Wouldn’t it be great if there were intelligence to look at the code and provide higher confidence that downloading the patch wasn’t going to break your application?

4. Benchmarking: Being a CISO isn’t a very attractive proposition if you’re likely to get fired if and when a major breach occurs. Because no one can prevent attacks 100% of the time, how can you hold security officers accountable in a fair way? One idea is to use AI to look at your industry category (such as banking or retail) and examine the firewalls, endpoint products, and other security products you’re using and how they are configured in your overall security stack. When you look at this list of complex configurations, you get an inter- and intra-company set of metrics. With AI monitoring and analyzing this data, you can see how you stack up against your peer group. Even if there were to be a security incident, you could let your board of directors know that you had gone above and beyond what your peers are doing by every other measure, perhaps saving your job.

There are other applications, too. AI could be used to find a personalized way to remind you to install a patch that makes it impossible to ignore, or more precisely find all of the application instances that need to be fixed. The bottom line is that AI is a powerful tool at our disposal to help avoid becoming the next big breach target.


Rick Grinnell is Managing Partner at Glasswing Ventures, an early-stage venture capital firm dedicated to building the next generation of AI technology companies that connect consumers and enterprises and secure the ecosystem.

Article source: https://www.darkreading.com/attacks-breaches/how-ai-can-help-prevent-data-breaches-in-2018-and-beyond/a/d-id/1330263?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple