
Waiting for Skynet? Don’t hold your breath

Some very smart people have a theory about Artificial Intelligence (AI): someday, if we’re not careful, machines will grow smart enough to wipe us all out.

Richard Clarke and R.P. Eddy devoted a chapter to it in their book Warnings: Finding Cassandras to Stop Catastrophes. Elon Musk, Stephen Hawking and Bill Gates have all warned about it; Apple co-founder Steve Wozniak has pondered a future in which AI keeps humans around as pets; and computer scientist Stuart Russell has compared the dangers of AI to those of nuclear weapons.

Musk in particular has become strongly associated with the subject. He’s suggested that governments regulate algorithms to thwart a hostile AI takeover, and has funded research teams via the Future of Life Institute (FLI) as part of a program aimed at “keeping AI robust and beneficial.”

AI run amok is also part of our culture now, and it’s central to the plot of landmark films and TV shows like 2001: A Space Odyssey, War Games, The Matrix and Westworld.

Fears of AI might make for good TV, but the warnings about our new robot overlords are overblown, according to Sophos data scientists Madeline Schiappa and Ethan Rudd.

The two explain why an AI takeover is unlikely in a new article on Sophos News.

They warn against spreading irrational panic and suggest a closer look at just how far machines are from achieving human-level intelligence. There are tasks that artificial neural networks (ANNs) just can’t handle, they point out. In particular, ANNs aren’t good at performing multiple heterogeneous tasks at the same time, and probably won’t ever be unless new learning algorithms are discovered:

An ANN trained for object recognition cannot also recognize speech, drive a car, synthesize speech, or do so many of the thousands of other tasks that we as humans perform quite well. While some work has been conducted on training ANNs to perform well on multiple tasks simultaneously, such approaches tend to work well only when the tasks are closely related (e.g., face identification and face verification).
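To make the “closely related tasks” point concrete, below is a minimal sketch – not from the Sophos article, and assuming the PyTorch library – of what training an ANN on multiple tasks simultaneously usually looks like: hard parameter sharing, with one shared trunk feeding two task-specific heads.

    # Illustrative sketch only: a tiny two-headed network for two closely
    # related tasks (face identification and face verification). The sizes
    # and names are arbitrary assumptions.
    import torch
    import torch.nn as nn

    class TwoHeadNet(nn.Module):
        def __init__(self, in_dim=128, hidden=64, n_identities=100):
            super().__init__()
            # Shared trunk: features reused by both tasks
            self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.id_head = nn.Linear(hidden, n_identities)  # who is this? (multi-class)
            self.verify_head = nn.Linear(hidden, 1)         # same person? (binary)

        def forward(self, x):
            h = self.trunk(x)
            return self.id_head(h), self.verify_head(h)

    net = TwoHeadNet()
    features = torch.randn(4, 128)              # stand-in for face embeddings
    id_logits, verify_logit = net(features)
    print(id_logits.shape, verify_logit.shape)  # torch.Size([4, 100]) torch.Size([4, 1])

Both heads lean on the same shared features; swap one of them for, say, speech transcription or driving and the trunk has nothing useful to offer, which is exactly the limitation Schiappa and Rudd describe.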

With assistive technologies like Siri and Alexa infiltrating our homes, they write, the notion of intelligent, robotic assistants seems plausible. But it’s easy to overestimate just how intelligent these systems are. If intelligence means the general ability to perform many tasks well, they add, ANNs are very unintelligent by that standard, and there is no obvious way to overcome the problem.

To read the full article, pop over to our sister site, Sophos News.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/NjqwGf_ipbE/

Have MAC, will hack: iThings have trivial-to-exploit Wi-Fi bug

iThing owners, do not skip iOS 11: it plugs a dead-easy-to-exploit drive-by Wi-Fi bug.

All an attacker needed to own a phone with a vulnerable Broadcom Wi-Fi chip was the target’s MAC address and exploit code running on a laptop.

As shown in this now-unsealed Google bug thread, the discovery by Gal Beniamini – much like one he warned about in April – was first raised in June as an out-of-bounds write.

The thread says an oversized value can be put in the unvalidated “Channel Number” field in code handling Wi-Fi neighbour responses. It’s that oversized value that lets an attacker write to memory that should be out of reach.
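This is not Broadcom’s firmware, but a schematic Python sketch of the bug class: an attacker-controlled field used as a write offset without validation. In the C firmware there is no automatic range check, so an oversized value becomes a write outside the intended table; the explicit check below is the kind of validation that was missing.

    # Illustrative only: hypothetical names, not the vulnerable code.
    MAX_CHANNELS = 14                    # assumed table size
    neighbor_info = [0] * MAX_CHANNELS

    def record_neighbor_report(channel_number, value):
        # The bounds check the buggy code path lacked: reject oversized
        # values instead of using them as a write offset.
        if not 0 <= channel_number < MAX_CHANNELS:
            raise ValueError("channel number out of range")
        neighbor_info[channel_number] = value

    record_neighbor_report(6, 0x1234)         # well-formed frame: accepted
    try:
        record_neighbor_report(200, 0xdead)   # crafted frame: rejected here
    except ValueError as err:
        print("rejected:", err)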

Beniamini posted his exploit to the still-private discussion on August 23, and the post went public a week after iOS 11 landed.

“The exploit has been tested against the Wi-Fi firmware as present on iOS 10.2 (14C92), but should work on all versions of iOS up to 10.3.3 (included),” the post states. “However, some symbols might need to be adjusted for different versions of iOS, see ‘exploit/symbols.py’ for more information.”

Upon successful execution of the exploit, a backdoor is inserted into the firmware, allowing remote read/write commands to be issued to the firmware via crafted action frames (thus allowing easy remote control over the Wi-Fi chip).

After that, it’s child’s play: “You can interact with the backdoor to gain R/W access to the firmware by calling the “read_dword” and “write_dword” functions, respectively.”

While it’s not the same as the bug Beniamini discovered in April, his subsequent work (in a follow-up also written in April) warned that systems on chips (SoCs) in smartphones are a huge and unaudited attack surface. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/27/ios_11_plugs_wifi_vulnerability/

7 SIEM Situations That Can Sack Security Teams

SIEMs are considered an important tool for incident response, yet a large swath of users run into seven major problems when working with them.


Infosec professionals working with Security Information and Event Management (SIEM) systems may find themselves in a love-hate relationship – they love the concept of the SIEM’s incident response capabilities, but hate the fistful of problems and surprises the systems can bring, according to a presentation this week at the (ISC)² Security Congress in Austin, Texas.

More than half of SIEM users are displeased with the intelligence they glean from the technology, according to a presentation by Cyphort, which sponsored a SIEM survey by the Ponemon Institute and one from Osterman Research. Both surveys collectively represented nearly 1,000 enterprise SIEM users, says Franklyn Jones, Cyphort’s chief marketing officer, who gave the presentation.

Here are seven major problems SIEM users face, according to Cyphort’s presentation, along with solutions offered in Dark Reading interviews with a Forrester Research analyst and various SIEM vendors.

 

Dawn Kawamoto is an Associate Editor for Dark Reading, where she covers cybersecurity news and trends. She is an award-winning journalist who has written and edited technology, management, leadership, career, finance, and innovation stories for such publications as CNET’s … View Full Bio

Article source: https://www.darkreading.com/analytics/7-siem-situations-that-can-sack-security-teams-/d/d-id/1329976?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How to Live by the Code of Good Bots

Following these four tenets will show the world that your bot means no harm.

Although my company fights problems caused by malicious bots on the Internet, many bots are doing good things. These beneficial bots may help a site get better exposure, provide better product recommendations, or monitor critical online services. The most famous good bot is the Googlebot, which crawls links to build the search engine many of us use.

To keep their access to the Web open, the makers of good bots must understand how to tell the world about their bots’ intentions. At PerimeterX, we defined a “Code of Good Bots” that provides basic rules of good behavior. If legitimate bot makers follow this code, then websites and security services (like PerimeterX) can easily identify such bots.  

If you’re a bot developer, we recommend following the Code of Good Bots:

1. Declare who you are.
The Internet is awash in spoofed and poorly identified traffic, including bad bots that seek to harm sites. To avoid suspicion, a bot developer should make the bot declare its identity in the user-agent HTTP header when communicating with a site. We also recommend that bot developers provide a link in the user-agent header to a page describing the bot: what it’s doing, why a site owner should grant it access, and the methods a site owner can use to control it.

Googlebot, for example, will always include the word “googlebot” in the user-agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
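As a sketch of rule 1 – assuming the third-party requests library, with a made-up bot name and info URL – a well-behaved bot announces itself on every request:

    # Hypothetical bot identifying itself per rule 1; "ExampleNewsBot" and
    # the info URL are illustrative, not real.
    import requests

    BOT_USER_AGENT = (
        "Mozilla/5.0 (compatible; ExampleNewsBot/1.0; "
        "+https://example.com/bot-info)"
    )

    response = requests.get(
        "https://example.com/",
        headers={"User-Agent": BOT_USER_AGENT},
        timeout=10,
    )
    print(response.status_code)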

2. Provide a method to accurately identify your bot.

Good bot builders should provide a defensible method to verify a bot is what it declares itself to be. While declaring a specific user-agent is important, malicious bots can pretend to be legitimate by copying the user-agent header of a beneficial bot (also known as user-agent spoofing). For this reason, validating the good bot’s source IP address is crucial.

The bot maker should provide a list of IP ranges as an XML or JSON file on its website. This list also can be provided as a DNS TXT record in the bot owner’s domain. We recommend an expiration time for the list, to indicate the frequency at which this list should be retrieved.
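A minimal sketch of the range-list check, using only the Python standard library; the JSON shape is an assumption, and the prefixes are documentation-only ranges rather than any real bot’s feed:

    # Checking a claimed bot IP against a published range list.
    import ipaddress
    import json

    ranges_doc = json.loads(
        '{"expires": "2017-12-01T00:00:00Z",'
        ' "prefixes": ["192.0.2.0/24", "198.51.100.0/24"]}'
    )

    def ip_in_published_ranges(ip, prefixes):
        addr = ipaddress.ip_address(ip)
        return any(addr in ipaddress.ip_network(prefix) for prefix in prefixes)

    print(ip_in_published_ranges("192.0.2.77", ranges_doc["prefixes"]))   # True
    print(ip_in_published_ranges("203.0.113.9", ranges_doc["prefixes"]))  # False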

Another method used by many crawlers was introduced by Google and calls for a sequence of reverse DNS and DNS lookups to validate the source IP address. You can read more about this method here. Although Google’s verification method is common and offers the required safety, it is very inefficient for the validating site. Providing a list of IP ranges enables a much more efficient validation process.
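For comparison, here is a minimal sketch of that reverse-plus-forward DNS check, again standard library only. The googlebot.com/google.com suffixes come from Google’s published guidance; the printed result depends on live DNS.

    # Verify a claimed Googlebot IP: reverse DNS, check the domain suffix,
    # then confirm the forward lookup resolves back to the same address.
    import socket

    def verify_crawler_ip(ip, allowed_suffixes=(".googlebot.com", ".google.com")):
        try:
            hostname, _, _ = socket.gethostbyaddr(ip)   # reverse (PTR) lookup
        except (socket.herror, socket.gaierror):
            return False
        if not hostname.endswith(allowed_suffixes):
            return False
        try:
            forward_ips = {ai[4][0] for ai in socket.getaddrinfo(hostname, None)}
        except socket.gaierror:
            return False
        return ip in forward_ips                        # forward lookup must match

    print(verify_crawler_ip("66.249.66.1"))   # an address in Google's crawl range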

We recommend that bot makers specify the verification method in the URL provided in the user-agent string.

The validation method should be strong enough that bad bots can’t pass this test. Specifically, the method should restrict the IP address ranges to those controlled or owned by the bot operator. For example, suggesting that a site owner verify that the IP address is in the Amazon Web Services (AWS) IP ranges isn’t a good idea. Anyone can purchase an AWS virtual server and use it to send requests across the Web.

3. Follow robots.txt.
The robots.txt file is the de facto standard used by websites to communicate their general access policies to bots and crawlers. The standard specifies how to inform a bot which areas of the website shouldn’t be crawled or scraped, the rate and frequency at which the bot may access the site, and more. Good bots are required to download the robots.txt file from the site before accessing it, parse it, and follow its instructions.
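Python’s standard library covers rule 3 out of the box; a minimal sketch, with a placeholder bot name and site:

    # Consult robots.txt before fetching, per rule 3.
    from urllib.robotparser import RobotFileParser

    robots = RobotFileParser()
    robots.set_url("https://example.com/robots.txt")
    robots.read()

    url = "https://example.com/some/page"
    if robots.can_fetch("ExampleNewsBot", url):
        print("allowed to fetch", url)
    else:
        print("robots.txt disallows", url)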

4. Don’t be too aggressive.
This is related to robots.txt and requires some common sense. Overly aggressive behavior can slow down a site or even take it offline. Different websites have different capacities to handle bot traffic. Some are set up to scale up quickly should massive traffic appear; others are not and will choke on even a small amount of additional traffic. While a bot may want to collect data quickly, this desire must be tempered against the realities of what the site can handle. We’ve seen cases where good bots contribute over 90% of the requests coming to a site.

Bots should respect the “Crawl-delay” instruction if it is specified in robots.txt. For example, a site owner could use it to require a 10-second delay between requests. Some bots, such as Googlebot and Bingbot, provide additional methods to control their crawlers, and specifically their crawl rates.

If a site owner doesn’t provide instructions on crawl speed and crawl access, the bot maker should default to a moderate, generally acceptable crawling rate.
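The same parser handles rule 4: honour Crawl-delay when the site sets one and fall back to a moderate default when it doesn’t (the five-second fallback below is an arbitrary illustrative choice, not a standard).

    # Respect Crawl-delay, with a moderate fallback (Python 3.6+ for crawl_delay).
    import time
    from urllib.robotparser import RobotFileParser

    robots = RobotFileParser()
    robots.set_url("https://example.com/robots.txt")
    robots.read()

    delay = robots.crawl_delay("ExampleNewsBot") or 5.0   # seconds between requests

    for page in ("https://example.com/a", "https://example.com/b"):
        # ... fetch(page) would go here ...
        time.sleep(delay)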

The Code of Good Bots is Good for Us All
The importance of bot makers following the Code of Good Bots grows more urgent with the rise of sophisticated bot attacks that piggyback on users via malicious browser extensions or on poorly secured Internet of Things devices. Site owners may default to more aggressive anti-bot policies in an effort to defend their user experience, site performance, and site integrity. The Code of Good Bots is critical for making sure that, even then, users and businesses continue to benefit from what good bots have to offer.


Ido Safruti is the founder and CTO at PerimeterX, a provider of behavior-based threat protection technology for the Web, cloud, and mobile apps that protects commerce, media, and enterprise websites from automated or non-human attacks. Previously, Ido headed a product group … View Full Bio

Article source: https://www.darkreading.com/cloud/how-to-live-by-the-code-of-good-bots/a/d-id/1329979?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Sonic Data Breach Potentially Affects Millions

Sonic first heard about the breach when its credit-card processor detected unusual activity on customers’ payment cards.

Fast-food giant Sonic has disclosed a data breach potentially affecting millions of customers. The chain has nearly 3,600 stores across 45 US states, but as the investigation is ongoing, it does not yet know how many store payment systems were affected.

KrebsOnSecurity first reported the breach, which Sonic discovered last week when its credit-card processor informed the chain of unusual activity regarding customers’ payment cards. The incident may have led to a “fire sale” of millions of stolen payment cards on the Dark Web. Card data from Sonic’s customers was discovered in a batch of five million credit and debit accounts advertised on an underground credit-card theft bazaar called Joker’s Stash.

It’s unclear whether all cards in the batch belong to Sonic customers; Krebs reports they could be mixed with cards stolen from other outlets by the same attackers. Most of the cards are priced from $25 to $50, depending on the type of card, whether it’s credit or debit, and the issuing bank.

Sonic says it’s working with law enforcement and third-party forensic experts, and will continue to update with further information.

Read more details here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/sonic-data-breach-potentially-affects-millions/d/d-id/1329996?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

US Army Black Hawk helicopter damaged in drone crash

New York City drone operators want to know: where can they legally fly their drones?

Short answer: in your dreams. You can only fly in a few designated parks in Brooklyn, Queens and Staten Island.

Regardless of those and other restrictions, a civilian drone crashed into a US Army helicopter last Thursday, striking the left side of the fuselage, damaging one of the copter’s blades, denting the rotor blade in two spots, denting the window, and spitting out a chunk of itself that landed at the bottom of the main rotor system.

The helicopter landed safely at a nearby airport. The pilot was unharmed. The rotor blade will have to be replaced, and the hunt is on to find the pilot who’s responsible for the extremely dangerous collision. The New York Police Department, the Federal Aviation Administration (FAA), the US Secret Service, the FBI and the military are investigating, but no arrests had been made as of Monday evening.

The helicopter was one of two Army UH-60M Black Hawks with the 82nd Airborne out of Fort Bragg, North Carolina. They were in the city to provide security for the annual United Nations General Assembly, which drew world leaders including US President Donald Trump.

The helicopters were flying low along the east shore of Staten Island, 500 feet over Midland Beach, when one of them was struck around 7:30 pm, according to the aviation-focused publication AIN Online.

Lt. Col. Joe Buccino, public affairs officer for the 82nd Airborne, said that the military is now rethinking flights over densely populated residential areas. AIN quotes him:

We traditionally fly [in] restricted airspace or in combat, so this is a new experience. We were obviously flying over a residential area – a municipal area – supporting this mission. We are reviewing the process now should we receive another mission like this.

Besides the fact that the drone was flying over a residential area, and not over a designated flying park, it was also flying in violation of a Temporary Flight Restriction covering Staten Island at the time of the collision.

The Army reports that the drone was also flying above 400 feet, the maximum height at which recreational drones are allowed to fly, and that it wasn’t within five miles of either nearby Newark Liberty International Airport or Linden Airport in New Jersey.

Those bad, bad drones

This is only the most recent misbehaving-drone story. Drone operators have flown close to UAV-sucking jet engines on passenger planes, police helicopters, and firefighting aircraft. They’ve flown UAVs onto the White House lawn and above playgrounds, concussed at least one person at a parade, and aggravated a homeowner to the extent of “Hey, gadget! Have a taste of birdshot!!!”

Yes, that man did get arrested for shooting down a drone over his property, but a judge later said he had a right to do it. That judge obviously wasn’t presiding over a court in Massachusetts: on Friday, a judge overturned a local law in the town of Newton, Massachusetts, that banned drones flying over private property at or below 400 feet.

Given all the collisions and near-misses, it’s a miracle we don’t see more drone-inflicted injuries.

The Staten Island Black Hawk incident could have been tragic – the helicopter carried four crew members. It’s thanks to the skill of the crew, and especially the pilot who reacted and landed the helicopter safely, that nobody was hurt.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/VtCc1Nm-PtY/

Campaigner who refused to hand over passwords found guilty

Last November, Muhammad Rabbani was detained and questioned by border police at Heathrow Airport in London, where they demanded he hand over the password and PIN that would unlock his laptop and smartphone.

He refused, saying the devices contained confidential data connected to the case of a man he’d just met in Qatar who alleged he’d been tortured while in US custody.

According to Rabbani, who works as international director for campaign group CAGE, he’d been stopped from entering the UK before but had never been asked to give up his PINs. Charged under the Terrorism Act 2000, he later offered this explanation at a hearing in May 2017:

There were around 30,000 (documents) which I was especially uncomfortable handling and I felt an enormous responsibility to try and discharge the trust that was given to me.

Earlier this week, Rabbani was found guilty of obstructing justice, given a conditional discharge and ordered to pay £620 ($830) in costs.

Normally, that would be that, but this case is different, starting with the fact that Rabbani’s organisation, CAGE, is controversial for reasons too involved to explore here.

From a privacy and security perspective this incident, and the subsequent court case, may have implications for the thorny issue of when you can be legally compelled to reveal encryption keys in the UK, and perhaps beyond.

In the UK, people can be charged with not providing encryption keys or passwords under one of two pieces of legislation.

General provisions are provided under the Regulation of Investigatory Powers Act 2000 (RIPA), amended to this effect in 2007, while terrorism suspects are covered by Schedule 7 of the Terrorism Act 2000, the law used in this case.

Schedule 7 has been deployed before, notably in the 2013 case of David Miranda, who was forced to hand over passwords for devices containing data connected to the Edward Snowden leaks.

Meanwhile, RIPA was used in 2014 to add four months to the sentence of a convicted terrorist who refused to hand over the password for an encrypted USB key.

The USA, by contrast, lacks specific key disclosure laws, although individuals can still be ordered by a judge to disclose keys – for example, a former policeman accused of storing child pornography is being held in jail indefinitely until he lets authorities into his hard drive.

In a similar vein was the case of Lavabit, which went out of business in 2013 rather than hand over the private SSL keys of one of its users, reportedly Edward Snowden.

So, what if anything, does Muhammad Rabbani’s fine for withholding his encryption password mean for the average person, say, travelling to or from the USA or UK?

In reality, at a time when the average Android and iPhone ships with strong encryption turned on by default, things remain as they have been for a while now: anyone going to or from the UK, USA, and a growing list of other countries, can be asked for a device password, whether they are suspected of an offence or not.

People entering the USA are now routinely asked to supply passwords to devices and even social media accounts in order to meet Visa requirements or, in some cases, gain admission. Presumably, most people quietly comply for the sake of convenience.

The more secure devices become, the greater their storage, and the deeper the conviction that they might hold data police should be looking at, the more universal these demands become. There is no escaping it. I expect the whole world will be like this soon.

The only way to avoid being asked for an encryption key is not to travel with a device on which such a thing can be used. It’s unsatisfactory but that’s how it is. For people concerned about privacy, this is a depressing choice.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/JKtqTP6Ghiw/

Gov contractor nicked on suspicion of Official Secrets Act breach

The Metropolitan Police has announced the arrest of a government contractor after a tip-off.

According to the police force, the unnamed 65-year-old woman was arrested in North London by constables “acting upon intelligence received”. A search of the location where she was arrested is ongoing.

The arrest was on suspicion of an offence under section 1 of the Official Secrets Act 1911. The arrested woman is currently in a police cell somewhere in South London.

The arrest was reportedly made by constables from the Counter Terrorism Command. It is unusual for the Met to announce any information about an arrest before a formal charge is made. Whether the woman is charged with an offence depends on the Attorney General, who must consent to all OSA prosecutions; the current Attorney General is Jeremy Wright, QC, MP.

A Parliamentary briefing note about the Official Secrets Acts can be read on Parliament’s website. It sets out the distinction between disclosures of information by any current or former member of the security services and “damaging” disclosures made by Crown servants, who comprise civil servants, government ministers, police employees and so on.

It also says: “For government contractors, a disclosure is made with lawful authority only if it is made in accordance with an official authorisation or for purposes of their function as a government contractor and without contravening an official restriction. In any other circumstance, a disclosure is made without lawful authority.”

Section 1 of the OSA 1911 reads as follows:

Penalties for spying.

(1) If any person for any purpose prejudicial to the safety or interests of the State—

  1. approaches, inspects, passes over or is in the neighbourhood of, or enters any prohibited place within the meaning of this Act; or
  2. makes any sketch, plan, model, or note which is calculated to be or might be or is intended to be directly or indirectly useful to an enemy; or
  3. obtains, collects, records, or publishes, or communicates to any other person any secret official code word, or pass word, or any sketch, plan, model, article, or note, or other document or information which is calculated to be or might be or is intended to be directly or indirectly useful to an enemy;

he shall be guilty of felony . . .

(2) On a prosecution under this section, it shall not be necessary to show that the accused person was guilty of any particular act tending to show a purpose prejudicial to the safety or interests of the State, and, notwithstanding that no such act is proved against him, he may be convicted if, from the circumstances of the case, or his conduct, or his known character as proved, it appears that his purpose was a purpose prejudicial to the safety or interests of the State; and if any sketch, plan, model, article, note, document, or information relating to or used in any prohibited place within the meaning of this Act, or anything in such a place or any secret official code word or pass word, is made, obtained, collected, recorded, published, or communicated by any person other than a person acting under lawful authority, it shall be deemed to have been made, obtained, collected, recorded, published or communicated for a purpose prejudicial to the safety or interests of the State unless the contrary is proved.

The case is now active, in the legal sense, meaning reporting restrictions are now in force and speculation about the offence must be avoided in order to prevent the woman’s potential trial from being prejudiced. More information on what this means for commentards and social media users alike can be found in our story about the Attorney General’s plan to crack down on “trial by social media”. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/27/official_secrets_act_arrest_london/

TalkTalk once told GCHQ: Cyberattack? We’d act fast to get sports back up

Prior to its disastrous 2015 mega hack, TalkTalk had told GCHQ that, should an attack occur, its main focus would be to restore “online sports streaming”, according to the head of operations at the National Cyber Crime Unit.

Speaking at the Cyber Security in Healthcare event in London, Mike Hullett said all the major telcos had been surveyed by the spooks prior to the hack that exposed the personal details of 157,000 TalkTalk customers.

“They were all asked what they would need to stand up after an attack,” he said. TalkTalk’s answer was its live sports streaming, because it was most concerned about maintaining a competitive advantage over BT, he said. “That is a company with its priorities wrong.”

It transpired that just before the hack the company had been advertising for an information security officer.

Former boss Dido Harding later told MPs there was no specific line manager for cyber security as the responsibility cuts across multiple roles in the company.

The company estimated the attack cost it £42m. Since then, it says it has “substantially” increased its investment in cyber security and has appointed a chief information security officer.

Hullett said he did not have to hand the details of how other companies responded to GCHQ, but added that it was important to remember TalkTalk was still a victim.

“The other point to make is that if an attack against a big high profile company happens [people think] it must be high end actors in place, but that is not necessarily the case.”

Earlier this year, Matthew Hanley, 22, and Connor Douglass Allsopp, 20, both from Tamworth, pleaded guilty to the 2015 attack.

Allsopp admitted to police that he had supplied details on the vulnerabilities in TalkTalk’s website that were exploited to get to the customer records.

The Register has asked TalkTalk for a comment. ®

Sponsored:
The Joy and Pain of Buying IT – Have Your Say

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/27/talktalk_told_gchq_resuming_sports_streaming_main_focus_prior_to_mega_attack/
