
4 Steps to Securing Citizen-Developed Apps

Low- and no-code applications can be enormously helpful to businesses, but they pose some security problems.

IT security, at its best, is about enabling the business and its employees to operate securely without unnecessarily straining productivity or the bottom line. However, as more companies push technology-driven initiatives such as digital transformation, many are struggling to balance the two.

One of the key areas where this is playing out is with applications developed by business users, sometimes referred to as citizen-developed apps. These applications are built by regular employees using little to no code, and deployed through a cloud-based application-platform-as-a-service (aPaaS) model. Citizen-developed apps are growing in popularity; according to Forrester, the low-code market will grow from $1.7 billion in 2015 to more than $15 billion by 2020. Companies use these apps for everything from procurement and order management to tracking who will bring secret Santa gifts for the office Christmas party.

While organizations are realizing massive gains in productivity and cost savings by leveraging low- or no-code development tools, those gains may come at a cost. Because apps can be built and deployed with limited (or no) involvement from IT, many organizations have concerns over user and data governance. However, the issue is not necessarily the tools themselves but how the organization manages them. While that list of holiday gifts doesn’t pose much of a security risk, apps that contain sensitive corporate information can.

The good news is that security and IT leaders can determine exactly how much power and freedom they want to give citizen developers and, conversely, how much control they need to avoid the perils of shadow IT. Here are some best practices for securing citizen-developed enterprise applications.

Update your policies: While most companies have security policies in place, these policies now need to specifically address the use of cloud services, including aPaaS. Beyond documenting the policy, it’s important to educate employees on data and user governance guidelines: What data is OK to store in cloud services and what is not, and with whom can data be shared? Regulatory requirements also affect the types of applications employees may develop and how they use them. For example, in the US, protected health information is subject to HIPAA, credit card data is governed by PCI, and education records are subject to FERPA. Policies and guidelines around citizen development are also becoming more common. According to Gartner’s Strategic Planning Assumption, by 2020 at least 70% of large enterprises will have established successful citizen development policies, up from 20% in 2010.

Classify and risk-rank your data: A data classification matrix instructs users on the sensitivity levels of data: commonly public, internal use, and confidential. Consider the type of data that users intend to upload to applications and adjust security measures from there. “Riskier” data might include personal information such as Social Security numbers, company financial information, details on proprietary technology, and so forth. Instituting a data classification schema ensures that users know exactly what information is confidential, internal only, or open to the public. By classifying information, app builders will be able to handle data in accordance with security policies.
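
To make that concrete, here is a minimal Python sketch of how a classification schema might be encoded so an app builder can check, before uploading, whether data belongs in an aPaaS at all. The levels mirror the matrix above, but the handling rules are hypothetical; yours will come from your own security policy:

```python
from enum import Enum

class Classification(Enum):
    PUBLIC = "public"
    INTERNAL = "internal use"
    CONFIDENTIAL = "confidential"

# Hypothetical handling rules; real rules come from your security policy.
HANDLING_RULES = {
    Classification.PUBLIC:       {"allowed_in_apaas": True,  "sharing": "anyone"},
    Classification.INTERNAL:     {"allowed_in_apaas": True,  "sharing": "employees only"},
    Classification.CONFIDENTIAL: {"allowed_in_apaas": False, "sharing": "named individuals"},
}

def may_upload(level: Classification) -> bool:
    """True if data at this sensitivity level may be stored in the aPaaS."""
    return HANDLING_RULES[level]["allowed_in_apaas"]

print(may_upload(Classification.INTERNAL))      # True
print(may_upload(Classification.CONFIDENTIAL))  # False: keep it out of citizen apps
```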

Enforce (and review!) role-based access: In aPaaS, as with other IT services, it is important to enforce role-based access. One way to ensure certain applications are accessed only by authorized users is to implement single sign-on and use security groups. For example, if the finance team is working on a project involving sensitive financial data, add everyone involved to a finance security group and assign application permissions to that group rather than to individuals. For applications involving sensitive employee data, you might use a group so that only people in the HR department can access that information.

Once you have set permissions and access rights to your applications, review who has access to what at least once a quarter, making sure that new hires, departures, and employees whose roles have changed have the appropriate access.
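
Both ideas — group-based access checks and the quarterly review — reduce to simple set operations. A rough Python sketch, with hypothetical groups, apps, and rosters (in practice this data would come from your identity provider or HR system):

```python
# Hypothetical directory data; in practice this comes from your IdP / HR system.
security_groups = {
    "finance": {"alice", "bob"},
    "hr":      {"carol"},
}

app_permissions = {            # app -> groups authorised to use it
    "budget-tracker": {"finance"},
    "salary-review":  {"hr"},
}

def can_access(user, app):
    """A user may open an app if any of their groups is authorised for it."""
    return any(user in security_groups[group] for group in app_permissions[app])

def quarterly_review(group, current_roster):
    """Flag people who left the team but still sit in the security group."""
    for stale in sorted(security_groups[group] - current_roster):
        print(f"Revoke {group} access for {stale}")

print(can_access("alice", "budget-tracker"))  # True
print(can_access("carol", "budget-tracker"))  # False
quarterly_review("finance", {"alice"})        # bob changed roles: flag him
```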

Review and implement security settings for app access: Security is always a balance between protection and usability, and finding the right balance for your organization largely depends on what types of data will be processed and stored. Review the security enhancements and options your aPaaS vendor offers for locking down citizen-developed apps, including network, authentication, and session settings. You might consider IP filtering so that the aPaaS is accessible only from your corporate network. Pay particular attention to strengthening authentication controls by requiring two-factor authentication and strong passwords. For applications that aren’t storing sensitive data, you may opt for less-strict security measures, which will also keep citizen developers and users as productive as possible.
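
At its core, IP filtering is an allowlist check. A minimal sketch using Python’s standard ipaddress module, with made-up corporate ranges:

```python
import ipaddress

# Made-up corporate ranges permitted to reach the aPaaS.
ALLOWED_NETWORKS = [ipaddress.ip_network(n)
                    for n in ("203.0.113.0/24", "198.51.100.0/24")]

def ip_allowed(client_ip: str) -> bool:
    """True if the request originates from an allowlisted corporate range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in network for network in ALLOWED_NETWORKS)

print(ip_allowed("203.0.113.42"))  # True: in-office request
print(ip_allowed("192.0.2.7"))     # False: reject or step up authentication
```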

Citizen development will keep growing over the next several years, and organizations can realize the potential it offers while maintaining user and data governance. Security oversight may look different at every company, but as long as end users, developers, IT, and security leaders clearly agree on where along the productivity-control spectrum you want to sit, your organization’s needs can be met from both ends.


Mike Lemire is the Compliance and Information Security Officer at Quick Base, the platform for app-enabled business. Previously, Mike managed the Information Security and Compliance programs at Yesware, Acquia, Pearson Higher Education, and RiskMetrics, and has held …

Article source: https://www.darkreading.com/vulnerabilities---threats/4-steps-to-securing-citizen-developed-apps/a/d-id/1329356?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Solaris and Java have vulns that let users run riot

Oracle’s emitted its quarterly patch dump. As usual it’s a whopper, with 308 security fixes to consider.

Oracle uses the ten-point Common Vulnerability Scoring System Version 3.0, on which critical bugs score 9.0 or above. The Register counts 30 such bugs in this release.

Not all can be laid at Oracle’s door. For example, a glibc glitch is hardly Oracle’s fault. Nor are the Apache Tomcat and Struts bugs that MySQL users need to squash.

But a few others are Big Red boo-boos, such as CVE-2017-3632, a mess that means a remote user can exploit a flaw in the Solaris CDE Calendar component to gain elevated privileges. Lesser Solaris bugs allow DDoSing and unauthorised data alteration.

Java SE has 10 critical flaws, nine of them rated 9.6. Most allow remote users to do things you’d rather they couldn’t. Oracle says 28 of 32 Java vulnerabilities “may be remotely exploitable without authentication”.

Oracle Retail Customer Insights and Oracle WebLogic also have critical vulns, the latter being the only product to earn a perfect 10.0 severity rating, for CVE-2017-10137, which allows a remote user to obtain elevated privileges.

We could go on and explore the other 278 patches rated 8.9 or lower, but by now you get the idea: there’s something terrifying for almost every Oracle user, because even a bug rated a wimpy 5.3, such as CVE-2017-10244 discovered by Onapsis, allows “attackers to exfiltrate sensitive business data without requiring a valid user account” in Oracle E-Business Suite.

Next steps? View Oracle’s list here, then use your Oracle login to get more details here, before figuring out what can be fixed now, what can wait for your next scheduled change window, and what needs a new change window scheduled ASAP. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/19/oracle_critical_patch_update_advisory_july_2017/

Google G-Suite spotted erecting stiff member vetting tool

Stung by phishing attacks aimed at G Suite users earlier this year, Google has armored its cloud with extra security layers.

Following recent defenses against the dark arts – security key enforcement, app name vetting, and OAuth whitelisting – the Chocolate Factory has designed some interface signage to warn G Suite users not to accept web apps and Apps Scripts too hastily.

“Beginning today, we’re rolling out an ‘unverified app’ screen for newly created web applications and Apps Scripts that require verification,” said Naveen Agarwal, a member of Google’s Identity team, and Wesley Chun, developer advocate for G Suite, in a blog post. “This new screen replaces the ‘error’ page that developers and users of unverified web apps receive today.”

The “unverified app” screen gets presented before the screen seeking permission to grant a web app access to G Suite data, in order to underscore the risk of consenting to use an app of uncertain provenance. Users may still accept such apps – a flow that requires three affirmative clicks and typing “continue” – but at least they will have been warned.

The “unverified app” screen also helps developers by allowing them to test apps without first going through OAuth verification, a requirement implemented previously in response to abuses.

Apps Script code (by which Google’s apps may be automated) that seeks OAuth access to data or information about users in other domains must also wear the “unverified app” scarlet letter. And Google is presenting additional cautionary language that’s been added to the pre-OAuth alert and below the URL window to encourage G Suite users to think before trusting applications and scripts.

It’s about time. Those interested in app security have been talking about potential Apps Script problems at least since 2014. In February, security engineer Greg Carson posted PoC code to demonstrate how the technology can be abused.

The latest protections apply to newly created web apps and Apps Scripts. In the coming months, Google intends to extend them to existing applications and scripts. This may require developers to revisit the Google Cloud Console to go through the verification process. ®

PS: Google has also launched a recruitment tool called Hire, another service it will presumably shut down in three years.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/18/google_gsuite_member_vetting_scheme/

Iranian duo charged with hacking US missile simulation software biz

Two Iranian nationals have been charged with hacking a US defense technology maker to steal and sell its rocketry simulation software.

The US Department of Justice claims Mohammed Reza Rezakhah and Mohammed Saeed Ajily compromised developer Arrow-Tech to download tools that are restricted from export under America’s International Traffic in Arms Regulations. It’s claimed the pair, with the help of a sidekick, spent at least five years foraging around inside Arrow-Tech, from August 2007 to at least May 2013.

According to prosecutors [PDF], the duo infiltrated Arrow-Tech’s corporate network to grab the company’s Projectile Rocket Ordnance Design and Analysis System (PRODAS) suite, which is used to develop rockets, missiles and similar weapons.

Exports of PRODAS are restricted, meaning anyone wishing to sell the software to a customer outside the US must get approval from Uncle Sam – which, of course, is unlikely to green-light any effort to send such technology to Iran.

Ajily, a 35-year-old businessman, wanted to tout the software to Iranians and other foreigners without America’s approval – so he recruited Rezakhah, a 39-year-old alleged hacker, to steal the code, it is claimed. Rezakhah was also instructed to break the program’s anti-piracy protections so Ajily could flog the hot gear as he pleased outside the US, it is alleged. The software usually carries a $40,000 to $800,000 price tag.

The pair now have been indicted on counts of criminal conspiracy relating to:

  • Computer fraud and abuse
  • Unauthorized access to and theft of information from computers
  • Wire fraud
  • Exporting a defense article without a license
  • Violating sanctions against Iran

In a statement on Monday, the Department of Justice said:

According to the allegations in the indictment filed in Rutland, Vermont, beginning in or around 2007, Rezakhah, Ajily, and a third actor who has already pleaded guilty in the District of Vermont for related conduct, conspired together to access computers without authorization in order to obtain software which they would then sell and redistribute in Iran and elsewhere outside the US. Ajily, a businessman, would task Rezakhah and others with stealing or unlawfully cracking particular pieces of valuable software.

Rezakhah would then conduct unauthorized intrusions into victim networks to steal the desired software. Once the software was obtained, Ajily marketed and sold the software through various companies and associates to Iranian entities, including universities and military and government entities, specifically noting that such sales were in contravention of U.S. export controls and sanctions.

That third defendant, Nima Golestaneh, who ran a biz called Dongle Labs, pleaded guilty to aiding the duo by providing servers, based in Canada and the Netherlands, which were used to pull off the intrusions. Arrest warrants for Rezakhah and Ajily have been obtained – whether or not the Feds can actually nab the Iranians any time soon is unclear. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/18/iranians_charged_hacking_us_rocketry_app/

Let’s harden Internet crypto so quantum computers can’t crack it

In case someone manages to make a general-purpose quantum computer one day, a group of IETF authors have put forward a proposal to harden Internet key exchange.

It’s a handy reminder that in spite of a stream of headlines telling us that quantum computers will break cryptography, there’s a substantial amount of research going into “post quantum” crypto – and also a sign that standards authors think there’s enough work out there to justify an Internet Draft.

While only an “informational” document at this stage, what the authors describe is how to extend Internet Key Exchange v2 (RFC 5996, IKEv2) to support a quantum-safe key exchange.

The work-in-progress suggests an optional IKEv2 payload “used in conjunction with the existing Diffie-Hellman key exchange to establish a quantum-safe shared secret between an initiator and a responder,” and it supports a number of suitable key exchange schemes.

One way keys can be quantum-safe, the draft explains, is for them to be randomly generated and ephemeral – in other words, it’s an attempt to blend two cryptographic concepts, asymmetric public/private key encryption and something akin to a one-time pad.

The brief explanation of such a key encapsulation mechanism (KEM) is: “the initiator randomly generates a random, ephemeral public and private key pair, and sends the public key to the responder in QSKEi payload. The responder generates a random entity, encrypts it using the received public key, and sends the encrypted quantity to the initiator in QSKEr payload. The initiator decrypts the encrypted payload using the private key. After this point of the exchange, both initiator and responder have the same random entity from which the quantum-safe shared secret (QSSS) is derived.”
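
To make that ping-pong concrete, here is a minimal Python sketch of the generic KEM flow, using the third-party cryptography package. RSA is emphatically not quantum-safe; it stands in for a real quantum-safe scheme purely so the example runs, since the draft leaves the actual schemes pluggable:

```python
# pip install cryptography
# Sketch of the draft's generic KEM flow. RSA is a stand-in only: it is NOT
# quantum-safe; a real deployment would plug in one of the draft's schemes.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def derive_qsss(entity: bytes) -> bytes:
    # Both sides derive the quantum-safe shared secret (QSSS) from the entity.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"qsss-demo").derive(entity)

# Initiator: random, ephemeral key pair; public key goes in the QSKEi payload.
initiator_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Responder: random entity, encrypted under the received key (QSKEr payload).
random_entity = os.urandom(32)
qske_r = initiator_priv.public_key().encrypt(random_entity, OAEP)

# Initiator: decrypt with the private key; both ends now share the entity.
recovered = initiator_priv.decrypt(qske_r, OAEP)
assert derive_qsss(recovered) == derive_qsss(random_entity)
```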

Naturally, a quantum-safe key exchange can only take place if both ends of the conversation support it; if not, the draft says, the transaction has to fall back to an ordinary IKEv2 exchange.

We don’t yet have a general-purpose quantum computer, so why bother? Because if we do reach the point where general-purpose quantum computers can run Shor’s algorithm at scale, there’ll be a lot of stored, encrypted traffic it could be applied to.

Research into quantum-safe ciphers has yielded a couple of schemes the paper’s authors consider serious enough to be name-checked in the paper: two variants of what’s called Ring Learning With Errors; and two approaches to NTRU Lattices. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/18/quantum_safe_key_exchange/

Hacked drones flying up, up and away over geofencing restrictions

Drone operators frustrated by geofencing are hopping the fence and hacking their way to fly way, way up and over what’s legal. And they’re more than able to do so: drone maker DJI reportedly left development debug code in its Assistant 2 application, according to @UAVHive, a group for hobbyists in Yorkshire, England.

DJI probably accounts for the vast majority of drone sales in the United States, so this code glitch makes for a hell of a lot of no-holds-barred unmanned aerial vehicles (UAVs) buzzing over our heads.

Some cynics wonder if rather than being a glitch, it could instead be a brilliant marketing ploy to get around flight restrictions, but per Hanlon’s razor, we won’t attribute to malice that which is adequately explained by incompetence, misunderstanding, or “Oops! Debug code left in production app!”

The manufacturer sent a statement to the Register, claiming to have fixed the problem with a firmware update:

A recent firmware update for Phantom 4 Pro, Phantom 4 Advanced, Phantom 3 Standard, Phantom 3 SE, Mavic Pro, Spark, and Inspire 2, among others, fixes reported issues and ensures DJI’s products continue to provide information and features supporting safe flight. DJI will continue to investigate additional reports of unauthorised firmware modifications and issue software updates to address them without further announcement.

But one expert – Kevin Finisterre, one of multiple drone security experts who’ve been repeatedly warning DJI since at least April – says the update hasn’t stopped him from hacking away:

The bugs that I disclosed that were circulating in the underground have NOT been fixed for what it is worth.

The jailbreak has been proved on other DJI models besides Spark, including the Phantom and Inspire 2. The hack is a drop-dead simple change to settings. One YouTube video that shows operators how to tweak flight height to 2,500 feet is less than two minutes long.

That video, for what it’s worth, also offers this advice:

Don’t be an idiot using these settings.

For real?! To state the blindingly obvious, idiots are why geofencing exists. Drone operators have flown close to UAV-sucking jet engines on passenger planes, police helicopters, and firefighting aircraft. They’ve flown UAVs on to the White House lawn and above playgrounds, concussed at least one person at a parade, and aggravated at least one homeowner to the extent of “Hey, gadget! Have a taste of birdshot!!!” (Yup, and he had a right to do it, said the judge.)

You don’t even have to do the simple altitude restriction hack yourself. Anybody who wants to “fly your drone faster and higher than the legal limit” can call on a Russian hacking company called CopterSafe that offers hacked upgrades for DJI drones.

To be fair, drone operators have legitimate gripes about geofencing.

Take Sky 1, a UAV pilot who said that they had a paid gig near a stadium they couldn’t fly over because their DJI drone labelled it a red, no-fly zone. They were also restricted from flying inside the Class D airspace of an airport, even though, they claimed, they had received permission.

As the Register reports, users authorized to fly in restricted areas can either unlock these zones using DJI’s GEO system or by submitting a request via email. Apparently, as somebody who claimed to be a law enforcement operator said, that’s all way too klunky:

I have said it before, when you purchase a car, it does not come with a daggum BABYSITTER!!!!!!!!!!!! Your trusted to abide by the rules and regulations!!! And as a Law Enforcement Operator, I AM NOT WAITING 5 DAYS to get authorization when I have all other paperwork in line and I need to fly NOW!!!!!!!!

I’ve got a request for comment in to DJI and will update the story if I hear back.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/0LERt5HCMaQ/

Google wants you to bid farewell to SMS authentication

Google’s campaign to nudge its vast user base towards more secure two-step (2SV) and two-factor (2FA) authentication continues: from this week, anyone logging into its services using SMS codes will start receiving notifications from something called “Google prompt”.

When the user initiates login, a screen will appear on their Android smartphone (iOS users must install the Google Search app) asking them to confirm that they are trying to sign in, with information on the device, browser type and ISP location. The screen clears when the user confirms the login was made by them.

For users who’ve never heard of Google prompt, it’s an authentication option the company launched in June 2016 as a more secure alternative to receiving codes via SMS. Some users might also find it quicker than generating codes using the Authenticator app, as explained below.

Naked Security has already published a pro and con comparison of SMS text authentication versus using the Authenticator app, so we won’t delve into that too deeply. The question is how Prompt improves on either of those options.

Prompt is primarily aimed at overcoming the growing insecurity of SMS codes. These can be grabbed by malicious apps in a man-in-the-middle attack and, of course, there’s the alarming rise of SIM swap fraud, also recently covered by us in some depth.

The takeaway is that while SMS codes are better than nothing, they’re no longer considered as secure as they once were. SMS’s troubled status was confirmed last summer by NIST, which recommended US government departments stop relying on it.

The weakness of SMS is that data travels across a channel not controlled by Google itself. With Prompt, data is still being sent back and forth, but over an encrypted channel. As long as the phone is within reach of a data connection, the user will also receive a real-time warning every time someone – anyone – attempts to log into their Google account.

The other advantage of this is that it’s quicker to hit “yes” when prompted than it is to enter an SMS code or, in the case of Google Authenticator, a six-digit code. However, Authenticator (which requires no insecure data to and fro once it’s been set up) is still the best choice for anyone who uses 2-step verification to log into third-party sites in addition to Google.
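
For the curious, those six-digit Authenticator codes are generated with the time-based one-time password algorithm of RFC 6238: both ends hold a secret shared at setup, so no code ever travels over the network. A minimal Python sketch with a made-up secret:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Current time-based one-time code (RFC 6238: HMAC-SHA1, 30-second steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

# Made-up secret; the real one is provisioned via the QR code at setup time.
print(totp("JBSWY3DPEHPK3PXP"))
```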

Users who insist on sticking with SMS codes won’t be forced to adopt Prompt immediately, but the direction of travel here is pretty clear: SMS’s days are numbered, on Google at least.

Longer term, the main hurdle to changing people’s authentication habits could simply be confusion. The inadequacy of passwords is now understood but the fact Google is now offering five authentication options (including hardware tokens such as the YubiKey and single-use “emergency” codes) risks overload.

Over at Facebook, things are almost as confusing with additional options offered such as logging in using a profile picture and nominating trusted users to help access an account.

That many people have accounts with several services, each with their own blend of options, only adds to the impression that seamless authentication security for the post-password world is still some way off.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/zDhRLZ9AkEI/

Black Hat USA 2017: what’s on the agenda in Las Vegas

Security experts are preparing to swoop into Las Vegas for next week’s 20th annual Black Hat conference, and there will be much to discuss. Since the last conference, threats against Internet of Things (IoT) devices have become a top news item, and outbreaks from the likes of WannaCry and NotPetya have the industry rethinking what it thought it knew about ransomware and threats to critical infrastructure.

Black Hat USA 2017 will take place July 22–27 at Mandalay Bay Convention Center. Among the talks:

  • Facebook CSO Alex Stamos will present a talk called “Stepping up our game: Re-focusing the security community on defense and making security work for everyone”
  • Briefings will focus on vulnerabilities in such areas as IoT, malware, smart grid and industrial security, and AppSec.
  • Black Hat Arsenal (Wednesday and Thursday, July 26-27), where independent researchers and the open source community will give live demos of their latest tools.

The event will also include the Black Hat Business Hall (Wednesday and Thursday, July 26-27), featuring more than 270 security companies. There will also be a career zone, an innovation city and vendor sessions. Sophos will be in booth 947.

What’s happening in the Sophos booth?

Sophos researchers will be on hand at the booth throughout the event, including Dorka Palotay, who will discuss her new paper on the Philadelphia ransomware-as-a-service (RaaS) kit. Technical demos will include an Intercept X overview, with particular focus on how it defends customers from the likes of WannaCry. There will also be a shirt giveaway for those who stop by the booth and say “Sophos is next-gen security”.

Sophos data scientist Hillary Sanders will give a talk (July 26 from 5:05pm-5:30pm) called “Garbage in, Garbage Out: How Purportedly Great Machine Learning Models Can Be Screwed Up By Bad Data”.

As processing power and deep learning techniques have improved, Sanders says, deep learning has become a powerful tool to detect and classify increasingly complex and obfuscated malware at scale. A plethora of white papers exists touting impressive malware detection and false positive rates using machine learning, but virtually all of these results are shown in the context of a single source of data the authors chose to train and test on. Sanders said in her talk description:

Accuracy statistics are generally the result of training on a portion of some dataset (like VirusTotal data), and testing on a different portion of the same dataset. But model effectiveness (specifically detection rates in the extremely low false-positive-rate region) may vary significantly when used on new, different datasets – specifically, when used in the wild on actual consumer data.

In this presentation, I will present sensitivity results from the same deep learning model designed to detect malicious URLs, trained and tested across 3 different sources of URL data. After reviewing the results, we’ll dive into what caused our results by looking into: 1) surface differences between the different sources of data, and 2) higher level feature activations that our neural net identified in certain data sets, but failed to identify in others.

WannaCry, NotPetya and Vault 7

Expect to hear a lot about May’s massive WannaCry outbreak and the NotPetya attack that came a month later. Both spread rapidly across the globe using NSA exploit tools leaked by the hacking group Shadow Brokers. WannaCry was unique in that it was ransomware spread by a worm instead of the usual phishing tactics. NotPetya was more traditional ransomware, but still spread further than most using the NSA tools.

Though both involved NSA tools leaked by Shadow Brokers, attendees can also expect to hear about WikiLeaks “Vault 7” dump of CIA cyberweapons and the risks they could pose to critical infrastructure.

IoT

IoT threats had been discussed for years at Black Hat, but in largely theoretical terms. This past year, the theoretical became reality when Mirai malware was used to hijack internet-facing webcams and other devices into massive botnets that were then used to launch a coordinated assault against Dyn, one of several companies hosting the Domain Name System (DNS). That attack crippled such major sites as Twitter, PayPal, Netflix and Reddit. SophosLabs noted in its 2017 malware forecast that attackers were expanding efforts to target IoT devices through vulnerabilities in Linux.

The complete Black Hat USA 2017 schedule is available here.

The event coincides with two other security events – DEF CON 25 and BSidesLV. We’ll let you know about those in the coming days – watch this space.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/mbbyB8Emj-8/

News in brief: laptop ban curtailed; robot meets a soggy end; Dow Jones leaks 2.2m customers’ data

Your daily round-up of some of the other stories in the news

Farewell to the laptop ban – almost

The laptop ban on flights inbound to the US from some Middle Eastern airports is all but dead, regular flyers will be glad to hear.

The Transportation Security Administration said on Monday that it was lifting the restriction on Saudi Arabian Airlines flights from Jeddah, and added that officials would visit Riyadh airport “later this week” to make sure that airport now met the tougher new security standards.

The ban was imposed in March by the US administration in response to the threat of explosives being smuggled on board in electronic devices: passengers were prohibited from bringing anything larger than a smartphone on to the plane, with bigger items having to go in checked bags.

The ban has been gradually lifted as airports complied with the US restrictions. Meanwhile, the Department of Homeland Security has beefed up its security requirements for inbound flights, which now include enhanced screening procedures at departure airports, affecting some 325,000 people on around 2,000 flights arriving in the US every day.

And increased security restrictions are a fact of life now for passengers heading for the US: Lisa Farbstein of the TSA told Reuters on Monday that “we’ll be working with global aviation stakeholders to expand security measures even further”.

Robot meets a watery end

We’ve written about robots interacting with humans in retail spaces before on Naked Security, from Pepper, which was going to help you in a Japanese mobile phone store, to the robocops Dubai is planning to deploy in malls and tourist attractions.

And we’ve brought you news of how Pepper ran into a drunken customer who took out his anger on the blameless robot, but we’re at a loss to work out how a Knightscope K5 robot ended up face-down in the fountains of a Washington DC office block last week.

The news was broken on Twitter by a worker in the office block, who posted a picture of the unfortunate hardware in the fountain.

The Guardian speculated that, like the Daleks, it had been defeated by stairs, though we wonder if perhaps the robot had got a bit too squiffy on WD-40, or perhaps had got into an altercation with a vending machine.

According to Stacy Dean Stephens of Knightscope, the robot’s watery encounter was “an accident”, and “no people were harmed or involved in any way”.

Dow Jones leaks 2.2m customers’ data

Another day, another leak thanks to a poorly secured data repository in the cloud – this time, the details of at least 2.2m customers of Dow Jones, the financial publishing group.

The leak was discovered by security researchers early last month, and Dow Jones confirmed that the data, including names, addresses and the final four digits of credit cards of subscribers to publications including the Wall Street Journal and Barron’s, had been leaked thanks to a wrongly configured Amazon Web Services S3 cloud storage server.

According to the UpGuard researchers, the server was configured so that any “authenticated user” could download the data if they had the URL of the repository – which in practice meant anyone with a free Amazon AWS account.
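
That particular grant is straightforward to audit for. A minimal Python sketch using boto3 — the bucket name is hypothetical, and you would run it with credentials permitted to read the ACL:

```python
# pip install boto3
import boto3

# The AWS group grantees that expose a bucket far too widely.
AUTHENTICATED_USERS = "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def audit_bucket_acl(bucket: str) -> None:
    """Flag ACL grants open to any AWS account holder, or to everyone."""
    acl = boto3.client("s3").get_bucket_acl(Bucket=bucket)
    for grant in acl["Grants"]:
        uri = grant.get("Grantee", {}).get("URI")
        if uri in (AUTHENTICATED_USERS, ALL_USERS):
            print(f"{bucket}: {grant['Permission']} granted to {uri} - lock this down")

audit_bucket_acl("example-subscriber-data")  # hypothetical bucket name
```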

Dow Jones told The Hill that it hadn’t notified customers of the breach because the information wasn’t sensitive enough, adding: “This was due to an internal error, not a hack or attack. We have no evidence any of the over-exposed information was taken … [the information] did not include full credit card or account login information that could pose a significant risk for consumers or require notification.”



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/iD0q6FTyPJQ/

CoinDash crowdfunding hack further dents trust in crypto-trading world

More than $7m was stolen by hackers on Monday from folks investing in a cryptocurrency startup.

Israel-based CoinDash – which bills itself as “an operating system” for “interacting, handling and trading crypto assets” – launched what’s called an initial coin offering. This is a process in which people buy virtual tokens from a fledgling biz, such as CoinDash. These tokens are vital to whatever service the company is offering, and as the startup grows, they are supposed to increase in value. Buying the tokens early is akin to buying shares during a normal business’s IPO. It’s a way of crowdfunding investment.

Well, on Monday, $7m of that investment, all in the Ethereum cyber-currency, went not to CoinDash in exchange for tokens, but to hackers. Security and financial technology experts voiced concerns that this latest online heist only serves to undermine confidence in digital currency trading platforms.

After hacking CoinDash’s site, the thieves swapped the Ethereum address for the initial coin offering with one pointing at their own money store. In the process, the crooks were able to trick backers into sending Ethereum digital cash to an account under their control before the assault was detected and the plug was pulled on the scam.

In a statement on its site, CoinDash admitted the infiltration, and said that victims tricked as a direct result of the website hack will be compensated in CoinDash tokens. More than half the funds paid by supporters went into the pockets of as yet unidentified cybercriminals.

It is unfortunate for us to announce that we have suffered a hacking attack during our Token Sale event. During the attack, $7 million were stolen by a currently unknown perpetrator. The CoinDash Token Sale secured $6.4 million from our early contributors and whitelist participants and we are grateful for your support and contribution.

CoinDash is responsible to all of its contributors and will send CDTs reflective of each contribution. Contributors that sent ETH to the fraudulent Ethereum address, which was maliciously placed on our website, and sent ETH to the CoinDash.io official address will receive their CDT tokens accordingly. Transactions sent to any fraudulent address after our website was shut down will not be compensated.

This was a damaging event to both our contributors and our company, but it is surely not the end of our project. We are looking into the security breach and will update you all as soon as possible about the findings.

CoinDash added that it was still under attack. “Please do not send any ETH [Ethereum] to any address, as the Token Sale has been terminated,” it warned.

Tracking down the cybercrooks will be a battle of technical skills between attackers and those hoping to catch them, according to security experts. “If the hackers mess up, they can be traced, but smart hackers could cover their tracks – unless smarter hackers later uncover those tracks,” Rob Graham of Errata Security told El Reg.

Mikko Hypponen of F‑Secure added: “If they cash in (and don’t think through how to do it right) they can be found. Not holding my breath.”

Even tracking down the criminals won’t undo the damage already done to CoinDash, which has joined a growing list of hacked or otherwise compromised digital trading platforms.

Brian Honan, founder of Ireland’s CSIRT and a special advisor on internet security to Europol, told El Reg: “This not being the first loss incurred by a platform, it will no doubt undermine the trust and confidence in those platforms, making many much slower to adopt digital currencies.”

Fintech and payments technology guru Neira Jones agreed that the CoinDash hack is not going to have a good effect on confidence.

Adoption at risk

Kyle Wilhoit, senior security researcher at DomainTools, added: “I think this type of attack goes to show even cryptocurrency trading systems can fall prey to attackers. I think this type of incident only helps to slow the progressive growth and adoption of cryptocurrencies.”

“While this may be considered isolated, these types of incidents prove that there are still serious security flaws with how some systems manage and trade cryptocurrencies. Ultimately, CoinDash has mentioned that 37,000 Etherum ($7,803,194 USD equivalent) were stolen during this attack, making it a significant event,” he added.

Insurex, another trading platform, suffered a similar problem last week after hackers hijacked a Twitter feed to post fraudulent messages about pre-sales, directing marks to send digital cash into an account controlled by the crims. Insurex responded by warning punters to be wary of followup scams.

Elsewhere, South Korean police are probing an online subversion attack detected last month on Bithumb – one of the world’s biggest Bitcoin exchanges – that exposed the personal details of thousands of traders. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/18/coindash_hack/