
Google’s voice Assistant gets new privacy ‘undo’ commands

Google’s controversial voice Assistant is getting a series of new commands designed to work like privacy-centric ‘undo’ buttons.

Assistant, of course, is inside an estimated one billion devices, including Android smartphones, countless brands of home smart speaker, and TV sets based on the Android OS.

But these are only the pioneers for an expanding AI empire. This year Assistant should start popping up in headphones, soundbars, ‘smart’ computer displays and, via Android Auto, more motor cars.

If this sounds oppressive, you could be in for a tough few years, because Assistant and its rivals (Alexa, Siri, Cortana, and Samsung’s Bixby) could soon be inside anything and everything a human being might reasonably ask to perform a task.

And yet 2019 was the year Google finally got the message that the system’s hidden risks might quickly become the sort of privacy itch that is hard to scratch if it’s not careful.

This included controversies over who might be listening to recordings without users having given consent. Others have likened it to a poorly regulated privacy-killing genie Google won’t voluntarily put back in the bottle.

Google, stop that

Google hopes its new commands will counter that impression by offering some control over what Assistant pays attention to.

Right now, Assistant activates for English speakers when it hears the commands, “Hey Google,” or “Ok Google.”

The problem is that Assistant can activate when it mishears a similar string of words, which leaves users unsure what and when it might be recording speech sent to Google’s AI cloud.

Soon (presumably after any necessary updates) users will be able to calm their paranoia with the new command “Hey Google, that wasn’t for you.”

Users can already manually delete interactions with Assistant through Voice & Audio Activity, but this will now be possible using the command, “Hey Google, delete everything I said to you this week.”

Other commands include, “Hey Google, are you saving my audio data?” (which brings up a privacy FAQ on a screen) and “How do you keep my information private?”

These look like a logical extension to Google’s revamped Assistant privacy policies, announced last September. Indeed, it’s not hard to imagine that the number of questions users can ask Assistant about privacy might in time expand even more.

Plain sailing?

As helpful as this development might seem, it’s not exactly a great advert for privacy that users must tell a system not to do something many probably didn’t realise it could do in the first place – and that’s before factoring in all the rival voice AI systems people might also be using.

The wider problem for Google is that privacy isn’t just about voice: it extends to the whole environment of IoT devices enabled by the platform, of which Assistant is only one part.

Take the company’s Home Hub system, which earlier this week had to disconnect Xiaomi IP cameras after someone discovered they were being fed images from other people’s units.

This is how privacy – and the crises it occasionally throws up – work. Using IoT, and interfaces such as voice control, involves trade-offs. Some now suspect these compromises might be inviting trouble in the long run.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/suVe_eSbeHg/

Apple’s scanning iCloud photos for child abuse images

Apple has confirmed that it’s automatically scanning images backed up to iCloud to ferret out child abuse images.

As the Telegraph reports, Apple chief privacy officer Jane Horvath, speaking at the Consumer Electronics Show in Las Vegas this week, said that this is the way that it’s helping to fight child exploitation, as opposed to breaking encryption.

[Compromising encryption is] not the way we’re solving these issues… We are utilizing some technologies to help screen for child sexual abuse material.

Horvath’s comments make sense in the context of the back-and-forth over breaking end-to-end encryption. Last month, during a Senate Judiciary Committee hearing that was attended by Apple and Facebook representatives who testified about the worth of encryption that hasn’t been weakened, Sen. Lindsey Graham asserted his belief that unbroken encryption provides a “safe haven” for child abusers:

You’re going to find a way to do this or we’re going to do this for you.

We’re not going to live in a world where a bunch of child abusers have a safe haven to practice their craft. Period. End of discussion.

Though some say that Apple’s strenuous Privacy-R-Us marketing campaign is hypocritical, it’s certainly earned a lot of punches on its frequent-court-appearance card when it comes to fighting off demands to break its encryption.

How, then, does its allegiance to privacy jibe with the automatic scanning of users’ iCloud content?

Horvath didn’t elaborate on the specific technology Apple is using, but whether the company is using its own tools or one such as Microsoft’s PhotoDNA, it’s certainly not alone in using automatic scanning to find illegal images. Here are the essentials of how these technologies work and why they only threaten the privacy of people who traffic in illegal images:

A primer on image hashing

A hash is created by feeding a photo into a hashing function. What comes out the other end is a digital fingerprint that looks like a short jumble of letters and numbers. You can’t turn the hash back into the photo, but the same photo, or identical copies of it, will always create the same hash.

So, a hash of a picture turns out no more revealing than this:

48008908c31b9c8f8ba6bf2a4a283f29c15309b1
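Producing a fingerprint like that takes only a few lines of code. Here’s a minimal sketch using Python’s standard hashlib module; the filename is just a placeholder:

```python
import hashlib

def sha1_of_file(path):
    # Read the file in chunks and return its SHA-1 digest as a hex string;
    # identical copies of a photo always produce exactly the same value.
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha1_of_file("photo.jpg"))  # placeholder file, e.g. prints 48008908c31b...
```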

Since 2008, the National Center for Missing & Exploited Children (NCMEC) has made available a list of hash values for known child sexual abuse images, provided by ISPs, that enables companies to check large volumes of files for matches without those companies themselves having to keep copies of offending images.
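In practice, that check boils down to a set lookup. The sketch below is illustrative only: known_hashes.txt is a hypothetical local file of hex digests, not a real NCMEC feed or API.

```python
import hashlib
from pathlib import Path

# Hypothetical local copy of a hash list, one hex digest per line.
known_hashes = set(Path("known_hashes.txt").read_text().split())

def is_known_image(path):
    # Hash the file and test membership; no offending image ever needs
    # to be stored or viewed to perform the comparison.
    digest = hashlib.sha1(Path(path).read_bytes()).hexdigest()
    return digest in known_hashes
```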

Hashing is efficient, though it only identifies exact matches. If an image is changed in any way at all, it will generate a different hash, which is why Microsoft donated its PhotoDNA technology to the effort. Some companies, including Facebook, are likely using their own sophisticated image-recognition technology, but it’s instructive to look at how PhotoDNA identifies images that are similar rather than identical: namely, PhotoDNA creates a unique signature for an image by converting it to black and white, resizing it, and breaking it into a grid. In each grid cell, the technology finds a histogram of intensity gradients or edges from which it derives its so-called DNA. Images with similar DNA can then be matched.
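PhotoDNA itself is proprietary, but the general idea of a perceptual hash can be sketched in a few lines. The difference hash (dHash) below, built with the Pillow imaging library, is a much cruder stand-in shown only to illustrate the shrink-simplify-compare approach; it is not Microsoft’s or Apple’s algorithm.

```python
from PIL import Image  # Pillow

def dhash(path, hash_size=8):
    # Reduce the image to grayscale and a tiny (hash_size+1) x hash_size grid,
    # so the result tolerates resizing, recompression and minor edits.
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())

    bits = []
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits.append("1" if left > right else "0")

    # Pack the comparison bits into a hex string; visually similar images
    # produce hashes that differ in only a few bits.
    return f"{int(''.join(bits), 2):0{hash_size * hash_size // 4}x}"
```

Matching then becomes a question of how many bits two hashes differ by (their Hamming distance) rather than exact equality.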

Given that the amount of data in the DNA is small, large data sets can be scanned quickly, enabling companies including Microsoft, Google, Verizon, Twitter, Facebook and Yahoo to find needles in haystacks and sniff out illegal child abuse imagery.

But how does hashing work with encryption?

Hany Farid, one of the people who helped develop PhotoDNA, wrote an article for Wired that tackled the question of how hashing can work on content that’s been encrypted by Apple or Facebook.

Again, we don’t know if Apple’s using PhotoDNA, per se, or its own, homegrown hashing, but here’s his take on either one’s efficacy when used within end-to-end encryption:

Recent advances in encryption and hashing mean that technologies like PhotoDNA can operate within a service with end-to-end encryption. Certain types of encryption algorithms, known as partially or fully homomorphic, can perform image hashing on encrypted data. This means that images in encrypted messages can be checked against known harmful material without Facebook or anyone else being able to decrypt the image. This analysis provides no information about an image’s contents, preserving privacy, unless it is a known image of child sexual abuse.

Apple’s Commitment to Child Safety

An Apple spokesman pointed the Telegraph to a statement on the company’s website, “Our Commitment to Child Safety”, which says:

Apple is dedicated to protecting children throughout our ecosystem wherever our products are used, and we continue to support innovation in this space.

As part of this commitment, Apple uses image matching technology to help find and report child exploitation. Much like spam filters in email, our systems use electronic signatures to find suspected child exploitation.

Accounts with child exploitation content violate our terms and conditions of service, and any accounts we find with this material will be disabled.

Apple’s been doing this a while

At CES, Horvath merely confirmed what others had picked up on months ago: Apple has been scanning photos for months, at least. In October, Mac Observer noted that it had come across a change to Apple’s privacy policy dating back at least as far as a 9 May 2019 update. In that update, Apple inserted language specifying that it scans for child abuse imagery:

We may …use your personal information for account and network security purposes, including in order to protect our services for the benefit of all our users, and pre-screening or scanning uploaded content for potentially illegal content, including child sexual exploitation material.

In sum, scanning for child abuse materials isn’t new, all the tech giants are doing it, and Apple’s not reading actual messages or looking directly at photo content. It’s just keeping an eye out for a string of telltale characters, as gray as numbers and letters but as telling as the scent of blood to a hound.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Z3fh0bRjivU/

S2 Ep22: Word doc stops fraud, bye bye Python 2, latest from the ransomware swamp – Naked Security Podcast

This week we discuss the IT exec who scammed his employer out of $6m with fake invoices and the death of Python 2. Peter also shares two of his latest investigations from the ransomware swamp.

Mark reached peak peeve in this episode and so asked us to share our own technology pet peeves. Did you agree with us?

Producer Alice Duckett is joined by Mark Stockley, Greg Iddon and Peter Mackenzie in this week’s episode.

Thank you to everyone who gives us feedback on the podcast and helps us promote it on social media, it really helps us reach more people.

Listen now!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/dxdFXkYo_F0/

Browser zero day: Update your Firefox right now!

Just two days after releasing Firefox 72, Mozilla has issued an update to patch a critical zero-day flaw.

According to an advisory on Mozilla’s website, the issue identified as CVE-2019-17026 is a type confusion bug affecting Firefox’s IonMonkey JavaScript Just-in-Time (JIT) compiler.

Simply put, a JIT compiler takes JavaScript source code, as you’ll find in most web pages these days, and converts it to executable computer code, so that the JavaScript runs directly inside Firefox as if it were a built-in part of the app.

This typically improves performance, often noticeably.

Ironically, most modern apps implement what’s called DEP, short for Data Execution Prevention, a threat mitigation that helps stop crooks from sending over what looks like innocent data but then tricking the app into running that data as if it were an already-trusted program.

(Code that’s disguised as data is known in the jargon as shellcode.)

DEP means that once a program is running, the data it consumes – especially if it originates from an untrusted source – can’t be turned into executing code, whether accidentally or otherwise.

But JIT compilers have to exempt themselves from DEP controls, because converting data to code and running it is precisely what they do – and that’s why crooks love to probe for flaws in JIT systems.

This bug was reported to Mozilla by Chinese security company Qihoo 360, but the bad news is that attackers were one step ahead of Mozilla, which said:

We are aware of targeted attacks in the wild abusing this flaw.

Nothing has yet been revealed about the nature of the attacks beyond that remark.

The word targeted is often used to imply a campaign run by so-called state-sponsored actors, but it’s safer to assume that anyone and everyone could be at risk – what starts as a limited campaign against specific targets can quickly be picked up by more mainstream attackers.

The last time Mozilla had to patch a zero day was last June when it fixed two in a single week that were being used to target cryptocurrency exchanges.

What to do?

If you use the regular version of Firefox, make sure you have version 72.0.1.

Your Firefox may well have updated automatically, but it’s worth checking.

Go to Help > About Firefox (or Firefox > About Firefox on a Mac), where you will see the current version number and be offered an update if you’re still behind.

Some Linux distros and many businesses stick to Firefox’s Extended Support Release (ESR) because it gets security fixes at the same pace as the regular version, but doesn’t force you to take on new features at every update.

So if you are an ESR user, you need to update to 68.4.1esr to get this patch. (Note that 68+4=72, which is a general way of telling which ESR release corresponds in security updates to the current bleeding-edge version.)

Note to Tor users

Importantly, the browser that comes with Tor, the privacy-enhancing software bundle that helps you browse without being tracked, is a special build of Firefox ESR.

Unfortunately, Tor only updated within the last 24 hours to the 68.4.0esr version of Firefox’s code, and hasn’t got its 68.4.1esr update out yet.

The Tor site currently [2020-01-09T12:00Z] says, “we are planning to release version 9.0.4 of Tor Browser picking up this fix soon,” so keep your eyes out for an update – a zero-day attack that works against the browser in Tor could undo the anonymity and privacy that made you choose Tor in the first place.

In the meantime, we think you can mitigate the risk in Tor by turning the IonMonkey JIT system off altogether – put about:config in the address bar, find the entry javascript.options.ion and change it from true (the default) to false.

This may slow down some of your browsing slightly, but as far as we can tell, it skips the IonMonkey JIT compilation process and therefore sidesteps this bug.
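If you would rather make the change persistent across restarts (for example in a managed ESR or Tor Browser profile), the same pref can be written to the profile’s user.js file. The helper below is a hypothetical convenience script, not an official Mozilla tool, and the profile path is yours to supply (see about:profiles):

```python
from pathlib import Path

# The pref named in the mitigation above; false disables the IonMonkey JIT.
PREF_LINE = 'user_pref("javascript.options.ion", false);\n'

def disable_ionmonkey(profile_dir):
    # Append the pref to user.js unless it is already set there.
    user_js = Path(profile_dir) / "user.js"
    existing = user_js.read_text() if user_js.exists() else ""
    if "javascript.options.ion" not in existing:
        with user_js.open("a") as f:
            f.write(PREF_LINE)

disable_ionmonkey("/path/to/your/firefox/profile")  # placeholder path
```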

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/8rtXfw6rWto/

Rockwell Automation to Buy ICS Security Services Firm

Industrial control systems vendor plans to acquire Avnet Data Security, which provides penetration testing, assessments, training, and managed network and security services for the ICS sector.

In the latest move by a major industrial control systems (ICS) vendor to beef up its cybersecurity portfolio, Rockwell Automation has announced plans to purchase Israeli cybersecurity services firm Avnet Data Security.

Rockwell Automation yesterday said it has signed an agreement to buy the privately held Avnet Data Security, which provides penetration testing, assessments, training, and managed network and security services for the ICS sector. The company said cybersecurity is one of the fastest-growing segments of the services side of its business. 

“Avnet’s combination of service delivery, training, research, and managed services will enable us to service a much larger set of customers globally while also continuing to accelerate our portfolio development in this rapidly developing market,” said Frank Kulaszewicz, senior vice president of control products and solutions at Rockwell Automation.

Financial details of the deal, which is expected to close in early 2020, were not disclosed.

Read more here

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Car Hacking Hits the Streets.”

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/risk/rockwell-automation-to-buy-ics-security-services-firm/d/d-id/1336754?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

7 Free Tools for Better Visibility Into Your Network

It’s hard to protect what you don’t know is there. These free tools can help you understand just what it is that you need to protect — and need to protect yourself from.

What’s on your network? It’s a simple question, but one that countless security and network management teams struggle to answer because most enterprise networks are dynamic, living things that change at a rapid pace. That change is the key to adapting to a changing business environment — and key to criminals’ ability to breach the perimeter and gain access to enterprise assets.

Security teams tend to have a very good idea of what the network looked like on the day it went live. Nevertheless, conversations with consultants (and over drinks at conferences) overflow with complaints and confessions about how those same teams are ignorant of what the network looks like right now. That’s a problem. And it becomes a bigger problem when it runs into the reality of the way that criminal hackers work.

Criminal hackers specialize in understanding how a targeted network is configured today. The extent to which they understand every component and interface is the extent to which they can find exploitable vulnerabilities. And those weaknesses are even more vulnerable if the network owner doesn’t know they exist.

So one of the first steps in protecting a network is understanding precisely what is there to be protected. There are a number of different commercial products that can help provide an inventory and map of a network. But for many smaller organizations, even lower cost tools can be difficult additions to the security budget. That’s why the focus of this article is on free products that provide network visibility and monitoring.
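To make “understanding what is there” concrete: even before reaching for a dedicated tool, a sweep as crude as the Python sketch below (one TCP port across a single /24, with a made-up address range) will surface hosts you didn’t know were answering. The free tools covered in this slideshow do the same job far more thoroughly.

```python
import socket

def hosts_answering(prefix="192.168.1.", port=22, timeout=0.3):
    # Crude discovery: attempt a TCP connection to one port on every
    # address in a /24 and record which hosts accept it.
    live = []
    for last_octet in range(1, 255):
        addr = f"{prefix}{last_octet}"
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((addr, port)) == 0:  # 0 means the connection succeeded
                live.append(addr)
    return live

print(hosts_answering())  # adjust prefix and port for your own network
```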

Some of the products on this list are open source and some are not. Several of them may require an investment of time and effort to make up for the lack of a purchase price. Regardless, each of these could be a way for a security team to either get its first solid picture of its current network or augment the view provided by other tools. In either case, visibility is always a good thing.

We’re curious; are there free or open source network discovery and monitoring tools that you use? Are there any that you’ve tried and abandoned? We’d like to hear about your experience — let us know in the comment section, below!

(Image: GoodIdeas via Adobe Stock)

 

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/analytics/security-monitoring/7-free-tools-for-better-visibility-into-your-network/d/d-id/1336751?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Operationalizing Threat Intelligence at Scale in the SOC

Open source platforms such as the Malware Information Sharing Platform are well positioned to drive a community-based approach to intelligence sharing.

Today’s security operations centers (SOCs) are struggling. Cyber threats are ever-increasing and growing daily in sophistication. Massive volumes of data created every second lead to new vulnerabilities and attack vectors. How do SOCs keep pace with the threats happening across the landscape and better understand them to increase their organization’s security posture?

A Ponemon Institute survey, “Improving the Effectiveness of the Security Operations Center,” found that 53% of respondents believe their SOC is ineffective at gathering evidence, investigating, and finding the source of threats. To be effective, SOCs must have access to the right data with the right context at the right time to fulfill their mission of identifying and responding to threats.

Cyber threat intelligence (CTI) has become a key tool for SOCs in this mission. According to the 2019 SANS “Evolution of Cyber Threat Intelligence” survey, 70% of customers consider CTI a necessity for security operations. Yet many organizations still struggle to operationalize the disparate sources of threat intelligence or institute an effective culture of sharing to combine forces against adversaries.

Leveraging the Different Types of Threat Data
CTI feeds come from many sources, which fall into two primary categories. The first is open source/community feeds, such as the Collective Intelligence Framework (CIF) and sector-based Information Sharing and Analysis Centers (ISACs); the second is vendor-specific, paid-for threat intelligence services (iDefense, Cisco, Team Cymru, Greynoise.io, McAfee, Symantec, ATLAS, Farsight, and Reversing Labs, to name a few). Security operations teams also produce proprietary intelligence that needs to be shared internally.

There are several challenges to making the most of this threat data, however:

• Scaling Up the Pyramid of Pain: The more useful the threat data, the more difficult (or painful) it is to obtain and integrate into workflows. Imagine a pyramid formed by threat data values versus the level of integration difficulty:

- Tactics, techniques, and procedures (TTPs): Tough (top of pyramid)

- Tools: Challenging

- Network/host artifacts: Annoying

- Domain names: Simple

- IP addresses: Easy

- Hash values: Trivial (base of pyramid)

At the top of the pyramid is intelligence on threat actor tools and TTPs. These are the most painful indicators to detect and verify but also the most useful for knowing context about threat actors, their intentions, and their methods to understand a threat well enough to respond.

• Timing, Complexity, Taxonomy, and Formatting: The period of time for which threat data is valid is limited. Organizations need current information about the vulnerabilities and malware being used in attacks before they are targeted. Intelligence feeds have shifting levels of urgency, and simplifying the prioritization process is a complex task.

In the past, security practitioners shared Word documents, PDFs, or simple file formats like CSV tables and Excel sheets of indicators of compromise. These were difficult to operationalize due to taxonomy and formatting differences, lack of integration, and the time-sensitive nature of the data. It is also difficult to describe and share a more complex behavioral indicator, such as a threat actor tactic, in a standardized format.

• Sharing and Consumption: The cyber community has tried — and failed — to institute an effective culture of sharing. Taxonomies and standards have been created, but none have caught on at scale, leaving access to CTI fragmented. As a result, most sharing doesn’t go beyond domains. And even though security analysts across industries share common goals, their organizations often don’t see it that way, so sharing and collaboration stay hidden from management.

Automated analysis and instantaneous sharing of threat intelligence is the key that has been missing to unlock CTI to live up to its potential value.

What Is MISP and Why Is It Important? 
MISP — the Malware Information Sharing Platform — has gained traction as a pragmatic, flexible approach to the threat intelligence consumption and sharing problem. MISP is a vendor-agnostic, open source standard with a growing community, co-financed by the European Union. It is an infrastructure for consuming, collecting, and sharing indicators of malware either in a trusted circle or with the general public, depending on the user’s preferences. MISP provides a number of benefits:

• MISP allows users to push and query known indicators of compromise collected and shared by a community of security practitioners from around the globe.

• MISP is flexible because it does not enforce a single methodology for sharing threat intelligence, and it outputs information in multiple formats.

• MISP reduces double-work within the intelligence community by sharing information across CERTs, organizations, governments, and security vendors.

• MISP provides a level of control to ensure that the right data is confidentially shared and consumed.

To operationalize threat intelligence at scale without overburdening security teams, practitioners should consider integrating MISP with their existing security information and event management (SIEM) solution. MISP is built for flexible ingestion and extraction, rapid analysis, automation, and sharing. A SIEM’s integration with MISP builds threat data consumption and threat sharing directly into the analyst workflow. This enables analysts to find and share threats much faster than before by correlating community intelligence with multiple other data sources through their SIEM.
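As a rough illustration of what that looks like from the analyst’s side, here is a minimal sketch using the PyMISP client library to query a MISP instance for an indicator before escalating an alert. The instance URL, API key, and IP address are placeholders, and the exact response shape can vary between PyMISP versions.

```python
from pymisp import PyMISP

MISP_URL = "https://misp.example.org"   # placeholder instance
MISP_KEY = "YOUR_API_KEY"               # placeholder API key

misp = PyMISP(MISP_URL, MISP_KEY, ssl=True)

# Ask the shared instance whether anyone has already reported this indicator.
results = misp.search(controller="attributes", value="203.0.113.42")

for attr in results.get("Attribute", []):
    print(attr["event_id"], attr["type"], attr["value"])
```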

The SANS survey found that 33.7% of organizations produce and consume threat intelligence, while 60.5% only consume it. This means that a majority of organizations recognize the value of CTI yet for different reasons currently decline to participate in sharing their findings.

Open source platforms such as MISP, combined with automated integration into SIEMs, are well positioned to drive a community-based approach to intelligence sharing. This will enable SOCs to stand together through collaborative analysis, potentially driving the industry to an inflection point for moving beyond source-to-subscriber to partner-to-partner sharing.

Sebastien Tricaud is the Director of Security Engineering at Devo. He has worked on numerous open source projects, such as Linux PAM, Prelude IDS, etc. He is the lead developer of Faup, a URL Parser that is widely popular. Sebastien is also a board member of the Honeynet … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/operationalizing-threat-intelligence-at-scale-in-the-soc/a/d-id/1336702?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

AWS Issues ‘Urgent’ Warning for Database Users to Update Certs

Users of AWS Aurora, DocumentDB, and RDS databases must download and install a fresh certificate and rotate the certificate authority.

Amazon Web Services has issued an “important” warning to users of its Amazon Aurora, Amazon Relational Database Service (RDS), and Amazon DocumentDB (with MongoDB compatibility) databases, urging them to update their certificates by January 14, 2020.

Those who use SSL/TLS certificate validation when they connect to database instances are urged to download and install a fresh certificate, rotate the certificate authority (CA) for the instances, and reboot the instances. Users who don’t have SSL/TLS connections or certificate validation don’t need to make any updates; however, AWS advises doing so in case they want to use SSL/TLS connections in the future.

This process is standard: SSL/TLS certificates for RDS, Aurora, and DocumentDB expire and are replaced every five years as part of standard maintenance. Users may already have received an email or console notification alerting them to the process.

Instances created on or after January 14 will have the new (CA-2019) certificates, made available in September 2019. Users can temporarily switch back to the old (CA-2015) certificates if needed. CA-2015 certificates will expire on March 5, 2020; at this point, applications that use certificate validation but haven’t been updated will lose connectivity.
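For those scripting the rotation rather than clicking through the console, the CA can be switched per instance through the RDS API. A minimal boto3 sketch follows; the instance identifier is a placeholder, and ApplyImmediately triggers the reboot described above, so schedule it for a maintenance window if you prefer.

```python
import boto3

rds = boto3.client("rds")

# Point the instance at the new certificate authority; the instance reboots to apply it.
rds.modify_db_instance(
    DBInstanceIdentifier="my-database-1",    # placeholder instance name
    CACertificateIdentifier="rds-ca-2019",   # the CA-2019 bundle described above
    ApplyImmediately=True,
)
```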

Read more details here.

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Car Hacking Hits the Streets.”

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/cloud/aws-issues-urgent-warning-for-database-users-to-update-certs/d/d-id/1336766?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Developers Still Don’t Properly Handle Sensitive Data

The top classes of vulnerabilities for 2019 indicate that developers still don’t correctly sanitize inputs, nor protect passwords and keys as they should.

Open-source software projects continue to struggle with handling sensitive information, according to automated scans of hundreds of millions of commits to code repositories.

Software-security toolmaker DeepCode found that four of the seven vulnerability classes with the greatest impact on the security of software projects had to do with failures to protect data. The categories of Missing Input Data Sanitization and Insecure Password Handling claimed the top two slots on the company’s list of important vulnerability classes. Two other data security issues — Weak Cryptography and Lack of Information Hiding — came in at No. 6 and No. 7 on the list, which was published this week.
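As a generic illustration of what the insecure-password-handling class calls for (this is not DeepCode’s detection logic), the fix is usually as simple as storing a salted, slow hash instead of the password itself:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # Salted PBKDF2: the stored pair reveals nothing useful if the database leaks.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)
```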

The issues underscore that developers need to continue to focus on producing secure code in 2020, says Boris Paskalev, CEO and co-founder of the company. “We believe that any developer who has these issues should want to take care of them,” he says, adding that developers should continue to learn about security. “In at least every other major repository, we see a security vulnerability.”

Driven by increased research into software security, more software under development, companies’ greater openness to vulnerability reporting and, perhaps most of all, improvements to the process of recording vulnerability reports, the number of software security issues published in the National Vulnerability Database rose to its highest recorded level in 2019, surpassing 17,300 issues reported during the year.

This continues a trend that started in 2017, when the number of vulnerabilities reported annually jumped to 14,645, more than doubling the prior year’s tally.

Focusing on all the issues is impossible, so security teams and developers have to be selective about where they invest their security effort and training. The Common Weakness Enumeration (CWE) list of most dangerous software errors, for example, points at different classes of vulnerabilities based on their relative frequency of occurrence over the past two years. 

Yet, developers have trouble closing security holes in a timely manner. In a report based on tests of more than 85,000 applications, software security firm Veracode found that companies only manage to fix 56% of vulnerabilities between the first and last scans.

From its scans of open-source repositories and the commits — changes made by the developers — to those projects, the company can track the ebb and flow of vulnerabilities as they are deployed to code, caught, and then fixed. The list of the most important security vulnerabilities can act as a cheat sheet for developers, Paskalev says.

“Almost any large open-source framework has these issues, so it’s good to know what to look out for,” he says. 

The list created by DeepCode does not count just the most common vulnerabilities, but instead what the company considers the most important. The top category of defect is malformed date-time values and the mishandling of those variables, but those bugs typically do not have a major impact on programs, Paskalev says. The company categorizes vulnerabilities into 200 categories and focuses on the most impactful for the list.

“A bug could be anything from an unsanitized input to broken pipes,” he says. “Most of them result in performance degradation or a resource or memory leak, things like that.”

Automated tools are critical to finding vulnerabilities and catching coding errors before the software is deployed, especially as software is produced with increasing velocity, Paskalev says.

“A set of tools — automated tools — are needed to make sure they catch these issues,” he says. “You want to check as often as possible. I almost see a continuous checking like a debugger.”

Yet, corporate management also needs to create an environment where security is a focus for developers. While seven in ten developers are expected to write secure code in their jobs, half of developers only find coding errors after applications are deployed to a test environment or a later stage, according to a July report by DevOps service provider GitLab.

Related Content

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “What Tools Will Find Misconfigurations in My AWS S3 Cloud Buckets?”

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline … View Full Bio

Article source: https://www.darkreading.com/application-security/developers-still-dont-properly-handle-sensitive-data-/d/d-id/1336752?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Las Vegas Suffers Cyberattack on First Day of CES

The attack, still under investigation, hit early in the morning of Jan. 7.

On the opening day of the huge Consumer Electronics Show (CES), officials in Las Vegas were busy assessing the damage from a cyberattack that hit the city. Officials there reportedly said preliminary analysis indicated that no sensitive data was compromised in the attack, which began around 4:30 a.m. local time Tuesday, Jan. 7.

The attack reportedly appears to have begun with a malicious link included in an email to a city employee. There was thus far no indication that the attack was related either to the recent military activity involving Iran or to the start of CES.

This is a developing story. For more, read here.

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Car Hacking Hits the Streets.”

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/las-vegas-suffers-cyberattack-on-first-day-of-ces-/d/d-id/1336753?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple