STE WILLIAMS

Container ship loading plans are ‘easily hackable’

Security researchers have warned that it might be possible to destabilise a container ship by manipulating the vessel stowage plan or “Bay Plan”.

The issue stems from the absence of security in BAPLIE EDIFACT, a messaging system used to create ship loading and container stowage plans – for example which locations are occupied and which are empty – from the numerous electronic messages exchanged between shipping lines, port authorities, terminals and ships.

The messaging standard is developed and maintained by the Shipping Message Development Group (SMDG).

Criminals less interested in destabilising ships than in stealing goods by rerouting containers would instead use COPRAR, COPARN, CODECO and COARRI messages, which handle shipping line to terminal messaging and vice versa.

Evidence suggests that ship and terminal messaging systems have been abused at times in order to either conceal or re-route drugs or steal valuables. “We believe this was done using front end GUIs in port rather than manipulating the data itself,” according to Joe Bursell, a security researcher at Pen Test Partners.

Rollover

BAPLIE messages, once their syntax is understood, might potentially be manipulated to change the destinations of cargo, money and more. Pen Test Partners was more interested in message subsets that are found in “LIN” line items about contents and handling for individual containers.

Most straightforwardly it’s possible to manipulate container weight and thus the ship’s balance.

A potential hacker would simply search the message for VGM (Verified Gross Mass). The trailing value is the weight, so changing this value to make it either lighter or heavier would mean that the vessel load-planning software would place the container in the wrong place for stability. “Some ports may intercept the wrong weight at a weighbridge or possibly at the crane, but overloading containers to save on shipping cost is already a significant issue in some regions,” Bursell explained.
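How little stands in the way can be sketched in a few lines. The segment layout below (MEA+AAE+VGM+KGM:<kg>') is a simplified assumption rather than a faithful BAPLIE excerpt, but it illustrates the point: nothing authenticates the weight field.

```python
# Illustrative sketch only: the MEA/VGM segment layout here is a simplified
# assumption, not a faithful BAPLIE excerpt. The point is that an
# unauthenticated weight field can be rewritten in transit.
def tamper_vgm(edifact: str, new_weight_kg: int) -> str:
    """Rewrite the weight in any VGM measurement segment."""
    segments = edifact.split("'")              # EDIFACT segments end with '
    out = []
    for seg in segments:
        if seg.startswith("MEA") and "+VGM+" in seg:
            head, _, _ = seg.rpartition(":")   # drop the old weight value
            seg = f"{head}:{new_weight_kg}"
        out.append(seg)
    return "'".join(out)

msg = "LOC+147+0010204'MEA+AAE+VGM+KGM:24000'"
print(tamper_vgm(msg, 4000))   # the 24-tonne box now claims 4 tonnes
```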

Researchers explained that it might be possible, using similar trickery, to place a mislabelled heavy container at the top of the stack, moving the centre of gravity too high. For example, it’s possible to set the handling instruction to “load third tier on deck”, i.e. high up and out of the hold. Manipulating the weight distribution matters because the ship becomes progressively less stable as heavy goods are loaded higher up the stack.

Reefer madness

Certain attributes can be set for a container to flag that it needs special handling. Manipulating the message opens the door to all sorts of mischief.

For example, the status of a batch of explosive materials could be changed to that of regular liquids. Alternatively, a potential hacker could modify the declared flashpoint of a flammable vapour.

Refrigerated containers need special handling, as they must be located in certain bays that have power supplies. A particular code states that the container is a “reefer”, so the load plan software will assign it to a powered bay.

Mischief-makers could change the designation of a batch of goods that needs refrigeration to signify normal handling or (more subtly) that the refrigeration unit is inoperative, so the goods can be placed anywhere. For a batch of prawns, for example, the consequences of such trickery would be altogether malodorous.
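A toy load-planner makes the mechanism concrete; the field names and bay codes below are invented for illustration, not real BAPLIE values.

```python
# A toy load-planner, illustrating why flipping a "reefer" flag (or marking
# the cooling unit inoperative) changes where a container lands. Field names
# and bay codes are invented for illustration, not real BAPLIE codes.
def assign_bay(container: dict, powered_free: list, dry_free: list) -> str:
    """Reefers with working units go to powered bays; everything else
    goes wherever there is space."""
    needs_power = container.get("reefer") and not container.get("unit_inoperative")
    return (powered_free if needs_power else dry_free).pop(0)

prawns = {"id": "MSKU1234567", "reefer": True}
print(assign_bay(prawns, ["B01", "B02"], ["D01", "D02"]))    # B01: powered

tampered = dict(prawns, unit_inoperative=True)
print(assign_bay(tampered, ["B01", "B02"], ["D01", "D02"]))  # D01: anywhere
```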

Certain cargoes are sensitive to strong smells, coffee in particular, so handling codes are set to place them well away from smelly things. Pranksters could potentially change the designation so that a container full of odour-sensitive goods, such as coffee, has its doors open and is located next to a container of fishmeal, which will be tagged as odorous.

To make things even worse the combo could be assigned to a hold using the “keep dry” code where there’s poor air circulation.

“Whatever happens, the coffee will stink of fish on arrival at port,” Bursell writes.

The integrity of BAPLIE messaging is critical to the safety of container ships.

“I strongly encourage all operators, ports and terminals to carry out a thorough review of their EDI systems to ensure that message tampering isn’t possible,” Bursell concluded.

The BAPLIE protocol does feature a basic integrity check: the UNT message trailer carries a count of the total number of message segments, including itself but excluding the UNH message header.

“So, if you remove or add a message segment, don’t forget to update the UNT [message] trailer,” Bursell explained. “If you’re just manipulating segment values, you don’t need to worry about UNT.”
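The trailer bookkeeping Bursell describes can be sketched as follows; the counting rule here (every segment except the UNH header, including UNT itself) follows the article’s description of the checksum.

```python
# Sketch of UNT trailer bookkeeping after tampering with a message.
# Counting rule follows the article: all segments except UNH.
def fix_unt(edifact: str) -> str:
    """Recompute the segment count carried in the UNT trailer."""
    segments = [s for s in edifact.split("'") if s]
    count = sum(1 for s in segments if not s.startswith("UNH"))
    fixed = []
    for seg in segments:
        if seg.startswith("UNT+"):
            parts = seg.split("+")
            parts[1] = str(count)              # update the count element
            seg = "+".join(parts)
        fixed.append(seg)
    return "'".join(fixed) + "'"

doctored = "UNH+1+BAPLIE'LOC+A'MEA+B'UNT+99+1'"   # count is stale
print(fix_unt(doctored))
```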

The terminal/ship/port receiving a doctored message will probably respond with a CONTRL message, acknowledging receipt.

This isn’t much of a stumbling block, either.

“If you’re intercepting and forwarding the entire EDI message stream, be prepared to spoof a message back to the sender,” Bursell notes. “It’s easy to generate the correct CONTRL message for your modified request: there’s a generator here.”

“Already there is evidence of theft of valuable items from containers in port, potentially through insider access by criminals to load information. It doesn’t take much imagination to see some far more serious attacks,” Bursell concluded. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/20/container_ship_loading_software_mischief/

3 Ways to Retain Security Operations Staff

Finding skilled security analysts is hard enough. Once you do, you’ll need to fight to keep them working for you. These tips can help.

The shortfall in security professionals, and most notably security operations center (SOC) analysts, has been well documented. However, hiring skilled security analysts is only part of the problem. Even if an organization is able to recruit security analysts, retaining them in the long term is an even greater challenge. The foundational market forces of supply and demand enable these professionals to easily jump ship, often achieving a higher salary and title in the process.

During my time at Gartner, informal feedback I received from managed security service providers (MSSPs) indicated that the average retention period for a junior SOC analyst was between 12 and 18 months. It’s important to bear in mind that MSSPs are generally able to offer a better career advancement path for SOC employees than most enterprises.

Nevertheless, using the right techniques, retention can be improved. Here are the top three ways to attract and retain SOC analysts.

1. Convert Roles to Duties, and Then Rotate Them
The primary roles in a SOC, with some variation, are shown in Figure 1.

Figure 1.

The greatest mistake organizations make is defining these as fixed roles (jobs). Tier 1 work is repetitive, monotonous, and intellectually unchallenging. In addition, anyone who has ever stared at an alert console for months on end can attest that it conditions analysts to pay less attention, which hurts both effectiveness and efficiency.

Meanwhile, staff retention in Tier 2 and Tier 3 roles is higher, which results in fewer new openings and promotion opportunities for junior analysts. Once junior analysts have successfully worked in a SOC for 12 months or more, they can easily find more senior roles with another organization.

Each one of the Tier 1 through 3 roles can easily be rotated, with analysts working in each position for one-week intervals. This approach distributes both the interesting and tedious work across the team, which improves alertness and provides everyone the opportunity to perform some intellectually challenging and interesting work.
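A weekly round-robin of the tiers is trivial to schedule. The sketch below uses hypothetical analyst names and tier labels.

```python
# A minimal sketch of weekly tier rotation; analyst names are hypothetical.
analysts = ["Ana", "Ben", "Chloe"]
tiers = ["Tier 1 triage", "Tier 2 investigation", "Tier 3 hunting"]

def rotation(weeks: int):
    """Each analyst moves to the next tier every week, round-robin."""
    return [{a: tiers[(i + week) % len(tiers)]
             for i, a in enumerate(analysts)}
            for week in range(weeks)]

for week, duties in enumerate(rotation(3), start=1):
    print(f"Week {week}: {duties}")
```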

In addition to increasing retention, this rotation provides every analyst the opportunity to become familiar with the various roles required to operate a SOC. This cross-functional training helps mitigate skills gaps and maintain operational continuity if someone leaves the organization or is on paid time off.

2. Offer Phased Training and Certifications
Providing training and certifications is another great retention mechanism if offered based on employment tenure. For example, a new analyst may be offered a certification course such as the GIAC Certified Intrusion Analyst after 6 months of active employment, the GIAC Certified Forensic Analyst after 12 months, and the GIAC Certified Forensic Examiner after 24 months.

I’ve used GIAC here as an example, but SANS and other companies also offer similar courses. Correctly applied, such a system can help increase analyst retention rates from 12 to 18 months to up to 5 years. Alternatively, analysts across a team can be provided different certification courses in each phase. This will ensure that the team has a broad and comprehensive skill set, and the analysts that have attended a given course can train the remainder of the team to transfer knowledge.
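The tenure-gated path described above could be encoded as simply as the sketch below; the course choices are the article’s examples, not a fixed rule.

```python
# Tenure-gated certification path using the article's example courses:
# GCIA at 6 months, GCFA at 12, GCFE at 24.
MILESTONES = [
    (24, "GIAC Certified Forensic Examiner"),
    (12, "GIAC Certified Forensic Analyst"),
    (6, "GIAC Certified Intrusion Analyst"),
]

def earned(tenure_months: int) -> list:
    """Certifications an analyst is eligible for at a given tenure."""
    return [cert for months, cert in MILESTONES if tenure_months >= months]

print(earned(13))   # eligible for the 6- and 12-month courses
```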

Figure 2. Example Training Plans


3. Offer Step-up Retention Bonuses
Offering increasing retention bonuses for each year of employment rewards analysts for their loyalty and gives them an incentive to stay with the organization. The pay increase from an entry-level to a midcareer analyst is between 20% and 30%, so a good bonus strategy will ensure that a similar increase is achieved over a 3- to 5-year period.

In combination, these three strategies can significantly increase SOC analyst retention, reduce the cost of recruiting and training new analysts, and minimize the negative impact of employee turnover on operations.
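As a sketch of such a step-up scheme: the figures below assume a 25% total uplift (within the article’s 20-30% gap) paid as linearly increasing annual bonuses over four years.

```python
# Assumed figures: a 25% total uplift (within the article's 20-30% gap)
# paid as linearly increasing annual bonuses over four years.
def bonus_schedule(base_salary: float, target_uplift: float = 0.25,
                   years: int = 4) -> list:
    """Annual retention bonuses, stepping up each year, that sum to
    target_uplift * base_salary over the period."""
    weights = list(range(1, years + 1))        # 1, 2, 3, 4
    total = sum(weights)
    return [round(base_salary * target_uplift * w / total, 2)
            for w in weights]

print(bonus_schedule(60000))   # [1500.0, 3000.0, 4500.0, 6000.0]
```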

Related Content:

Oliver Rochford is the Vice President of Security Evangelism at DFLabs. He previously worked as research director for Gartner, and is a recognized expert on threat and vulnerability management, cybersecurity monitoring and operations management. Oliver has also been a … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/3-ways-to-retain-security-operations-staff/a/d-id/1330444?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

User experience test tools: a privacy accident waiting to happen

Researchers working on browser fingerprinting found themselves distracted by a much more serious privacy breach: analytical scripts siphoning off masses of user interactions.

Steven Englehardt (a PhD student at Princeton), Arvind Narayanan (a Princeton assistant professor) and Gunes Acar (KU Leuven) published their study at Freedom to Tinker last week. Their key finding is that session replay scripts are indiscriminate in what they scoop up, user permission is absent, and there’s evidence that the data isn’t always handled securely.

Session replay is a popular user experience tool: it lets a publisher watch users navigating their site to work out why users leave a site and what needs improving.

As the authors wrote in their analysis: “These scripts record your keystrokes, mouse movements, and scrolling behavior, along with the entire contents of the pages you visit, and send them to third-party servers. Unlike typical analytics services that provide aggregate statistics, these scripts are intended for the recording and playback of individual browsing sessions, as if someone is looking over your shoulder.”

Speaking to Vulture South, Englehardt said the trio decided to analyse fingerprinting by injecting a unique value into Web pages to see where personal information was being sent.

“We didn’t really expect to find” the session replay companies, he said.

The next surprise, he said, is how deep the session replay scripts dig.

Anonymity? They’ve heard of it

“You might think these recordings are anonymous, but some of the companies we studied are offering the option to identify the user — so you know that Richard viewed your site, along with his e-mail address”, Acar told The Register.

One reason this happens, they explained, is that as publishers increasingly put content behind secured paywalls, user activity becomes hard to follow.

Englehardt said the page the user is viewing “might only exist behind the login”, meaning that to capture a session for replay to the publisher, the third-party company has to “scrape the whole page”.

The scripts studied in their research came from companies including Yandex, FullStory, Hotjar, UserReplay, Smartlook, Clicktale, and SessionCam.

They also found replay scripts capturing checkout and registration processes.

The extent of that data collection meant that “sensitive information such as medical conditions, credit card details and other personal information displayed on a page” could leak to the third party as part of the recording, they wrote.

There is also the potential for data to leak to the outside world, when the customer views the replay, because some of the session recording companies offer their playback over unsecured HTTP.

“Even when a Website is HTTPS, and the information is sent [to the session replay company] over HTTPS, when the publisher logs in to watch the video, they watch on HTTP”, Englehardt said.

That meant network-based third parties could snoop on the replay.

Replay services whose publisher dashboards were unsecured included Yandex, Hotjar, and Smartlook.

The study also found the session replay scripts commonly ignore user privacy settings.

The EasyList and EasyPrivacy ad-blockers don’t block FullStory, Smartlook, or UserReplay scripts, although “EasyPrivacy has filter rules that block Yandex, Hotjar, ClickTale and SessionCam.”

“At least one of the five companies we studied (UserReplay) allows publishers to disable data collection from users who have Do Not Track (DNT) set in their browsers,” the study said. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/20/session_replay_exfiltration/

F5 DROWNing, not waving, in crypto fail

If you’re an F5 BIG-IP sysadmin, get patching: there’s a bug in the company’s RSA implementation that can give an attacker access to encrypted messages.

As the CVE assignment stated: “a virtual server configured with a Client SSL profile may be vulnerable to an Adaptive Chosen Ciphertext attack (AKA Bleichenbacher attack) against RSA, which when exploited, may result in plaintext recovery of encrypted messages and/or a Man-in-the-middle (MiTM) attack, despite the attacker not having gained access to the server’s private key itself.”

Named after Swiss cryptographer Daniel Bleichenbacher, the adaptive chosen ciphertext attack on RSA PKCS#1 v1.5 padding dates back to 1998. A related signature-forgery variant, outlined in a 2006 IETF mailing list post, lets an attacker append their own data to a signed hash so that it validates against a bogus key the attacker creates.
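The RSA property a Bleichenbacher-style attacker exploits is malleability: anyone can transform a ciphertext into a valid encryption of a related plaintext, which is what makes probing a padding oracle possible. A toy demonstration with textbook-RSA numbers (no padding, tiny primes, illustration only):

```python
# Toy numbers only: textbook RSA with no padding, showing the malleability
# a Bleichenbacher-style attacker exploits. Real attacks combine this with
# a PKCS#1 v1.5 padding oracle on the server.
p, q, e = 61, 53, 17
n = p * q                                  # public modulus (3233)
d = pow(e, -1, (p - 1) * (q - 1))          # private exponent (Python 3.8+)

m, s = 42, 7                               # message, attacker's multiplier
c = pow(m, e, n)                           # observed ciphertext
c_mod = (c * pow(s, e, n)) % n             # attacker-modified ciphertext

assert pow(c_mod, d, n) == (m * s) % n     # decrypts to s*m mod n
```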

F5’s patch announcement said:

Exploiting this vulnerability to perform plaintext recovery of encrypted messages will, in most practical cases, allow an attacker to read the plaintext only after the session has completed. Only TLS sessions established using RSA key exchange are vulnerable to this attack.

The vulnerable versions of BIG-IP are 11.6.0-11.6.2, 12.0.0-12.1.2 HF1, or 13.0.0-13.0.0 HF2.

Cloudflare’s “head crypto boffin” Nick Sullivan was horrified.

As Sullivan noted, DROWN (Decrypting RSA with Obsolete and Weakened Encryption) only worked in systems configured to enable the ancient SSLv2, which persisted in some servers. The server could be tricked into downgrading its crypto to SSLv2.

The F5 vulnerability was discovered by Hanno Böck, Juraj Somorovsky of Ruhr-Universität Bochum/Hackmanit GmbH, and Craig Young of Tripwire VERT.

An attacker would need to be in a position to capture traffic, F5’s advisory stated: “The limited window of opportunity, limitations in bandwidth, and latency make this attack significantly more difficult to execute.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/20/f5_crypto_weakness/

DNS resolver 9.9.9.9 will check requests against IBM threat database

The Global Cyber Alliance has given the world a new free Domain Name Service resolver, and advanced it as offering unusually strong security and privacy features.

The Quad9 DNS service, at 9.9.9.9, not only turns domain names into IP addresses, but also checks them against IBM X-Force’s threat intelligence database. Those checks protect users against landing on any of the 40 billion evil sites and images X-Force has found to be dangerous.
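Trying the service requires nothing exotic, since a resolver is just something that answers DNS queries on UDP port 53. A minimal sketch of building such a query by hand (illustrative only; real clients should use the OS resolver or a DNS library):

```python
import struct

# A hand-rolled DNS A-record query, as could be sent over UDP to 9.9.9.9:53.
# Illustrative only; real clients should use the OS resolver or a DNS library.
def dns_query(name: str, qtype: int = 1) -> bytes:
    # Header: ID, flags (recursion desired), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", qtype, 1)  # IN class

packet = dns_query("example.com")
# e.g. sock.sendto(packet, ("9.9.9.9", 53)) over a UDP socket
```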

The Alliance (GCA) was co-founded by the City of London Police, the District Attorney of New York County and the Center for Internet Security and styled itself “an international, cross-sector effort designed to confront, address, and prevent malicious cyber activity.”

IBM helped the project in two ways: it contributed the X-Force threat feed, and, having secured the 9.0.0.0/8 block of 16 million addresses back in 1988, Big Blue was able to dedicate 9.9.9.9 to the cause.

The Alliance, which oversees the initiative, said the other partner, Packet Clearing House, gave the system global reach via 70 points of presence in 40 countries.

It claimed users wouldn’t suffer a performance penalty for using the service, but added it plans to double the Quad9 PoPs over the next 18 months.

GCA, which did the development work, also coordinated the threat intelligence community to incorporate feeds from 18 other partners, “including Abuse.ch, the Anti-Phishing Working Group, Bambenek Consulting, F-Secure, mnemonic, 360Netlab, Hybrid Analysis GmbH, Proofpoint, RiskIQ, and ThreatSTOP.”

The organisation promised that records of user lookups would not be put out to pasture in data farms: “Information about the websites consumers visit, where they live and what device they use are often captured by some DNS services and used for marketing or other purposes”, it said. Quad9 won’t “store, correlate, or otherwise leverage” personal information.

Google makes the same promise for its 8.8.8.8 DNS service, saying: “We don’t correlate or combine information from our temporary or permanent logs with any personal information that you have provided Google for other services.” However, most home users accept the default configuration for their ISP, each of which will have its own attitude to monetising user data.

GCA also said it hoped the resolver would attract users on the security-challenged Internet of Things, because TVs, cameras, video recorders, thermostats or home appliances “often do not receive important security updates”.

If you’re one of the lucky few whose ISP offers IPv6, there’s a Quad9 resolver for you at 2620:fe::fe (the PCH public resolver). ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/20/quad9_secure_private_dns_resolver/

It’s 2017, and command injection is still the top threat to Web apps

The Open Web Application Security Project will on Monday, US time, reveal its annual analysis of Web application risks. The Register has sniffed out the final draft of the report, which finds familiar attacks still topping the charts while exotic exploits are on the rise.

A late pre-release version of the Project’s report [PDF] compiled over 40 submissions from application security companies, plus results of an industry survey that queried 500 respondents.

This year’s Top 10 risks, in order, were:

  1. Injection
  2. Broken Authentication
  3. Sensitive Data Exposure
  4. XML External Entities (XXE)
  5. Broken Access Control*
  6. Security Misconfiguration
  7. Cross-Site Scripting (XSS)
  8. Insecure Deserialisation
  9. Using Components with Known Vulnerabilities
  10. Insufficient Logging and Monitoring

*Two 2013 entries merged

The Project explained the three new entries in the list since 2013:

  • XML External Entity vulnerabilities – This was added to the list as a result of data from source code analysis tools. Poor configuration of XML parsers created a range of vulnerabilities including disclosure of internal files or shares, internal port scanning, remote code execution (RCE) and denial of service (DoS);
  • Insecure deserialisation – OWASP said this category came out of its community survey. As well as RCE, this class of vulnerability can lead to replay attacks, injection attacks, and privilege escalation.
  • Insufficient logging and monitoring – This makes it difficult for admins to detect and respond to attacks, and the Project noted it can take as long as 200 days to detect a breach.
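The deserialisation entry is easy to motivate with Python’s pickle, a classic instance of the class (chosen here as an illustration, not something the OWASP report singles out): unpickling attacker-controlled bytes can execute arbitrary code.

```python
import pickle
import pickletools

# Sketch of the hazard behind "insecure deserialisation": unpickling
# attacker-supplied bytes runs code. This builds a malicious payload
# but never loads it.
class Evil:
    def __reduce__(self):
        import os
        return (os.system, ("id",))        # would run `id` on pickle.loads()

payload = pickle.dumps(Evil())
assert b"system" in payload                # the call is baked into the bytes

# Never pickle.loads() untrusted data; inspect the opcode stream instead:
pickletools.dis(payload)
```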

There are some good news stories in the trends from 2013 to today. Admins are wise to cross-site request forgery (CSRF), which was reported in fewer than five per cent of all apps; while unvalidated redirects and forwards were reported in less than one per cent of the data set.

OWASP also noted architectural changes which were reflected in current risks, or were likely to emerge in the future.

The take-up of microservices often puts old code, never intended to be exposed to the outside world, behind RESTful or other APIs. “The base assumptions behind the code, such as trusted callers, are no longer valid”, the report said.

Second, the report noted the emergence of “single page applications” written with Angular or React. While these support highly functional front-ends, moving functionality from the server side to the client “brings its own security challenges”.

Finally, by way of Node.js, JavaScript has become the Web’s “primary language” (which could account for the rise of deserialisation risks).

The report project’s leads were Andrew van der Stock, Brian Glas, Neil Smithline, and Torsten Gigler. The final release version will be announced at the organisation’s wiki, and on its Twitter account, when it lands. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/20/open_web_application_security_project_2017_report/
