
Well, Hello, Dolly!

Eight hours is certainly a start.

Source: The Security Awareness Company

What security-related videos have made you laugh? Let us know! Send them to [email protected].

Beyond the Edge content is curated by Dark Reading editors and created by external sources, credited for their work.

Article source: https://www.darkreading.com/edge/theedge/well-hello-dolly!/b/d-id/1336354?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

No, YouTube isn’t planning to jettison your unprofitable channel

Creators are freaking out about YouTube’s new terms of service because of a clause that they’re interpreting to mean “Hey, deadbeat, kiss your content goodbye: it’s not making us enough money.”

This is the scary bit from a preview of the new ToS, which go into effect on 10 December 2019:

YouTube may terminate your access, or your Google account’s access to all or part of the service if YouTube believes, in its sole discretion, that provision of the service to you is no longer commercially viable.

A representative comment from the multiple YouTubers who’ve tweeted out that clause:

So according to Youtube’s new Terms of Service, if your channel isn’t making them enough money, they’ll just terminate it.

To all of the smaller content creators out there, it was nice knowing ya.

In a nutshell, that’s not going to happen. Google isn’t suddenly going to start shutting down channels that aren’t making money. Google released the updated YouTube Terms of Service on Sunday in order to, well, update them, and to make them easier to read. A YouTube spokesperson says nothing’s changing:

We made some changes to our Terms of Service in order to make them easier to read and to ensure they’re up to date. We’re not changing the way our products work, how we collect or process data, or any of your settings.

So much for “easier to read”

Unfortunately, as plenty of people are pointing out, the clause is as clear as mud. Its placement is part of the problem: it sits in the Account Suspension & Termination section, which has understandably led people to jump to conclusions about Google potentially terminating access for, say, people who are choking ad revenue with ad blockers, or small/new content creators who aren’t pulling in a slew of ad impressions.

In response to multiple media inquiries, Google explained that the clause isn’t new: language about “commercial viability” has been in the ToS since early 2018:

[Google isn’t] changing how we work with creators, nor their rights over their works, or their right to monetize.

Rather, the clause gives Google more leeway to determine whether it should remove particular YouTube or Google services if it finds that it just doesn’t make commercial sense to keep them around.

Google told The Verge that the clause gives YouTube the “sole discretion” to terminate an account, whereas before it said that YouTube must “reasonably believe” it should do so.

Google tried to set the record straight on Monday with a tweet.

As you can see from the responses to that tweet, plenty of people aren’t convinced. If that’s what Google/YouTube really means, the leery are saying, then why doesn’t it just say that?

I’m no lawyer, but those who are could doubtless offer some reason for the broad, unspecific language. It leaves lots of wriggle room, for sure, which is always to the advantage of a business.

Let’s just hope – perhaps even trust, given how popular the video service is – that YouTube doesn’t Google+-ify us all.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/k_f1GYtqaCI/

Microsoft says it will honor California’s new privacy law across US

You know the California Consumer Privacy Act (CCPA), the tough new privacy law? The sweeping, GDPR-esque legislation due to go into effect on the first day of the new year, which has set off palpitations within the breasts of tech companies and lawmakers, what with its specter of fines and compliance costs?

Microsoft’s cool with it.

In fact, the company said that it plans to “honor” the law throughout the entire country, even though it’s only a state law. That’s similar to what it did in 2018, when the European Union’s comprehensive General Data Protection Regulation (GDPR) went into effect and the company extended the regulation’s data privacy rights worldwide, above and beyond the Europeans it covers.

On Monday, Microsoft chief privacy officer Julie Brill said in a blog post that CCPA is good news, given the failure of Congress to pass a comprehensive privacy protection law at the federal level.

Chalk one up for Microsoft when it comes to privacy signaling in the runup to CCPA’s debut. Here’s Brill:

CCPA marks an important step toward providing people with more robust control over their data in the United States. It also shows that we can make progress to strengthen privacy protections in this country at the state level even when Congress can’t or won’t act.

Brill reminded the world that Microsoft’s privacy attitude “starts with the belief that privacy is a fundamental human right and includes our commitment to provide robust protection for every individual.”

We will extend CCPA’s core rights for people to control their data to all our customers in the U.S.

True, we don’t know exactly what it’s going to take to digest this enchilada, Brill said:

Under CCPA, companies must be transparent about data collection and use, and provide people with the option to prevent their personal information from being sold. Exactly what will be required under CCPA to accomplish these goals is still developing.

…but we’ll stay on top of it, she said:

Microsoft will continue to monitor those changes, and make the adjustments needed to provide effective transparency and control under CCPA to all people in the U.S.

In spite of the US Federal Trade Commission (FTC) marching down to Capitol Hill to beat the drum for a unified federal privacy law (and more regulatory powers to enforce it), and in spite of both the House and Senate holding hearings on privacy legislation, transparency about how data is collected and shared, and stiffer penalties for data-handling violations, none of the slew of online privacy bills put before Congress this year is going to make it.

Last month, anonymous sources told Reuters that lawmakers haven’t managed to agree on issues such as whether the bill would preempt state rules.

That leaves CCPA to become the de facto privacy rule of the land.

California’s law isn’t just for California businesses, of course. Businesses that do business in California, or have customers or potential customers there, will still be on the hook if they meet any one of these criteria (a rough applicability check is sketched after the list):

  • Have annual gross revenue of more than $25 million.
  • Receive, share, or sell the personal information of more than 50,000 individuals.
  • Earn 50% or more of annual revenue from selling consumers’ personal information.
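
Purely for illustration, those thresholds reduce to a simple check. The sketch below is ours, not the statute’s – the function name and inputs are hypothetical, and this is not legal advice:

```python
# Hypothetical sketch of the CCPA applicability thresholds listed above.
# Function name and parameters are illustrative; this is not legal advice.
def ccpa_applies(annual_revenue_usd: float,
                 individuals_data_handled: int,
                 revenue_share_from_selling_data: float) -> bool:
    """True if a business meets at least one of the three CCPA thresholds."""
    return (annual_revenue_usd > 25_000_000
            or individuals_data_handled > 50_000
            or revenue_share_from_selling_data >= 0.5)

# A small data broker handling 60,000 people's records is covered even
# though its revenue is nowhere near $25 million:
print(ccpa_applies(2_000_000, 60_000, 0.10))  # True
```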

These are the general categories for the consumer rights that CCPA is going to deliver:

  1. Businesses must inform consumers of their intent to collect personal information.
  2. Consumers have the right to know what personal information a company has collected, where the data came from, how it will be used, and with whom it’s shared.
  3. Consumers have the right to prevent businesses from selling their personal information to third parties.
  4. Consumers can request that businesses remove their personal information.
  5. Businesses are prohibited from charging consumers different prices or refusing service because a consumer exercised their privacy rights.

As of the end of October, we were still waiting for California’s attorney general to issue regulations about the law, but we at least know that each intentional violation can carry a fine of up to $7,500.

Microsoft’s pledge to honor CCPA nationwide could trigger other companies to do the same.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/WyNVOQw0oyw/

US-CERT warns of critical flaws in Medtronic equipment

The United States Computer Emergency Readiness Team (US-CERT) has issued another warning about security flaws in medical equipment made by Medtronic.

The problem this time is in the Valleylab FT10 (v4.0.0 and below) and Valleylab FX8 (v1.1.0 and below), electrosurgical generators used by surgeons for procedures such as cauterisation during operations.

In a way, that’s the good news – the equipment is used by hospitals, which means locating it and mitigating or patching the vulnerabilities should be relatively straightforward compared with medical devices used by thousands of consumers.

Less positively, two of the flaws – CVE-2019-3464 and CVE-2019-3463 – are severe enough to earn a CVSS rating of 9.8, which makes them critical.

The latter vulnerability is in the restricted shell (rssh) utility used to allow file uploads to the Valleylab units. An unpatched version of this could give an attacker admin access and the ability to execute code.

According to the alert, the network access necessary for this to happen is often enabled, presumably for remote management, which gives attackers a way of reaching vulnerable devices.

A third flaw, CVE-2019-13539, concerns insecure password hashes generated by the obsolete descrypt algorithm, which can be pulled from the device thanks to the other vulnerabilities mentioned in the warning and then cracked offline.
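
To see why that matters, recall that the DES-based descrypt scheme hashes only the first eight characters of a password with a 12-bit salt, a keyspace that commodity hardware can brute-force quickly. A minimal sketch of the truncation, using Python’s POSIX-only crypt module (deprecated in recent Python releases):

```python
import crypt  # POSIX-only standard library module

# descrypt ignores everything past a password's eighth character, so two
# long passwords that share their first eight characters hash identically.
salt = "ab"  # illustrative two-character (12-bit) salt
h1 = crypt.crypt("correcthorse", salt)
h2 = crypt.crypt("correcthorsebatterystaple", salt)
print(h1 == h2)  # True: both begin with "correcth"
```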

The fourth flaw, CVE-2019-13543, which affects the Medtronic Valleylab Exchange Client version 3.4 and below, is caused by hard-coded credentials.

Currently, patches are available for Valleylab FT10, while the FX8 will receive the same in “early 2020”. In the meantime:

Medtronic recommends to either disconnect affected products from IP networks or to segregate those networks, such that the devices are not accessible from an untrusted network (e.g., Internet).

It’s not clear who discovered the latest flaws, although US-CERT says they were reported to it by Medtronic itself.

If so, that’s a step in the right direction after past alerts arose from independent researchers who sometimes struggled to get the company’s attention.

Medtronic has suffered a number of security problems in its products in the last couple of years, including a brace of flaws in its Implantable Cardioverter Defibrillators (ICDs) in March, and in its pacemakers in 2018.

The last of those was a low point for medical equipment patching after researchers used a session at the Black Hat show to highlight that the equipment was vulnerable to a security flaw 18 months after the company was told of the issue.

Back in 2011, researcher Barnaby Jack demonstrated a proof-of-concept against a Medtronic insulin pump which he claimed could have been exploited to deliver a fatal dose to a patient.

Even though things have changed a lot since then, vulnerabilities continue to emerge at regular intervals. Cleaning up the mistakes of past security coding has a way to go yet.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/2SZk6nTN7A0/

Apple pulls Instagram-watching app from store

Apple has yanked an app from its iTunes App Store that allowed Instagram users to follow their friends’ activities on the social network.

Apple removed Like Patrol from its store last weekend, citing a violation of its data collection policies. Apple didn’t respond to requests for comment, but the app showed up as unavailable for Canadian users of the store.

Like Patrol charged users a reported $80 each year to give them access to their Instagram friends’ activities on the platform. It promised to show them detailed information about what people were doing on Instagram, including which posts they were liking, and from whom. They could also reportedly get notifications of a person’s interaction with users of specific genders, and none of this information required the consent of the person being monitored.

The banner on the app’s website, which is still up at the time of writing, reads:

New guy? New girl? What are they up to on Instagram? With Like Patrol you can see the posts they specifically like!

The app appears to work by scraping data that is publicly available via the Instagram API, but while its creators worked to capitalise on that data, Facebook-owned Instagram appeared to be moving in the other direction by restricting access to information about other users on its site.
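
As a rough illustration of that pattern – and only an illustration, since Instagram’s real endpoints are not documented here – a scraping-style app fetches data any viewer could already see and merely reorganises it on the client:

```python
import requests

# Entirely hypothetical endpoint and response shape, for illustration only.
PUBLIC_LIKES_URL = "https://api.example.com/v1/users/{username}/recent_likes"

def recent_likes(username: str) -> list:
    """Fetch publicly visible 'like' events and sort them newest-first."""
    resp = requests.get(PUBLIC_LIKES_URL.format(username=username), timeout=10)
    resp.raise_for_status()
    likes = resp.json().get("likes", [])
    return sorted(likes, key=lambda item: item["timestamp"], reverse=True)
```

Nothing in such a flow asks for the monitored person’s consent, which is why the approach is contentious even when the underlying data is public.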

In October 2019, it removed a ‘following activity’ tab that showed people’s likes, comments, and follows. People rarely clicked on it and were disconcerted when they found out that their Instagram friends could see their activity in such detail, the company said at the time.

Later that month, the photo-oriented social networking site reportedly sent a cease and desist letter to Like Patrol, accusing it of scraping content, which violates its policies. Its terms and conditions forbid people from:

Creating accounts or collecting information in an automated way without our express permission.

Like Patrol’s developer, Sergio Luis Quintero, was defiant, questioning Apple’s opposition to the app. He told us:

If our app’s functionality did violate any policies, then Instagram would have violated the exact same policies since 2011 to 2019 with the Following tab. Why weren’t they taken down?

He also had strong words for Instagram’s parent company:

There is a strong hypocrisy in Facebook’s condemnation of our app. Like Patrol does not collect data from Instagram users, it provides the users with a tool to rearrange information that is already available to them. Everything the user sees lives only in the user’s device, we do not have a login, we do not centralize any information, if the user deletes the app every bit of data he was able to see in Like Patrol is deleted.

He vowed to make the code open-source and said that he would announce the release via the Like Patrol website in the next few days.

Apple’s move comes just a few weeks after the FTC indicated a stronger stance towards apps that could be used to stalk mobile users. The FTC reached a settlement with app developer Retina-X Studios in October after criticising the poor security that led to two hacks in which its users’ data (and their surveillance targets’ data) was breached.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/iD44YJoVJ-c/

UK Info Commish quietly urged court to swat away 100k Morrisons data breach sueball

The UK’s Information Commissioner urged the Court of Appeal to side with Morrisons in the supermarket’s battle to avoid liability for the theft and leaking of nearly 100,000 employees’ payroll details – despite not having read the employees’ legal arguments.

A letter (PDF) sent to the Court of Appeal in May 2018 on behalf of the watchdog’s leader, Elizabeth Denham, urged senior judges to side with Morrisons and rule the supermarket wasn’t responsible for the criminal actions of disgruntled auditor Andrew Skelton.

Crucially, the letter – written by an Information Commissioner’s Office solicitor on Denham’s behalf – admitted the ICO had only seen one side of the detailed legal arguments, months before the case was heard by judges. Those same judges later ruled against Morrisons, effectively dismissing the Information Commissioner’s letter.

Skelton, an auditor for the supermarket chain, had authorised access to its entire payroll while KPMG was auditing the company accounts. He took a secret copy for himself and later dumped nearly 100,000 people’s data online, having tried to cover his tracks by using Tor. Around 9,000 workers (the number is growing) aggrieved by the breach sued Morrisons, saying it was vicariously liable for Skelton’s behaviour – and should pay them compensation.

The lawsuit has progressed from the High Court through the Court of Appeal right up to the Supreme Court.

Although the case refers mostly to the pre-GDPR Data Protection Act 1998, the legal principles that will be stated in the Supreme Court’s ultimate judgment will have a lasting effect on how British data protection law is applied to businesses.

Sent four months before the October 2018 Court of Appeal hearings, the Information Commissioner’s letter said “she is in agreement with the position adopted by the Appellant [i.e. Morrisons] for the reasons set out in its skeleton argument.”

Morrisons’ legal reasons for arguing it shouldn’t pay compensation for the data breach were reported here. In brief, barrister Anya Proops QC said Morrisons was “completely innocent in respect of this data event” and the Data Protection Act 1998 meant Morrisons could not be held directly or vicariously liable for the actions of its rogue auditor.

Half the story

Essentially, Information Commissioner Denham was urging the court to side against the thousands of workers whose data was stolen and dumped online. Not only that, but she was doing so having only seen Morrisons’ legal arguments, as lawyers for the workers told the Supreme Court last week in written submissions:

The ICO’s support for Morrisons in the Court of Appeal by its letter of 8 May 2018 is more notable for what it does not say. The ICO did not have the Claimants’ Respondents’ Skeleton Argument for the Court of Appeal (which with diffidence the Claimants would contend contained the substantial bases upon which the appeal was dismissed). Nor did the ICO consult with the Claimants’ advisers as to their position in relation to the various arguments at that stage raised by Morrisons.

Denham’s letter can be read in full here.

Ultimately the Court of Appeal ruled against Morrisons, finding that the supermarket was vicariously liable for Skelton’s actions. The case has since been appealed again to the Supreme Court, whose judges are pondering their ruling at the moment.

The Information Commissioner’s Office did not respond to The Register‘s invitation to comment on the letter or its intervention into the case. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/11/13/ico_told_court_appeal_side_with_morrisons_data_breach/

Unreasonable Security Best Practices vs. Good Risk Management

Perfection is impossible, and pretending otherwise just makes things worse. Instead, make risk-based decisions.

Years ago, I spoke with the risk management leader at a bank where I was consulting. This person was new in the role and was outlining plans for implementing an IT risk management program. The program was to be based on the NIST 800 series, which predates the creation of the NIST Cybersecurity Framework, and the team had worked out a proprietary risk rating system based on the control catalog in SP 800-53. It was well thought out, and the leader had enjoyed some success with the same approach in a previous role.

Ultimately, the risk ratings assigned as a result of this process came down to the personal opinion of the assessors. But the real trouble with this approach was that the security leader held the viewpoint that, eventually, the process would result in all of the controls in NIST SP 800-53 being implemented. As a result, the model they developed was designed to give good risk ratings when more controls were implemented and bad ratings when those controls were missing.
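
That style of model can be caricatured in a few lines. The sketch below is our illustration, not the bank’s actual system: it scores “risk” purely as the share of catalog controls not yet implemented, with no reference to threat frequency or loss magnitude:

```python
# Illustrative caricature of a coverage-based "risk" rating: the score
# improves as more catalog controls are ticked off, regardless of how
# likely or costly a loss actually is.
def naive_risk_rating(implemented: set, catalog: set) -> float:
    """0.0 = every control in place (best); 1.0 = none (worst)."""
    return 1 - len(implemented & catalog) / len(catalog)

catalog = {f"CTRL-{i}" for i in range(1, 101)}    # stand-in for SP 800-53
implemented = {f"CTRL-{i}" for i in range(1, 41)}
print(naive_risk_rating(implemented, catalog))    # 0.6, says nothing about loss
```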

This person is not alone in the belief that more controls equal less risk. Far too many risk registers are truly just lists of broken or missing things. So convinced are we that we need more security that we tend to believe only perfection will do. Security conferences are rife with axioms such as “we need to get it right every time; hackers only need to get it right once.” Such views are pessimistic and dissuade business leaders from taking the actions they need to properly secure themselves. Why should they bother if they can’t get it perfect?

I often say that we need cybersecurity professionals doing blocking and tackling who believe they can stop 100% of the things trying to break in. I think that mindset is important for high-quality threat management and security operations. However, I know they will eventually fail. This doesn’t mean that their efforts are pointless. Indeed, what we must celebrate are the small wins and consistent behaviors, not perfection.

Control frameworks aren’t to blame; they are simply cataloging the world of possibilities. The blame falls on broken risk models that take a “gotta catch ’em all” approach to security controls, pretending there is a linear relationship between security controls and loss exposure. That ignores critical variables such as frequency of attack, attacker capability, and an organization’s tolerance for loss.

Such “collector” approaches to risk management find their way into auditing frameworks that so often purport to be risk-based but instead treat every missing or deficient thing as the risk itself. This approach has allowed risk statements expressing zero appetite to make their way to senior executives and corporate boards. Well-meaning risk appetite statements such as “we don’t accept any cyber-related risk” are virtually impossible to put into action in organizations with limited budgets (and all are limited). Accepting zero risk means that you would spend every dollar an organization has to avoid a loss, and even then, no one can guarantee a future with zero incidents.

A mature way to talk about cyber-risk appetite uses some non-zero loss amount as a guide. Statements about risk and loss should focus on the range of amounts that could be lost and the timelines over which such a loss could occur. These ranges are necessary because we’re discussing future events that may or may not come to pass, and, as such, any single-point measure of appetite is going to be wrong.

The Goal of Effective Risk Management
Effective risk management enables an organization to attain an acceptable amount of loss over time with the least amount of capital expenditure. In other words, we’re trying to balance money spent today to reduce risk against the probability of some amount of loss at a future time. Nowhere in good risk management is the notion of perfect risk avoidance. Such a focus on risk would choke off innovation and good business management.

First, every dollar spent on risk reduction cannot be spent on the mission of the organization. As a result, risk reduction investments necessarily mean mission curtailment. Second, without the right amount of freedom to operate without safeguards in place, business innovation is also curtailed.

Navigating Risk
Having a good model that represents the nature of risk accurately is important if you intend to navigate risk rather than chase risk elimination through a security controls process. Further, such a model should support the modern needs of organizations, such as the purchase of cyber insurance and/or setting aside money for risk allocation (risk-based capital). The FAIR Institute was established to promote the open source FAIR standard for cyber-risk quantification. The FAIR model lets you scope and model risk scenarios in a way that is meaningful to the leaders of the organization. It ties things like missing controls and audit findings to statements of loss that allow decision-makers to make well-informed and risk-aware decisions.

Further, it gives companies the opportunity to express those cyber loss scenarios to which they are exposed in terms that are meaningful and actionable: economic impact. For example, FAIR lets an organization express why a control from that voluminous catalog is meaningful by linking it to the company’s potential for loss, impact to customers, and/or its implications to insurance and risk-based capital. In other words, this links technology failures to business impacts. FAIR also enables practitioners to demonstrate how implementing a solution will reduce risk by expressing it in terms of a risk efficacy ratio: a dollar invested in this solution reduces future loss potential by “x” amount.
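
To make the arithmetic concrete, here is a hedged sketch – not the FAIR standard itself; the structure is simplified and every number is invented – that simulates annualized loss before and after a control, checks a range-based appetite statement, and computes a risk efficacy ratio:

```python
import numpy as np

rng = np.random.default_rng(7)
TRIALS = 100_000

def annual_loss(freq_per_year: float, low: float, high: float) -> np.ndarray:
    """Simulated annual loss: Poisson event count times a per-event
    magnitude draw (a deliberate simplification of FAIR's
    frequency-times-magnitude structure)."""
    events = rng.poisson(freq_per_year, TRIALS)
    magnitude = rng.uniform(low, high, TRIALS)
    return events * magnitude

before = annual_loss(2.0, 50_000, 400_000)  # no control in place
after = annual_loss(0.5, 50_000, 400_000)   # control cuts event frequency

# Range-based appetite: how often does annual loss exceed $1 million?
print(f"P(loss > $1m) before: {(before > 1_000_000).mean():.1%}")
print(f"P(loss > $1m) after:  {(after > 1_000_000).mean():.1%}")

# Risk efficacy ratio: reduced expected loss per dollar spent on the control.
control_cost = 150_000
print(f"Efficacy: {(before.mean() - after.mean()) / control_cost:.2f}")
```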

Beware the allure of “best practice” models when assessing your organization’s risk posture. If a model encourages you to chase an A+ on a controls implementation test, you’re signing your company up for an overcontrolled environment that chokes off innovation and drains resources from the business plan. Instead, focus on risk navigation: provide decision-makers with the information they need to make truly risk-informed decisions, and accept that the perfect solution to an organization’s cybersecurity problems may be imperfectly implemented security.


Dr. Jack Freund is the Risk Science Director for RiskLens, a cyber-risk quantification platform built on FAIR. Over the course of his 20-year career in technology and risk, Freund has become a leading voice in cyber-risk measurement and management. He previously worked …

Article source: https://www.darkreading.com/risk/unreasonable-security-best-practices-vs-good-risk-management/a/d-id/1336271?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Cardplanet Operator Extradited for Facilitating Credit Card Fraud

Russian national Aleksei Burkov is charged with wire fraud, access device fraud, and conspiracy to commit identity theft, among other crimes.

A Russian national has been extradited from Israel and made his first court appearance on Tuesday on charges related to his alleged operation of two websites designed to facilitate payment card fraud, computer hacking, and other crimes, the Justice Department reports.

The indictment unsealed today charges Aleksei Burkov with wire fraud, access device fraud, and conspiracy to commit wire fraud, access device fraud, computer intrusions, identity theft, and money laundering. Burkov reportedly ran a website called Cardplanet, which sold payment card numbers that were primarily stolen through data breaches and mostly belonged to US citizens.

“The stolen credit card data from more than 150,000 compromised payment cards was allegedly sold on Burkov’s site and has resulted in over $20 million in fraudulent purchases made on U.S. credit cards,” officials wrote in a release on the extradition.

In addition to Cardplanet, Burkov is believed to have operated another cybercrime forum that served as an invite-only hub where cybercriminals could virtually meet to plot crimes, buy and sell stolen goods and services, and offer criminal services such as money laundering and hacking. Prospective members of this elite group were required to vouch for new members and provide thousands of dollars as insurance to ensure law enforcement didn’t make its way into the hub.

Read more details here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/threat-intelligence/cardplanet-operator-extradited-for-facilitating-credit-card-fraud/d/d-id/1336344?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Russian bloke charged in US with running $20 million stolen card-as-a-service online souk

A Russian man was detained at Dulles airport, outside Washington DC, on Monday and charged with running a stolen-card trading ring that was responsible for $20m worth of fraud.

Aleksei Burkov appeared in the Eastern Virginia US District Court on Tuesday to face charges (PDF) of access device fraud, wire fraud, and conspiracy to commit access device fraud, wire fraud, identity theft, computer intrusion, and money laundering.

The court appearance happened shortly after the 29-year-old Burkov, originally from Saint Petersburg, was extradited to the States from Israel, where he had been held by that country’s court system since 2015.

Prosecutors say that Burkov was the mastermind behind two sites dedicated to buying and selling the details of stolen payment cards. One site, known as Cardplanet, was public and it is estimated that the cards traded on the site were used by criminals to rack up fraudulent charges in excess of $20m. That site operated from 2009 through most of 2013.

While carding is hardly a market known for its customer service, it is claimed that Burkov built a reputation for himself by providing excellent support to his customers. The Cardplanet site offered a special service where buyers could verify that the cards they purchased were valid and Burkov maintained a policy of refunding the price of any cards that did not work as advertised.


Additionally, prosecutors said that Burkov operated a second, more exclusive service designed for the high rollers of the carding world.

That site, which operated as an invite-only forum, served as a secure space for criminals to trade in not only card details, but also stolen goods, pirated software, personal identifying information, money laundering operations, and hacking-for-hire services.

“To obtain membership in Burkov’s cybercrime forum, prospective members needed three existing members to ‘vouch’ for their good reputation among cybercriminals and to provide a sum of money, normally $5,000, as insurance,” the Eastern Virgina US Attorney’s office said of the operation.

“These measures were designed to keep law enforcement from accessing Burkov’s cybercrime forum and to ensure that members of the forum honored any deals made while conducting business on the forum.”

If convicted on all counts, Burkov faces a maximum of 80 years behind bars, though he is unlikely to get more than a fraction of that in sentencing. The next hearing in the case is scheduled for Friday. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/11/13/russian_charged_cardplanet/

Londoner accused of accessing National Lottery users’ accounts

A man will appear at Crown court in December to answer charges that he used the hacking program Sentry MBA to access and take money from online UK National Lottery gambling accounts.

Prosecutors claim that 29-year-old Anwar Batson gained access to National Lottery users’ accounts in November 2016, having downloaded hacking tools during the previous year. He is then said to have fraudulently conspired to withdraw money from those National Lottery accounts.

Batson, of Lancaster Road in London’s Ladbroke Grove, was told by magistrates today that his case would be sent to Southwark Crown Court on 10 December for a plea and case management hearing.

The accused, wearing a plain white shirt buttoned at the collar and with his long hair slicked back into a ponytail, spoke only to confirm his name, address and British nationality. No plea was entered at this morning’s hearing at Westminster Magistrates’ Court and Batson was granted unconditional bail.

The Londoner is alleged by Crown prosecutors to have committed two crimes under section 3A(2) of the Computer Misuse Act 1990 and two under section 3A(3). Both are about “supplying or offering to supply an article believing that it was likely to be used to commit, or to assist in the commission of, an offence under section 1 or 3 of the Computer Misuse Act 1990”.

Section 1 makes it a crime to access computer material without authorisation. Section 3 makes it illegal to impair the operation of a computer, or to recklessly do something that might impair its normal operation.

Batson is also accused of committing one crime under section 2(1)(b) of the same act; causing “a computer to perform a function with intent to secure unauthorised access to a program or data held in a computer and with intent to facilitate the commission of an offence”.

On top of the Computer Misuse Act charges, he stands accused of two fraud charges for allegedly removing funds from the National Lottery accounts, as well as using credit card details to buy North Face clothing for himself.

Although three magistrates (volunteer lay judges assisted by a trained legal advisor) were present at the start of this morning’s hearing, one left shortly after it began.

“Just because we’re short-staffed… our colleague is going to go to another courtroom,” the white-haired chairwoman of the bench explained to the court.

Batson is, of course, innocent unless proven guilty. Full details of his alleged crimes will be heard by a jury at the Crown court next year. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/11/13/anwar_batson_national_lottery_accounts_court_case/