Social media scraping app Predictim banned by Facebook and Twitter

Employers get turned off by a lot of things they find out about potential hires on social media: provocative material, posts about drinking or using drugs, racist/sexist/religiously intolerant posts, badmouthing others, lying about qualifications, poor communication skills, criminal behavior, or sharing of confidential information from a previous employer, to name just a few.

We should all take for granted, then, that nowadays our social media posts are being scrutinized. That also goes for those of us whose prefrontal cortexes are currently a pile of still-forming gelatin: namely, children and teenagers.

In fact, there’s an artificial intelligence (AI) app for scraping up the goo that those kids’ emotional, impulsive, amygdala-dominant brains fling online: it’s called Predictim, and it’s funded by the University of California at Berkeley’s Skydeck accelerator. Predictim analyzes Facebook, Instagram, and Twitter accounts to assign a “risk rating” on a scale of 1 to 5, offering to predict whether babysitters or dogwalkers might be bad influences or even dangerous.

You can sympathize with its clientele: Predictim features case studies about abusive babysitters who have caused fatal or near-fatal injuries to the children in their charge. Simple background checks or word-of-mouth references won’t necessarily pick up on the risk factors that its report spotlights, the company says, which include evidence of bullying or harassment, drug abuse, disrespectful or discourteous behavior, or posting of explicit content.

The company claims that after pointing the tool at the social media posts of two babysitters charged with harming children, it returned a score of “very dangerous.” But while you or I can sympathize with parents, Facebook and Twitter don’t appreciate how Predictim is scraping users’ data to come to these conclusions.

The BBC reports that earlier this month, Facebook revoked most of Predictim’s access to users, on the basis of violating the platform’s policies regarding use of personal data.

After Predictim said it was still scraping public Facebook data to power its algorithms, Facebook launched an investigation into whether it should block Predictim entirely.

The BBC quoted Predictim chief executive and co-founder Sal Parsa, who argued, essentially, that it’s public data and no big deal:

Everyone looks people up on social media. They look people up on Google. We’re just automating this process.

Facebook disagreed. A spokeswoman:

Scraping people’s information on Facebook is against our terms of service. We will be investigating Predictim for violations of our terms, including to see if they are engaging in scraping.

Twitter, after learning what Predictim was up to, investigated and recently revoked its access to the platform’s public APIs, it told the BBC:

We strictly prohibit the use of Twitter data and APIs for surveillance purposes, including performing background checks. When we became aware of Predictim’s services, we conducted an investigation and revoked their access to Twitter’s public APIs.

Predictim uses natural language processing and machine learning algorithms that scour years of social media posts. Then, it generates a risk assessment score, along with flagged posts and an assessment of four personality categories: drug abuse, bullying and harassment, explicit content, and disrespectful attitude.
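
To see how this kind of analysis can misfire, here is a deliberately crude sketch of keyword-based category scoring. This is not Predictim’s algorithm, which the company has not published; the four categories come from the article, and everything else (keywords, scoring rule) is invented for illustration.

```python
# Toy sketch of category-based risk scoring over social media posts.
# NOT Predictim's actual method (unpublished); purely illustrative.

CATEGORY_KEYWORDS = {
    "bullying_harassment": {"loser", "idiot", "shut up"},
    "drug_abuse": {"wasted", "drunk", "weed"},
    "explicit_content": {"nsfw"},
    "disrespectful_attitude": {"whatever", "boring"},
}

def score_posts(posts):
    """Return a 1 (low) to 5 (high) risk score for each category."""
    scores = {}
    for category, keywords in CATEGORY_KEYWORDS.items():
        # Count posts containing at least one flagged keyword.
        hits = sum(any(kw in post.lower() for kw in keywords) for post in posts)
        scores[category] = min(5, 1 + hits)
    return scores

posts = [
    "Ugh, chemistry homework is SO boring",
    "'You're a loser, McFly!' - quoting my favourite movie",
]
# The movie quote bumps the bullying score: exactly the kind of
# false positive described later in this piece.
print(score_posts(posts))
```

A system of this shape has no way to tell a quoted line from a genuine insult, which is the nuance problem critics describe below.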

Predictim was launched last month, but it shot to prominence over the weekend after the Washington Post published an article suggesting that it is reductive and simplistic, takes social media posts out of the typical teenage context of irony or sarcasm, and depends on “black-box algorithms” that lack humans’ ability to discern nuance. From the article:

The systems depend on black-box algorithms that give little detail about how they reduced the complexities of a person’s inner life into a calculation of virtue or harm. And even as Predictim’s technology influences parents’ thinking, it remains entirely unproven, largely unexplained and vulnerable to quiet biases over how an appropriate babysitter should share, look and speak.

Parsa told the BBC that Predictim doesn’t use “blackbox magic.”

If the AI flags an individual as abusive, there is proof of why that person is abusive.

But the rationale behind the score isn’t made clear either: the Post’s Drew Harwell spoke to one “unnerved” mother who said that when the tool flagged a babysitter for possible bullying, she “couldn’t tell whether the software had spotted an old movie quote, song lyric or other phrase as opposed to actual bullying language.”

The company insists that Predictim isn’t designed to make hiring decisions and that the score is just a guide. But that doesn’t stop it from using phrases like this on the site’s dashboard:

This person is very likely to display the undesired behavior (high likelihood of being a bad hire).

Technology experts warn that most algorithms used to analyze natural language and images fall far short of infallibility. For example, Facebook has struggled to build systems that automatically discern hate speech.

The Post spoke with Electronic Frontier Foundation attorney Jamie L. Williams, who said that algorithms are particularly bad at parsing what comes out of the mouths of kids:

Running this system on teenagers: I mean, they’re kids! Kids have inside jokes. They’re notoriously sarcastic. Something that could sound like a ‘bad attitude’ to the algorithm could sound to someone else like a political statement or valid criticism.

In fact, Malissa Nielsen, a 24-year-old babysitter who agreed to give Predictim access to her social media accounts at the request of two separate families, told the Post that she was stunned when it gave her imperfect grades for bullying and disrespect.

Nielsen had figured that she had nothing to be embarrassed about. She’s always been careful about her posts, she said, doesn’t curse, goes to church every week, and is finishing a degree in early childhood education. She says she hopes to open a preschool.

Where in the world, she wondered, did Predictim come up with “bullying” and “disrespect”?

I would have wanted to investigate a little. Why would it think that about me? A computer doesn’t have feelings. It can’t determine all that stuff.

Unfortunately, a computer thinks that yes, it can, and the company behind it is willing to charge parents $24.99 for what some experts say is unproven reliability.

The Post quoted Miranda Bogen, a senior policy analyst at Upturn, a Washington think tank that researches how algorithms are used in automated decision-making and criminal justice:

There are no metrics yet to really make it clear whether these tools are effective in predicting what they say they are. The pull of these technologies is very likely outpacing their actual capacity.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/2QDSLYqq_7Q/

JavaScript library used for sneak attack on Copay Bitcoin wallet

A mystery payload that was sneaked into a hugely popular JavaScript library seems to have been a deliberate plot to ransack bitcoins from a mobile cryptocoin wallet known as Copay, from a company called BitPay.

Back in September 2018, the author of a popular Node.js utility package called event-stream, used for sending and receiving data, handed over the reins to a new maintainer going by the handle of Right9ctrl.

Days later, the new maintainer released an update to the package, version 3.3.6, to which he’d added additional code from an apparently related package called flatmap-stream.

In early October, another event-stream update appeared, as though Right9ctrl were throwing himself enthusiastically into his new role at the helm of the project…

…except that, on 20 November 2018, someone investigating an error in event-stream discovered cryptocurrency-stealing malware, hidden in the flatmap-stream component.

Lock up your Bitcoins

Because event-stream is used in thousands of projects, working out the payload’s target was an urgent priority.

This week, after frantic research, the intended victims were revealed: users of the Copay cryptowallet software.

Cue relief, mixed with frustration, for anyone not targeted. Developer Chris Northwood wrote:

We’ve wiped our brows as we’ve got away with it, we didn’t have malicious code running on our dev machines, our CI servers, or in prod. This time.

What to do?

There are two sets of worried users here – developers using event-stream, and customers using the Copay wallet – and both groups will probably be wondering what is safe and what is not.

On 26 November 2018, NPM reportedly took down the compromised versions of flatmap-stream and event-stream.

Intriguingly, version 4.0.1 of event-stream is still available – even though it was uploaded by Right9ctrl.

As far as we know, version 4.0.1 is malware-free, and was presumably uploaded to deflect suspicion from the unscrupulous changes introduced in version 3.3.6.

Developers who are still willing to trust event-stream and keep on using it should update their dependencies to reflect this (here’s hoping they realise that this is now necessary).
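
For developers who want to check their own projects, here is a minimal sketch of a lockfile scan. It assumes npm’s package-lock.json layout with nested "dependencies" objects; the malicious event-stream version (3.3.6) is named above, and treating any flatmap-stream occurrence as suspect is a deliberately cautious assumption on our part.

```python
# Sketch: walk package-lock.json looking for the packages implicated
# in this incident. Conservative by design - flags ANY flatmap-stream.
import json
import sys

SUSPECT = {"event-stream": {"3.3.6"}, "flatmap-stream": None}  # None = any version

def walk(deps, path=()):
    for name, info in (deps or {}).items():
        entry = f"{name}@{info.get('version', '?')}"
        bad = SUSPECT.get(name)
        if name in SUSPECT and (bad is None or info.get("version") in bad):
            print("SUSPECT:", " > ".join(path + (entry,)))
        walk(info.get("dependencies"), path + (entry,))  # nested deps

with open(sys.argv[1] if len(sys.argv) > 1 else "package-lock.json") as f:
    walk(json.load(f).get("dependencies"))
```

Recent npm releases also ship an "npm audit" command, which should flag the pulled versions now that advisories exist for them.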

As for the Copay wallet, BitPay released a statement noting that the malicious code was present in versions 5.0.2 through 5.1.0 of the Copay and BitPay apps.

Users should download version 5.2.0 as soon as possible and read the company’s full instructions.

In summary:

  • If you still have any Copay version from 5.0.2 to 5.1.0 installed, don’t run or open the app.
  • If you’re a Copay user who ran an infected version of the software, you should assume that your private keys have been compromised. Move your funds to new wallets, using Copay 5.2.0 or later, as soon as possible.

The last thing for the development community to do, of course, is to ponder why Right9ctrl was so easily able to take over this widely-used project, and why many developers immediately and blindly trusted the new maintainer.

As an exasperated Chris Northwood said:

Nothing’s stopping this happening again, and it’s terrifying.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/XpUkLjG3hOE/

It’s a patch bonanza as Microsoft showers its OS platforms with update love

Microsoft issued a whole bunch of updates last night, including one to deal with an alarming bug in Windows Server 2016.

Tucked innocuously among a swathe of fixes ranging from dealing with Russian time zone changes to fixing wobbly Hyper-V servers is the text: “Addresses an issue in File Explorer that sometimes deletes the permissions of a shared parent folder when you delete the shared child folder.”

Just think about that for a moment. Permission changes heading up rather than down the folder structure.

The problem was reported back in February, when a user in Microsoft’s TechNet forum came across some decidedly odd behaviour when deleting a sub-folder. Permissions of parent folders appeared to go AWOL when child folders were deleted.

Another user on Reddit shared a similar experience. He found that when a parent folder had explicitly defined permissions, and a child folder beneath it had both inherited and explicit permissions, deleting that child folder would remove the Read/Execute permission from the parent.

At least it was removing rather than adding anything, but still. It is at best counter-intuitive and at worst a pretty nasty bug.

The Redditor went on to document the 40 or so hours spent dealing with Microsoft’s support team, only to be told initially that the behaviour was by design and had been introduced in Windows Server 2016. Interestingly, the bug only occurs when folders are deleted (or cut and pasted) using File Explorer. Using the command line makes things behave as one would expect. It also only occurred when using a local path: deleting via a UNC path was fine.

The workaround over the last few months has therefore been to either remove any explicit permissions on the child folder before deletion or stick with UNC paths. Or there is always the command line.
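
Admins who want to confirm whether a given delete has clobbered a parent folder’s ACL can diff the ACL before and after. Here is a rough sketch using Windows’ built-in icacls tool; the path is an example, and this is a diagnostic aid rather than a fix.

```python
# Sketch: detect whether deleting a child folder silently stripped
# entries from the parent's ACL, by diffing icacls output.
# Windows-only; the path below is an example.
import subprocess

def acl_snapshot(folder):
    """Return the folder's ACL (icacls output) as a set of lines."""
    result = subprocess.run(["icacls", folder],
                            capture_output=True, text=True, check=True)
    return {line.strip() for line in result.stdout.splitlines() if line.strip()}

parent = r"C:\Shares\Projects"  # example: parent with explicit permissions

before = acl_snapshot(parent)
input("Delete the child folder in File Explorer now, then press Enter...")
after = acl_snapshot(parent)

lost = before - after
if lost:
    print("Entries missing from the parent ACL - the bug may have struck:")
    for ace in sorted(lost):
        print("   ", ace)
else:
    print("Parent ACL unchanged.")
```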

KB4467684 purports to deal with the problem in Windows Server 2016, much to the delight of harassed admins.

Alas, the original poster in the TechNet forum, Rolf Berger, has reported that his issue “is still not solved”, so your mileage may vary. We’ve contacted Microsoft for more information and will update if anything is forthcoming.

Patch once, patch often

Microsoft also emitted an update for Windows 10 1703, but only for those lucky users with Enterprise and Education editions. Anyone else clinging to April 2017’s Windows 10 saw support end last month.

Windows 10 1709 and 1803 are both still supported by the software goliath and so received updates in the form of KB4467681 and KB4467682 respectively.

In all instances, the Media Player seek bar failing to, er, seek, is listed as a known issue for certain file types. So perhaps it really is time to retire the old thing once and for all.

The lucky few who have managed to get Windows 1809 (the October 2018 Update) installed have a bit longer to wait for their update. Showing some prudence, Microsoft released 1809’s patch to the Release Preview ring where Windows Insiders can kick the tyres before it is unleashed on the world.

Two notable fixes in the release deal with the failure to reconnect mapped drives on login and the bug that stopped users setting Win32 app defaults for certain file types.

But of the venerable Windows Media Player, mention there was none.

You update iCloud and we’ll lift the update block. Apple and Microsoft play nice

Microsoft also announced last night that it has lifted the Apple-shaped roadblock it put in place due to compatibility issues between Apple’s wares and the Update Of The Damned Windows 10 October 2018 Update.

In the announcement, the gang at Redmond pointed to Apple’s update of iCloud for Windows to version 7.8.1 (which deals with synching problems in 1809) and recommended iCloud users take Apple’s medicine before having a crack at getting the new version of Windows 10 installed.

You lucky, lucky people. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/28/microsoft_windows_10_server_2016_patch/

US told to quit sharing data with human rights-violating surveillance regime. Which one, you ask? That’d be the UK

UK authorities should not be granted access to data held by American companies because British laws don’t meet human rights obligations, nine nonprofits have said.

In a letter to the US Department of Justice, organisations including Human Rights Watch and the Electronic Frontier Foundation set out their concerns about the UK’s surveillance and data retention regimes.

They argue that the nation doesn’t adhere to human rights obligations and commitments, and therefore it should not be allowed to request data from US companies under the CLOUD Act, which Congress slipped into the Omnibus Spending Bill earlier this year.

The law allows the US government to sign formal, bilateral agreements with other countries setting standards for cross-border investigative requests for digital evidence related to serious crime and terrorism.

It requires that these countries “adhere to applicable international human rights obligations and commitments or demonstrate respect for international universal human rights”. The civil rights groups say the UK fails to make the grade.

As such, the letter urged the US administration not to sign an executive order allowing the UK to request access to data, communications content and associated metadata, noting that the CLOUD Act “implicitly acknowledges” that some of the info gathered might relate to US folk.

Critics are concerned this could then be shared with US law enforcement, thus breaking the Fourth Amendment, which requires a warrant to be served for the collection of such data.

Setting out the areas in which the UK falls short, the letter pointed to pending laws on counter-terrorism, saying that, as drafted, they would “excessively restrict freedom of expression by criminalizing clicking on certain types of online content”.

Meanwhile, the UK’s surveillance regime has repeatedly been found to fall short of European standards. The letter noted a recent ruling from the European Court of Human Rights that found various failings in the government’s oversight of bulk interception of communications.

Although this ruling related to a previous regime, the human rights groups said they “do not believe” that the Investigatory Powers Act – brought in to replace the previous regime – “has cured this defect”.

UK-based civil liberties groups agreed with this assessment at the time. Similarly, they were unconvinced that changes the government made to the Data Retention and Acquisition Regulations satisfy a ruling from the Court of Justice of the European Union.

The letter from the US groups echoed this, stating: “We urge you to keep abreast of any challenges to the compliance of these regulations, either as written or as applied, with EU law or the ECHR.”

Finally, the groups said the Crime (Overseas Production Orders) bill – introduced to Parliament this summer – contains “numerous” provisions that would violate the UK’s obligations or the provisions of the CLOUD Act, or be “gravely inconsistent” with the Fourth Amendment.

For instance, the groups claimed the bill’s language is too broad about when overseas production orders can be issued, which they said was inconsistent with the Fourth Amendment, and that it fails to limit the duration of the data production or access under the order – a requirement of the CLOUD Act.

“For the reasons stated above, we believe an executive agreement permitting UK authorities to order the disclosure of, or access to, data held by US companies pursuant to the CLOUD Act should not be concluded at this time,” the letter concluded. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/28/cloud_act_uk_human_rights_violation/

The "Typical" Security Engineer: Hiring Myths & Stereotypes

In an environment where talent is scarce, it’s critical that hiring managers remove artificial barriers to those whose mental operating systems are different.

The more we learn, the more it becomes clear that there is no “universally optimal” brain. We all have our own unique strengths and weaknesses. Things we do to help people with different neurotypes aren’t just accommodations for rare individuals. Being considerate of each other’s mental operating systems can improve everyone’s functionality.

Each year brings more reports that document the challenges of hiring in cybersecurity, with an alarming number of unfilled positions. But this may ring hollow to those struggling to find work in the industry. There are many factors that cause this discrepancy, and today let’s look into one such area: inclusive hiring practices for neurodiversity.

Defining Neurodiversity
Most of us have a clear mental stereotype of a “typical engineer.” This may include personal issues and quirks as well as traits that help people succeed in intellectually demanding jobs. The positive qualities include things like intense specialized interests, laser-like focus, creative and vivid imagination, or the ability to find signals within noisy data sets.

From a neurological perspective, many of these traits — both positive and more challenging ones — frequently intersect with signs of “mental operating system” differences such as autism and attention deficit hyperactivity disorder. As a result, popular tech-hiring practices can sometimes put off the very people who have always been an important part of science and technology.

Neurodiversity also includes a wide variety of neurological differences related to developmental and learning disorders, mental health conditions, and mental perception variances such as amusia and aphantasia. Individuals are referred to as “neurodivergent” while groups of people are referred to as “neurodiverse.” While many people define these variations as “disabilities,” the traits can and do bring benefits to individuals as well as potential employers.

Hiring Benefits of Neurodiversity
Part of the benefit of having diversity is that it improves the breadth of knowledge within your organization. People with different brains — as well as genders and ethnicities — will have different backgrounds as well as strengths. And naturally, they’ll have different security and privacy concerns, most of which will not be obvious to people outside of those groups.

Paying extra attention to hiring practices can help you root out ways you might be generating “false negatives” that exclude neurodiverse job candidates for reasons that have nothing to do with their ability. In an environment where talent is scarce, it’s imperative to remove artificial barriers to entry.

It’s also important to understand that women and minority communities tend to have high rates of under-diagnosis, so they may not be identified as neurodivergent. And because the constellations of qualities that lead to someone being identified as neurodivergent are not traits absent in “neurotypical” people, being inclusive will help everyone. Here are five neurodiversity hiring practices to keep in mind:

Set Expectations Early and Often
Hiring is seldom a straightforward process because there are many variables that can affect timing. But it’s important to tell people what your process is and to give them a window of time in which steps should occur, including notifying applicants if they were not chosen for the position. If you need to deviate from that schedule due to unforeseen circumstances, it’s best to notify candidates as early as possible rather than leave them guessing. Once someone has been hired, set them up to succeed by continuing to set goals and schedule dates for deliverables, including discussion about deferred activities.

Err on the Side of Clarity
Not everyone processes information the same way. Some people prefer text to verbal instructions, or they may understand diagrams better than written words. Some may misunderstand idioms or interpret things very literally. It’s better to cover all your bases, and stick to simple and clear descriptions. If the option is available, ask people their preferred communication method and double-check that your words are interpreted as you intended them. When you’re not able to ask, err on the side of providing as many options as are appropriate.

Consider Your Job Ad Wording
It can be difficult to communicate the level and types of skills a prospective employee is expected to have. The way this is most commonly done is with numbers — for example, “five years of experience” with a certain technology or position. But there’s nothing intrinsically magical about five years of experience. You can express the same idea more clearly by rewording it as “experience with” or “fluent in,” or other phrases that more clearly express the problems you’re trying to solve or the level of familiarity with a technology that you require.

Stick to Criteria that Pertain to the Position
Coders don’t necessarily need to maintain a lot of eye contact to be effective. Being a social butterfly doesn’t indicate someone is a better reverse engineer. Make sure that the criteria on which you’re judging candidates are decided by a group of interested parties in advance, that they pertain to the job at hand, and that they are the deciding factors that employees are graded on.

Lysa Myers began her tenure in malware research labs in the weeks before the Melissa virus outbreak in 1999. She has watched both the malware landscape and the security technologies used to prevent threats grow and change dramatically.

Article source: https://www.darkreading.com/threat-intelligence/the--typical--security-engineer-hiring-myths-and-stereotypes/a/d-id/1333344?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

‘Grinch bots’ are ruining holiday shopping. Lawmakers hit back

US legislators have introduced a bill to stop bad bots from buying up all the hot holiday toys in bulk and then gouging parents by reselling them at exorbitant prices.

Bots are automated scripts and programs that can be used for good or bad: the good ones do useful things such as crawl the web, and they’re also used on social media to generate everything from poems to memes to self-care reminders to randomly generated awesomeness.

Then there are the bad bots: like, the ones that snatch up all the Super Nintendo and Barbie products before you can even log into an e-commerce site.

Fittingly enough, the Stopping Grinch Bots Act of 2018 was announced on Black Friday.

The bicameral bill comes from US Senators Tom Udall, Richard Blumenthal, and Chuck Schumer, along with US Representative Paul Tonko. Udall said in a press release that resellers are gaming the system with bots that snatch up toys and highly discounted products to sell at “outrageously inflated markups,” all “with a few keystrokes,” and often before any human has managed to even put an item into their online shopping cart.

These Grinch bots let scammers sneak down the proverbial chimneys of online retailers and scoop up the hottest products before regular Americans can even log on – and then turn around and sell them at outrageously inflated prices. That’s just not how the marketplace is supposed to work.

The bot problem is just one example of how consumers get preyed on when they venture online, Udall said. Bots enable “unscrupulous” scammers to game the system and “steal hard-earned money from Americans who have saved up just to buy gifts for their family and friends during the holiday season,” he said.

Yes, but is bulk buying from bots illegal? Yes and no – that’s why the Democrats think that a new, comprehensive bill is needed.

The Grinch bill builds on an earlier bot-aimed bill that was more narrowly focused. Specifically, it addressed only one aspect of bot scalping: online ticket sales. In 2016, Congress passed the Better Online Ticket Sales Act, aimed at ticket scalpers. It made it illegal to skirt event ticket limits for public events with more than 200 people in attendance.

In October 2017, Ticketmaster sued a scalping company that used bots to do just that, buying 30,000 tickets to the hot-hot-hot “Hamilton” musical. According to the lawsuit, while Ticketmaster’s terms of service forbid the use of bots, the reseller managed to override warning or error messages and allegedly used special software to sneak past CAPTCHA codes meant to screen out bots. It then used thousands of separate accounts to place hundreds of thousands of ticket orders.

The company, Prestige Entertainment, was already in trouble for bot chicanery: It signed a $3.5 million settlement with New York after buying 1,012 tickets to a 2014 U2 concert at Madison Square Garden in one minute and then reselling them at markups averaging 49%. (Take note: bot badness might extend to e-businesses themselves. In June, Prestige and other brokers filed counterclaims, accusing Ticketmaster of creating and disseminating its own bots, placing the blame for “Hamilton” ticket resells right back on the ticket seller itself.)

At any rate, because the 2016 law only focused on ticket reselling, bots designed to snap up and scalp other products have gotten a free pass. Time to fix that oversight, according to Representative Tonko:

The American people should be able to spend the holidays with their loved ones, not forced to camp out at store openings or race against an automated buying algorithm just to get an affordable gift for their kids.

The proposed Grinch Bots Act goes beyond toys or tickets to apply to all online retailers, be they selling Nintendo consoles or special-edition Nikes: a comprehensive approach that should help smaller, more specialized retailers.

Rami Essaid, co-founder of Distil Networks – a company that helps corporations battle bad bots – told the Washington Post that it’s not the Amazons or the eBays that get hurt by the practice; rather, it’s smaller retailers and consumers. In fact, the resellers turn to the bigger marketplaces, such as Amazon, to resell the goods.

Essaid says the buyers behind the bots tend to go after products that retailers offer in limited quantities: for example, limited-release Nike sneakers or concert tickets.

Ticket sellers have been dealing with scalping bots for years, he said. According to Distil Networks’ Bad Bot Report 2018, 21.8% of all website traffic in 2017 was from bad bots: up by 9.5% over the previous year.

The top targets of bad bots were gambling sites, followed by airline websites. Bad bots swarm around the holidays, in particular: Essaid said that his company noted a 20% spike in bot traffic during Black Friday and Cyber Monday for a sample of about 300 e-commerce companies.

It is absolutely always happening. These bots are trying to get as much inventory as possible as quickly as possible, and they can even end up bringing your site down. We actually saw that last year where bots took down a company’s site because of a Black Friday sale.

The proposed bill would make it illegal to circumvent website controls meant to enforce posted purchasing limits or to manage inventory. It exempts security researchers: they’ll still be allowed to use bots to research vulnerabilities and to develop security products.
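
For a sense of what such “website controls” look like in practice, here is a toy sketch of a purchase-limit check. Everything in it (the limits, the per-address heuristic, the names) is invented for illustration; real retail systems layer CAPTCHAs, rate limiting and device fingerprinting on top.

```python
# Toy sketch of the kind of purchase-limit control the bill protects:
# a per-customer cap plus a per-shipping-address cap, the latter to
# catch buyers hiding behind many accounts. Limits are invented.
from collections import defaultdict

LIMIT_PER_CUSTOMER = 2
LIMIT_PER_ADDRESS = 4

bought_by_customer = defaultdict(int)
bought_by_address = defaultdict(int)

def try_purchase(customer_id, ship_address, qty):
    """Allow the order only if both posted limits are respected."""
    if bought_by_customer[customer_id] + qty > LIMIT_PER_CUSTOMER:
        return False  # posted per-customer limit exceeded
    if bought_by_address[ship_address] + qty > LIMIT_PER_ADDRESS:
        return False  # many accounts funnelling to one address
    bought_by_customer[customer_id] += qty
    bought_by_address[ship_address] += qty
    return True

print(try_purchase("alice", "12 Oak St", 2))     # True
print(try_purchase("bot_001", "99 Drop Ln", 2))  # True
print(try_purchase("bot_002", "99 Drop Ln", 2))  # True
print(try_purchase("bot_003", "99 Drop Ln", 2))  # False: address cap hit
```

The bill targets software written to defeat checks like these, for instance by rotating accounts and shipping addresses faster than the caps can keep up.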

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/gz3c69IxSXg/

Facebook bug resurrects ghostly messages from the past

We’ve had secretive data profiling, Cambridge Analytica and, of course, the recent big data breach. Now, it seems Facebook has found a new way to inadvertently torment us: resurfacing old chat messages.

On Monday, the company scurried to fix a bug that saw ancient, forgotten messages pop up like new in Facebook message windows, like zombies crawling fresh from the ground.

Users first started reporting the problem on Monday, as their chat windows started resurfacing messages from years ago.

Some users were bewildered, others inspired; sadly, for others, the messages brought back painful memories.

For most, this seems to have been an amusing error and a chance to eyeroll at those annoying fights they used to have with their ex. But it also highlights an important point: Facebook keeps everything, including those messages that you’ve long forgotten. Unless you specifically delete a conversation in Facebook Messenger, it remains accessible.

The company had fixed the issue by Monday evening, telling The Verge that the issue was caused by “software updates”.

It’s not the first time that Facebook has resurfaced old messages without thinking things through.

In 2015, it first introduced the Memories feature, resurfacing private posts on a person’s timeline for a nostalgic twist. It wasn’t very good at distinguishing between painful memories and happy ones, though, leaving some users hurt by unwanted reminders of the past. That’s a tricky problem for AI to solve, and the company apparently thought it best not to try. Instead, it offers a feature for users to manually filter memories about people and dates that they don’t want to remember.

Facebook’s continued SNAFUs aren’t earning it any friends, as users were quick to point out on Twitter.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/v3ZWvDk_Uao/

Hot fuzz: Bug detectives whip up smarter version of classic AFL fuzzer to hunt code vulnerabilities

A group of university researchers from around the globe have teamed up to develop what they say is a powerful new tool to root out security flaws.

Known as AFLSmart, this fuzzing software is built on the powerful American Fuzzy Lop toolkit. We’re told AFLSmart is pretty good at testing applications for common security flaws, such as buffer overflow errors, that can be targeted by attackers for denial of service or remote code execution exploits.

The researchers say that, on average, AFLSmart can detect twice as many bugs as AFL over a 24-hour period and, since it was put into use fuzzing a handful of open-source software libraries, the software has uncovered a total of 42 zero-day vulnerabilities and banked 17 CVE-listed holes.

Fuzzing has long been used by security researchers as a way to automate the process of finding security vulnerabilities. By continually bombarding various input fields with strings of data, the tools can see where an application fails to properly handle the incoming data and could be vulnerable to exploitation.

AFLSmart, designed by teams from the National University of Singapore, Monash University in Australia, and University Politehnica of Bucharest, looks to expand the reach of common fuzzing tools by making them more versatile and able to cover a wider range of possible inputs.

The problem, says the team, is that most fuzzing tools move around an application slowly, changing individual bits and hoping to come across a new input field. This makes automated fuzzing a slow and tedious process, particularly for multimedia libraries and tools that handle many types of data and formats.

Mutants

“Finding vulnerabilities effectively in applications processing such widely used formats is of imminent need. Mutations of the bit-level file representation are unlikely to affect any structural changes on the file that are necessary to effectively explore the vast yet sparse domain of valid program inputs,” the boffins write.

“More likely than not arbitrary bit-level mutations of a valid file will result in an invalid file that is rejected by the program’s parser before reaching the data processing portion of the program.”

To solve this, the team set out to create a tool that is better able to look at the entire application and make high-level changes to the inputs it uses to fuzz an application for possible vulnerabilities.

Where a traditional fuzzing tool moves around an application by changing one or two bits to look for a new input field, AFLSmart tries to look at the entire input format. For example, the tool would see that an application handles both image and document files, and create seed files for both of those formats.

“Given an input format specification, our smart greybox fuzzer derives a structural representation of the seed file, called virtual structure, and leverages our novel smart mutation operators to modify the virtual file structure in addition to the file’s bit sequence during the generation of new input files,” the researchers said.

“During the greybox fuzzing search, our tool AFLSmart measures the degree of validity of the inputs produced with respect to the file format specification. It prioritizes valid inputs over invalid ones, by enabling the fuzzer to explore more mutations of a valid file as opposed to an invalid one.”

This technique has been found to nearly double the efficiency of the fuzzing tool.
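
To make the distinction concrete, here is a toy contrast between AFL-style bit flipping and structure-aware mutation, assuming a made-up file format of [tag byte][length byte][payload] chunks as a stand-in for the format specifications AFLSmart consumes. It illustrates the idea only; it is not the paper’s implementation.

```python
# Toy contrast: bit-level vs structure-aware ("smart") mutation.
# Assumed format: a sequence of [1-byte tag][1-byte length][payload]
# chunks - invented here as a stand-in for a real format spec.
import random

def bit_flip(data: bytes) -> bytes:
    """Classic AFL-style mutation: flip one random bit anywhere."""
    buf = bytearray(data)
    i = random.randrange(len(buf) * 8)
    buf[i // 8] ^= 1 << (i % 8)
    return bytes(buf)

def parse_chunks(data: bytes):
    """Build a 'virtual structure': a list of (tag, payload) chunks."""
    chunks, i = [], 0
    while i + 2 <= len(data):
        tag, length = data[i], data[i + 1]
        chunks.append((tag, data[i + 2 : i + 2 + length]))
        i += 2 + length
    return chunks

def smart_mutate(data: bytes) -> bytes:
    """Duplicate or delete a whole chunk, keeping length fields
    consistent so the target's parser still accepts the file."""
    chunks = parse_chunks(data)
    if chunks and random.random() < 0.5:
        chunks.insert(random.randrange(len(chunks)), random.choice(chunks))
    elif len(chunks) > 1:
        chunks.pop(random.randrange(len(chunks)))
    return b"".join(bytes([tag, len(p)]) + p for tag, p in chunks)

seed = bytes([1, 3]) + b"abc" + bytes([2, 2]) + b"xy"
print(bit_flip(seed))      # often corrupts a length byte: parser rejects it
print(smart_mutate(seed))  # still a well-formed chunk sequence
```

The bit flip usually produces a file the parser throws away at the front door; the chunk-level mutation keeps the input valid, so the fuzzer spends its time in the data-processing code where the bugs live.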

The paper describing the development and effectiveness of AFLSmart, “Smart Greybox Fuzzing” [PDF], was written by Van-Thuan Pham, Marcel Bohme, Andrew E. Santosa, Alexandru Razvan Caciulescu, and Abhik Roychoudhury. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/28/better_fuzzer_aflsmart/

3ve Offline: Countless Windows PCs using 1.7m IP addresses hacked to ‘view’ up to 12 billion adverts a day

A collection of cybersecurity companies, Google, and the Feds are sharing details on how they uncovered and dismantled a massive ad-fraud operation known as “3ve” (pronounced “Eve”).

Google says that at its peak, the 3ve scam employed nearly two million hijacked devices to generate fake clicks on adverts, earning its operators heavy payouts from duped advertising networks. The idea was that 3ve’s operators would create massive networks of fake websites that would take bids from ad networks and then send the infected machines to the sites in order to collect ad revenues.

“3ve operated on a massive scale: at its peak, it controlled over one million IPs from both residential botnet infections and corporate IP spaces, primarily in North America and Europe (for comparison, this is more than the number of broadband subscriptions in Ireland),” Google said in its summary of the operation this week.

“It featured several unique sub-operations, each of which constituted a sophisticated ad fraud scheme in its own right. Shortly after we began to identify the massive infrastructure (comprised of thousands of servers across many data centers) used to host 3ve’s operation, we found similar activity happening within a network of malware-infected residential computers.”

Google says that the 3ve network actually started as a small botnet operation, which was first detected back in 2016. Over the next year the scam would grow far larger and its operators began using a number of complex evasion techniques to avoid detection by click-fraud systems. The operators used a pair of malware packages – Windows-targeting Boaxxe and Kovter – to infect victims’ PCs.

Boaxxe, aka Miuref, and Kovter were spread by booby-trapped email attachments and drive-by-downloads, effectively tricking people into installing them. BGP hijacking was also used in the caper to ultimately control, in just one 10-day sample, 1.7 million IP addresses, which were used to fire off what looked like legit ad requests and clicks.

The above link goes to more technical details, including signs of infection to look out for.

Assembling the A Team

In 2017 Google said it called in additional help from antimalware vendors. ProofPoint and Malwarebytes were brought in to help identify the malware 3ve was using to enlist new commandeered Windows PCs into its ranks. The malware would only install on systems that weren’t running security software and would only execute the ad-fraud activity if its IP address was located in a certain area with a specific ISP.

This allowed the network to evade detection and grow to a massive scale, at its peak viewing and clicking on anywhere from three to 12 billion ads per day.
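
On the defensive side, the bluntest screen for this sort of operation is volumetric: no household watches thousands of adverts a day. Here is a toy sketch of such a check; the threshold and log format are invented, and real ad-fraud detection layers many subtler signals on top.

```python
# Toy volumetric screen: flag source IPs whose daily ad-request count
# is implausible for human traffic. Threshold is invented.
from collections import Counter

HUMAN_DAILY_CEILING = 2000  # generous upper bound for one household

def suspicious_ips(ad_request_log):
    """ad_request_log: iterable of source-IP strings, one per request."""
    per_ip = Counter(ad_request_log)
    return {ip: n for ip, n in per_ip.items() if n > HUMAN_DAILY_CEILING}

log = ["203.0.113.7"] * 50_000 + ["198.51.100.2"] * 40  # one bot, one human
print(suspicious_ips(log))  # {'203.0.113.7': 50000}
```

It is exactly this kind of check that 3ve’s residential botnet was built to evade, spreading requests across 1.7 million IP addresses so that no single one stood out.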

“3ve’s sheer size and complexity posed a significant risk not just to individual advertisers and publishers, but to the entire advertising ecosystem,” Google said.

“We had to shut the operation down for good, which called for greater, more calculated measures. To that end, it was critical that we played the long game, endeavoring to have a more permanent, more powerful impact against this and future ad fraud operations.”

To shut down the operation, Google said it formed a working group consisting of 16 organizations, including security vendors and law enforcement outfits such as the US Department of Homeland Security and the FBI’s Internet Crime Complaint Center.

The takedown of the network, says Google, was swift and severe. After spending several months observing the operators, the group launched a sweeping shutdown operation that caused the network’s traffic to nearly flatline over the span of 18 hours (Google wouldn’t say exactly when this happened).

Now, the Chocolate Factory says it wants to create and maintain standards that help security vendors and ad networks guard against fraud operations, and to educate both advertisers and publishers about fraud.

Meanwhile, the DHS and FBI are advising anyone who thinks their systems might be infected with 3ve’s malware to report the matter to the FBI’s IC3 website. ®

Stop press… US prosecutors today charged Aleksandr Zhukov, Boris Timokhin, Mikhail Andreev, Denis Avdeev, Dmitry Novikov, Sergey Ovsyannikov, Aleksandr Isaev and Yevgeniy Timchenko with their alleged involvement in the 3ve racket.

We’re told Ovsyannikov, 30, was cuffed last month in Malaysia, Zhukov, 38, was collared earlier this month in Bulgaria, and Timchenko, 30, was nabbed earlier this month in Estonia. They await extradition to America. The rest are at large.

They are charged with wire fraud, computer intrusion, aggravated identity theft and money laundering.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/28/3ve_ad_fraud_men_charged/

How to Find a Privacy Job That You’ll Love (& Why)

Advice from a millennial woman who has done it: Find your niche and master your craft. You will be amazed at how significant your work will be.

When I accepted my first job in privacy back in the 2000s, the industry had little to do with data or technology. To my family and friends, it seemed like a bit of a strange choice. My co-workers largely consisted of professionals at the close of their careers, investigating offline issues such as mail privacy or the accidental faxing of documents to the wrong recipient. To your average consumer, privacy simply meant the comfort you received from closing the curtains.

But, within a few years, the world dramatically shifted. Social media platforms and smart devices proliferated. Consumers often found themselves paying for online services not with cash but with their personal information.

My role quickly transformed into a valuable link between the fields I loved — technology, consumer protection, cybersecurity, law, and human rights. I garnered a critical voice in key business decisions, advocating for customers to gain greater control over how their data might be used in the face of increasing innovation.

Looking back at my first day, I didn’t anticipate how significant privacy would become. But I was proactive in searching for a career path that I knew would have longevity and let me evolve alongside technological developments. Here are four reasons to explore a career in the privacy field, or in any other job of the future that has yet to be defined.

Technology + Humanity = A Great Job Description
A recent report by McKinsey Global Institute found that roughly 50% of current work activities can be automated. As robotics and artificial intelligence continue to disrupt today’s workforce, society will need even more individuals with the ability to guide innovation in ways that are helpful — not harmful — to the public.

As a chief privacy officer, I ensure that customer data is used in an ethical manner. To do this, I need to anticipate how consumers might feel about a new service or product and be able to empathize with how it could affect them personally.

For me, finding a job at the crossroads of technology and humanity led me to a career that is not only rewarding but of increasing value to both companies and consumers alike.

Privacy Roles Are Broadening
Technological advancement has continually driven growth in privacy. But the expanded role of many related fields, such as risk analysis, data science, and product development, has also advanced the sector. In my position as chief privacy officer, I work with business partners in these areas on a daily basis. I make it a priority to better understand how their fields are changing, so I can anticipate how I need to evolve to keep pace.

This involves asking questions about who their business partners or clients are, where they are looking to innovate, and where they are focusing their long-term investments and strategies. Then I ask myself how I can adapt my role to be meaningful to their efforts.

Welcome Diversity
Privacy depends on a sincere understanding of people’s rights and feelings. So it is a business imperative that our workforce represents the diversity of those we serve.

While women have fought tooth-and-nail to succeed in industries like technology and finance, privacy has a solid reputation for welcoming diversity. According to research by the International Association of Privacy Professionals (IAPP), women make up the majority of chief privacy officers. As a woman, having these clear examples of diversity in management has helped me feel included and empowered throughout my career.

Seek to find inspiration from industries that don’t just accept differences but embrace them. Draw strength from role models and search for the sponsorship you need to truly succeed at your job.

An Industry with Growth Potential
Privacy is a young, developing industry that is always looking for fresh talent. In fact, a recent IAPP study estimated that the introduction of new rules and regulations will create an additional 28,000 privacy jobs in Europe and the U.S. alone.

As a millennial leader, I have found this to be to my benefit. I often have fewer total years in the workforce than many of my more senior peers in other fields. But with only 11% of privacy professionals beginning their careers in the sector, I have just as much expertise as my privacy colleagues.

Be open to considering occupations that are growing — even if they might not be the hottest jobs at the time. As your knowledge and responsibilities expand, you will likely find that the role is far more interesting than it appeared at first sight. And as the industry grows, your professional opportunities will grow with it.

All said, my advice to you is to find your niche and master your craft. You will be amazed at how significant your work will be in the future.

Louise Thorpe is American Express’ chief privacy officer. She leads a global team that drives transparency, and oversees risks related to privacy, information security, records management, and information technology.

Article source: https://www.darkreading.com/endpoint/how-to-find-a-privacy-job-that-youll-love-(and-why)/a/d-id/1333316?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple