Privacy

Weak Password Allowed Hackers To Sink a 158-Year-Old Company (bbc.com) 125

An anonymous reader quotes a report from the BBC: One password is believed to have been all it took for a ransomware gang to destroy a 158-year-old company and put 700 people out of work. KNP -- a Northamptonshire transport company -- is just one of tens of thousands of UK businesses that have been hit by such attacks. Big names such as M&S, Co-op and Harrods have all been attacked in recent months. The chief executive of Co-op confirmed last week that all 6.5 million of its members had had their data stolen. In KNP's case, it's thought the hackers managed to gain entry to the computer system by guessing an employee's password, after which they encrypted the company's data and locked its internal systems. KNP director Paul Abbott says he hasn't told the employee that their compromised password most likely led to the destruction of the company. "Would you want to know if it was you?" he asks. "We need organizations to take steps to secure their systems, to secure their businesses," says Richard Horne, CEO of the National Cyber Security Centre (NCSC) -- where Panorama has been given exclusive access to the team battling international ransomware gangs. A gang of hackers, known as Akira, broke into the company's system and demanded a payment to restore the data. "The hackers didn't name a price, but a specialist ransomware negotiation firm estimated the sum could be as much as 5 million pounds," reports the BBC. "KNP didn't have that kind of money. In the end all the data was lost, and the company went under."
Open Source

Jack Dorsey Pumps $10M Into a Nonprofit Focused on Open Source Social Media (techcrunch.com) 20

Twitter co-founder/Block CEO Jack Dorsey isn't just vibe coding new apps like Bitchat and Sun Day. He's also "invested $10 million in an effort to fund experimental open source projects and other tools that could ultimately transform the social media landscape," reports TechCrunch, funding the projects through an online collective formed in May called "andOtherStuff": [T]he team at "andOtherStuff" is determined not to build a company but is instead operating like a "community of hackers," explains Evan Henshaw-Plath [who handles UX/onboarding and was also Twitter's first employee]. Together, they're working to create technologies that could include new consumer social apps as well as various experiments, like developer tools or libraries, that would allow others to build apps for themselves.

For instance, the team is behind an app called Shakespeare, which is like the app-building platform Lovable, but specifically for building Nostr-based social apps with AI assistance. The group is also behind heynow, a voice note app built on Nostr; Cashu wallet; private messenger White Noise; and the Nostr-based social community +chorus, in addition to the apps Dorsey has already released. Developments in AI-based coding have made this type of experimentation possible, Henshaw-Plath points out, in the same way that technologies like Ruby on Rails, Django, and JSON helped to fuel an earlier version of the web, dubbed Web 2.0.

Related to these efforts, Henshaw-Plath sat down with Dorsey for the debut episode of his new podcast, revolution.social with @rabble... Dorsey believes Bluesky faces the same challenges as traditional social media because of its structure — it's funded by VCs, like other startups. Already, it has had to bow to government requests and faced moderation challenges, he points out. "I think [Bluesky CEO] Jay [Graber] is great. I think the team is great," Dorsey told Henshaw-Plath, "but the structure is what I disagree with ... I want to push the energy in a different direction, which is more like Bitcoin, which is completely open and not owned by anyone from a protocol layer...."

Dorsey's initial investment has gotten the new nonprofit up and running, and he worked on some of its initial iOS apps. Meanwhile, others are contributing their time to build Android versions, developer tools, and different social media experiments. More is still in the works, says Henshaw-Plath.

"There are things that we're not ready to talk about yet that'll be very exciting," he teases.

It's funny.  Laugh.

That Coldplay 'Kiss Cam' Couple Just Became a Vibe-Coded Videogame - and Then an NFT (forbes.com) 81

"I vibe coded a little game called Coldplay Canoodlers," reads the X.com post by gaming enthusiast/songwriter Jonathan Mann. "You're the camera operator and you have to find the CEO and HR lady canoodling. 10 points every time you find them."

Mann's post includes a 30-second clip from the game, which is playable here.

Forbes notes that the TikTok video of the couple's reaction has drawn more than 100 million views — and that the married-to-someone-else CEO has now tendered his resignation from his dataops company Astronomer (which was accepted). The company is now searching for a new chief executive, according to a statement posted on LinkedIn. ("Comments have been turned off on this post...")

"Our leaders are expected to set the standard in both conduct and accountability, and recently, that standard was not met."

But songwriter Mann saw a chance to have some fun, writes Forbes: Mann used ChatGPT to make the "Coldplay Canoodlers" game, inputting such prompts as: "Can you generate an 8-bit pixel image of a stadium concert viewed from the stage" and "there should be a large jumbotron somewhere up in the stadium seats." He also entered rough drawings of the visual style he envisioned... The response to the game, Mann said in an interview, has been unexpected. "I have gone viral many times with my songs," he said. It's "very strange to have it happen with a game I made in four hours."
Songwriter Mann has been sharing an original song online every day for over 17 years. Last summer Slashdot also covered Mann's attempts to sell NFTs of his songs, and his concerns about SEC regulations. (This led him to file a real-world legal challenge — and to write a song titled "I'm Suing the SEC".) So with all the attention this weekend to his instant game, there was nothing to do but... write a new song about it.

And minutes ago on X.com, Mann also posted a new update about his game.

"I turned it into an NFT."

"Took some time," Mann explained later. "But I vibe coded my own ERC-721 contract and minted the game as a playable NFT. (Plays great on OpenSea)."
Biotech

23andMe's Data Sold to Nonprofit Run by Its Co-Founder - 'And I Still Don't Trust It' (msn.com) 24

"Nearly 2 million people protected their privacy by deleting their DNA from 23andMe after it declared bankruptcy in March," writes a Washington Post technology columnist.

"Now it's back with the same person in charge — and I still don't trust it." As of this week, genetic data from the more than 10 million remaining 23andMe customers has been formally sold to an organization called TTAM Research Institute for $305 million. That nonprofit is run by the person who co-founded and ran 23andMe, Anne Wojcicki. In a recent email to customers, the new 23andMe said it "will be operating with the same employees and privacy protocols that have protected your data." Never mind that Wojcicki and her privacy protocols are what put your DNA at risk in the first place...

The company is legally obligated to maintain and honor 23andMe's existing privacy policies, user consents and data protection measures. And as part of a settlement with states, TTAM also agreed to provide annual privacy reports to state regulators and set up a privacy board. But it hasn't agreed to take the fundamental step of asking for permission to acquire existing customers' genetic information. And it's leaving the door open to selling people's genes to the highest bidder again in the future...

Existing 23andMe customers have the right to delete their data or opt out of TTAM's research. But the new company is not asking for opt-in permission before it takes ownership of customers' DNA... Why does that matter? Because people who handed over the DNA 15 years ago, often to learn about their genetic ancestry, never imagined it might be used in this way now. Asking for new permission might significantly shrink the size (and value) of 23andMe's DNA database — but it would be the right thing to do given the rocky history. Neil M. Richards [the Washington University professor who served as privacy ombudsman for the bankruptcy court], pointed out that about a third of 23andMe customers haven't logged in for at least three years, so they may have no idea what is going on. Some 23andMe users never even clicked "agree" on a legal agreement that allowed their data to be sold like this; the word "bankruptcy" wasn't added to the company's privacy policy until 2022. And then there is an unknown number of deceased users who most certainly can't consent, but whose DNA still has an impact on their living genetic relatives...

[S]everal states have argued that their existing genetic privacy laws don't allow 23andMe to receive the information without getting permission from every single person. Virginia has an ongoing lawsuit over the issue, and the California attorney general's office told me it "will continue to fight to protect and vindicate the rights" of consumers....

Among the points of concern:
  • "There is nothing in 23andMe's bankruptcy agreement or privacy statement to prevent TTAM from selling or transferring DNA to some other organization in the future."

The Internet

DuckDuckGo Now Lets You Hide AI-Generated Images In Search Results (techcrunch.com) 12

An anonymous reader quotes a report from TechCrunch: Privacy-focused browser DuckDuckGo is rolling out a new setting that lets users filter out AI images in search results. The company says it's launching the feature in response to feedback from users who said AI images can get in the way of finding what they're looking for.

Users can access the new setting by conducting a search on DuckDuckGo and heading to the Images tab. From there, they will see a new dropdown menu titled "AI images." Users can then choose whether or not they want to see AI content by selecting "show" or "hide." Users can also turn on the filter in their search settings by tapping the "Hide AI-Generated Images" option.
"The filter relies on manually curated open-source blocklists, including the 'nuclear' list, provided by uBlockOrigin and uBlacklist Huge AI Blocklist," DuckDuckGo said in a post on X. "While it won't catch 100% of AI-generated results, it will greatly reduce the number of AI-generated images you see." DuckDuckGo says it has plans to add other similar filters in the future.
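The mechanics here are straightforward domain filtering. A minimal sketch of how a blocklist-based image filter like this could work (the blocklist entries and function names below are hypothetical illustrations, not DuckDuckGo's actual implementation):

```python
from urllib.parse import urlparse

# Hypothetical excerpt of a domain-based blocklist in the style of the
# "uBlacklist Huge AI Blocklist" (these entries are made up).
AI_IMAGE_BLOCKLIST = {
    "ai-art-example.com",
    "genimages-example.net",
}

def hide_ai_images(result_urls, blocklist=AI_IMAGE_BLOCKLIST):
    """Drop image results whose source domain appears on the blocklist."""
    kept = []
    for url in result_urls:
        host = urlparse(url).hostname or ""
        # Match the listed domain itself or any subdomain of it.
        if any(host == d or host.endswith("." + d) for d in blocklist):
            continue
        kept.append(url)
    return kept
```

As the company notes, a curated list can only catch images hosted on known AI-heavy domains, which is why the filter reduces rather than eliminates AI-generated results.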
Privacy

'Coldplay Kiss-Cam Flap Proves We're Already Our Own Surveillance State' (theregister.com) 78

Brandon Vigliarolo writes via The Register: A tech executive's alleged affair exposed on a stadium jumbotron is ripe fodder for the gossip rags, but it exhibits something else: proof that we need not wait for an AI-fueled dystopian surveillance state to descend on us -- we're perfectly able and willing to surveil ourselves. The embracing couple caught at a Coldplay concert this week as the jumbotron camera panned around the audience would have been another unremarkable clip, if not for the pair panicking and rushing to hide, triggering attendees to publish the memorable moment on social media. "Either they're having an affair or they're very shy," Coldplay singer Chris Martin said of the pair's reaction.

As is always the case when viral moments of unknown people get uploaded to the internet, they didn't remain anonymous for long, with the internet quickly identifying them as the CEO of data infrastructure outfit Astronomer, Andy Byron, and its Chief People Officer, Kristin Cabot. We're not going to weigh in on the behavior of Byron (who internet sleuths have determined is married, for now) or Cabot; making someone pay for the moral transgression of an alleged extramarital affair may be enough reason for the internet to go on a witch hunt, but that's not our concern here.

What's worrying is what this moment says - yet again - about us as a society: We have cameras everywhere, our personal data has become one of the most valuable commodities in the world, and we're all perpetually ready to use that tech to make those we feel have violated the social contract pay publicly for their transgressions. This is hardly a new phenomenon. [...] There's really no reason to set up an expensive and oppressive surveillance state when we all have location tracking, internet-connected shaming machines in our pockets. Big tech gave us the tools of our own surveillance, and as "ColdplayGate" shows yet again, we'll keep using those tools if they'll make us feel better about ourselves - especially if someone else gets knocked down a peg in the process.

Privacy

Ring Restores Police Video Access 41

Ring has restored police access to user video footage and returned to its original crime-prevention mission under founder Jamie Siminoff, who rejoined Amazon in April after a two-year absence. The video doorbell company announced a partnership with law enforcement technology firm Axon that allows police to request footage through Axon's digital evidence management system, effectively reviving a controversial feature Ring discontinued last year.

Siminoff scrapped Ring's socially-focused mission statement "Keep people close to what's important" that Amazon introduced in 2024 and reinstated the company's original mandate to "make neighborhoods safer." The company previously paid $5.8 million to settle Federal Trade Commission allegations of privacy violations in 2023, though Amazon denied wrongdoing.
Crime

New Russian Law Criminalizes Online Searches For Controversial Content (washingtonpost.com) 83

Russian lawmakers passed sweeping new legislation allowing authorities to fine individuals simply for searching and accessing content labeled "extremist" via VPNs. The Washington Post reports: Russia defines "extremist materials" as content officially added by a court to a government-maintained registry, a running list of about 5,500 entries, or content produced by "extremist organizations" ranging from "the LGBT movement" to al-Qaeda. The new law also covers materials that promote alleged Nazi ideology or incite extremist actions. Until now, Russian law stopped short of punishing individuals for seeking information online; only creating or sharing such content was prohibited. The new amendments follow remarks by high-ranking officials that censorship is justified in wartime. Adoption of the measures would mark a significant tightening of Russia's already restrictive digital laws.

The fine for searching for banned content in Russia would be about $65, while the penalty for advertising circumvention tools such as VPN services would be steeper -- $2,500 for individuals and up to $12,800 for companies. Previously, the most significant expansion of Russia's restrictions on internet use and freedom of speech occurred shortly after the February 2022 full-scale invasion of Ukraine, when sweeping laws criminalized the spread of "fake news" and "discrediting" the Russian military. The new amendment was introduced Tuesday and attached to a mundane bill on regulating freight companies, according to documents published by Russia's lower house of parliament, the State Duma.

The Courts

Meta Investors, Mark Zuckerberg Reach Settlement To End $8 Billion Trial Over Facebook Privacy Litigation (nbcnews.com) 8

An anonymous reader quotes a report from NBC News: Mark Zuckerberg and current and former directors and officers of Meta Platforms agreed on Thursday to settle claims seeking $8 billion for the damage they allegedly caused the company by allowing repeated violations of Facebook users' privacy, a lawyer for the shareholders told a Delaware judge. The parties did not disclose details of the settlement, and defense lawyers did not address the judge, Kathaleen McCormick of the Delaware Court of Chancery. McCormick adjourned the trial just as it was to enter its second day and congratulated the parties. The plaintiffs' lawyer, Sam Closic, said the agreement just came together quickly.

Billionaire venture capitalist Marc Andreessen, who is a defendant in the trial and a Meta director, was scheduled to testify on Thursday. Shareholders of Meta sued Zuckerberg, Andreessen and other former company officials including former Chief Operating Officer Sheryl Sandberg in hopes of holding them liable for billions of dollars in fines and legal costs the company paid in recent years. The Federal Trade Commission fined Facebook $5 billion in 2019 after finding that it failed to comply with a 2012 agreement with the regulator to protect users' data. The shareholders wanted the 11 defendants to use their personal wealth to reimburse the company. The defendants denied the allegations, which they called "extreme claims."
"This settlement may bring relief to the parties involved, but it's a missed opportunity for public accountability," said Jason Kint, the head of Digital Content Next, a trade group for content providers.

"Facebook has successfully remade the 'Cambridge Analytica' scandal about a few bad actors rather than an unraveling of its entire business model of surveillance capitalism and the reciprocal, unbridled sharing of personal data. That reckoning is now left unresolved."
Privacy

Chinese Authorities Are Using a New Tool To Hack Seized Phones and Extract Data (techcrunch.com) 40

An anonymous reader quotes a report from TechCrunch: Security researchers say Chinese authorities are using a new type of malware to extract data from seized phones, allowing them to obtain text messages -- including from chat apps such as Signal -- images, location histories, audio recordings, contacts, and more. In a report shared exclusively with TechCrunch, mobile cybersecurity company Lookout detailed the hacking tool called Massistant, which the company said was developed by Chinese tech giant Xiamen Meiya Pico.

Massistant, according to Lookout, is Android software used for the forensic extraction of data from mobile phones, meaning the authorities using it need to have physical access to those devices. While Lookout doesn't know for sure which Chinese police agencies are using the tool, its use is assumed widespread, which means Chinese residents, as well as travelers to China, should be aware of the tool's existence and the risks it poses. [...]

The good news ... is that Massistant leaves evidence of its compromise on the seized device, meaning users can potentially identify and delete the malware, either because the hacking tool appears as an app, or because it can be found and deleted using more sophisticated tools such as the Android Debug Bridge, a command line tool that lets a user connect to a device through their computer. The bad news is that by the time Massistant is installed, the damage is done, and authorities already have the person's data.
"It's a big concern. I think anybody who's traveling in the region needs to be aware that the device that they bring into the country could very well be confiscated and anything that's on it could be collected," said Kristina Balaam, a researcher at Lookout who analyzed the malware. "I think it's something everybody should be aware of if they're traveling in the region."
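The Android Debug Bridge route mentioned above can be scripted for travelers who want to check a device after it has been out of their hands. A hedged sketch (the watchlist package name below is hypothetical; Lookout's report, not this code, is the authority on Massistant's actual identifiers):

```python
import subprocess

def parse_package_list(pm_output: str) -> list[str]:
    """Parse `adb shell pm list packages` output: one 'package:<name>' per line."""
    return [line.removeprefix("package:").strip()
            for line in pm_output.splitlines()
            if line.startswith("package:")]

def find_suspicious(packages: list[str], watchlist: set[str]) -> list[str]:
    """Return installed packages that match a watchlist of known-bad names."""
    return sorted(set(packages) & watchlist)

def scan_device(watchlist: set[str]) -> list[str]:
    """List a connected device's packages over ADB and flag watchlist hits.
    Requires adb on PATH and USB debugging enabled on the device."""
    out = subprocess.run(["adb", "shell", "pm", "list", "packages"],
                         capture_output=True, text=True, check=True).stdout
    return find_suspicious(parse_package_list(out), watchlist)
```

A flagged package could then be removed with `adb uninstall <name>`, though as the researchers note, removal does not undo the extraction that already happened.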
AI

WeTransfer Backtracks on Terms Suggesting User Files Could Train AI Models After Backlash (theguardian.com) 10

WeTransfer has reversed controversial terms of service changes after users protested language suggesting uploaded files could be used to "improve machine learning models."

The file-sharing service, popular among creative professionals and used by 80 million users across 190 countries, clarified that user content had never been used to train AI models and removed all references to machine learning from its updated terms. Creative users including voice actors, filmmakers, and journalists had threatened to cancel subscriptions over the changes.
United Kingdom

Thousands of Afghans Secretly Moved To Britain After Data Leak (reuters.com) 76

The UK secretly relocated thousands of Afghans to the UK after their personal details were disclosed in one of the country's worst ever data breaches, putting them at risk of Taliban retaliation. The operation cost around $2.7 billion and remained under a court-imposed superinjunction until recently lifted. Reuters reports: The leak by the Ministry of Defence in early 2022, which led to data being published on Facebook the following year, and the secret relocation program, were subject to a so-called superinjunction preventing the media reporting what happened, which was lifted on Tuesday by a court. British defence minister John Healey apologised for the leak, which included details about members of parliament and senior military officers who supported applications to help Afghan soldiers who worked with the British military and their families relocate to the UK. "This serious data incident should never have happened," Healey told lawmakers in the House of Commons. "It may have occurred three years ago under the previous government, but to all whose data was compromised I offer a sincere apology."

The incident ranks among the worst security breaches in modern British history because of the cost and risk posed to the lives of thousands of Afghans, some of whom fought alongside British forces until their chaotic withdrawal in 2021. Healey said about 4,500 Afghans and their family members have been relocated or were on their way to Britain under the previously secret scheme. But he added that no-one else from Afghanistan would be offered asylum because of the data leak, citing a government review which found little evidence of intent from the Taliban to seek retribution against former officials.

AI

Hugging Face Is Hosting 5,000 Nonconsensual AI Models of Real People (404media.co) 33

An anonymous reader shares a report: Hugging Face, a company with a multi-billion dollar valuation and one of the most commonly used platforms for sharing AI tools and resources, is hosting over 5,000 AI image generation models that are designed to recreate the likeness of real people. These models were all previously hosted on Civitai, an AI model sharing platform 404 Media reporting has shown was used for creating nonconsensual pornography, until Civitai banned them due to pressure from payment processors.

Users downloaded the models from Civitai and reuploaded them to Hugging Face as part of a concerted community effort to archive the models after Civitai announced in May that it would ban them. In that announcement, Civitai said it would give the people who originally uploaded them "a short period of time" before they were removed. Civitai users began organizing an archiving effort on Discord earlier in May after Civitai indicated it had to make content policy changes due to pressure from payment processors, and the effort kicked into high gear when Civitai announced the new "real people" model policy.

Security

Qantas Confirms Data Breach Impacts 5.7 Million Customers (bleepingcomputer.com) 4

Qantas has confirmed that 5.7 million customers have been impacted by a recent data breach through a third-party platform used by its contact center. The breach, attributed to the Scattered Spider threat group, exposed various personal details but did not include passwords, financial, or passport data. BleepingComputer reports: In a new update today, Qantas has confirmed that the threat actors stole data for approximately 5.7 million customers, with varying types of data exposed in the breach:

4 million customer records are limited to name, email address and Qantas Frequent Flyer details. Of this:
- 1.2 million customer records contained name and email address.
- 2.8 million customer records contained name, email address and Qantas Frequent Flyer number. The majority of these also had tier included. A smaller subset of these had points balance and status credits included.

Of the remaining 1.7 million customers, their records included a combination of some of the data fields above and one or more of the following:
- Address - 1.3 million. This is a combination of residential addresses and business addresses including hotels for misplaced baggage delivery.
- Date of birth - 1.1 million
- Phone number (mobile, landline and/or business) - 900,000
- Gender - 400,000. This is separate to other gender identifiers like name and salutation.
- Meal preferences - 10,000

The Courts

German Court Rules Meta Tracking Tech Violates EU Privacy Laws (therecord.media) 14

An anonymous reader quotes a report from The Record: A German court has ruled that Meta must pay $5,900 to a German Facebook user who sued the platform for embedding tracking technology in third-party websites -- a ruling that could open the door to large fines down the road over data privacy violations relating to pixels and similar tools. The Regional Court of Leipzig in Germany ruled Friday that Meta tracking pixels and software development kits embedded in countless websites and apps collect users' data without their consent and violate the continent's General Data Protection Regulation (GDPR).

The ruling in favor of the plaintiff sets a precedent which the court acknowledged will allow countless other users to sue without "explicitly demonstrating individual damages," according to a Leipzig Regional Court press release. "Every user is individually identifiable to Meta at all times as soon as they visit the third-party websites or use an app, even if they have not logged in via the Instagram and Facebook account," the press release said.
"This may very well be one of the most substantial rulings coming out of Europe this year," said Ronni K. Gothard Christiansen, the CEO of AesirX, a consultancy which helps businesses comply with data privacy laws. "$5,900 in damages for one visitor adds up quickly if you have tens of thousands of visitors, or even millions."
Privacy

Swedish Bodyguards Reveal Prime Minister's Location on Fitness App (politico.eu) 18

Swedish security service members who shared details of their running and cycling routes on fitness app Strava have been accused of revealing details of the prime minister's location, including his private address. Politico: According to Swedish daily Dagens Nyheter, on at least 35 occasions bodyguards uploaded their workouts to the training app and revealed information linked to Prime Minister Ulf Kristersson, including where he goes running, details of overnight trips abroad, and the location of his private home, which is supposed to be secret.
Security

Jack Dorsey Says His 'Secure' New Bitchat App Has Not Been Tested For Security (techcrunch.com) 37

An anonymous reader quotes a report from TechCrunch: On Sunday, Block CEO and Twitter co-founder Jack Dorsey launched an open source chat app called Bitchat, promising to deliver "secure" and "private" messaging without a centralized infrastructure. The app relies on Bluetooth and end-to-end encryption, unlike traditional messaging apps that rely on the internet. Its decentralized design gives Bitchat the potential to be a secure app in high-risk environments where the internet is monitored or inaccessible. According to Dorsey's white paper detailing the app's protocols and privacy mechanisms, Bitchat's system design "prioritizes" security.

The claims that the app is secure, however, are already facing scrutiny from security researchers, given that the app and its code have not been reviewed or tested for security issues at all -- by Dorsey's own admission. Since launching, Dorsey has added a warning to Bitchat's GitHub page: "This software has not received external security review and may contain vulnerabilities and does not necessarily meet its stated security goals. Do not use it for production use, and do not rely on its security whatsoever until it has been reviewed." This warning now also appears on Bitchat's main GitHub project page but was not there at the time the app debuted.

As of Wednesday, Dorsey added: "Work in progress," next to the warning on GitHub. This latest disclaimer came after security researcher Alex Radocea found that it's possible to impersonate someone else and trick a person's contacts into thinking they are talking to the legitimate contact, as the researcher explained in a blog post. Radocea wrote that Bitchat has a "broken identity authentication/verification" system that allows an attacker to intercept someone's "identity key" and "peer id pair" -- essentially a digital handshake that is supposed to establish a trusted connection between two people using the app. Bitchat calls these "Favorite" contacts and marks them with a star icon. The goal of this feature is to allow two Bitchat users to interact, knowing that they are talking to the same person they talked to before.
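The class of fix Radocea is pointing at is key pinning: bind a contact to a fingerprint of their long-term identity key the first time you mark them a Favorite, and refuse (or at least warn) when a peer later presents a different key. The sketch below is not Bitchat's code, just a generic trust-on-first-use illustration of the idea:

```python
import hashlib

def fingerprint(identity_key: bytes) -> str:
    """Short hex digest identifying a peer's long-term identity key."""
    return hashlib.sha256(identity_key).hexdigest()[:16]

class FavoriteStore:
    """Trust-on-first-use pinning of identity-key fingerprints per contact."""

    def __init__(self):
        self._pins: dict[str, str] = {}

    def check(self, contact: str, identity_key: bytes) -> bool:
        fp = fingerprint(identity_key)
        pinned = self._pins.get(contact)
        if pinned is None:
            self._pins[contact] = fp  # first contact: pin the key
            return True
        return pinned == fp           # key change -> possible impersonation
```

Without a check like this, an attacker who can intercept the initial exchange can substitute their own key and inherit the victim's trusted "Favorite" status, which is essentially the flaw Radocea describes.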

AI

McDonald's AI Hiring Bot Exposed Millions of Applicants' Data To Hackers 25

An anonymous reader quotes a report from Wired: If you want a job at McDonald's today, there's a good chance you'll have to talk to Olivia. Olivia is not, in fact, a human being, but instead an AI chatbot that screens applicants, asks for their contact information and resume, directs them to a personality test, and occasionally makes them "go insane" by repeatedly misunderstanding their most basic questions. Until last week, the platform that runs the Olivia chatbot, built by artificial intelligence software firm Paradox.ai, also suffered from absurdly basic security flaws. As a result, virtually any hacker could have accessed the records of every chat Olivia had ever had with McDonald's applicants -- including all the personal information they shared in those conversations -- with tricks as straightforward as guessing the username and password "123456."

On Wednesday, security researchers Ian Carroll and Sam Curry revealed that they found simple methods to hack into the backend of the AI chatbot platform on McHire.com, McDonald's website that many of its franchisees use to handle job applications. Carroll and Curry, hackers with a long track record of independent security testing, discovered that simple web-based vulnerabilities -- including guessing one laughably weak password -- allowed them to access a Paradox.ai account and query the company's databases that held every McHire user's chats with Olivia. The data appears to include as many as 64 million records, including applicants' names, email addresses, and phone numbers.

Carroll says he only discovered that appalling lack of security around applicants' information because he was intrigued by McDonald's decision to subject potential new hires to an AI chatbot screener and personality test. "I just thought it was pretty uniquely dystopian compared to a normal hiring process, right? And that's what made me want to look into it more," says Carroll. "So I started applying for a job, and then after 30 minutes, we had full access to virtually every application that's ever been made to McDonald's going back years."
Paradox.ai confirmed the security findings, acknowledging that only a small portion of the accessed records contained personal data. The company stated that the weak-password account ("123456") was only accessed by the researchers and no one else. To prevent future issues, Paradox is launching a bug bounty program. "We do not take this matter lightly, even though it was resolved swiftly and effectively," Paradox.ai's chief legal officer, Stephanie King, told WIRED in an interview. "We own this."

In a statement to WIRED, McDonald's agreed that Paradox.ai was to blame. "We're disappointed by this unacceptable vulnerability from a third-party provider, Paradox.ai. As soon as we learned of the issue, we mandated Paradox.ai to remediate the issue immediately, and it was resolved on the same day it was reported to us," the statement reads. "We take our commitment to cyber security seriously and will continue to hold our third-party providers accountable to meeting our standards of data protection."
The Courts

Fubo Pays $3.4 Million To Settle Claims It Illegally Shared User Data With Advertisers (arstechnica.com) 9

Fubo has agreed to pay $3.4 million to settle a class-action lawsuit (PDF) accusing it of illegally sharing users' personally identifiable information and video viewing history with advertisers without consent, allegedly violating the Video Privacy Protection Act (VPPA). Ars Technica reports: As reported by Cord Cutters News this week, instead of going to trial, Fubo reached a settlement agreement [PDF] that allows people who used Fubo before May 29, which is when Fubo last updated its privacy policy, to receive part of a $3.4 million settlement. The settlement agreement received preliminary approval on May 29, and users recently started receiving notice of their potential entitlement to some of the settlement. They have until September 12 to submit claims. Fubo said in a statement: "We deny the allegations in the putative class lawsuit and specifically deny that we have engaged in any wrongdoing whatsoever. Fubo has nonetheless chosen to pursue a settlement for this matter in order to avoid the uncertainty and expense of litigation. We look forward to putting this matter behind us."
Open Source

The Open-Source Software Saving the Internet From AI Bot Scrapers (404media.co) 33

An anonymous reader quotes a report from 404 Media: For someone who says she is fighting AI bot scrapers just in her free time, Xe Iaso seems to be putting up an impressive fight. Since she launched it in January, Anubis, a program "designed to help protect the small internet from the endless storm of requests that flood in from AI companies," has been downloaded nearly 200,000 times, and is being used by notable organizations including GNOME, the popular open-source desktop environment for Linux; FFmpeg, the open-source software project for handling video and other media; and UNESCO, the United Nations organization for education, science, and culture. [...]

"Anubis is an uncaptcha," Iaso explains on her site. "It uses features of your browser to automate a lot of the work that a CAPTCHA would, and right now the main implementation is by having it run a bunch of cryptographic math with JavaScript to prove that you can run JavaScript in a way that can be validated on the server." Essentially, Anubis verifies that any visitor to a site is a human using a browser as opposed to a bot. One of the ways it does this is by making the browser do a type of cryptographic math with JavaScript or other subtle checks that browsers do by default but bots have to be explicitly programmed to do. This check is invisible to the user, and most browsers since 2022 are able to complete this test. In theory, bot scrapers could pretend to be users with browsers as well, but the additional computational cost of doing so on the scale of scraping the entire internet would be huge. This way, Anubis creates a computational cost that is prohibitively expensive for AI scrapers that are hitting millions and millions of sites, but marginal for an individual user who is just using the internet like a human.

Anubis is free, open source, lightweight, can be self-hosted, and can be implemented almost anywhere. It also appears to be a pretty good solution for what we've repeatedly reported is a widespread problem across the internet, which helps explain its popularity. But Iaso is still putting a lot of work into improving it and adding features. She told me she's working on a non-cryptographic challenge so it taxes users' CPUs less, and is also thinking about a version that doesn't require JavaScript, which some privacy-minded users disable in their browsers. The biggest challenge in developing Anubis, Iaso said, is finding the balance. "The balance between figuring out how to block things without people being blocked, without affecting too many people with false positives," she said. "And also making sure that the people running the bots can't figure out what pattern they're hitting, while also letting people that are caught in the web be able to figure out what pattern they're hitting, so that they can contact the organization and get help. So that's like, you know, the standard, impossible scenario."
