Software

Russia Orders State-Backed WhatsApp Rival Pre-Installed On Phones and Tablets (reuters.com)

Starting September 1st, Russia will require all smartphones and tablets sold in the country to come with MAX, a state-backed messaging app seen as a rival to WhatsApp and Telegram. Critics say the app could be used to track users. Reuters reports: The Russian government said in a statement that MAX, which will be integrated with government services, would be on a list of mandatory pre-installed apps on all "gadgets," including mobile phones and tablets, sold in Russia from September 1. State media says accusations from Kremlin critics that MAX is a spying app are false and that it has fewer permissions to access user data than rivals WhatsApp and Telegram. It will also be mandatory that from September 1, Russia's domestic app store, RuStore, which is pre-installed on all Android devices, will be pre-installed on Apple devices.

A Russian-language TV app called LIME HD TV, which allows people to watch state TV channels for free, will be pre-installed on all smart TVs sold in Russia from January 1, the government added. [...] MAX said this week that 18 million users had downloaded its app, parts of which are still in a testing phase. Russia's interior ministry said on Wednesday that MAX was safer than foreign rivals, but that it had arrested a suspect in the first fraud case using the new messenger.

Facebook

Whistleblower Alleges Meta Artificially Boosted Shops Ads Performance (adweek.com)

An anonymous reader quotes a report from Adweek: Meta wanted advertisers to believe its ecommerce ad product, Shops ads, was outperforming the competition, per a whistleblower complaint filed in a U.K. court. The former employee alleges the social media giant artificially inflated return on ad spend (ROAS) by counting shipping fees as revenue, subsidizing bids in ad auctions, and applying undisclosed discounts. The complaint, viewed by ADWEEK, was filed with the London Central Employment Tribunal on Wednesday (August 20) by Samujjal Purkayastha, a former product manager on Meta's Shops ads team. The document claims Meta artificially inflated performance metrics to push brands toward its fledgling ecommerce ad product.

The company's motivation, the complaint says, was in part to combat Apple's 2021 privacy changes that cut the troves of iOS tracking information that had long powered Meta's ad machine. Meta's former chief financial officer (CFO), David Wehner, said the changes would cost "on the order of $10 billion" in losses during the company's Q4 2021 earnings call. User purchases on Facebook or Instagram Shops pages would provide more first-party data, however. Purkayastha, who joined Meta (then Facebook) in 2020 as a product manager on the Facebook Artificial Intelligence Applied Research team, was reassigned to the Shops Ads team in March 2022 and remained at the company until Feb. 19, 2025, when he was terminated.

He alleged that during internal reviews in early 2024, Meta data scientists found the ROAS from Shops ads had been inflated by between 17% and 19%. This discrepancy stemmed from Meta counting shipping fees and taxes as part of a sale, even though that money never went to merchants, he alleged. The company's other ad products exclude those figures, in line with competitors like Google, the complaint reads. Without including the fees and taxes, Shops ads performed no better than Meta's traditional ads, Purkayastha claimed. "This was significant," the complaint reads. "In addition to the ROAS performance metric being overstated by nearly a fifth, it meant that, rather than having exceeded our primary target, the Shops Ads team had in fact missed it once the figure was reduced to take account of the artificial inflation."
Purkayastha raised these concerns with senior leadership in multiple meetings between 2022 and 2024, and is now seeking interim relief through his employment tribunal filing to have his former position reinstated.

A Meta spokesperson told ADWEEK the company is "actively defending these proceedings," adding that "allegations related to the integrity of our advertising practices are without merit and we have full confidence in our performance review processes."
AI

Harvard Dropouts To Launch 'Always On' AI Smart Glasses That Listen, Record Every Conversation

Two Harvard dropouts are launching Halo X, a $249 pair of AI-powered smart glasses that continuously listen, record, and transcribe conversations while displaying real-time information to the wearer. "Our goal is to make glasses that make you super intelligent the moment you put them on," said AnhPhu Nguyen, co-founder of Halo. Co-founder Caine Ardayfio said the glasses "give you infinite memory."

"The AI listens to every conversation you have and uses that knowledge to tell you what to say ... kinda like IRL Cluely," Ardayfio told TechCrunch. "If somebody says a complex word or asks you a question, like, 'What's 37 to the third power?' or something like that, then it'll pop up on the glasses." From the report: Ardayfio and Nguyen have raised $1 million to develop the glasses, led by Pillar VC, with support from Soma Capital, Village Global, and Morningside Venture. The glasses will be priced at $249 and will be available for preorder starting Wednesday. Ardayfio called the glasses "the first real step towards vibe thinking."

The two Ivy League dropouts, who have since moved into their own version of the Hacker Hostel in the San Francisco Bay Area, recently caused a stir after developing a facial-recognition app for Meta's smart Ray-Ban glasses to prove that the tech could be used to dox people. With Halo positioned as a potential early competitor to Meta's smart glasses, Ardayfio said that Meta, given its history of security and privacy scandals, has had to rein in its product in ways that Halo can ultimately capitalize on. [...]

For now, Halo X glasses only have a display and a microphone, but no camera, although the two are exploring the possibility of adding one to a future model. Users still need to have their smartphones handy to help power the glasses and get "real time info prompts and answers to questions," per Nguyen. The glasses, which are manufactured by another company that the startup didn't name, are tethered to an accompanying app on the owner's phone, where the glasses essentially outsource the computing since they don't have enough power to do it on the device itself. Under the hood, the smart glasses use Google's Gemini and Perplexity as their chatbot engines, according to the two co-founders. Gemini is better for math and reasoning, whereas they use Perplexity to scrape the internet, they said.
Security

Male-Oriented App 'TeaOnHer' Also Had Security Flaws That Could Leak Men's Driver's License Photos (techcrunch.com)

The women-only dating-advice app Tea "has been hit with 10 potential class action lawsuits in federal and state court," NBC News reported last week, "after a data breach led to the leak of thousands of selfies, ID photos and private conversations online." The suits could result in Tea having to pay tens of millions of dollars in damages to the plaintiffs, which could be catastrophic for the company, an expert told NBC News... One of the suits lists the right-wing online discussion board 4chan and the social platform X as defendants, alleging that they allowed bad actors to spread users' personal information.
But meanwhile, a new competing app for men called "TeaOnHer" has already been launched. And it was also found to have enormous security flaws, reports TechCrunch, that "exposed its users' personal information, including photos of their driver's licenses and other government-issued identity documents..." [W]hen we looked at TeaOnHer's public internet records, it had no meaningful information other than a single subdomain, appserver.teaonher.com. When we opened this page in our browser, what loaded was the landing page for TeaOnHer's API (for the curious, we uploaded a copy here)... It was on this landing page that we found the exposed email address and plaintext password (which wasn't that far off from "password") for [TeaOnHer developer Xavier] Lampkin's account to access the TeaOnHer "admin panel"... This API landing page included an endpoint called /docs, which contained the API's auto-generated documentation (powered by a product called Swagger UI) that contained the full list of commands that can be performed on the API [including administrator commands to return user data]...

While it's not uncommon for developers to publish their API documentation, the problem here was that some API requests could be made without any authentication — no passwords or credentials were needed...

The records returned from TeaOnHer's server contained users' unique identifiers within the app (essentially a string of random letters and numbers), their public profile screen name, and self-reported age and location, along with their private email address. The records also included web address links containing photos of the users' driver's licenses and corresponding selfies. Worse, these photos of driver's licenses, government-issued IDs, and selfies were stored in an Amazon-hosted S3 cloud server set as publicly accessible to anyone with their web addresses. This public setting lets anyone with a link to someone's identity documents open the files from anywhere with no restrictions...
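The core failure TechCrunch describes is API endpoints that hand back records without any credentials. As a rough illustration of the fix, here is a deny-by-default handler sketch; the endpoint name, token scheme, and record shape are all hypothetical, not TeaOnHer's actual API:

```python
import hashlib
import hmac

# Hypothetical token store: map each user to a hash of their API token,
# never the token itself.
VALID_TOKENS = {"demo-user": hashlib.sha256(b"not-a-real-token").hexdigest()}

def authorized(user: str, token: str) -> bool:
    """Check a presented token against the stored hash in constant time."""
    expected = VALID_TOKENS.get(user)
    if expected is None:
        return False
    presented = hashlib.sha256(token.encode()).hexdigest()
    return hmac.compare_digest(expected, presented)

def get_user_record(user: str, token: str) -> dict:
    # Deny by default: no record leaves the server without a valid token.
    if not authorized(user, token):
        return {"error": "unauthorized", "status": 401}
    return {"user": user, "status": 200}
```

The point is the default: every handler refuses the request unless authentication succeeds, so an unauthenticated caller who discovers the endpoint (via exposed docs or otherwise) gets a 401 rather than a driver's-license photo URL.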

The bugs were so easy to find that it would be sheer luck if nobody malicious found them before we did. We asked, but Lampkin would not say if he has the technical ability, such as logs, to determine if anyone had used (or misused) the API at any time to gain access to users' verification documents, such as by scraping web addresses from the API. In the days since our report to Lampkin, the API landing page has been taken down, along with its documentation page, and it now displays only the state of the server that the TeaOnHer API is running on as "healthy."

The flaws were discovered while TeaOnHer was the #2 free app in the Apple App Store, the article points out. And while these flaws "appear to be resolved," the article notes a larger issue: "Shoddy coding and security flaws highlight the ongoing privacy risks inherent in requiring users to submit sensitive information to use apps and websites."

TeaOnHer also had another verification issue. A female reporter at Cosmopolitan noted Friday that TeaOnHer "lets you browse through profiles before your verifications are complete. So literally anyone (like myself) can read reviews..."
Android

Android's pKVM Becomes First Globally Certified Software to Achieve SESIP Level 5 Security Certification (googleblog.com)

Protected KVM (pKVM), the hypervisor powering the Android Virtualization Framework, has officially achieved SESIP Level 5 certification (in testing by cybersecurity lab Dekra against the TrustCB SESIP scheme).

Google's security blog called the certification "a watershed moment" and a "new benchmark" for both open-source security and the future of consumer electronics. "It provides a single, open-source, and exceptionally high-quality firmware base that all device manufacturers can build upon." This makes pKVM the first software security system designed for large-scale deployment in consumer electronics to meet this assurance bar. The implications for the future of secure mobile technology are profound. With this level of security assurance, Android is now positioned to securely support the next generation of high-criticality isolated workloads. This includes vital features, such as on-device AI workloads that can operate on ultra-personalized data, with the highest assurances of privacy and integrity...

Achieving Security Evaluation Standard for IoT Platforms (SESIP) Level 5 is a landmark because it incorporates AVA_VAN.5, the highest level of vulnerability analysis and penetration testing under the ISO 15408 (Common Criteria) standard. A system certified to this level has been evaluated to be resistant to highly skilled, knowledgeable, well-motivated, and well-funded attackers who may have insider knowledge and access. This certification is the cornerstone of the next generation of Android's multi-layered security strategy. Many of the TEEs (Trusted Execution Environments) used in the industry have not been formally certified or have only achieved lower levels of security assurance... Looking ahead, Android device manufacturers will be required to use isolation technology that meets this same level of security for various security operations that the device relies on. Protected KVM ensures that every user can benefit from a consistent, transparent, and verifiably secure foundation.

"This achievement represents just one important aspect of the immense, multi-year dedication from the Linux and KVM developer communities and multiple engineering teams at Google developing pKVM and AVF," the post concludes.

"We look forward to seeing the open-source community and Android ecosystem continue to build on this foundation, delivering a new era of high-assurance mobile technology for users."
AI

Illinois Bans AI Therapy, Joins Two Other States in Regulating Chatbots (msn.com)

"Illinois last week banned the use of artificial intelligence in mental health therapy," reports the Washington Post, "joining a small group of states regulating the emerging use of AI-powered chatbots for emotional support and advice." Licensed therapists in Illinois are now forbidden from using AI to make treatment decisions or communicate with clients, though they can still use AI for administrative tasks. Companies are also not allowed to offer AI-powered therapy services — or advertise chatbots as therapy tools — without the involvement of a licensed professional.

Nevada passed a similar set of restrictions on AI companies offering therapy services in June, while Utah also tightened regulations for AI use in mental health in May but stopped short of banning the use of AI.

The bans come as experts have raised alarms about the potential dangers of therapy with AI chatbots that haven't been reviewed by regulators for safety and effectiveness. Already, cases have emerged of chatbots engaging in harmful conversations with vulnerable people — and of users revealing personal information to chatbots without realizing their conversations were not private.

Some AI and psychiatry experts said they welcomed legislation to limit the use of an unpredictable technology in a delicate, human-centric field.

News

VP.NET Publishes SGX Enclave Code: Zero-Trust Privacy You Can Actually Verify

VP.NET has released the source code for its Intel SGX enclave on GitHub, allowing anyone to build the enclave and verify its mrenclave hash matches what's running on the servers. This takes "don't trust, verify" from marketing to reality, making privacy claims testable all the way down to hardware-enforced execution.
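The verification step boils down to reproducing the build and comparing measurements. As a loose sketch of that comparison (using SHA-256 over the binary as a stand-in for SGX's real MRENCLAVE computation, which covers the enclave's pages and layout, not just the file bytes):

```python
import hashlib
import hmac

def measure(enclave_bytes: bytes) -> str:
    # Stand-in measurement: hash the enclave binary. The real MRENCLAVE is
    # computed by the SGX hardware over the enclave's initial memory state.
    return hashlib.sha256(enclave_bytes).hexdigest()

def matches_published(enclave_bytes: bytes, published_hex: str) -> bool:
    """Compare a locally computed measurement against the published value.

    Constant-time comparison avoids leaking how much of the hash matched.
    """
    return hmac.compare_digest(measure(enclave_bytes), published_hex)
```

If the hash computed from a local, reproducible build matches the value attested by the server, the user has evidence the server runs the published code; any divergence in the build breaks the match.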

A move like this could set a new benchmark for transparency in privacy tech.
Privacy

Proton Begins Shifting Infrastructure Outside of Switzerland Ahead of Surveillance Legislation (techradar.com)

Proton has begun relocating infrastructure outside Switzerland ahead of proposed surveillance legislation requiring VPNs and messaging services with over 5,000 users to identify customers and retain data for six months.

The company's AI chatbot Lumo became the first product hosted on German servers rather than Swiss infrastructure. CEO Andy Yen confirmed the decision and a spokesperson told TechRadar that the company isn't fully exiting Switzerland.

In a blog post about the launch of Lumo last month, Proton's Head of Anti-Abuse and Account Security, Eamonn Maguire, explained that the company had decided to invest outside Switzerland for fear of the looming legal changes. He wrote: "Because of legal uncertainty around Swiss government proposals to introduce mass surveillance -- proposals that have been outlawed in the EU -- Proton is moving most of its physical infrastructure out of Switzerland. Lumo will be the first product to move."

The proposed amendments to Switzerland's Ordinance on the Surveillance of Correspondence by Post and Telecommunications would also mandate decryption capabilities for providers holding encryption keys. Proton is developing additional facilities in Norway.
AI

Google Releases Pint-Size Gemma Open AI Model (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Google has announced a tiny version of its Gemma open model designed to run on local devices. Google says the new Gemma 3 270M can be tuned in a snap and maintains robust performance despite its small footprint. [...] Running an AI model locally has numerous benefits, including enhanced privacy and lower latency. Gemma 3 270M was designed with these kinds of use cases in mind. In testing with a Pixel 9 Pro, the new Gemma was able to run 25 conversations on the Tensor G4 chip and use just 0.75 percent of the device's battery. That makes it by far the most efficient Gemma model.

Developers shouldn't expect the same level of performance as a multi-billion-parameter model, but Gemma 3 270M has its uses. Google used the IFEval benchmark, which tests a model's ability to follow instructions, to show that its new model punches above its weight. Gemma 3 270M hits a score of 51.2 percent in this test, which is higher than other lightweight models that have more parameters. The new Gemma falls predictably short of billion-plus-parameter models like Llama 3.2, but it gets closer than you might think for having just a fraction of the parameters.

Google claims Gemma 3 270M is good at following instructions out of the box, but it expects developers to fine-tune the model for their specific use cases. Due to the small parameter count, that process is fast and low-cost, too. Google sees the new Gemma being used for tasks like text classification and data analysis, which it can accomplish quickly and without heavy computing requirements. The model weights are freely available, and there's no separate commercial licensing agreement, so developers can modify, publish, and deploy Gemma 3 270M derivatives in their tools.
You can download Gemma 3 270M from Hugging Face and Kaggle in both pre-trained and instruction-tuned versions.
Privacy

Data Brokers Are Hiding Their Opt-Out Pages From Google Search (wired.com)

Data brokers are required by California law to provide ways for consumers to request their data be deleted. But good luck finding them. From a report: More than 30 of the companies, which collect and sell consumers' personal information, hid their deletion instructions from Google, according to a review by The Markup and CalMatters of hundreds of broker websites. This creates one more obstacle for consumers who want to delete their data.

Many of the pages containing the instructions, listed in an official state registry, use code to tell search engines to remove the page entirely from search results. Popular tools like Google and Bing respect the code by excluding pages when responding to users. Data brokers nationwide must register in California under the state's Consumer Privacy Act, which allows Californians to request that their information be removed, that it not be sold, or that they get access to it. After reviewing the websites of all 499 data brokers registered with the state, we found 35 had code to stop certain pages from showing up in searches.
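The "code to tell search engines to remove the page" is typically a robots meta directive. A small stdlib sketch of detecting it (a simplified check for illustration, not The Markup's actual methodology):

```python
from html.parser import HTMLParser

class NoindexDetector(HTMLParser):
    """Flags a <meta name="robots" content="noindex"> directive, the common
    mechanism for telling Google and Bing to drop a page from results."""

    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attr = {k.lower(): (v or "").lower() for k, v in attrs}
        if attr.get("name") in ("robots", "googlebot") and "noindex" in attr.get("content", ""):
            self.noindex = True

def hides_from_search(html_text: str) -> bool:
    parser = NoindexDetector()
    parser.feed(html_text)
    return parser.noindex
```

A crawler-style audit like the one described would fetch each registered broker's deletion page and run a check of this kind (plus the `X-Robots-Tag` HTTP header, which this sketch omits) to count pages hidden from search.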

AI

Google's Gemini AI Will Get More Personalized By Remembering Details Automatically

An anonymous reader quotes a report from The Verge: Google is rolling out an update for Gemini that will allow the AI chatbot to "remember" your past conversations without prompting. With the setting turned on, Gemini will automatically recall your "key details and preferences" and use them to personalize its output.

This expands upon an update that Google introduced last year, which lets you ask Gemini to "remember" your personal preferences and interests. Now, Gemini won't need prompting to recall this information. As an example, Google says if you've used Gemini to get ideas for a YouTube channel surrounding Japanese culture in the past, then the AI chatbot might suggest creating content about trying Japanese food if you ask it to suggest new video ideas in the future. [...]

Google will turn on this feature by default, but you can disable it by heading to your settings in the Gemini app and selecting Personal Context. From there, toggle off the Your past chats with Gemini option. Google will roll out this feature to its Gemini 2.5 Pro model in "select countries" starting today, before eventually bringing it to more locations and its Gemini 2.5 Flash model.
Google will also rename its "Gemini Apps Activity" setting to "Keep Activity," which will use "a sample" of your file and photo uploads to Gemini to "help improve Google services for everyone" starting on September 2nd. If you've disabled the previous setting, the new "Keep Activity" setting will be disabled too.

There's also a new "temporary chats" feature in Gemini to preserve privacy. "Temporary chats won't appear in your recent chats or your Keep Activity setting," notes The Verge. "Gemini also won't use these chats to personalize future conversations, nor will Google use them to train its AI models. Google will only save these conversations for 72 hours."
Privacy

New York Sues Zelle Parent Company, Alleging It Enabled Fraud (cnbc.com)

New York Attorney General Letitia James has sued Zelle's parent company, Early Warning Services, alleging it knowingly enabled over $1 billion in fraud from 2017 to 2023 by failing to implement basic safeguards. CNBC reports: "EWS knew from the beginning that key features of the Zelle network made it uniquely susceptible to fraud, and yet it failed to adopt basic safeguards to address these glaring flaws or enforce any meaningful anti-fraud rules on its partner banks," James' office said in the release. The lawsuit alleges that Zelle became a "hub for fraudulent activity" because the registration process lacked verification steps and that EWS and its partner banks knew "for years" that fraud was spreading and did not take actionable steps to resolve it, according to the press release.

James is seeking restitution and damages, in addition to a court order mandating that Zelle puts anti-fraud measures in place. "No one should be left to fend for themselves after falling victim to a scam," James said in the release. "I look forward to getting justice for the New Yorkers who suffered because of Zelle's security failures."
A Zelle spokesperson called the lawsuit a "political stunt to generate press" and a "copycat" of the CFPB lawsuit, which was dropped in March.

"Despite the Attorney General's assertions, they did not conduct an investigation of Zelle," the spokesperson said. "Had they conducted an investigation, they would have learned that more than 99.95 percent of all Zelle transactions are completed without any report of scam or fraud -- which leads the industry."
The Courts

Russia Is Suspected To Be Behind Breach of Federal Court Filing System (nytimes.com)

ole_timer shares a report from the New York Times: Investigators have uncovered evidence that Russia is at least partly responsible for a recent hack of the computer system that manages federal court documents, including highly sensitive records with information that could reveal sources and people charged with national security crimes, according to several people briefed on the breach. It is not clear what entity is responsible, whether an arm of Russian intelligence might be behind the intrusion or if other countries were also involved, which some of the people familiar with the matter described as a yearslong effort to infiltrate the system. Some of the searches included midlevel criminal cases in the New York City area and several other jurisdictions, with some cases involving people with Russian and Eastern European surnames.

Administrators with the court system recently informed Justice Department officials, clerks and chief judges in federal courts that "persistent and sophisticated cyber threat actors have recently compromised sealed records," according to an internal department memo reviewed by The New York Times. The administrators also advised those officials to quickly remove the most sensitive documents from the system. "This remains an URGENT MATTER that requires immediate action," officials wrote, referring to guidance that the Justice Department had issued in early 2021 after the system was first infiltrated. Documents related to criminal activity with an overseas tie, across at least eight district courts, were initially believed to have been targeted. Last month, the chief judges of district courts across the country were quietly warned to move those kinds of cases off the regular document-management system, according to officials briefed on the request. They were initially told not to discuss the matter with other judges in their districts.

AI

The Dead Need Right To Delete Their Data So They Can't Be AI-ified, Lawyer Says

Legal scholar Victoria Haneman argues that U.S. law should grant estates a time-limited right to delete a deceased person's data so they can't be recreated by AI without their consent. "Digital resurrection by or through AI requires the personal data of the deceased, and the amount of data that we are storing online is increasing exponentially with each passing year," writes Haneman in an article published earlier this year in the Boston College Law Review. "It has been said that data is the new uranium, extraordinarily valuable and potentially dangerous. A right to delete will provide the decedent with a time-limited right for deletion of personal data." The Register reports: A living person may have some say on the matter through the control of personal digital documents and correspondence. But a dead person can't object, and US law doesn't offer the dead much data protection in terms of privacy law, property law, intellectual property law, or criminal law. The Revised Uniform Fiduciary Access to Digital Assets Act (RUFADAA), a law developed to help fiduciaries deal with digital files of the dead or incapacitated, can come into play. But Haneman points out that most people die intestate (without a will), leaving matters up to tech platforms. Facebook's response to dead users is to allow anyone to request the memorialization of an account, which keeps posts online. As for RUFADAA, it does little to address digital resurrection, says Haneman.

The right to publicity, which provides a private right of action against unauthorized commercial use of a person's name, image, or likeness, covers the dead in about 25 states, according to Haneman. But the monetization of publicity rights has proven to be problematic. Haneman says that there are some states where it's theoretically possible to be prosecuted for libeling or defaming the deceased, such as Idaho, Nevada, and Oklahoma, but adds that such prosecutions have declined because they tread upon the constitutional right to free expression. [...] A recent California law, the Delete Act, which took effect last year, is the first to offer a way for the living to demand the deletion of personal data from data brokers in one step. But according to Haneman, it's unclear whether the text of the law will be extended to cover the dead -- a possibility think tank Aspen Tech Policy Hub supports [PDF].

Haneman argues that a data deletion law for the dead would be grounded in laws governing human remains, where corpses receive protection against abuse despite being neither a person nor property. "The personal representative of the decedent has the right to destroy all physical letters and photographs saved by the decedent; merely storing personal information in the cloud should not grant societal archival rights," she argues. "A limited right of deletion within a twelve-month window balances the interests of society against the rights of the deceased."
Programming

'Hour of Code' Announces It's Now Evolving Into 'Hour of AI' (hourofcode.com)

Last month Microsoft pledged $4 billion (in cash and AI/cloud technology) to "advance" AI education in K-12 schools, community and technical colleges, and nonprofits (according to a blog post by Microsoft President Brad Smith). But in the launch event video, Smith also says it's time to "switch hats" from coding to AI, adding that "the last 12 years have been about the Hour of Code, but the future involves the Hour of AI."

Long-time Slashdot reader theodp writes: This sets the stage for Code.org CEO Hadi Partovi's announcement that his tech-backed nonprofit's [annual educational event] Hour of Code is being renamed to the Hour of AI... Explaining the pivot, Partovi says: "Computer science for the last 50 years has had a focal point around coding that's been — sort of like you learn computer science so that you create code. There's other things you learn, like data science and algorithms and cybersecurity, but the focal point has been coding.

"And we're now in a world where the focal point of computer science is shifting to AI... We all know that AI can write much of the code. You don't need to worry about where did the semicolons go, or did I close the parentheses or whatnot. The busy work of computer science is going to be done by the computer itself.

"The creativity, the thinking, the systems design, the engineering, the algorithm planning, the security concerns, privacy concerns, ethical concerns — those parts of computer science are going to be what remains with a focal point around AI. And what's going to be important is to make sure in education we give students the tools so they don't just become passive users of AI, but so that they learn how AI works."

Speaking to Microsoft's Smith, Partovi vows to redouble the nonprofit's policy work to "make this [AI literacy] a high school graduation requirement so that no student graduates school without at least a basic understanding of what's going to be part of the new liberal arts background [...] As you showed with your hat, we are renaming the Hour of Code to an Hour of AI."

Bug

UK Courts Service 'Covered Up' IT Bug That Lost Evidence (bbc.co.uk)

Bruce66423 shares a report from the BBC: The body running courts in England and Wales has been accused of a cover-up, after a leaked report found it took several years to react to an IT bug that caused evidence to go missing, be overwritten or appear lost. Sources within HM Courts & Tribunals Service (HMCTS) say that as a result, judges in civil, family and tribunal courts will have made rulings on cases when evidence was incomplete. The internal report, leaked to the BBC, said HMCTS did not know the full extent of the data corruption, including whether or how it had impacted cases, as it had not undertaken a comprehensive investigation. It also found judges and lawyers had not been informed, as HMCTS management decided it would be "more likely to cause more harm than good." HMCTS says its internal investigation found no evidence that "any case outcomes were affected as a result of these technical issues." However, the former head of the High Court's family division, Sir James Munby, told the BBC the situation was "shocking" and "a scandal." Bruce66423 comments: "Given the relative absence of such stories from the USA, should I congratulate you for better-quality software or for being better at covering up disasters?"
Security

Red Teams Jailbreak GPT-5 With Ease, Warn It's 'Nearly Unusable' For Enterprise (securityweek.com)

An anonymous reader quotes a report from SecurityWeek: Two different firms have tested the newly released GPT-5, and both find its security sadly lacking. After Grok-4 fell to a jailbreak in two days, GPT-5 fell in 24 hours to the same researchers. Separately, but almost simultaneously, red teamers from SPLX (formerly known as SplxAI) declare, "GPT-5's raw model is nearly unusable for enterprise out of the box. Even OpenAI's internal prompt layer leaves significant gaps, especially in Business Alignment."

NeuralTrust's jailbreak employed a combination of its own EchoChamber jailbreak and basic storytelling. "The attack successfully guided the new model to produce a step-by-step manual for creating a Molotov cocktail," claims the firm. The success in doing so highlights the difficulty all AI models have in providing guardrails against context manipulation. [...] "In controlled trials against gpt-5-chat," concludes NeuralTrust, "we successfully jailbroke the LLM, guiding it to produce illicit instructions without ever issuing a single overtly malicious prompt. This proof-of-concept exposes a critical flaw in safety systems that screen prompts in isolation, revealing how multi-turn attacks can slip past single-prompt filters and intent detectors by leveraging the full conversational context."

While NeuralTrust was developing its jailbreak designed to obtain instructions on how to create a Molotov cocktail (a common test to prove a jailbreak), and succeeding, SPLX was aiming its own red teamers at GPT-5. The results are just as concerning, suggesting the raw model is "nearly unusable." SPLX notes that obfuscation attacks still work. "One of the most effective techniques we used was a StringJoin Obfuscation Attack, inserting hyphens between every character and wrapping the prompt in a fake encryption challenge." [...] The red teamers went on to benchmark GPT-5 against GPT-4o. Perhaps unsurprisingly, SPLX concludes: "GPT-4o remains the most robust model under SPLX's red teaming, especially when hardened." The key takeaway from both NeuralTrust and SPLX is to approach the current and raw GPT-5 with extreme caution.
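The hyphen-insertion step SPLX describes is a trivial string transform, which is part of why such obfuscations are hard to filter. A minimal sketch of just that transform (the function name is illustrative; this is not SPLX's actual tooling, and the "fake encryption challenge" wrapper is omitted):

```python
def stringjoin_obfuscate(text: str) -> str:
    """Insert a hyphen between every character, as in the
    StringJoin obfuscation SPLX describes."""
    return "-".join(text)

print(stringjoin_obfuscate("hello"))  # h-e-l-l-o
```

Because the transformed text no longer matches keyword- or embedding-based blocklists verbatim, a model that can still "read through" the hyphens may comply with a request its input filter would otherwise have caught.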

United Kingdom

UK Secretly Allows Facial Recognition Scans of Passport, Immigration Databases (theregister.com) 25

An anonymous reader shares a report: Privacy groups report a surge in UK police facial recognition searches of databases that were secretly stocked with passport photos, without parliamentary oversight. Big Brother Watch says the UK government has allowed images from the country's passport and immigration databases to be made available to facial recognition systems, without informing the public or parliament.

The group claims the passport database contains around 58 million headshots of Brits, plus a further 92 million made available from sources such as the immigration database, visa applications, and more. By way of comparison, the Police National Database contains circa 20 million photos of those who have been arrested by, or are at least of interest to, the police.

Privacy

'Facial Recognition Tech Mistook Me For Wanted Man' (bbc.co.uk) 112

Bruce66423 shares a report from the BBC: A man who is bringing a High Court challenge against the Metropolitan Police after live facial recognition technology wrongly identified him as a suspect has described it as "stop and search on steroids." Shaun Thompson, 39, was stopped by police in February last year outside London Bridge Tube station. Privacy campaign group Big Brother Watch said the judicial review, due to be heard in January, was the first legal case of its kind against the "intrusive technology." The Met, which announced last week that it would double its live facial recognition technology (LFR) deployments, said it was removing hundreds of dangerous offenders and remained confident its use is lawful. LFR maps a person's unique facial features, and matches them against faces on watch-lists. [...]

Mr Thompson said his experience of being stopped had been "intimidating" and "aggressive." "Every time I come past London Bridge, I think about that moment. Every single time." He described how he had been returning home from a shift in Croydon, south London, with the community group Street Fathers, which aims to protect young people from knife crime. As he passed a white van, he said police approached him and told him he was a wanted man. "When I asked what I was wanted for, they said, 'that's what we're here to find out'." He said officers asked him for his fingerprints, but he refused, and he was let go only after about 30 minutes, after showing them a photo of his passport.

Mr Thompson says he is bringing the legal challenge because he is worried about the impact LFR could have on others, particularly if young people are misidentified. "I want structural change. This is not the way forward. This is like living in Minority Report," he said, referring to the science fiction film where technology is used to predict crimes before they're committed. "This is not the life I know. It's stop and search on steroids. I can only imagine the kind of damage it could do to other people if it's making mistakes with me, someone who's doing work with the community."
Bruce66423 comments: "I suspect a payout of 10,000 pounds for each false match that is acted on would probably encourage more careful use, perhaps with a second payout of 100,000 pounds if the same person is victimized again."
Privacy

Meta Eavesdropped On Period-Tracker App's Users, Jury Rules (sfgate.com) 101

A San Francisco jury ruled that Meta violated the California Invasion of Privacy Act by collecting sensitive data from users of the Flo period-tracking app without consent. "The plaintiff's lawyers who sued Meta are calling this a 'landmark' victory -- the tech company contends that the jury got it all wrong," reports SFGATE. From the report: The case goes back to 2021, when eight women sued Flo and a group of other tech companies, including Google and Facebook, now known as Meta. The stakes were extremely personal. Flo asked users about their sex lives, mental health and diets, and guided them through menstruation and pregnancy. Then, the women alleged, Flo shared pieces of that data with other companies. The claims were largely based on a 2019 Wall Street Journal story and a 2021 Federal Trade Commission investigation. Google, Flo and the analytics company Flurry, which was also part of the lawsuit, reached settlements with the plaintiffs, as is common in class action lawsuits about tech privacy. But Meta stuck it out through the entire trial and lost.

The case against Meta focused on its Facebook software development kit, which Flo added to its app and which is generally used for analytics and advertising services. The women alleged that between June 2016 and February 2019, Flo sent Facebook, through that kit, various records of "Custom App Events" -- such as a user clicking a particular button in the "wanting to get pregnant" section of the app. Their complaint also pointed to Facebook's terms for its business tools, which said the company used so-called "event data" to personalize ads and content.

In a 2022 filing (PDF), the tech giant admitted that Flo used Facebook's kit during this period and that the app sent data connected to "App Events." But Meta denied receiving intimate information about users' health. Nonetheless, the jury ruled (PDF) against Meta. Along with the eavesdropping decision, the group determined that Flo's users had a reasonable expectation they weren't being overheard or recorded, as well as ruling that Meta didn't have consent to eavesdrop or record. The unanimous verdict was that the massive company violated the California Invasion of Privacy Act.
The jury's ruling could impact over 3.7 million U.S. users who registered between November 2016 and February 2019, with updates to be shared via email and a case website. The exact compensation from the trial or potential settlements remains uncertain.
