United Kingdom

New Digital ID Will Be Mandatory To Work in the UK (bbc.com) 80

Digital ID will be mandatory in order to work in the UK, as part of plans to tackle illegal migration. From a report: Sir Keir Starmer said the new digital ID scheme would make it tougher to work in the UK illegally and offer "countless benefits" to citizens. However, opposition parties argued the proposals would not stop people crossing the Channel in small boats.

The prime minister set out his plans in a broader speech to a gathering of world leaders, in which he said it had been "too easy" for people to work illegally in the UK because the centre-left had been "squeamish" about saying things that were "clearly true."

Addressing the Global Progressive Action Conference in London - attended by politicians including Australian Prime Minister Anthony Albanese and Canadian Prime Minister Mark Carney - Sir Keir said it was time to "look ourselves in the mirror and recognise where we've allowed our parties to shy away from people's concerns."

"It is not compassionate left-wing politics to rely on labour that exploits foreign workers and undercuts fair wages," he said. "The simple fact is that every nation needs to have control over its borders. We do need to know who is in our country."

Privacy

Neon Goes Dark After Exposing Users' Phone Numbers, Call Recordings, Transcripts (techcrunch.com) 29

An anonymous reader quotes a report from TechCrunch: A viral app called Neon, which offers to record your phone calls and pay you for the audio so it can sell that data to AI companies, has rapidly risen to the ranks of the top-five free iPhone apps since its launch last week. The app already has thousands of users and was downloaded 75,000 times yesterday alone, according to app intelligence provider Appfigures. Neon pitches itself as a way for users to make money by providing call recordings that help train, improve, and test AI models. But now Neon has gone offline, at least for now, after a security flaw allowed anyone to access the phone numbers, call recordings, and transcripts of any other user, TechCrunch can now report.

TechCrunch discovered the security flaw during a short test of the app on Thursday. We alerted the app's founder, Alex Kiam (who previously did not respond to a request for comment about the app), to the flaw soon after our discovery. Kiam told TechCrunch later Thursday that he took down the app's servers and began notifying users about pausing the app, but fell short of informing his users about the security lapse. The Neon app stopped functioning soon after we contacted Kiam.
TechCrunch found that the app's backend services didn't properly restrict access, allowing any logged-in user to request and receive data belonging to other users. This included call transcripts, raw call recordings, and sensitive metadata, including phone numbers, the date/time of calls, and their durations.
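The flaw described above is a classic broken-object-level-authorization (IDOR) bug: the server authenticated callers but never checked that the requested record belonged to them. A minimal sketch, with entirely hypothetical names and data shapes (this is not Neon's actual API), of the vulnerable pattern and its fix:

```python
# Illustrative in-memory store; in Neon's case this was backend call data.
RECORDINGS = {
    "rec-1": {"owner": "alice", "phone": "+1-555-0100", "transcript": "..."},
    "rec-2": {"owner": "bob", "phone": "+1-555-0199", "transcript": "..."},
}

def fetch_recording_broken(requesting_user: str, recording_id: str) -> dict:
    """Vulnerable version: the caller is authenticated, but ownership is
    never checked, so any logged-in user can read any recording."""
    return RECORDINGS[recording_id]

def fetch_recording_fixed(requesting_user: str, recording_id: str) -> dict:
    """Fixed version: authorization, not just authentication. The server
    verifies the record belongs to the requesting user before returning it."""
    record = RECORDINGS.get(recording_id)
    if record is None or record["owner"] != requesting_user:
        raise PermissionError("recording not found or not owned by caller")
    return record
```

The fix must live server-side; hiding record IDs in the client does nothing, since IDs can be guessed or enumerated.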
Microsoft

Microsoft Disables Some Cloud Services Used by Israel's Defense Ministry (msn.com) 119

Microsoft has disabled the Israeli Defense Ministry's access to certain services and subscriptions, after finding evidence that the ministry used the tech company's cloud services to surveil Gaza citizens. WSJ adds: The software company made the move after an internal investigation indicated Israel's Defense Ministry used Microsoft's Azure cloud services for surveillance, according to a person familiar with the matter. The company probe is ongoing. "As employees, we all have a shared interest in privacy protection, given the business value it creates by ensuring our customers can rely on our services with rock solid trust," Microsoft President Brad Smith said in a blog post Thursday on Microsoft's company website.

Smith said Microsoft's investigation was guided by the company's "longstanding protection of privacy as a fundamental right." Microsoft opened the probe after the Guardian, the British news organization, reported in August that Israel used Azure to store data on Gaza civilians and surveil them. The issue has been the source of protests at the company.

The Almighty Buck

Neon Pays Users To Record Their Phone Calls, Sell Data To AI Firms 34

Neon Mobile, now the No. 2 social networking app in Apple's U.S. App Store, pays users up to $30 per day to record their phone calls and sell the data to AI companies. The app claims to only capture one side of a call unless both parties use Neon, but its terms grant sweeping rights over recordings. TechCrunch reports: The app, Neon Mobile, pitches itself as a money-making tool offering "hundreds or even thousands of dollars per year" for access to your audio conversations. Neon's website says the company pays 30 cents per minute when you call other Neon users and up to $30 per day maximum for making calls to anyone else. The app also pays for referrals.

According to Neon's terms of service, the company's mobile app can capture users' inbound and outbound phone calls. However, Neon's marketing claims to only record your side of the call unless it's with another Neon user. That data is being sold to "AI companies," the company's terms of service state, "for the purpose of developing, training, testing, and improving machine learning models, artificial intelligence tools and systems, and related technologies."

Despite what Neon's privacy policy says, its terms include a very broad license to its user data, where Neon grants itself a: "...worldwide, exclusive, irrevocable, transferable, royalty-free, fully paid right and license (with the right to sublicense through multiple tiers) to sell, use, host, store, transfer, publicly display, publicly perform (including by means of a digital audio transmission), communicate to the public, reproduce, modify for the purpose of formatting for display, create derivative works as authorized in these Terms, and distribute your Recordings, in whole or in part, in any media formats and through any media channels, in each instance whether now known or hereafter developed." That leaves plenty of wiggle room for Neon to do more with users' data than it claims. The terms also include an extensive section on beta features, which have no warranty and may have all sorts of issues and bugs.
Peter Jackson, cybersecurity and privacy attorney at Greenberg Glusker, told TechCrunch: "Once your voice is over there, it can be used for fraud. Now, this company has your phone number and essentially enough information -- they have recordings of your voice, which could be used to create an impersonation of you and do all sorts of fraud."
The Internet

Europe's Cookie Law Messed Up the Internet. Brussels Wants To Fix It. (politico.eu) 102

In a bid to slash red tape, the European Commission wants to eliminate one of its peskiest laws: a 2009 tech rule that plastered the online world with pop-ups requesting consent to cookies. From a report: It's the kind of simplification ordinary Europeans can get behind. European rulemakers in 2009 revised a law called the e-Privacy Directive to require websites to get consent from users before loading cookies on their devices, unless the cookies are "strictly necessary" to provide a service. Fast forward to 2025 and the internet is full of consent banners that users have long learned to click away without thinking twice.
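The rule at the heart of the banner problem can be stated compactly: a cookie may be set without consent only if it is "strictly necessary" to provide the requested service; everything else needs opt-in. A toy sketch of that gate, with illustrative cookie categories (not drawn from the directive itself):

```python
# Hypothetical examples of cookies a site might deem "strictly necessary"
# (session handling, CSRF protection, shopping cart state).
STRICTLY_NECESSARY = {"session", "csrf_token", "cart"}

def may_set_cookie(name: str, user_consented: bool) -> bool:
    """Under the e-Privacy Directive's rule as described above: strictly
    necessary cookies are always allowed; all others require consent."""
    return name in STRICTLY_NECESSARY or user_consented
```

The Commission's complaint is not with the rule itself but with its consequence: since most sites want non-essential cookies, nearly every page load triggers a consent prompt.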

"Too much consent basically kills consent. People are used to giving consent for everything, so they might stop reading things in as much detail, and if consent is the default for everything, it's no longer perceived in the same way by users," said Peter Craddock, data lawyer with Keller and Heckman. Cookie technology is now a focal point of the EU executive's plans to simplify technology regulation. Officials want to present an "omnibus" text in December, scrapping burdensome requirements on digital companies. On Monday, it held a meeting with the tech industry to discuss the handling of cookies and consent banners.

The Almighty Buck

Vietnam Shuts Down Millions of Bank Accounts Over Biometric Rules (icobench.com) 23

Longtime Slashdot reader schwit1 shares a report from ICO Bench: As of September 1, 2025, banks across Vietnam are closing accounts deemed inactive or non-compliant with new biometric rules. Authorities estimate that more than 86 million accounts out of roughly 200 million are at risk if users fail to update their identity verification.

The State Bank of Vietnam has also introduced stricter thresholds for transactions:
- Facial authentication is mandatory for online transfers above 10 million VND (about $379).
- Cumulative daily transfers over 20 million VND ($758) also require biometric approval.
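The two thresholds above amount to a simple rule: a single transfer over 10 million VND, or a transfer that pushes the day's cumulative total past 20 million VND, requires facial authentication. A minimal sketch, with hypothetical function and field names (the report describes only the thresholds, not any API):

```python
SINGLE_LIMIT_VND = 10_000_000   # ~$379 per the report
DAILY_LIMIT_VND = 20_000_000    # ~$758 cumulative per day

def requires_biometric(amount_vnd: int, daily_total_so_far_vnd: int) -> bool:
    """True if this transfer triggers facial authentication under the
    State Bank of Vietnam thresholds summarized above."""
    return (amount_vnd > SINGLE_LIMIT_VND
            or daily_total_so_far_vnd + amount_vnd > DAILY_LIMIT_VND)
```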

The policy is part of the central bank's broader "cashless" strategy, aimed at combating fraud, identity theft, and deepfake-enabled scams. [...] While many Vietnamese citizens have updated their biometric data without issue, the measure has disproportionately affected foreign residents and expatriates who cannot easily return to local branches and dormant accounts that had been left inactive for years.
schwit1 highlights a post on X from Bitcoin expert and TFTC.io founder Marty Bent: "If users don't comply by the 30th they'll lose their money. This is why we bitcoin."
Privacy

DHS Has Been Collecting US Citizens' DNA for Years (wired.com) 63

Customs and Border Protection collected DNA from nearly 2,000 US citizens between 2020 and 2024 and sent the samples to the FBI's CODIS crime database, according to an analysis of newly released government data by Georgetown Law's Center on Privacy & Technology. The collection included approximately 95 minors, some as young as 14, and travelers never charged with crimes.

Congress never authorized DNA collection from citizens, children or civil detainees. DHS has contributed 2.6 million profiles to CODIS since 2020, with 97% collected under civil rather than criminal authority. The expansion followed a 2020 Justice Department rule that revoked DHS's waiver from DNA collection requirements. Former FBI director Christopher Wray testified in 2023 that monthly DNA submissions jumped from a few thousand to 92,000, creating a backlog of 650,000 unprocessed kits. Georgetown researchers project DHS could account for one-third of CODIS by 2034. The DHS Inspector General found in 2021 that the department lacked central oversight of DNA collection.
Biotech

Apple Watch's New High Blood Pressure Notifications Developed With AI (msn.com) 34

Many Apple Watches will soon be able to alert users about possible high blood pressure, reports Reuters — the culmination of six years of research and development: Apple used AI to sort through the data from 100,000 people enrolled in a heart and movement study it originally launched in 2019 to see whether it could find features in the signal data from the watch's main heart-related sensor that it could then match up with traditional blood pressure measurements, said Sumbul Ahmad Desai [Apple's vice president of health]. After multiple layers of machine learning, Apple came up with an algorithm that it then validated with a specific study of 2,000 participants.

Apple's privacy measures mean that "one of the ironies here is we don't get a lot of data" outside of the context of large-scale studies, Desai said. But data from those studies "gives us a sense of, scientifically, what are some other signals that are worth pulling the thread on ... those studies are incredibly powerful."

The feature, which received approval from the U.S. Food and Drug Administration, does not measure blood pressure directly, but notifies users that they may have high blood pressure and encourages them to use a cuff to measure it and talk to a doctor. Apple plans to roll out the feature to more than 150 countries, which Ami Bhatt, chief innovation officer of the American College of Cardiology, said could help people discover high blood pressure early and reduce related conditions such as heart attacks, strokes and kidney disease. Bhatt, who said her views are her own and do not represent those of the college, said Apple appears to have been careful to avoid false positives that might alarm users. But she said the iPhone maker should emphasize that the new feature is no substitute for traditional measurements and professional diagnosis.

The article notes that the feature will be available in Apple Watch Series 11 models that go on sale on Friday, as well as models back to the Apple Watch Series 9.
AI

ChatGPT Will Guess Your Age and Might Require ID For Age Verification 111

OpenAI is rolling out stricter safety measures for ChatGPT after lawsuits linked the chatbot to multiple suicides. "ChatGPT will now attempt to guess a user's age, and in some cases might require users to share an ID in order to verify that they are at least 18 years old," reports 404 Media. "We know this is a privacy compromise for adults but believe it is a worthy tradeoff," the company said in its announcement. "I don't expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decisionmaking," OpenAI CEO Sam Altman said on X. From the report: OpenAI introduced parental controls to ChatGPT earlier in September, but has now introduced new, stricter and more invasive security measures. In addition to attempting to guess or verify a user's age, ChatGPT will now also apply different rules to teens who are using the chatbot. "For example, ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting," the announcement said. "And, if an under-18 user is having suicidal ideation, we will attempt to contact the user's parents and if unable, will contact the authorities in case of imminent harm."

OpenAI's post explains that it is struggling to manage an inherent problem with large language models that 404 Media has tracked for several years. ChatGPT used to be a far more restricted chatbot that would refuse to engage users on a wide variety of issues the company deemed dangerous or inappropriate. Competition from other models, especially locally hosted and so-called "uncensored" models, and a political shift to the right which sees many forms of content moderation as censorship, has caused OpenAI to loosen those restrictions.

"We want users to be able to use our tools in the way that they want, within very broad bounds of safety," OpenAI said in its announcement. The position it seems to have landed on, given these recent stories about teen suicide, is summed up by another line from the announcement: "'Treat our adult users like adults' is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else's freedom."
America Online

Apollo Explores Sale of Internet Pioneer AOL (msn.com) 35

An anonymous reader shares a report: Apollo is exploring a sale of early internet darling AOL after receiving inbound interest in the business, according to people familiar with the matter. Any deal could value AOL at around $1.5 billion, the people said. It is also possible the talks won't result in any deal, they cautioned.

Apollo bought AOL in 2021 as part of a $5 billion deal to acquire that business and Yahoo from Verizon. AOL generates around $400 million in annual earnings before interest, taxes, depreciation and amortization, the people familiar with the matter said. Its main business lines include software for internet privacy and protection, and the AOL.com website and email domain.

Privacy

Google Releases VaultGemma, Its First Privacy-Preserving LLM 23

An anonymous reader quotes a report from Ars Technica: The companies seeking to build larger AI models have been increasingly stymied by a lack of high-quality training data. As tech firms scour the web for more data to feed their models, they could increasingly rely on potentially sensitive user data. A team at Google Research is exploring new techniques to make the resulting large language models (LLMs) less likely to 'memorize' any of that content. LLMs have non-deterministic outputs, meaning you can't exactly predict what they'll say. While the output varies even for identical inputs, models do sometimes regurgitate something from their training data -- if trained with personal data, the output could be a violation of user privacy. In the event copyrighted data makes it into training data (either accidentally or on purpose), its appearance in outputs can cause a different kind of headache for devs. Differential privacy can prevent such memorization by introducing calibrated noise during the training phase.

Adding differential privacy to a model comes with drawbacks in terms of accuracy and compute requirements. Until now, no one had quantified the degree to which it alters the scaling laws of AI models. The team worked from the assumption that model performance would be primarily affected by the noise-batch ratio, which compares the volume of randomized noise to the size of the original training data. By running experiments with varying model sizes and noise-batch ratios, the team established a basic understanding of differential privacy scaling laws, which is a balance between the compute budget, privacy budget, and data budget. In short, more noise leads to lower-quality outputs unless offset with a higher compute budget (FLOPs) or data budget (tokens). The paper details the scaling laws for private LLMs, which could help developers find an ideal noise-batch ratio to make a model more private.
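The "calibrated noise during the training phase" mentioned above is typically implemented as DP-SGD: clip each per-example gradient to a fixed norm, sum, and add Gaussian noise scaled to that norm. A minimal dependency-free sketch under those standard assumptions (parameter values are illustrative, not Google's):

```python
import random

def dp_sgd_step(per_example_grads, clip_norm=1.0, sigma=0.8, rng=None):
    """One DP-SGD update: clip each example's gradient to clip_norm,
    sum, add Gaussian noise with std sigma * clip_norm, average."""
    rng = rng or random.Random(0)
    clipped = []
    for g in per_example_grads:
        norm = sum(x * x for x in g) ** 0.5
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append([x * scale for x in g])
    batch = len(per_example_grads)
    dim = len(per_example_grads[0])
    summed = [sum(g[i] for g in clipped) for i in range(dim)]
    return [(s + rng.gauss(0.0, sigma * clip_norm)) / batch for s in summed]

def noise_batch_ratio(sigma, batch_size):
    """The quantity the scaling-law experiments vary: more noise per
    example hurts quality unless offset by larger batches (more data/compute)."""
    return sigma / batch_size
```

With sigma fixed, growing the batch shrinks the noise-batch ratio, which is the trade the article describes: privacy noise can be offset by spending more compute and tokens.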
The work the team has done here has led to a new Google model called VaultGemma, its first open-weight model trained with differential privacy to minimize memorization risks. It's built on the older Gemma 2 foundation and sized at 1 billion parameters; the company says it performs comparably to non-private models of similar size.

It's available now from Hugging Face and Kaggle.
Privacy

UK's MI5 'Unlawfully' Obtained Data From Former BBC Journalist (theguardian.com) 43

Bruce66423 shares a report from The Guardian: MI5 has conceded it "unlawfully" obtained the communications data of a former BBC journalist, in what was claimed to be an unprecedented admission from the security services. The BBC said it was a "matter of grave concern" that the agency had obtained communications data from the mobile phone of Vincent Kearney, a former BBC Northern Ireland home affairs correspondent. The admission came in a letter to the BBC and to Kearney, in relation to a tribunal examining claims that several reporters in Northern Ireland were subjected to unlawful scrutiny by the police. It related to work carried out by Kearney for a documentary into the independence of the Office of the Police Ombudsman for Northern Ireland (PONI). Kearney is now the northern editor at Irish broadcaster RTE.

In documents submitted to the Investigatory Powers Tribunal (IPT), MI5 conceded it obtained phone data from Kearney on two occasions in 2006 and 2009. Jude Bunting KC, representing Kearney and the BBC, told a hearing on Monday: "The MI5 now confirms publicly that in 2006 and 2009 MI5 obtained communications data in relation to Vincent Kearney." He said the security service accepted it had breached Kearney's rights under article 8 and article 10 of the European convention on human rights. They relate to the right to private correspondence and the right to impart information without interference from public authorities. "This appears to be the first time in any tribunal proceedings in which MI5 publicly accept interference with a journalist's communications data, and also publicly accept that they acted unlawfully in doing so," Bunting said. He claimed the concessions that it accessed the journalist's data represented "serious and sustained illegality on the part of MI5."
Bruce66423 comments: "The good news is that it's come out. The bad news is that it has taken 16 years to do so. The interesting question is whether there will be any meaningful consequences for individuals within MI5; there's a nice charge of 'malfeasance in public office' that can be used to get such individuals into a criminal court. Or will the outcome be like that of when the CIA hacked the US Senate's computers, lied about it, and nothing happened?"
United States

Airlines Sell 5 Billion Plane Ticket Records To the Government For Warrantless Searching (404media.co) 104

404 Media: A data broker owned by the country's major airlines, including American Airlines, United and Delta, is selling access to five billion plane ticketing records to the government for warrantless searching and monitoring of peoples' movements, including by the FBI, Secret Service, ICE, and many other agencies, according to a new contract and other records reviewed by 404 Media.

The contract provides new insight into the scale of the sale of passengers' data by the Airlines Reporting Corporation (ARC), the airlines-owned data broker. The contract shows ARC's data includes information related to more than 270 carriers and is sourced through more than 12,800 travel agencies. ARC has previously asked the government not to reveal to the public where this passenger data came from, which includes peoples' names, full flight itineraries, and financial details.

"Americans' privacy rights shouldn't depend on whether they bought their tickets directly from the airline or via a travel agency. ARC's sale of data to U.S. government agencies is yet another example of why Congress needs to close the data broker loophole by passing my bipartisan bill, the Fourth Amendment Is Not For Sale Act," Senator Ron Wyden told 404 Media in a statement.

Privacy

A Third of UK Firms Using 'Bossware' To Monitor Workers' Activity, Survey Reveals (theguardian.com) 23

A third of UK employers are using "bossware" technology to track workers' activity with the most common methods including monitoring emails and web browsing. From a report: Private companies are most likely to deploy in-work surveillance and one in seven employers are recording or reviewing screen activity, according to a UK-wide survey that estimates the extent of office snooping.

The findings, shared with the Guardian by the Chartered Management Institute (CMI), are based on responses from hundreds of UK managers and suggest there has been a recent growth in computerised work surveillance. In 2023, less than a fifth of people thought they were being monitored by an employer, the Information Commissioner's Office (ICO) found. The finding that about a third of managers report their organisations are monitoring workers' online activities on employer-owned devices is probably an underestimate, as roughly the same proportion said they don't know what tracking their organisations do.

Many monitoring systems are aimed at preventing insider threats and safeguarding sensitive information as well as detecting productivity dips. But the trend appears to be causing unease. A large minority of managers are opposed to the practice, saying it undermines trust with staff and invades their personal privacy, the CMI found.

Facebook

Facebook Begins Sending Settlement Payments from Cambridge Analytica Scandal Soon (cnn.com) 30

"Facebook users who filed a claim in parent company Meta's $725 million settlement related to the Cambridge Analytica scandal may soon get a payment," reports CNN, since "on August 27, the court ordered that settlement benefits be distributed." It's been over two years since Facebook users were able to file claims in Meta's December 2022 settlement. The class-action lawsuit began after the social media giant said in 2018 that as many as 87 million Facebook users' private information was obtained by data analytics firm Cambridge Analytica...

Meta was accused of allowing Cambridge Analytica and other third parties, including developers, advertisers and data brokers, to access private information about Facebook users. The social media giant was also accused of insufficiently managing third-party access to and use of user data. Meta did not admit wrongdoing as part of the settlement. Following the Cambridge Analytica incident, Facebook restricted third-party access to user data and "developed more robust tools" to inform users about how data is collected and shared, according to court documents...

Any US Facebook user who had an active account between May 24, 2007, and December 22, 2022, was eligible to file a claim, even if they have deleted the account. The deadline to file was August 25, 2023. Almost 29 million claims were filed and about 18 million were validated as of September 2023, according to Meta's response in a 2024 legal document... Payments will either be sent directly to the bank account provided on the claim form, or via PayPal, a virtual prepaid Mastercard, Venmo or Zelle. Unsuccessful or expired payments will receive a "second chance email" to update the payment method.

Security

Proton Mail Suspended Journalist Accounts At Request of Cybersecurity Agency (theintercept.com) 77

An anonymous reader quotes a report from The Intercept: The company behind the Proton Mail email service, Proton, describes itself as a "neutral and safe haven for your personal data, committed to defending your freedom." But last month, Proton disabled email accounts belonging to journalists reporting on security breaches of various South Korean government computer systems following a complaint by an unspecified cybersecurity agency. After a public outcry, and after multiple weeks, the journalists' accounts were eventually reinstated -- but the reporters and editors involved still want answers on how and why Proton decided to shut down the accounts in the first place.

Martin Shelton, deputy director of digital security at the Freedom of the Press Foundation, highlighted that numerous newsrooms use Proton's services as alternatives to something like Gmail "specifically to avoid situations like this," pointing out that "While it's good to see that Proton is reconsidering account suspensions, journalists are among the users who need these and similar tools most." Newsrooms like The Intercept, the Boston Globe, and the Tampa Bay Times all rely on Proton Mail for emailed tip submissions. Shelton noted that perhaps Proton should "prioritize responding to journalists about account suspensions privately, rather than when they go viral." On Reddit, Proton's official account stated that "Proton did not knowingly block journalists' email accounts" and that the "situation has unfortunately been blown out of proportion."

The two journalists whose accounts were disabled were working on an article published in the August issue of the long-running hacker zine Phrack. The story described how a sophisticated hacking operation -- what's known in cybersecurity parlance as an APT, or advanced persistent threat -- had wormed its way into a number of South Korean computer networks, including those of the Ministry of Foreign Affairs and the military Defense Counterintelligence Command, or DCC. The journalists, who published their story under the names Saber and cyb0rg, describe the hack as being consistent with the work of Kimsuky, a notorious North Korean state-backed APT sanctioned by the U.S. Treasury Department in 2023. As they pieced the story together, emails viewed by The Intercept show that the authors followed cybersecurity best practices and conducted what's known as responsible disclosure: notifying affected parties that a vulnerability has been discovered in their systems prior to publicizing the incident.
Phrack said the account suspensions created a "real impact to the author. The author was unable to answer media requests about the article." Phrack noted that the co-authors were already working with affected South Korean organizations on responsible disclosure and system fixes. "All this was denied and ruined by Proton," Phrack stated.

Phrack editors said that the incident leaves them "concerned what this means to other whistleblowers or journalists. The community needs assurance that Proton does not disable accounts unless Proton has a court order or the crime (or ToS violation) is apparent."
Music

Spotify Peeved After 10,000 Users Sold Data To Build AI Tools (arstechnica.com) 17

An anonymous reader quotes a report from Ars Technica: For millions of Spotify users, the "Wrapped" feature -- which crunches the numbers on their annual listening habits -- is a highlight of every year's end, ever since it debuted in 2015. NPR once broke down exactly why our brains find the feature so "irresistible," while Cosmopolitan last year declared that sharing Wrapped screenshots of top artists and songs had by now become "the ultimate status symbol" for tens of millions of music fans. It's no surprise then that, after a decade, some Spotify users who are especially eager to see Wrapped evolve are no longer willing to wait to see if Spotify will ever deliver the more creative streaming insights they crave.

With the help of AI, these users expect that their data can be more quickly analyzed to potentially uncover overlooked or never-considered patterns that could offer even more insights into what their listening habits say about them. Imagine, for example, accessing a music recap that encapsulates a user's full listening history -- not just their top songs and artists. With that unlocked, users could track emotional patterns, analyzing how their music tastes reflected their moods over time and perhaps helping them adjust their listening habits to better cope with stress or major life events. And for users particularly intrigued by their own data, there's even the potential to use AI to cross data streams from different platforms and perhaps understand even more about how their music choices impact their lives and tastes more broadly.

Likely just as appealing as gleaning deeper personal insights, though, users could also potentially build AI tools to compare listening habits with their friends. That could lead to nearly endless fun for the most invested music fans, where AI could be tapped to assess all kinds of random data points, like whose breakup playlists are more intense or who really spends the most time listening to a shared favorite artist. In pursuit of supporting developers offering novel insights like these, more than 18,000 Spotify users have joined "Unwrapped," a collective launched in February that allows them to pool and monetize their data.

Voting as a group through the decentralized data platform Vana -- which Wired profiled earlier this year -- these users can elect to sell their dataset to developers who are building AI tools offering fresh ways for users to analyze streaming data in ways that Spotify likely couldn't or wouldn't. In June, the group made its first sale, with 99.5 percent of members voting yes. Vana co-founder Anna Kazlauskas told Ars that the collective -- at the time about 10,000 members strong -- sold a "small portion" of its data (users' artist preferences) for $55,000 to Solo AI. While each Spotify user only earned about $5 in cryptocurrency tokens -- which Kazlauskas suggested was not "ideal," wishing the users had earned about "a hundred times" more -- she said the deal was "meaningful" in showing Spotify users that their data "is actually worth something."
Spotify responded to the collective by citing both trademark and policy violations. The company sent a letter to Unwrapped developers, warning that the project's name may infringe on Spotify's Wrapped branding, and that Unwrapped breaches developer terms. Specifically, Spotify objects to Unwrapped's use of platform data for AI/ML training and facilitating user data sales.

"Spotify honors our users' privacy rights, including the right of portability," Spotify's spokesperson said. "All of our users can receive a copy of their personal data to use as they see fit. That said, UnwrappedData.org is in violation of our Developer Terms which prohibit the collection, aggregation, and sale of Spotify user data to third parties."

Unwrapped says it plans to defend users' right to "access, control, and benefit from their own data," while providing reassurances that it will "respect Spotify's position as a global music leader."
AI

AI-generated Medical Data Can Sidestep Usual Ethics Review, Universities Say (nature.com) 38

An anonymous reader shares a report: Medical researchers at some institutions in Canada, the United States and Italy are using data created by artificial intelligence (AI) from real patient information in their experiments without the need for permission from their institutional ethics boards, Nature has learnt.

To generate what is called 'synthetic data', researchers train generative AI models using real human medical information, then ask the models to create data sets with statistical properties that represent, but do not include, human data.
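The core idea can be illustrated with a deliberately minimal sketch: capture the statistical properties of a (fabricated, stand-in) patient table, then sample new records from those statistics alone. Real pipelines use far more sophisticated generative models; the data and feature names below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for real patient records (rows = patients; columns = numeric
# features such as age, systolic BP, cholesterol). Entirely fabricated.
real = rng.normal(loc=[50, 120, 200], scale=[12, 15, 30], size=(500, 3))

# "Train" on the real data: capture first- and second-order statistics.
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Generate synthetic records that share those statistics but correspond
# to no actual patient.
synthetic = rng.multivariate_normal(mu, cov, size=500)

# The synthetic cohort's summary statistics track the real cohort's.
print(np.round(mu, 1))
print(np.round(synthetic.mean(axis=0), 1))
```

This toy version preserves only means and covariances; the generative models the article describes aim to reproduce much richer joint distributions while still emitting no actual patient record.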

Typically, when research involves human data, an ethics board must review how studies affect participants' rights, safety, dignity and well-being. However, institutions including the IRCCS Humanitas Research Hospital in Milan, Italy, the Children's Hospital of Eastern Ontario (CHEO) in Ottawa and the Ottawa Hospital, both in Canada, and Washington University School of Medicine (WashU Medicine) in St. Louis, Missouri, have waived these requirements for research involving synthetic data.

The reasons the institutions use to justify this decision differ. However, the potential benefits of using synthetic data include protecting patient privacy, being more easily able to share data between sites and speeding up research, says Khaled El Emam, a medical AI researcher at the CHEO Research Institute and the University of Ottawa.

Encryption

Swiss Government Looks To Undercut Privacy Tech, Stoking Fears of Mass Surveillance (therecord.media) 31

The Swiss government could soon require service providers with more than 5,000 users to collect government-issued identification, retain subscriber data for six months and, in many cases, disable encryption. From a report: The proposal, which is not subject to parliamentary approval, has alarmed privacy and digital-freedoms advocates worldwide because of how it will destroy anonymity online, including for people located outside of Switzerland. A large number of virtual private network (VPN) companies and other privacy-preserving firms are headquartered in the country because it has historically had liberal digital privacy laws alongside its famously discreet banking ecosystem.

Proton, which offers secure and end-to-end encrypted email along with an ultra-private VPN and cloud storage, announced on July 23 that it is moving most of its physical infrastructure out of Switzerland due to the proposed law. The company is investing more than $117 million in the European Union, the announcement said, and plans to help develop a "sovereign EuroStack for the future of our home continent." Switzerland is not a member of the EU. Proton said the decision was prompted by the Swiss government's attempt to "introduce mass surveillance."

United States

The US Is Now the Largest Investor In Commercial Spyware (arstechnica.com) 19

An anonymous reader quotes a report from Wired: The United States has emerged as the largest investor in commercial spyware -- a global industry that has enabled the covert surveillance of journalists, human rights defenders, politicians, diplomats, and others, posing grave threats to human rights and national security. In 2024, 20 new US-based spyware investors were identified, bringing the total number of American backers of this technology to 31. This growth has largely outpaced other major investing countries such as Israel, Italy, and the United Kingdom, according to a new report published today by the Atlantic Council.

The study surveyed 561 entities across 46 countries between 1992 and 2024, identifying 34 new investors. This brings the total to 128, up from 94 in the dataset published last year. The number of identified investors in the EU Single Market, plus Switzerland, stands at 31, with Italy -- a key spyware hub -- accounting for the largest share at 12. Investors based in Israel number 26. US-based investors include major hedge funds D.E. Shaw & Co. and Millennium Management, prominent trading firm Jane Street, and mainstream financial-services company Ameriprise Financial -- all of which, according to the Atlantic Council, have channeled funds to Israeli lawful-interception software provider Cognyte, a company allegedly linked to human rights abuses in Azerbaijan and Indonesia, among others. [...]

Apart from focusing on investment, the Atlantic Council notes that the global spyware market is "growing and evolving," with its dataset expanded to include four new vendors, seven new resellers or brokers, 10 new suppliers, and 55 new individuals linked to the industry. Newly identified vendors include Israel's Bindecy and Italy's SIO. [...] The study reveals the addition of three new countries linked to spyware activity -- Japan, Malaysia, and Panama. Japan in particular is a signatory to international efforts to curb spyware abuse, including the Joint Statement on Efforts to Counter the Proliferation and Misuse of Commercial Spyware and the Pall Mall Process Code of Practice for States.
The Atlantic Council's Jen Roberts, who worked on the report, urged expanding Executive Order 14105 to also cover spyware, and emphasized preserving Executive Order 14093. "US purchasing power is a significant tool in shaping and constraining the global market for spyware," Roberts said.
