Privacy

Meta Eavesdropped On Period-Tracker App's Users, Jury Rules (sfgate.com) 101

A San Francisco jury ruled that Meta violated the California Invasion of Privacy Act by collecting sensitive data from users of the Flo period-tracking app without consent. "The plaintiff's lawyers who sued Meta are calling this a 'landmark' victory -- the tech company contends that the jury got it all wrong," reports SFGATE. From the report: The case goes back to 2021, when eight women sued Flo and a group of other tech companies, including Google and Facebook, now known as Meta. The stakes were extremely personal. Flo asked users about their sex lives, mental health and diets, and guided them through menstruation and pregnancy. Then, the women alleged, Flo shared pieces of that data with other companies. The claims were largely based on a 2019 Wall Street Journal story and a 2021 Federal Trade Commission investigation. Google, Flo and the analytics company Flurry, which was also part of the lawsuit, reached settlements with the plaintiffs, as is common in class action lawsuits about tech privacy. But Meta stuck it out through the entire trial and lost.

The case against Meta focused on its Facebook software development kit, which Flo added to its app and which is generally used for analytics and advertising services. The women alleged that between June 2016 and February 2019, Flo sent Facebook, through that kit, various records of "Custom App Events" -- such as a user clicking a particular button in the "wanting to get pregnant" section of the app. Their complaint also pointed to Facebook's terms for its business tools, which said the company used so-called "event data" to personalize ads and content.

In a 2022 filing (PDF), the tech giant admitted that Flo used Facebook's kit during this period and that the app sent data connected to "App Events." But Meta denied receiving intimate information about users' health. Nonetheless, the jury ruled (PDF) against Meta. Along with the eavesdropping decision, jurors determined that Flo's users had a reasonable expectation they weren't being overheard or recorded, and that Meta didn't have consent to eavesdrop or record. The unanimous verdict was that the massive company violated the California Invasion of Privacy Act.
The jury's ruling could impact over 3.7 million U.S. users who registered between November 2016 and February 2019, with updates to be shared via email and a case website. The exact compensation from the trial or potential settlements remains uncertain.
The Courts

OpenAI Offers 20 Million User Chats In ChatGPT Lawsuit. NYT Wants 120 Million. (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: OpenAI is preparing to raise what could be its final defense to stop The New York Times from digging through a spectacularly broad range of ChatGPT logs to hunt for any copyright-infringing outputs that could become the most damning evidence in the hotly watched case. In a joint letter (PDF) Thursday, both sides requested to hold a confidential settlement conference on August 7. Ars confirmed with the NYT's legal team that the conference is not about settling the case but instead was scheduled to settle one of the most disputed aspects of the case: news plaintiffs searching through millions of ChatGPT logs. That means it's possible that this week, ChatGPT users will have a much clearer understanding of whether their private chats might be accessed in the lawsuit. In the meantime, OpenAI has broken down (PDF) the "highly complex" process required to make deleted chats searchable in order to block the NYT's request for broader access.

Previously, OpenAI had vowed to stop what it deemed was the NYT's attempt to conduct "mass surveillance" of ChatGPT users. But ultimately, OpenAI lost its fight to keep news plaintiffs away from all ChatGPT logs. After that loss, OpenAI appears to have pivoted and is now doing everything in its power to limit the number of logs accessed in the case -- short of settling -- as its customers fretted over serious privacy concerns. For the most vulnerable users, the lawsuit threatened to expose ChatGPT outputs from sensitive chats that OpenAI had previously promised would be deleted. Most recently, OpenAI floated a compromise, asking the court to agree that news organizations didn't need to search all ChatGPT logs. The AI company cited the "only expert" who has so far weighed in on what could be a statistically relevant, appropriate sample size -- computer science researcher Taylor Berg-Kirkpatrick. He suggested that a sample of 20 million logs would be sufficient to determine how frequently ChatGPT users may be using the chatbot to regurgitate articles and circumvent news sites' paywalls. But the NYT and other news organizations rejected the compromise, OpenAI said in a filing (PDF) yesterday. Instead, news plaintiffs have made what OpenAI said was an "extraordinary request that OpenAI produce the individual log files of 120 million ChatGPT consumer conversations."

That's six times more data than Berg-Kirkpatrick recommended, OpenAI argued. Complying with the request threatens to "increase the scope of user privacy concerns" by delaying the outcome of the case "by months," OpenAI argued. If the request is granted, it would likely trouble many users by extending the amount of time that users' deleted chats will be stored and potentially making them vulnerable to a breach or leak. As negotiations potentially end this week, OpenAI's co-defendant, Microsoft, has picked its own fight with the NYT over its internal ChatGPT equivalent tool that could potentially push the NYT to settle the disputes over ChatGPT logs.
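The dispute over sample size is, at bottom, a statistics question: when estimating what fraction of conversations contain regurgitated articles, the margin of error depends on the absolute sample size, not on what share of all logs is sampled. A back-of-the-envelope sketch (the function and numbers are illustrative, not taken from Berg-Kirkpatrick's analysis or the court filings):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% confidence half-width for an estimated proportion.

    p = 0.5 is the worst case (widest interval); z = 1.96 is the
    standard normal quantile for 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Even at the worst-case p, 20 million samples pin the estimated
# rate down to within roughly +/- 0.02 percentage points.
print(f"{margin_of_error(20_000_000):.6f}")
```

Going from 20 million to 120 million logs would shrink that interval only by a further factor of √6 ≈ 2.4, which is one way to read the proportionality objection OpenAI is raising.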

United States

Three US Agencies Get Failing Grades For Not Following IT Best Practices (theregister.com) 19

The Government Accountability Office has issued reports criticizing the Department of Homeland Security, Environmental Protection Agency, and General Services Administration for failing to implement critical IT and cybersecurity recommendations.

DHS leads with 43 unresolved recommendations dating to 2018, including seven priority matters. The EPA has 11 outstanding items, including failures to submit FedRAMP documentation and conduct organization-wide cybersecurity risk assessments. GSA has four pending recommendations.

All three agencies failed to properly log cybersecurity events and conduct required annual IT portfolio reviews. The DHS HART biometric program remains behind schedule without proper cost accounting or privacy controls, with all nine 2023 recommendations still open.
Privacy

AI Is Listening to Your Meetings. Watch What You Say. (msn.com) 33

AI meeting transcription software is inadvertently sharing private conversations with all meeting participants through automated summaries. The WSJ found a series of mishaps that people confirmed on the record.

Digital marketing agency owner Tiffany Lewis discovered her "Nigerian prince" joke about a potential client was included in the summary sent to that same client. Nashville branding firm Studio Delger received meeting notes documenting their discussion about "getting sandwich ingredients from Publix" and not liking soup when their client failed to appear. Communications agency coordinator Andrea Serra found her personal frustrations about a neighborhood Whole Foods, and a kitchen mishap while making a sweet potato recipe, included in official meeting recaps distributed to colleagues.
Privacy

Nearly 100,000 ChatGPT Conversations Were Searchable on Google (404media.co) 13

An anonymous reader shares a report: A researcher has scraped nearly 100,000 conversations from ChatGPT that users had set to share publicly and Google then indexed, creating a snapshot of all the sorts of things people are using OpenAI's chatbot for, and inadvertently exposing. 404 Media's testing has found the dataset includes everything from the sensitive to the benign: alleged texts of non-disclosure agreements, discussions of confidential contracts, people trying to use ChatGPT to understand their relationship issues, and lots of people asking ChatGPT to write LinkedIn posts.

The news follows a July 30 Fast Company article which reported "thousands" of shared ChatGPT chats were appearing in Google search results. People have since dug through some of the chats indexed by Google. The around 100,000 conversation dataset provides a better sense of the scale of the problem, and highlights some of the potential privacy risks in using any sharing features of AI tools. OpenAI did not dispute the figure of around 100,000 indexed chats when contacted for comment.

Security

CrowdStrike Investigated 320 North Korean IT Worker Cases In the Past Year (cyberscoop.com) 11

An anonymous reader quotes a report from CyberScoop: North Korean operatives seeking and gaining technical jobs with foreign companies kept CrowdStrike busy, accounting for almost one incident response case or investigation per day in the past year, the company said in its annual threat hunting report released Monday. "We saw a 220% year-over-year increase in the last 12 months of Famous Chollima activity," Adam Meyers, senior vice president of counter adversary operations, said during a media briefing about the report. "We see them almost every day now," he said, referring to the state-sponsored group of North Korean technical specialists that has crept into the workforce of Fortune 500 companies and small-to-midsized organizations across the globe.

CrowdStrike's threat-hunting team investigated more than 320 incidents involving North Korean operatives gaining remote employment as IT workers during the one-year period ending June 30. CrowdStrike researchers found that Famous Chollima fueled that pace of activity with an assist from generative artificial intelligence tools that helped North Korean operatives navigate workflows and evade detection during the hiring process. "They use generative AI across all stages of their operation," Meyers said. The insider threat group used generative AI to draft resumes, create false identities, build tools for job research, mask their identity during video interviews and answer questions or complete technical coding assignments, the report found. CrowdStrike said North Korean tech workers also used generative AI on the job to help with daily tasks and manage various communications across multiple jobs -- sometimes three to four -- they worked simultaneously.

Threat hunters observed other significant shifts in malicious activity during the past year, including a 27% year-over-year increase in hands-on-keyboard intrusions -- 81% of which involved no malware. Cybercrime accounted for 73% of all interactive intrusions during the one-year period. CrowdStrike continues to find and add more threat groups and clusters of activity to its matrix of cybercriminals, nation-state attackers and hacktivists. The company identified 14 new threat groups or individuals in the past six months, Meyers said. "We're up to over 265 named adversary groups that we track, and then 150 what we call malicious activity clusters," otherwise unnamed threat groups or individuals under development, Meyers said.

Privacy

Despite Breach and Lawsuits, Tea Dating App Surges in Popularity (www.cbc.ca) 39

The women-only app Tea now "faces two class action lawsuits filed in California" in response to a recent breach, reports NPR — even as the company is boasting it has more than 6.2 million users.

A spokesperson for Tea told the CBC it's "working to identify any users whose personal information was involved" in a breach of 72,000 images (including 13,000 verification photos and images of government IDs) and a later breach of 1.1 million private messages. Tea said they will be offering those users "free identity protection services." The company said it removed the ID requirement in 2023, but data that was stored before February 2024, when Tea migrated to a more secure system, was accessed in the breach... [Several sites have pointed out Tea's current privacy policy is telling users selfies are "deleted immediately."]

Tea was reportedly intended to launch in Canada on Friday, according to information previously posted on the App Store, but as of this week the launch date is now in February 2026. Tea didn't respond to CBC's questions about the apparent delay. Yet even amid the current turmoil, Tea's waitlist has ballooned to 1.5 million women, all eager to join, the company posted on Wednesday. A day later, Tea posted in its Instagram stories that it had approved "well over" 800,000 women into the app that day alone.

So, why is it so popular, despite the drama and risks?

Tea tapped into a perceived weakness of other dating apps, according to an associate health studies professor at Ontario's Western University interviewed by the CBC, who thinks users should avoid Tea, at least until its security is restored.

Tech blogger John Gruber called the incident "yet another data point for the argument that any 'private messaging' feature that doesn't use E2EE isn't actually private at all." (And later Gruber notes Tea's apparent absence at the top of the charts in Google's Play Store. "I strongly suspect that, although Google hasn't removed Tea from the Play Store, they've delisted it from discovery other than by searching for it by name or following a direct link to its listing.")

Besides anonymous discussions about specific men, Tea also allows its users to perform background and criminal record checks, according to NPR, as well as reverse image searches. But the recent breach, besides threatening the safety of its users, also "laid bare the anonymous, one-sided accusations against the men in their dating pools." The CBC points out there's a men's rights group on Reddit now urging civil lawsuits against Tea as part of a plan to get the app shut down. And "Cleveland lawyer Aaron Minc, who specializes in cases involving online defamation and harassment, told The Associated Press that his firm has received hundreds of calls from people upset about what's been posted about them on Tea."

Yet in response to Tea's latest Instagram post, "The comments were almost entirely from people asking Tea to approve them, so they could join the app."
Bug

A Luggage Service's Web Bugs Exposed the Travel Plans of Every User (wired.com) 1

An anonymous reader quotes a report from Wired: An airline leaving all of its passengers' travel records vulnerable to hackers would make an attractive target for espionage. Less obvious, but perhaps even more useful for those spies, would be access to a premium travel service that spans 10 different airlines, has left its own detailed flight information accessible to data thieves, and seems to be favored by international diplomats. That's what one team of cybersecurity researchers found in the form of Airportr, a UK-based luggage service that partners with airlines to let its largely UK- and Europe-based users pay to have their bags picked up, checked, and delivered to their destination. Researchers at the firm CyberX9 found that simple bugs in Airportr's website allowed them to access virtually all of those users' personal information, including travel plans, or even gain administrator privileges that would have allowed a hacker to redirect or steal luggage in transit. Among even the small sample of user data that the researchers reviewed and shared with WIRED, they found what appear to be the personal information and travel records of multiple government officials and diplomats from the UK, Switzerland, and the US.

Airportr's CEO Randel Darby confirmed CyberX9's findings in a written statement provided to WIRED but noted that Airportr had disabled the vulnerable part of its site's backend very shortly after the researchers made the company aware of the issues last April and fixed the problems within a few days. "The data was accessed solely by the ethical hackers for the purpose of recommending improvements to Airportr's security, and our prompt response and mitigation ensured no further risk," Darby wrote in a statement. "We take our responsibilities to protect customer data very seriously." CyberX9's researchers, for their part, counter that the simplicity of the vulnerabilities they found means that there's no guarantee other hackers didn't access Airportr's data first. They found that a relatively basic web vulnerability allowed them to change the password of any user to gain access to their account if they had just the user's email address -- and they were also able to brute-force guess email addresses with no rate limitations on the site. As a result, they could access data including all customers' names, phone numbers, home addresses, detailed travel plans and history, airline tickets, boarding passes and flight details, passport images, and signatures.

By gaining access to an administrator account, CyberX9's researchers say, a hacker could also have used the vulnerabilities it found to redirect luggage, steal luggage, or even cancel flights on airline websites by using Airportr's data to gain access to customer accounts on those sites. The researchers say they could also have used their access to send emails and text messages as Airportr, a potential phishing risk. Airportr tells WIRED that it has 92,000 users and claims on its website that it has handled more than 800,000 bags for customers. [...] The researchers found that they could monitor their browser's communications as they signed up for Airportr and created a new password, and then reuse an API key intercepted from those communications to instead change another user's password to anything they chose. The site also lacked a "rate limiting" security measure that would prevent automated guesses of email addresses to rapidly change the password of every user's account. And the researchers were also able to find email addresses of Airportr administrators that allowed them to take over their accounts and gain their privileges over the company's data and operations.
"Anyone would have been able to gain or might have gained absolute super-admin access to all the operations and data of this company," says Himanshu Pathak, CyberX9's founder and CEO. "The vulnerabilities resulted in complete confidential private information exposure of all airline customers in all countries who used the service of this company, including full control over all the bookings and baggage. Because once you are the super-admin of their most sensitive systems, you have the ability to do anything."
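The flaws the researchers describe are textbook issues: a password-change endpoint that honors an API key intercepted from someone else's signup flow, and no rate limiting to stop automated guessing of email addresses. As a hypothetical sketch (not Airportr's actual code), the kind of fixed-window rate limiter the site reportedly lacked can be as simple as:

```python
import time

class RateLimiter:
    """Fixed-window limiter: at most `limit` attempts per `window` seconds per client."""

    def __init__(self, limit=5, window=60.0):
        self.limit = limit
        self.window = window
        self.attempts = {}  # client_id -> (window_start, count)

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        start, count = self.attempts.get(client_id, (now, 0))
        if now - start >= self.window:
            # The window has elapsed; start counting again.
            start, count = now, 0
        if count >= self.limit:
            return False  # over the limit: reject without counting
        self.attempts[client_id] = (start, count + 1)
        return True
```

A limiter like this on the login and password-change endpoints would have made mass email-address guessing impractical; binding each API key to the account it was issued for would have closed the other hole.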
Advertising

Amazon CEO Wants To Put Ads In Your Alexa+ Conversations (techcrunch.com) 48

An anonymous reader quotes a report from TechCrunch: Amazon CEO Andy Jassy sees an opportunity to deliver ads to users during their conversations with the company's AI-powered digital assistant, Alexa+, he said during Amazon's second-quarter earnings call Thursday. "People are excited about the devices that they can buy from us that has Alexa+ enabled in it. People do a lot of shopping [with Alexa+]; it's a delightful shopping experience that will keep getting better," said Jassy on the call with investors and Wall Street analysts. "I think over time, there will be opportunities, as people are engaging in more multi-turn conversations, to have advertising play a role to help people find discovery, and also as a lever to drive revenue."

[...] Amazon has made Alexa+ free for Prime customers (who pay $14.99 a month) and added a $20-a-month subscription tier for Alexa+ on its own. Jassy suggested on Thursday that Alexa+ could eventually include subscription tiers beyond what's available today -- perhaps an ad-free tier. Up until now, ads have only appeared in Alexa in limited ways. Users may occasionally see a visual ad on Amazon's smart display device, the Echo Show, or hear a pre-recorded ad in between songs on one of Alexa's smart speakers. But Jassy's description of an AI-generated ad that Alexa+ delivers in a multistep conversation, which could help users find new products, is uncharted territory for Amazon and the broader tech industry. Marketers have expressed interest in advertising in AI chatbots, and specifically Alexa+, but exactly how remains unclear. [...] Jassy is betting that users will talk to Alexa+ more than Alexa, which could drive more advertising and more shopping on Amazon.com. However, early reviews of Alexa+ have been mixed. Amazon has reportedly struggled to ship some of Alexa+'s more complicated features, and the rollout has been slower than many expected.

There's a lot to figure out before Amazon puts ads in Alexa+. Like most AI models, Alexa+ is not immune to hallucinations. Before advertisers agree to make Alexa+ a spokesperson for their products, Amazon may have to come up with some ways to ensure that its AI will not offer false advertising for a product. Jassy seems enthusiastic about making advertising a larger part of Amazon's business. Amazon's advertising revenue went up 22% in the second quarter, compared to the same period last year. Delivering ads in AI chatbot conversations may also raise privacy concerns. People tend to talk more with AI chatbots compared to deterministic assistants, like the traditional Alexa and Siri products. As a result, generative AI chatbots tend to collect more information on users. Some users might be unsettled by having that information sold to advertisers and having ads appear in their natural language conversations with AI.

Privacy

A Second Tea Breach Reveals Users' DMs About Abortions and Cheating (404media.co) 117

A second, far more recent data breach at women's dating safety app Tea has exposed over a million sensitive user messages -- including discussions about abortions, infidelity, and shared contact info. This vulnerability not only compromised private conversations but also made it easy to unmask anonymous users. 404 Media reports: Despite Tea's initial statement that "the incident involved a legacy data storage system containing information from over two years ago," the second issue impacting a separate database is much more recent, affecting messages up until last week, according to the researcher's findings that 404 Media verified. The researcher said they also found the ability to send a push notification to all of Tea's users.

It's hard to overstate how sensitive this data is and how it could put Tea's users at risk if it fell into the wrong hands. When signing up, Tea encourages users to choose an anonymous screenname, but it was trivial for 404 Media to find the real world identities of some users given the nature of their messages, which Tea has led them to believe were private. Users could be easily found via their social media handles, phone numbers, and real names that they shared in these chats. These conversations also frequently make damning accusations against people who are also named in the private messages and in some cases are easy to identify. It is unclear who else may have discovered the security issue and downloaded any data from the more recent database. Members of 4chan found the first exposed database last week and made tens of thousands of images of Tea users available for download. Tea told 404 Media it has contacted law enforcement. [...]

This new data exposure is due to any Tea user being able to use their own API key to access a more recent database of user data, Rahjerdi said. The researcher says that this issue existed until late last week. That exposure included a mass of Tea users' private messages. In some cases, the women exchange phone numbers so they can continue the conversation off platform. The first breach was due to an exposed instance of app development platform Firebase, and impacted tens of thousands of selfie and driver license images. At the time, Tea said in a statement "there is no evidence to suggest that current or additional user data was affected." The second database includes a data field called "sent_at," with many of those messages being marked as recent as last week.
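The flaw class 404 Media describes here (a valid API key that is never checked against the data it requests) is commonly called broken object-level authorization. A minimal hypothetical sketch, with invented names and data, of the difference between an unscoped and a properly scoped lookup:

```python
# Illustrative only: these records and keys are invented for the sketch,
# not taken from Tea's systems.
MESSAGES = [
    {"id": 1, "owner": "user_a", "text": "..."},
    {"id": 2, "owner": "user_b", "text": "..."},
]
KEY_OWNERS = {"key_a": "user_a", "key_b": "user_b"}  # API key -> account

def fetch_messages_vulnerable(api_key):
    # Flawed: the key proves the caller is *some* user, but the query
    # returns every user's messages.
    if api_key not in KEY_OWNERS:
        return None
    return MESSAGES

def fetch_messages_scoped(api_key):
    # Fixed: results are filtered to rows owned by the key holder.
    owner = KEY_OWNERS.get(api_key)
    if owner is None:
        return None
    return [m for m in MESSAGES if m["owner"] == owner]
```

In the vulnerable version, any authenticated user's key acts as a master key to the whole message store, which matches the behavior the researcher reported.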

United Kingdom

VPN Downloads Surge in UK as New Age-Verification Rules Take Effect (msn.com) 96

Proton VPN reported a 1,400 percent hourly increase in signups over its baseline Friday — the day the UK's age verification law went into effect. For UK users, "apps with explicit content must now verify visitors' ages via methods such as facial recognition and banking info," notes Mashable: Proton VPN previously documented a 1,000 percent surge in new subscribers in June after Pornhub left France, its second-biggest market, amid the enactment of an age verification law there... A Proton VPN spokesperson told Mashable that it saw an increase in new subscribers right away at midnight Friday, then again at 9 a.m. BST. The company anticipates further surges over the weekend, they added. "This clearly shows that adults are concerned about the impact universal age verification laws will have on their privacy," the spokesperson said... Search interest for the term "Proton VPN" also saw a seven-day spike in the UK around 2 a.m. BST Friday, according to a Google Trends chart.
The Financial Times notes that VPN apps "made up half of the top 10 most popular free apps on the UK's App Store for iOS this weekend, according to Apple's rankings." Proton VPN leapfrogged ChatGPT to become the top free app in the UK, according to Apple's daily App Store charts, with similar services from developers Super Unlimited and Nord Security also rising over the weekend... Data from Google Trends also shows a significant increase in search queries for VPNs in the UK this weekend, with up to 10 times more people looking for VPNs at peak times...

"This is what happens when people who haven't got a clue about technology pass legislation," Anthony Rose, a UK-based tech entrepreneur who helped to create BBC iPlayer, the corporation's streaming service, said in a social media post. Rose said it took "less than five minutes to install a VPN" and that British people had become familiar with using them to access the iPlayer outside the UK. "That's the beauty of VPNs. You can be anywhere you like, and anytime a government comes up with stupid legislation like this, you just turn on your VPN and outwit them," he added...

Online platforms found in breach of the new UK rules face penalties of up to £18mn or 10 percent of global turnover, whichever is greater... However, opposition to the new rules has grown in recent days. A petition submitted through the UK parliament website demanding that the Online Safety Act be repealed has attracted more than 270,000 signatures, with the vast majority submitted in the past week. Ministers must respond to a petition, and parliament has to consider its topic for a debate, if signatures surpass 100,000.

X, Reddit and TikTok have also "introduced new 'age assurance' systems and controls for UK users," according to the article. But Mashable summarizes the situation succinctly.

"Initial research shows that VPNs make age verification laws in the U.S. and abroad tricky to enforce in practice."
Piracy

Creator of 1995 Phishing Tool 'AOHell' On Piracy, Script Kiddies, and What He Thinks of AI (yahoo.com) 14

In 1995's online world, AOL existed mostly beside the internet as a "walled, manicured garden," remembers Fast Company.

Then along came AOHell, "the first of what would become thousands of programs designed by young hackers to turn the system upside down" — built by a high school dropout calling himself "Da Chronic" who says he used "a computer that I couldn't even afford" and "a pirated copy of Microsoft Visual Basic." [D]istributed throughout the teen chatrooms, the program combined a pile of tricks and pranks into a slick little control panel that sat above AOL's windows and gave even newbies an arsenal of teenage superpowers. There was a punter to kick people out of chatrooms, scrollers to flood chats with ASCII art, a chat impersonator, an email and instant message bomber, a mass mailer for sharing warez (and later mp3s), and even an "Artificial Intelligence Bot" [which performed automated if-then responses]. Crucially, AOHell could also help users gain "free" access to AOL. The program included a generator for fake credit card numbers (which could fool AOL's sign-up process) and, by January 1995, a feature for stealing other users' passwords or credit cards. With messages masquerading as alerts from AOL customer service reps, the tool could convince unsuspecting users to hand over their secrets...

Of course, Da Chronic — actually a 17-year-old high school dropout from North Carolina named Koceilah Rekouche — had other reasons, too. Rekouche wanted to hack AOL because he loved being online with his friends, who were a refuge from a difficult life at home, and he couldn't afford the hourly fee. Plus, it was a thrill to cause havoc and break AOL's weak systems and use them exactly how they weren't meant to be, and he didn't want to keep that to himself. Other hackers "hated the fact that I was distributing this thing, putting it into the team chat room, and bringing in all these noobs and lamers and destroying the community," Rekouche told me recently by phone...

Rekouche also couldn't have imagined what else his program would mean: a free, freewheeling creative outlet for thousands of lonely, disaffected kids like him, and an inspiration for a generation of programmers and technologists. By the time he left AOL in late 1995, his program had spawned a whole cottage industry of teenage script kiddies and hackers, and fueled a subculture where legions of young programmers and artists got their start breaking and making things, using pirated software that otherwise would have been out of reach... In 2014, [AOL CEO Steve] Case himself acknowledged on Reddit that "the hacking of AOL was a real challenge for us," but that "some of the hackers have gone on to do more productive things."

When he first met Mark Zuckerberg, he said, the Facebook founder confessed to Case that "he learned how to program by hacking [AOL]."

"I can't imagine somebody doing that on Facebook today," Da Chronic says in a new interview with Fast Company. "They'll kick you off if you create a Google extension that helps you in the slightest bit on Facebook, or an extension that keeps your privacy or does a little cool thing here and there. That's totally not allowed."

AOHell's creators had called their password-stealing techniques "phishing" — and the name stuck. (AOL was working with federal law enforcement to find him, according to a leaked internal email, but "I didn't even see that until years later.") Enrolled in college, he decided to write a technical academic paper about his program. "I do believe it caught the attention of Homeland Security, but I think they realized pretty quickly that I was not a threat."

He's got an interesting perspective today, noting that with today's AI tools it's theoretically possible to "craft dynamic phishing emails... when I see these AI coding tools I think, this might be like today's Visual Basic. They take out a lot of the grunt work."

What's the moral of the story? "I didn't have any qualifications or anything like that," Da Chronic says. "So you don't know who your adversary is going to be, who's going to understand psychology in some nuanced way, who's going to understand how to put some technological pieces together, using AI, and build some really wild shit."
Privacy

Astronomer Hires Coldplay Lead Singer's Ex-Wife as 'Temporary' Spokesperson: Gwyneth Paltrow (bbc.com) 153

The "Chief People Officer" of dataops company Astronomer resigned this week from her position after apparently being caught on that "Kiss Cam" at a Coldplay concert with the company's CEO, reports the BBC. That CEO has also resigned, with Astronomer appointing their original co-founder and chief product officer as the new interim CEO.

UPDATE (7/26): In an unexpected twist, Astronomer put out a new video Friday night starring... Gwyneth Paltrow.

Actress/businesswoman Paltrow "was married to Coldplay's frontman Chris Martin for 13 years," reports CBS News. In the video posted Friday, Paltrow says she was hired by Astronomer as a "very temporary" spokesperson.

"Astronomer has gotten a lot of questions over the last few days," Paltrow begins, "and they wanted me to answer the most common ones..."

As the question "OMG! What the actual f" begins appearing on the screen, Paltrow responds "Yes, Astronomer is the best place to run Apache Airflow, unifying the experience of running data, ML, and AI pipelines at scale. We've been thrilled so many people have a newfound interest in data workflow automation." (Paltrow also mentions the company's upcoming Beyond Analytics dataops conference in September.)

Astronomer is still grappling with unintended fame after the "Kiss Cam" incident. ("Either they're having an affair or they're just very shy," Coldplay's lead singer had said during the viral video, in which the startled couple hurries to hide off-camera). The incident raised privacy concerns, as it turns out both people in the video were in fact married to someone else, though the singer did earlier warn the crowd "we're going to use our cameras and put some of you on the big screen," according to CNN.

The New York Post notes the woman's now-deleted LinkedIn account showed that she has also served as an "advisory board member" at her husband's company since September of 2020. The Post cites a source close to the situation who says the woman's husband "was in Asia for a few weeks," returning to America right as the video went viral. Kristin and Andrew Cabot married sometime after her previous divorce was finalized in 2022. The source said there had been little indication of any trouble in paradise before the Coldplay concert video went viral. "The family is now saying they have been having marriage troubles for several months and were discussing separating..."

The video had racked up 127 million views by yesterday, notes Newsweek, adding that the U.K. tabloid the Daily Mail apparently took photos outside the woman's house, reporting that she does not appear to be wearing a wedding ring.
Privacy

Women Dating Safety App 'Tea' Breached, Users' IDs Posted To 4chan (404media.co) 95

An anonymous reader quotes a report from 404 Media: Users from 4chan claim to have discovered an exposed database hosted on Google's mobile app development platform, Firebase, belonging to the newly popular women's dating safety app Tea. Users say they are rifling through people's personal data and selfies uploaded to the app, and then posting that data online, according to screenshots, 4chan posts, and code reviewed by 404 Media. In a statement to 404 Media, Tea confirmed the breach also impacted some direct messages but said that the data is from two years ago. Tea, which claims to have more than 1.6 million users, reached the top of the App Store charts this week and has tens of thousands of reviews there. The app aims to provide a space for women to exchange information about men in order to stay safe, and verifies that new users are women by asking them to upload a selfie.

"Yes, if you sent Tea App your face and drivers license, they doxxed you publicly! No authentication, no nothing. It's a public bucket," a post on 4chan providing details of the vulnerability reads. "DRIVERS LICENSES AND FACE PICS! GET THE FUCK IN HERE BEFORE THEY SHUT IT DOWN!" The thread says the issue was an exposed database that allowed anyone to access the material. [...] "The images in the bucket are raw and uncensored," the user wrote. Multiple users have created scripts to automate the process of collecting people's personal information from the exposed database, according to other posts in the thread and copies of the scripts. In its terms of use, Tea says "When you first create a Tea account, we ask that you register by creating a username and including your location, birth date, photo and ID photo."
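The flaw described in the posts is not an exploit but a misconfiguration: a Firebase storage bucket whose security rules permit unauthenticated reads. Firebase Storage exposes a REST endpoint that lists a bucket's objects, and if the rules allow public access, anyone can enumerate it with a plain HTTP GET. A minimal sketch for checking whether one of your own buckets answers anonymous listing requests (the bucket name below is a hypothetical placeholder, not Tea's):

```python
import json
import urllib.error
import urllib.parse
import urllib.request

FIREBASE_STORAGE_API = "https://firebasestorage.googleapis.com/v0/b"

def bucket_listing_url(bucket: str) -> str:
    """Build the Firebase Storage REST URL that lists a bucket's objects."""
    return f"{FIREBASE_STORAGE_API}/{urllib.parse.quote(bucket, safe='')}/o"

def is_publicly_listable(bucket: str, timeout: float = 5.0) -> bool:
    """Return True if the bucket's object list is readable without credentials."""
    try:
        with urllib.request.urlopen(bucket_listing_url(bucket), timeout=timeout) as resp:
            payload = json.load(resp)
        # A publicly readable bucket answers 200 with an "items" array of objects.
        return "items" in payload
    except urllib.error.URLError:
        # 401/403 (or no route) means rules are correctly denying anonymous access.
        return False

if __name__ == "__main__":
    # Hypothetical bucket name -- substitute your own project's bucket.
    print(is_publicly_listable("example-project.appspot.com"))
```

Locking the rules down to authenticated reads (the Firebase default for new projects) makes this endpoint return 403 instead of the object list.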

After publication of this article, Tea confirmed the breach in an email to 404 Media. The company said on Friday it "identified unauthorized access to one of our systems and immediately launched a full investigation to assess the scope and impact." The company says the breach impacted data from more than two years ago, and included 72,000 images (13,000 selfies and photo IDs, and 59,000 images from app posts and direct messages). "This data was originally stored in compliance with law enforcement requirements related to cyber-bullying prevention," the email continued. "We have engaged third-party cybersecurity experts and are working around the clock to secure our systems. At this time, there is no evidence to suggest that current or additional user data was affected. Protecting our users' privacy and data is our highest priority. We are taking every necessary step to ensure the security of our platform and prevent further exposure."

Businesses

American Airlines Chief Blasts Delta's AI Pricing Plans as 'Inappropriate' (yahoo.com) 20

American Airlines Chief Executive Robert Isom criticized the use of AI in setting air fares during an earnings call, calling the practice "inappropriate" and a "bait and switch" move that could trick travelers. Isom's comments target Delta Air Lines, which is testing AI to help set pricing on about 3% of its network today, with plans to expand to 20% by year-end.

Delta maintains it is not using the technology to target customers with individualized offers based on personal information, stating all customers see identical fares across retail channels. US Senators Ruben Gallego, Richard Blumenthal, and Mark Warner have questioned Delta's AI pricing plans, citing data privacy concerns and potential fare increases. Southwest Airlines CEO Bob Jordan said his carrier also has no plans to use AI in revenue management or pricing decisions.
The Courts

After $380 Million Hack, Clorox Sues Its 'Service Desk' Vendor For Simply Giving Out Passwords (arstechnica.com) 89

An anonymous reader quotes a report from Ars Technica: Hacking is hard. Well, sometimes. Other times, you just call up a company's IT service desk and pretend to be an employee who needs a password reset, an Okta multifactor authentication reset, and a Microsoft multifactor authentication reset... and it's done. Without even verifying your identity. So you use that information to log in to the target network and discover a more trusted user who works in IT security. You call the IT service desk back, acting like you are now this second person, and you request the same thing: a password reset, an Okta multifactor authentication reset, and a Microsoft multifactor authentication reset. Again, the desk provides it, no identity verification needed. So you log in to the network with these new credentials and set about planting ransomware or exfiltrating data in the target network, eventually doing an estimated $380 million in damage. Easy, right?

According to The Clorox Company, which makes everything from lip balm to cat litter to charcoal to bleach, this is exactly what happened to it in 2023. But Clorox says that the "debilitating" breach was not its fault. It had outsourced the "service desk" part of its IT security operations to the massive services company Cognizant -- and Clorox says that Cognizant failed to follow even the most basic agreed-upon procedures for running the service desk. In the words of a new Clorox lawsuit, Cognizant's behavior was "all a devastating lie," it "failed to show even scant care," and it was "aware that its employees were not adequately trained."

"Cognizant was not duped by any elaborate ploy or sophisticated hacking techniques," says the lawsuit, using italics to indicate outrage emphasis. "The cybercriminal just called the Cognizant Service Desk, asked for credentials to access Clorox's network, and Cognizant handed the credentials right over. Cognizant is on tape handing over the keys to Clorox's corporate network to the cybercriminal -- no authentication questions asked." [...] The new lawsuit, filed in California state courts, wants Cognizant to cough up millions of dollars to cover the damage Clorox says it suffered after weeks of disruption to its factories and ordering systems. (You can read a brief timeline of the disruption here.)

Wireless Networking

Humans Can Be Tracked With Unique 'Fingerprint' Based On How Their Bodies Block Wi-Fi Signals (theregister.com) 38

Researchers from La Sapienza University in Rome have developed "WhoFi," a system that uses the way a person's body distorts Wi-Fi signals to re-identify them across different locations -- even if they're not carrying a phone. By training a deep neural network on these subtle signal distortions, the researchers claim WhoFi is able to achieve up to 95.5% accuracy. The Register reports: "The core insight is that as a Wi-Fi signal propagates through an environment, its waveform is altered by the presence and physical characteristics of objects and people along its path," the authors state in their paper. "These alterations, captured in the form of Channel State Information (CSI), contain rich biometric information." CSI in the context of Wi-Fi devices refers to information about the amplitude and phase of electromagnetic transmissions. These measurements, the researchers say, interact with the human body in a way that results in person-specific distortions. When processed by a deep neural network, the result is a unique data signature.

Researchers proposed a similar technique, dubbed EyeFi, in 2020, and asserted it was accurate about 75 percent of the time. The Rome-based researchers who proposed WhoFi claim their technique makes accurate matches on the public NTU-Fi dataset up to 95.5 percent of the time when the deep neural network uses the transformer encoding architecture. "The encouraging results achieved confirm the viability of Wi-Fi signals as a robust and privacy-preserving biometric modality, and position this study as a meaningful step forward in the development of signal-based Re-ID systems," the authors say.
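The pipeline the paper describes -- sequences of per-subcarrier CSI amplitude and phase, encoded by a transformer into a fixed-size signature that can be matched across capture sessions -- can be sketched in PyTorch. Every dimension below (subcarrier count, model width, signature size) is an illustrative assumption, not a figure from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CSISignatureNet(nn.Module):
    """Sketch of a WhoFi-style re-identification encoder (all sizes are assumptions)."""

    def __init__(self, n_subcarriers=64, d_model=128, n_heads=4, n_layers=2, sig_dim=256):
        super().__init__()
        # Each time step carries amplitude and phase for every subcarrier.
        self.proj = nn.Linear(2 * n_subcarriers, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, sig_dim)

    def forward(self, csi):  # csi: (batch, time, 2 * n_subcarriers)
        h = self.encoder(self.proj(csi))  # contextualize the CSI sequence
        sig = self.head(h.mean(dim=1))    # pool over time, project to a signature
        return F.normalize(sig, dim=-1)   # unit-length vector for cosine matching

# Re-identification: compare signatures from two captures by cosine similarity.
net = CSISignatureNet().eval()
with torch.no_grad():
    sig_a = net(torch.randn(1, 50, 128))  # 50 CSI samples, 64 subcarriers x 2
    sig_b = net(torch.randn(1, 50, 128))
similarity = (sig_a * sig_b).sum().item()  # in [-1, 1]; higher = likelier same person
```

In such a scheme, the network is trained so captures of the same person land close together in signature space; at query time, a new capture is matched against a gallery of stored signatures by nearest cosine similarity.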

Businesses

Amazon Buys Bee AI Wearable That Listens To Everything You Say 28

Amazon is acquiring Bee, a startup that makes a $49.99 AI-powered wearable that passively listens to conversations and generates personalized summaries and suggestions. "You can also give the device permission to access your emails, contacts, location, reminders, photos, and calendar events to help inform its AI-generated insights, as well as create a searchable history of your activities," adds The Verge. From the report: When asked about Amazon's plans to apply the same privacy measures offered by Bee, such as its policy against storing audio, Amazon spokesperson Alexandra Miller says the company "cares deeply" about customer privacy and security, adding that the company will work with Bee to give users "even greater control over" their devices when the deal closes.

"We've been strong stewards of customer data since our founding, and have never been in the business of selling our customers' personal information to others," Miller says. "We design our products to protect our customers' privacy and security and to make it easy for them to be in control of their experience -- and this approach would of course apply to Bee." Miller also says the terms of the deal are "confidential," and all Bee employees have "received offers to join Amazon."
Privacy

Brave Browser Blocks Microsoft Recall By Default (brave.com) 48

The Brave Browser now blocks Microsoft Recall by default for Windows 11+ users, preventing the controversial screenshot-logging feature from capturing any Brave tabs -- regardless of whether users are in private mode. Brave cites persistent privacy concerns and potential abuse scenarios as justification. From a blog post: Microsoft has, to their credit, made several security and privacy-positive changes to Recall in response to concerns. Still, the feature is in preview, and Microsoft plans to roll it out more widely soon. What exactly the feature will look like when it's fully released to all Windows 11 users is still up in the air, but the initial tone-deaf announcement does not inspire confidence.

Given Brave's focus on privacy-maximizing defaults and what is at stake here (your entire browsing history), we have proactively disabled Recall for all Brave tabs. We think it's vital that your browsing activity on Brave does not accidentally end up in a persistent database, which is especially ripe for abuse in highly-privacy-sensitive cases such as intimate partner violence.

Microsoft has said that private browsing windows on browsers will not be saved as snapshots. We've extended that logic to apply to all Brave browser windows. We tell the operating system that every Brave tab is 'private', so Recall never captures it. This is yet another example of how Brave engineers are able to quickly tweak Chromium's privacy functionality to make Brave safer for our users (inexhaustive list here). For more technical details, see the pull request implementing this feature. Brave is the only major Web browser that disables Microsoft Recall by default in all tabs.

Privacy

Weak Password Allowed Hackers To Sink a 158-Year-Old Company (bbc.com) 125

An anonymous reader quotes a report from the BBC: One password is believed to have been all it took for a ransomware gang to destroy a 158-year-old company and put 700 people out of work. KNP -- a Northamptonshire transport company -- is just one of tens of thousands of UK businesses that have been hit by such attacks. Big names such as M&S, Co-op and Harrods have all been attacked in recent months. The chief executive of Co-op confirmed last week that all 6.5 million of its members had had their data stolen. In KNP's case, it's thought the hackers managed to gain entry to the computer system by guessing an employee's password, after which they encrypted the company's data and locked its internal systems. KNP director Paul Abbott says he hasn't told the employee that their compromised password most likely led to the destruction of the company. "Would you want to know if it was you?" he asks.

"We need organizations to take steps to secure their systems, to secure their businesses," says Richard Horne, CEO of the National Cyber Security Centre (NCSC) -- where Panorama has been given exclusive access to the team battling international ransomware gangs. A gang of hackers, known as Akira, broke into the company's system and demanded a payment to restore the data. "The hackers didn't name a price, but a specialist ransomware negotiation firm estimated the sum could be as much as 5 million pounds," reports the BBC. "KNP didn't have that kind of money. In the end all the data was lost, and the company went under."
