Privacy

ChatGPT Models Are Surprisingly Good At Geoguessing (techcrunch.com) 15

An anonymous reader quotes a report from TechCrunch: There's a somewhat concerning new trend going viral: People are using ChatGPT to figure out the location shown in pictures. This week, OpenAI released its newest AI models, o3 and o4-mini, both of which can uniquely "reason" through uploaded images. In practice, the models can crop, rotate, and zoom in on photos -- even blurry and distorted ones -- to thoroughly analyze them. These image-analyzing capabilities, paired with the models' ability to search the web, make for a potent location-finding tool. Users on X quickly discovered that o3, in particular, is quite good at deducing cities, landmarks, and even restaurants and bars from subtle visual clues.

In many cases, the models don't appear to be drawing on "memories" of past ChatGPT conversations, or on EXIF data, the metadata attached to photos that reveals details such as where a photo was taken. X is filled with examples of users giving ChatGPT restaurant menus, neighborhood snaps, facades, and self-portraits, and instructing o3 to imagine it's playing "GeoGuessr," an online game that challenges players to guess locations from Google Street View images. It's an obvious potential privacy issue. There's nothing preventing a bad actor from screenshotting, say, a person's Instagram Story and using ChatGPT to try to doxx them.

Facebook

Google, Apple, and Snap Aren't Happy About Meta's Poorly-redacted Slides (theverge.com) 13

During Meta's antitrust trial this week, lawyers representing Apple, Google, and Snap each expressed irritation with Meta over the slides it presented on Monday that The Verge found to contain easy-to-remove redactions. From a report: Attorneys for both Apple and Snap called the errors "egregious," with Apple's representative indicating that it may not be able to trust Meta with its internal information in the future. Google's attorney also blamed Meta for jeopardizing the search giant's data with the mistake.

Details about the attorneys' comments come from The Verge's Lauren Feiner, who is currently in the courtroom where proceedings are taking place today. Apple, Google, and Meta did not immediately respond to The Verge's request for comment. Snap declined to comment. Snap's attorney maligned Meta's "cavalier approach and casual disregard" of other companies swept into the case, and wondered if "Meta would have applied meaningful redactions if it were its own information that was at stake."

Google

Google Used AI To Suspend Over 39 Million Ad Accounts Suspected of Fraud (techcrunch.com) 25

An anonymous reader quotes a report from TechCrunch: Google on Wednesday said it suspended 39.2 million advertiser accounts on its platform in 2024 -- more than triple the number from the previous year -- in its latest crackdown on ad fraud. By leveraging large language models (LLMs) and using signals such as business impersonation and illegitimate payment details, the search giant said it could suspend a "vast majority" of ad accounts before they ever served an ad.

Last year, Google launched over 50 LLM enhancements to improve its safety enforcement mechanisms across all its platforms. "While these AI models are very, very important to us and have delivered a series of impressive improvements, we still have humans involved throughout the process," said Alex Rodriguez, a general manager for Ads Safety at Google, in a virtual media roundtable. The executive told reporters that a team of over 100 experts was assembled across Google, including members from the Ads Safety team, the Trust and Safety division, and researchers from DeepMind.
"In total, Google said it blocked 5.1 billion ads last year and removed 1.3 billion pages," adds TechCrunch. "In comparison, it blocked over 5.5 billion ads and took action against 2.1 billion publisher pages in 2023. The company also restricted 9.1 billion ads last year, it said."
Google

Google To Phase Out Country Code Top-level Domains (blog.google) 47

Google has announced that it will begin phasing out country code top-level domains (ccTLDs) such as google.ng and google.com.br, redirecting all traffic to google.com. The change comes after improvements in Google's localization capabilities rendered these separate domains unnecessary.

Since 2017, Google has provided identical local search experiences whether users visited country-specific domains or google.com. The transition will roll out gradually over the coming months, and users may need to re-establish search preferences during the migration.
AI

Gemini App Rolling Out Veo 2 Video Generation For Advanced Users 2

Google is rolling out Veo 2 video generation in the Gemini app for Advanced subscribers, allowing users to create eight-second, 720p cinematic-style videos from text prompts. 9to5Google reports: Announced at the end of last year, Veo 2 touts "fluid character movement, lifelike scenes, and finer visual details across diverse subjects and styles," as well as "cinematic realism," thanks to an understanding of real-world physics and human motion. In Gemini, Veo 2 can create eight-second video clips at 720p resolution. Specifically, you'll get an MP4 download in a 16:9 landscape format. There's also the ability to share via a g.co/gemini/share/ link. To enter your prompt, select Veo 2 from the model dropdown on the web and mobile apps. Just describe the scene you want to create: "The more detailed your description, the more control you have over the final video." It takes 1-2 minutes for the clip to generate. [...]

On the safety front, each frame features a SynthID digital watermark. The feature is only available to Gemini Advanced subscribers ($19.99 per month), and there is a "monthly limit" on how many videos you can generate, with Google notifying users when they're close. It is rolling out globally -- in all languages supported by Gemini -- starting today and will be fully available in the coming weeks.
Programming

Figma Sent a Cease-and-Desist Letter To Lovable Over the Term 'Dev Mode' (techcrunch.com) 73

An anonymous reader quotes a report from TechCrunch: Figma has sent a cease-and-desist letter to popular no-code AI startup Lovable, Figma confirmed to TechCrunch. The letter tells Lovable to stop using the term "Dev Mode" for a new product feature. Figma, which also has a feature called Dev Mode, successfully trademarked that term last year, according to the U.S. Patent and Trademark Office. What's wild is that "dev mode" is a common term used in many products that cater to software programmers. It's like an edit mode. Products from giant companies -- Apple's iOS, Google's Chrome, Microsoft's Xbox -- have features formally called "developer mode" that then get nicknamed "dev mode" in reference materials.

Even "dev mode" itself is commonly used. For instance, Atlassian used it in products that pre-date Figma's trademark by years. And it's a common feature name in countless open source software projects. Figma tells TechCrunch that its trademark refers only to the shortcut "Dev Mode" -- not the full term "developer mode." Still, it's a bit like trademarking the term "bug" to refer to "debugging." Since Figma wants to own the term, it has little choice but to send cease-and-desist letters. (The letter, as many on X pointed out, was very polite, too.) If Figma doesn't defend the term, it could be deemed generic and the trademark would become unenforceable.

AI

Google DeepMind Is Hiring a 'Post-AGI' Research Scientist (404media.co) 61

An anonymous reader shares a report: None of the frontier AI research labs have presented any evidence that they are on the brink of achieving artificial general intelligence, no matter how they define that goal, but Google is already planning for a "Post-AGI" world by hiring a scientist for its DeepMind AI lab to research the "profound impact" that technology will have on society.

"Spearhead research projects exploring the influence of AGI on domains such as economics, law, health/wellbeing, AGI to ASI [artificial superintelligence], machine consciousness, and education," Google says in the first item on a list of key responsibilities for the job. Artificial superintelligence refers to a hypothetical form of AI that is smarter than the smartest human in all domains. This is self-explanatory, but just to be clear, when Google refers to "machine consciousness" it's referring to the science fiction idea of a sentient machine.

OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, Elon Musk, and other major and minor players in the AI industry are all working on AGI and have previously talked about the likelihood of humanity achieving AGI, when that might happen, and what the consequences might be. But the Google job listing shows that companies are now taking concrete steps for what comes after, or at least are continuing to signal that they believe it can be achieved.

Android

Android Phones Will Soon Reboot Themselves After Sitting Unused For 3 Days (arstechnica.com) 98

An anonymous reader shares a report: A silent update rolling out to virtually all Android devices will make your phone more secure, and all you have to do is not touch it for a few days. The new feature implements auto-restart of a locked device, which will keep your personal data more secure. It's coming as part of a Google Play Services update, though, so there's nothing you can do to speed along the process.

Google is preparing to release a new update to Play Services (v25.14), which brings a raft of tweaks and improvements to myriad system features. First spotted by 9to5Google, the update was officially released on April 14, but as with all Play Services updates, it could take a week or more to reach all devices. When 25.14 arrives, Android devices will see a few minor improvements, including prettier settings screens, improved connection with cars and watches, and content previews when using Quick Share.

AI

Publishers and Law Professors Back Authors in Meta AI Copyright Battle 14

Publishers and law professors have filed amicus briefs supporting authors who sued Meta over its AI training practices, arguing that the company's use of "thousands of pirated books" fails to qualify as fair use under copyright law.

The filings [PDF] in California's Northern District federal court came from copyright law professors, the International Association of Scientific, Technical and Medical Publishers (STM), Copyright Alliance, and Association of American Publishers. The briefs counter earlier support for Meta from the Electronic Frontier Foundation and IP professors.

While Meta's defenders pointed to the 2015 Google Books ruling as precedent, the copyright professors distinguished Meta's use, arguing Google Books told users something "about" books without "exploiting expressive elements," whereas AI models leverage the books' creative content.

"Meta's use wasn't transformative because, like the AI models, the plaintiffs' works also increased 'knowledge and skill,'" the professors wrote, warning of a "cascading effect" if Meta prevails. STM is specifically challenging Meta's data sources: "While Meta attempts to label them 'publicly available datasets,' they are only 'publicly available' because those perpetuating their existence are breaking the law."
China

Chinese Robotaxis Have Government Black Boxes, Approach US Quality (forbes.com) 43

An anonymous reader quotes a report from Forbes: Robotaxi development is moving at a fast pace in China, but we don't hear much about it in the USA, where the news focuses mostly on Waymo, with a bit about Zoox, Motional, May, trucking projects, and other domestic players. China has four main players with robotaxi service, dominated by Baidu (the Chinese Google). A recent session at last week's Ride AI conference in Los Angeles revealed some details about the different regulatory regime in China, and featured a report from a Chinese-American YouTuber who has taken on a mission to ride in the different vehicles.

Zion Maffeo, deputy general counsel for Pony.AI, provided some details on regulations in China. While Pony began with U.S. operations, its public operations are entirely in China, and it does only testing in the USA. Famously, it was one of the few companies to get a California "no safety driver" test permit, but then lost it after a crash, and later regained it. Chinese authorities at many levels keep a close watch over Chinese robotaxi companies. They must get approval for all levels of operation, which controls where they can test and operate, and how much supervision is needed. Operation begins with testing with a safety driver behind the wheel (as almost everywhere in the world), with eventual graduation to having the safety driver in the passenger seat but with an emergency stop. Then they move to having a supervisor in the back seat before they can test with nobody in the vehicle, usually limited to an area with simpler streets.

The big jump then comes when testing is allowed with nobody in the vehicle, but with full-time monitoring by a remote employee who can stop the vehicle. From there they can graduate to taking passengers, and then to expanding the service to more complex areas. Later they can go further and drop full-time remote monitoring, though there do need to be remote employees able to monitor and assist part time. Pony has a permit allowing it to have 3 vehicles per remote operator, and has one for 15 vehicles in process, but it declined comment on just how many vehicles it actually has per operator. Baidu also did not respond to queries on this. [...] In addition, Chinese jurisdictions require that the system in a car independently log any "interventions" by safety drivers in a sort of "black box" system. These reports are regularly given to regulators, though they are not made public. In California, companies must file an annual disengagement report, but they have considerable leeway on what they consider a disengagement, so the numbers can't be readily compared. Chinese companies have no discretion on what is reported, though they may notify authorities of a specific objection if they wish to declare that an intervention logged in their black box should not be counted.
On her first trip, YouTuber Sophia Tung found Baidu's 5th generation robotaxi to offer a poor experience in ride quality, wait time, and overall service. However, during a return trip she tried Baidu's 6th generation vehicle in Wuhan and rated it as the best among Chinese robotaxis, approaching the quality of Waymo.
AI

OpenAI Unveils Coding-Focused GPT-4.1 While Phasing Out GPT-4.5 13

OpenAI unveiled its GPT-4.1 model family on Monday, prioritizing coding capabilities and instruction following while expanding context windows to 1 million tokens -- approximately 750,000 words. The lineup includes standard GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano variants, all available via API but not ChatGPT.

The flagship model scores 54.6% on SWE-bench Verified, lagging behind Google's Gemini 2.5 Pro (63.8%) and Anthropic's Claude 3.7 Sonnet (62.3%) on the same software engineering benchmark, according to TechCrunch. However, it achieves 72% accuracy on Video-MME's long video comprehension tests -- a significant improvement over GPT-4o's 65.3%.

OpenAI simultaneously announced plans to retire GPT-4.5 -- its largest model, released just two months ago -- from API access by July 14. The company claims GPT-4.1 delivers "similar or improved performance" at substantially lower costs. Pricing follows a tiered structure: GPT-4.1 costs $2 per million input tokens and $8 per million output tokens, while GPT-4.1 nano -- OpenAI's "cheapest and fastest model ever" -- runs at just $0.10 per million input tokens.
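Per-token pricing makes the cost of a single request simple arithmetic. A quick sketch using only the flagship GPT-4.1 rates quoted above (the function name is hypothetical, and real billing may include additional factors such as cached-input discounts):

```javascript
// Cost of one GPT-4.1 API request at the quoted rates:
// $2 per million input tokens, $8 per million output tokens.
function requestCostUSD(inputTokens, outputTokens) {
  const PER_MILLION_INPUT = 2.0;
  const PER_MILLION_OUTPUT = 8.0;
  return (inputTokens / 1e6) * PER_MILLION_INPUT
       + (outputTokens / 1e6) * PER_MILLION_OUTPUT;
}

// e.g. a full 1M-token context plus a 10k-token reply:
// requestCostUSD(1_000_000, 10_000)  ≈ $2.08
```

Note how output tokens cost 4x input tokens, a common pattern in LLM API pricing since generation is more expensive than prompt processing.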

All models feature a June 2024 knowledge cutoff, providing more current contextual understanding than previous iterations.
Chrome

Chrome To Patch Decades-Old 'Browser History Sniffing' Flaw That Let Sites Peek At Your History (theregister.com) 34

Slashdot reader king*jojo shared this article from The Register: A 23-year-old side-channel attack for spying on people's web browsing histories will get shut down in the forthcoming Chrome 136, released last Thursday to the Chrome beta channel. At least that's the hope.

The privacy attack, referred to as browser history sniffing, involves reading the color values of web links on a page to see if the linked pages have been visited previously... Web publishers and third parties capable of running scripts have used this technique to present links on a web page to a visitor and then check how the visitor's browser set the color for those links on the rendered web page... The attack was mitigated about 15 years ago, though not effectively: other ways to check link color information beyond the getComputedStyle method were developed... Chrome 136, due to see stable channel release on April 23, 2025, "is the first major browser to render these attacks obsolete," explained Kyra Seevers, a Google software engineer, in a blog post.
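The mechanics of the original attack were simple: render links to URLs of interest, then read back the style the browser applied. A conceptual sketch (function name and color value are illustrative; modern browsers deliberately return the unvisited style from getComputedStyle, so this exact approach no longer works):

```javascript
// Classic :visited sniffing, circa 2002. getStyle stands in for
// window.getComputedStyle so the logic can run outside a browser;
// the color is the traditional default :visited purple.
const VISITED_COLOR = 'rgb(85, 26, 139)';

function sniffHistory(links, getStyle) {
  // Whichever links the browser painted in the visited color
  // correspond to pages in the user's history.
  return links.filter((link) => getStyle(link).color === VISITED_COLOR);
}

// In a real page, an attacker would call:
// sniffHistory([...document.links], getComputedStyle)
```

This is why the mitigation shipped 15 years ago made getComputedStyle lie about link colors; the newer attacks Chrome 136 addresses recover the same bit through side channels such as rendering timing.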

This is something of a turnabout for the Chrome team, which twice marked Chromium bug reports for the issue as "won't fix." David Baron, presently a Google software engineer who worked for Mozilla at the time, filed a Firefox bug report about the issue back on May 28, 2002... On March 9, 2010, Baron published a blog post outlining the issue and proposing some mitigations...

AI

AI Industry Tells US Congress: 'We Need Energy' (msn.com) 98

The Washington Post reports: The United States urgently needs more energy to fuel an artificial intelligence race with China that the country can't afford to lose, industry leaders told lawmakers at a House hearing on Wednesday. "We need energy in all forms," said Eric Schmidt, former CEO of Google, who now leads the Special Competitive Studies Project, a think tank focused on technology and security. "Renewable, nonrenewable, whatever. It needs to be there, and it needs to be there quickly." It was a nearly unanimous sentiment at the four-hour-plus hearing of the House Energy and Commerce Committee, which revealed bipartisan support for ramping up U.S. energy production to meet skyrocketing demand for energy-thirsty AI data centers.

The hearing showed how the country's AI policy priorities have changed under President Donald Trump. President Joe Biden's wide-ranging 2023 executive order on AI had sought to balance the technology's potential rewards with the risks it poses to workers, civil rights and national security. Trump rescinded that order within days of taking office, saying its "onerous" requirements would "threaten American technological leadership...." [Data center power consumption] is already straining power grids, as residential consumers compete with data centers that can use as much electricity as an entire city. And those energy demands are projected to grow dramatically in the coming years... [Former Google CEO Eric] Schmidt, whom the committee's Republicans called as a witness on Wednesday, told [committee chairman Brett] Guthrie that winning the AI race is too important to let environmental considerations get in the way...

Once the United States beats China to develop superintelligence, Schmidt said, AI will solve the climate crisis. And if it doesn't, he went on, China will become the world's sole superpower. (Schmidt's view that AI will become superintelligent within a decade is controversial among experts, some of whom predict the technology will remain limited by fundamental shortcomings in its ability to plan and reason.)

The industry's wish list also included "light touch" federal regulation, high-skill immigration and continued subsidies for chip development. Alexandr Wang, the young billionaire CEO of San Francisco-based Scale AI, said a growing patchwork of state privacy laws is hampering AI companies' access to the data needed to train their models. He called for a federal privacy law that would preempt state regulations and prioritize innovation.

Some committee Democrats argued that cuts to scientific research and renewable energy will actually hamper America's AI competitiveness, according to the article. "But few questioned the premise that the U.S. is locked in an existential struggle with China for AI supremacy."

"That stark outlook has nearly coalesced into a consensus on Capitol Hill since China's DeepSeek chatbot stunned the AI industry with its reasoning skills earlier this year."
Google

Google DeepMind Has a Weapon in the AI Talent Wars: Aggressive Noncompete Rules (businessinsider.com) 56

The battle for AI talent is so hot that Google would rather give some employees a paid one-year vacation than let them work for a competitor. From a report: Some Google DeepMind staff in the UK are subject to noncompete agreements that prevent them from working for a competitor for up to 12 months after they finish work at Google, according to four former employees with direct knowledge of the matter who asked to remain anonymous because they were not permitted to share these details with the press.

Aggressive noncompetes are one tool tech companies wield to retain a competitive edge in the AI wars, which show no sign of slowing down as companies launch new bleeding-edge models and products at a rapid clip. When an employee signs one, they agree not to work for a competing company for a certain period of time. Google DeepMind has put some employees with a noncompete on extended garden leave. These employees are still paid by DeepMind but no longer work for it for the duration of the noncompete agreement.

Several factors, including a DeepMind employee's seniority and how critical their work is to the company, determine the length of noncompete clauses, those people said. Two of the former staffers said six-month noncompetes are common among DeepMind employees, including for individual contributors working on Google's Gemini AI models. There have been cases where more senior researchers have received yearlong stipulations, they said.

Google

Google Maps is Launching Tools To Help Cities Analyze Infrastructure and Traffic (theverge.com) 9

Google is opening up its Google Maps Platform data so that cities, developers, and other business decision makers can more easily access information about things like infrastructure and traffic. The Verge: Google is integrating new datasets for Google Maps Platform directly into BigQuery, the tech giant's fully managed data analytics service, for the first time. This should make it easier for people to access data from Google Maps platform products, including Imagery Insights, Roads Management Insights, and Places Insights.
Google

Samsung and Google Partner To Launch Ballie Home Robot with Built-in Projector (engadget.com) 25

Samsung Electronics and Google Cloud are jointly entering the consumer robotics market with Ballie, a yellow, soccer-ball-shaped robot equipped with a video projector and powered by Google's Gemini AI models. First previewed in 2020, the long-delayed device will finally launch this summer in the US and South Korea. The mobile companion uses small wheels to navigate homes autonomously and integrates with Samsung's SmartThings platform to control smart home devices.

Running on Samsung's Tizen operating system, Ballie can manage calendars, answer questions, handle phone calls, and project video content from services including YouTube and Netflix. Samsung EVP Jay Kim described it as a "completely new Ballie" compared to the 2020 version, with Google Cloud integration being the most significant change. The robot leverages Gemini for understanding commands, searching the web, and processing visual data for navigation, while using Samsung's AI models for accessing personal information.
Social Networks

The Tumblr Revival is Real - and Gen Z is Leading the Charge (fastcompany.com) 35

"Gen Z is rediscovering Tumblr — a chaotic, cozy corner of the internet untouched by algorithmic gloss and influencer overload..." writes Fast Company, "embracing the platform as a refuge from an internet saturated with influencers and algorithm fatigue." Thanks to Gen Z, the site has found new life. As of 2025, Gen Z makes up 50% of Tumblr's active monthly users and accounts for 60% of new sign-ups, according to data shared with Business Insider's Amanda Hoover, who recently reported on the platform's resurgence. User numbers spiked in January during the near-ban of TikTok and jumped again last year when Brazil temporarily banned X. In response, Tumblr users launched dedicated communities to archive and share their favorite TikToks...

To keep up with the momentum, Tumblr introduced Reddit-style Communities in December, letting users connect over shared interests like photography and video games. In January, it debuted Tumblr TV — a TikTok-like feature that serves as both a GIF search engine and a short-form video platform. But perhaps Tumblr's greatest strength is that it isn't TikTok or Facebook. Currently the 10th most popular social platform in the U.S., according to analytics firm Similarweb, Tumblr is dwarfed by giants like Instagram and X. For its users, though, that's part of the appeal.

First launched in 2007, Tumblr peaked at over 100 million users in 2014, according to the article. Trends like Occupy Wall Street had been born on Tumblr, notes Business Insider, calling the blogging platform "Gen Z's safe space... as the rest of the social internet has become increasingly commodified, polarized, and dominated by lifestyle influencers." Tumblr was also "one of the most hyped startups in the world before fading into obsolescence — bought by Yahoo for $1.1 billion in 2013... then acquired by Verizon, and later offloaded for fractions of pennies on the dollar in a distressed sale.

"That same Tumblr, a relic of many millennials' formative years, has been having a moment among Gen Z..." "Gen Z has this romanticism of the early-2000s internet," says Amanda Brennan, an internet librarian who worked at Tumblr for seven years, leaving her role as head of content in 2021... Part of the reason young people are hanging out on old social platforms is that there's nowhere new to go. The tech industry is evolving at a slower pace than it was in the 2000s, and there's less room for disruption. Big Tech has a stranglehold on how we socialize. That leaves Gen Z to pick up the scraps left by the early online millennials and attempt to craft them into something relevant. They love Pinterest (founded in 2010) and Snapchat (2011), and they're trying out digital point-and-shoot cameras and flip phones for an early-2000s aesthetic — and learning the valuable lesson that sometimes we look better when blurrier.

More Gen Zers and millennials are signing up for Yahoo. Napster, surprising many people with its continued existence, just sold for $207 million. The trend is fueled by nostalgia for Y2K aesthetics and a longing for a time when people could make mistakes on the internet and move past them. The pandemic also brought more Gen Z users to Tumblr...

And Tumblr still works much like an older internet, where people have more control over what they see and rely less on algorithms. "You curate your own stuff; it takes a little bit of work to put everything in place, but when it's working, you see the content you want to see," says Fjodor Everaerts, a 26-year-old in Belgium who has made some 250,000 posts since he joined Tumblr when he was 14... Under Automattic, Tumblr is finally in the home that serves it, [says Ari Levine, the head of brand partnerships at Tumblr]. "We've had ups and downs along the way, but we're in the most interesting position and place that we've been in 18 years," he says... And following media companies (including Business Insider) and social platforms like Reddit, in 2024 Automattic made a deal with OpenAI and Midjourney to allow the systems to train on Tumblr posts.


"The social internet is fractured," the article argues. ("Millennials are running Reddit. Gen Xers and Baby Boomers have a home on Facebook. Bluesky, one of the new X alternatives, has a tangible elder-millennial/Gen X vibe. Gen Zers have created social apps like BeReal and the Myspace-inspired Noplace, but they've so far generated more hype than influence....")

But in a world where megaplatforms "flatten our online experiences and reward content that fits a mold," the article suggests, "smaller communities can enrich them."
EU

As Stocks (and Cryptocurrencies) Drop After Tariffs, France Considers Retaliating Against US Big Tech (politico.eu) 277

"U.S. stock market futures plunged on Sunday evening," reports Yahoo Finance, "after the new U.S. tariff policy began collecting duties over the weekend..."

The EU will vote on $28 billion in retaliatory tariffs Wednesday, Reuters reports. (And those tariffs will be approved unless "a qualified majority of 15 EU members representing 65% of the EU's population oppose it. They would enter force in two stages, a smaller part on April 15 and the rest a month later.")

But France's Economy and Finance Minister has an idea: more strictly regulating how data is used by America's Big Tech companies. Politico EU reports: "We may strengthen certain administrative requirements or regulate the use of data," Lombard said in an interview with Le Journal Du Dimanche. He added that another option could be to "tax certain activities," without being more specific.

A French government spokesperson already said last week that the EU's retaliation against U.S. tariffs could include "digital services that are currently not taxed." That suggestion was fiercely rejected by Ireland, which hosts the European headquarters of several U.S. Big Tech firms...

Technology is seen as a possible area for Europe to retaliate. The European Union has a €157 billion trade surplus in goods, which means it exports more than it imports, but it runs a deficit of €109 billion in services, including digital services. Big Tech giants like Apple, Microsoft, Amazon, Google and Meta dominate many parts of the market in Europe.

Amid the market turmoil, what about cryptocurrencies, often seen as a "proxy" for the level of risk felt by investors? In the 10 weeks after October 6, the price of Bitcoin skyrocketed 67% to $106,490 by December 10th. But by January 30th it had started dropping again, and now sits at $77,831 — still up 22% for the last six months, but down nearly 27% over the last 10 weeks. Yet even after all that volatility, Bitcoin suddenly fell again more than 6% on Sunday, reports Reuters, "as markets plunged amid tariff tensions. Ether, the second largest cryptocurrency, fell more than 10% on Sunday."
AI

Microsoft Uses AI To Find Flaws In GRUB2, U-Boot, Barebox Bootloaders (bleepingcomputer.com) 57

Slashdot reader zlives shared this report from BleepingComputer: Microsoft used its AI-powered Security Copilot to discover 20 previously unknown vulnerabilities in the GRUB2, U-Boot, and Barebox open-source bootloaders.

GRUB2 (GRand Unified Bootloader) is the default boot loader for most Linux distributions, including Ubuntu, while U-Boot and Barebox are commonly used in embedded and IoT devices. Microsoft discovered eleven vulnerabilities in GRUB2, including integer and buffer overflows in filesystem parsers, command flaws, and a side-channel in cryptographic comparison. Additionally, nine buffer overflows in parsing SquashFS, EXT4, CramFS, JFFS2, and symlinks were discovered in U-Boot and Barebox, which require physical access to exploit.

The newly discovered flaws impact devices relying on UEFI Secure Boot, and if the right conditions are met, attackers can bypass security protections to execute arbitrary code on the device. While exploiting these flaws would likely need local access to devices, previous bootkit attacks like BlackLotus achieved this through malware infections.

Microsoft titled its blog post "Analyzing open-source bootloaders: Finding vulnerabilities faster with AI." (And they do note that Microsoft disclosed the discovered vulnerabilities to the GRUB2, U-Boot, and Barebox maintainers and "worked with the GRUB2 maintainers to contribute fixes... GRUB2 maintainers released security updates on February 18, 2025, and both the U-boot and Barebox maintainers released updates on February 19, 2025.")

Microsoft adds that during its initial research, using Security Copilot "saved our team approximately a week's worth of time that would have otherwise been spent manually reviewing the content. Through a series of prompts, we identified and refined security issues, ultimately uncovering an exploitable integer overflow vulnerability. Copilot also assisted in finding similar patterns in other files, ensuring comprehensive coverage and validation of our findings..."

As AI continues to emerge as a key tool in the cybersecurity community, Microsoft emphasizes the importance of vendors and researchers maintaining their focus on information sharing. This approach ensures that AI's advantages in rapid vulnerability discovery, remediation, and accelerated security operations can effectively counter malicious actors' attempts to use AI to scale common attack tactics, techniques, and procedures (TTPs).

This week Google also announced Sec-Gemini v1, "a new experimental AI model focused on advancing cybersecurity AI frontiers."
AI

Open Source Coalition Announces 'Model-Signing' with Sigstore to Strengthen the ML Supply Chain (googleblog.com) 10

The advent of LLMs and machine learning-based applications "opened the door to a new wave of security threats," argues Google's security blog. (Including model and data poisoning, prompt injection, prompt leaking and prompt evasion.)

So as part of the Linux Foundation's nonprofit Open Source Security Foundation, and in partnership with NVIDIA and HiddenLayer, Google's Open Source Security Team on Friday announced the first stable model-signing library (hosted at PyPI.org), with digital signatures letting users verify that the model used by their application "is exactly the model that was created by the developers," according to a post on Google's security blog. [S]ince models are an uninspectable collection of weights (sometimes also with arbitrary code), an attacker can tamper with them and achieve significant impact to those using the models. Users, developers, and practitioners need to examine an important question during their risk assessment process: "can I trust this model?"

Since its launch, Google's Secure AI Framework (SAIF) has created guidance and technical solutions for creating AI applications that users can trust. A first step in achieving trust in the model is to permit users to verify its integrity and provenance, to prevent tampering across all processes from training to usage, via cryptographic signing... [T]he signature would have to be verified when the model gets uploaded to a model hub, when the model gets selected to be deployed into an application (embedded or via remote APIs) and when the model is used as an intermediary during another training run. Assuming the training infrastructure is trustworthy and not compromised, this approach guarantees that each model user can trust the model...

The average developer, however, would not want to manage keys and rotate them on compromise. These challenges are addressed by using Sigstore, a collection of tools and services that make code signing secure and easy. By binding an OpenID Connect token to a workload or developer identity, Sigstore alleviates the need to manage or rotate long-lived secrets. Furthermore, signing is made transparent so signatures over malicious artifacts could be audited in a public transparency log, by anyone. This ensures that split-view attacks are not possible, so any user would get the exact same model. These features are why we recommend Sigstore's signing mechanism as the default approach for signing ML models.

Today the OSS community is releasing the v1.0 stable version of our model signing library as a Python package supporting Sigstore and traditional signing methods. This model signing library is specialized to handle the sheer scale of ML models (which are usually much larger than traditional software components), and handles signing models represented as a directory tree. The package provides CLI utilities so that users can sign and verify model signatures for individual models. The package can also be used as a library which we plan to incorporate directly into model hub upload flows as well as into ML frameworks.
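The core idea of signing a model "represented as a directory tree" can be sketched in a few lines. This is a conceptual illustration only, not the actual model-signing package's API: every file in the model directory is hashed into a canonical manifest, and the manifest digest is what gets signed. A real deployment would use Sigstore's keyless signing bound to an OIDC identity; here a shared HMAC key stands in so the example stays self-contained.

```python
import hashlib
import hmac
from pathlib import Path

def manifest_digest(model_dir: str) -> bytes:
    """Digest over sorted (relative path, file hash) pairs for the tree."""
    h = hashlib.sha256()
    root = Path(model_dir)
    for path in sorted(root.rglob("*")):
        if path.is_file():
            h.update(path.relative_to(root).as_posix().encode())
            h.update(hashlib.sha256(path.read_bytes()).digest())
    return h.digest()

def sign_model(model_dir: str, key: bytes) -> bytes:
    """Authenticate the whole directory tree via its manifest digest."""
    return hmac.new(key, manifest_digest(model_dir), "sha256").digest()

def verify_model(model_dir: str, key: bytes, signature: bytes) -> bool:
    """True only if no file was added, removed, renamed, or modified."""
    return hmac.compare_digest(sign_model(model_dir, key), signature)
```

Because the digest covers file paths as well as contents, tampering with any weight file (or swapping files around) invalidates the signature, which is the property a model hub would check at upload and deployment time.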

"We can view model signing as establishing the foundation of trust in the ML ecosystem..." the post concludes (adding "We envision extending this approach to also include datasets and other ML-related artifacts.") Then, we plan to build on top of signatures, towards fully tamper-proof metadata records, that can be read by both humans and machines. This has the potential to automate a significant fraction of the work needed to perform incident response in case of a compromise in the ML world...

To shape the future of building tamper-proof ML, join the Coalition for Secure AI, where we are planning to work on building the entire trust ecosystem together with the open source community. In collaboration with multiple industry partners, we are starting up a special interest group under CoSAI for defining the future of ML signing and including tamper-proof ML metadata, such as model cards and evaluation results.

Slashdot Top Deals