Perl

Is Perl the World's 10th Most Popular Programming Language? (i-programmer.info) 79

TIOBE attempts to calculate programming language popularity using the number of skilled engineers, courses, and third-party vendors.

And the eight most popular languages in September's rankings haven't changed since last month:

1. Python
2. C++
3. C
4. Java
5. C#
6. JavaScript
7. Visual Basic
8. Go

But by TIOBE's ranking, Perl is still the #10 most-popular programming language in September (dropping from #9 in August). "One year ago Perl was at position 27 and now it suddenly pops up at position 10 again," marvels TIOBE CEO Paul Jansen. The technical reason why Perl is rated this high is because of its huge number of books on Amazon. It has 4 times more books listed than for instance PHP, or 7 times more books than Rust. The underlying "real" reason for Perl's increase of popularity is unknown to me. The only possibility I can think of is that Perl 5 is now gradually considered to become the real Perl... Perl 6/Raku is at position 129 of the TIOBE index, thus playing no role at all in the programming world. Perl 5 on the other hand is releasing more often recently, thus gaining attention.
An article at the i-Programmer blog thinks Perl's resurgence could be from its text processing capabilities: Even in this era of AI, everything is still governed by text formats; text is still the King. XML, JSON calling APIs, YAML, Markdown, log files... That means that there's still need to process it, transform it, clean it, extract from it. Perl with its first-class-citizen regular expressions, the wealth of text manipulation libraries up on CPAN and its full Unicode support of all the latest standards, was and is still the best. Simply there's no other that can match Perl's text processing capabilities.
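For a sense of the kind of text munging the article means, here is a minimal sketch of pulling structured fields out of a single log line. The log format and field names are invented for illustration, and it's shown in Python rather than Perl; in Perl the same regular expression would typically live in a one-line m// match:

    import re

    # Hypothetical Apache-style access-log line (invented for illustration).
    line = '203.0.113.7 - - [09/Sep/2025:12:28:01 +0000] "GET /index.html HTTP/1.1" 200 5123'

    # One regular expression extracts the client IP, timestamp, request, status, and size.
    pattern = re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[(?P<when>[^\]]+)\] "(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\d+)'
    )

    match = pattern.match(line)
    if match:
        fields = match.groupdict()
        print(fields["ip"], fields["status"], fields["size"])  # 203.0.113.7 200 5123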
They also cite Perl's backing by the open source community, and its "getting a 'proper' OOP model in the last couple of years... People just don't know what Perl is capable of and instead prefer to be victims of FOMO ephemeral trends, chasing behind the new and shiny."

Perl creator Larry Wall answered questions from Slashdot's readers in 2016. So I'd be curious to hear from Slashdot's readers about Perl today. (Share your experiences in the comments if you're still using Perl -- or Raku...)

Perl's drop to #10 means Delphi/Object Pascal rises one rank, growing from 1.82% in August to 2.26% in September to claim September's #9 spot. "At number 11 and 1.86%, SQL is quite close to entering the top 10 again," notes TechRepublic. (SQL fell to #12 in June, which the site speculated was due to "the increased use of NoSQL databases for AI applications.")

But TechRepublic adds that the #1 most popular programming language (according to TIOBE) is still Python: Perl sits at 2.03% in TIOBE's proprietary ranking system in September, up from 0.64% in January. Last year, Perl held the 27th position... Python's unstoppable rise dipped slightly from 26.14% in August to 25.98% in September. Python is still well ahead of every other language on the index.
AI

The Software Engineers Paid To Fix Vibe Coded Messes (404media.co) 51

"Freelance developers and entire companies are making a business out of fixing shoddy vibe coded software," writes 404 Media, interviewing one of the "dozens of people on Fiverr... now offering services specifically catering to people with shoddy vibe coded projects."

Hamid Siddiqi, who offers to "review, fix your vibe code" on Fiverr, told 404 Media that "currently, I work with around 15-20 clients regularly, with additional one-off projects throughout the year." ("Siddiqi said common issues he fixes in vibe coded projects include inconsistent UI/UX design in AI-generated frontends, poorly optimized code that impacts performance, misaligned branding elements, and features that function but feel clunky or unintuitive," as well as work on color schemes, animations, and layouts.)

And other coders are also pursuing the "vibe coded mess" market: Swatantra Sohni, who started VibeCodeFixers.com, a site for people with vibe coded projects who need help from experienced developers to fix or finish their projects, says that almost 300 experienced developers have posted their profiles to the site. He said so far VibeCodeFixers.com has only connected 30-40 vibe code projects with fixers, but that he hasn't done anything to promote the service and at the moment is focused on adding as many software developers to the platform as possible...

"Most of these vibe coders, either they are product managers or they are sales guys, or they are small business owners, and they think that they can build something," Sohni told me. "So for them it's more for prototyping..." Another big issue Sohni identified is "credit burn," meaning the money vibe coders waste on AI usage fees in the final 10-20 percent stage of developing the app, when adding new features breaks existing features.

Sohni told me he thinks vibe coding is not going anywhere, but neither are human developers. "I feel like the role [of human developers] would be slightly limited, but we will still need humans to keep this AI on the leash," he said.

The article also notes that established software development companies like Ulam Labs now say "we clean up after vibe coding. Literally."

"Built something fast? Now it's time to make it solid," Ulam Labs pitches on its site," suggesting that for their potential customers "the tech debt is holding you back: no tests, shaky architecture, CI/CD is a dream, and every change feels like defusing a bomb. That's where we come in."
AI

Developers Joke About 'Coding Like Cavemen' As AI Service Suffers Major Outage (arstechnica.com) 28

An anonymous reader quotes a report from Ars Technica: On Wednesday afternoon, Anthropic experienced a brief but complete service outage that took down its AI infrastructure, leaving developers unable to access Claude.ai, the API, Claude Code, or the management console for around half an hour. The outage affected all three of Anthropic's main services simultaneously, with the company posting at 12:28 pm Eastern that "APIs, Console, and Claude.ai are down. Services will be restored as soon as possible." As of press time, the services appear to be restored. The disruption, though lasting only about 30 minutes, quickly took the top spot on tech link-sharing site Hacker News for a short time and inspired immediate reactions from developers who have become increasingly reliant on AI coding tools for their daily work. "Everyone will just have to learn how to do it like we did in the old days, and blindly copy and paste from Stack Overflow," joked one Hacker News commenter. Another user recalled a joke from a previous AI outage: "Nooooo I'm going to have to use my brain again and write 100% of my code like a caveman from December 2024."

The most recent outage came at an inopportune time, affecting developers across the US who have integrated Claude into their workflows. One Hacker News user observed: "It's like every other day, the moment US working hours start, AI (in my case I mostly use Anthropic, others may be better) starts dying or at least getting intermittent errors. In EU working hours there's rarely any outages." Another user also noted this pattern, saying that "early morning here in the UK everything is fine, as soon as most of the US is up and at it, then it slowly turns to treacle." While some users criticized Anthropic for reliability issues in recent months, the company's status page acknowledged the issue within 39 minutes of the initial reports, and by 12:55 pm Eastern announced that a fix had been implemented and that the company's teams were monitoring the results.

Social Networks

Sam Altman Says Bots Are Making Social Media Feel 'Fake' (techcrunch.com) 82

An anonymous reader quotes a report from TechCrunch: X enthusiast and Reddit shareholder Sam Altman had an epiphany on Monday: Bots have made it impossible to determine whether social media posts are really written by humans, he posted. The realization came while reading (and sharing) some posts from the r/Claudecode subreddit, which were praising OpenAI Codex. OpenAI launched the software programming service that takes on Anthropic's Claude Code in May. Lately, that subreddit has been so filled with posts from self-proclaimed Code users announcing that they moved to Codex that one Reddit user even joked: "Is it possible to switch to codex without posting a topic on Reddit?"

This left Altman wondering how many of those posts were from real humans. "I have had the strangest experience reading this: I assume it's all fake/bots, even though in this case I know codex growth is really strong and the trend here is real," he confessed on X. He then live-analyzed his reasoning. "I think there are a bunch of things going on: real people have picked up quirks of LLM-speak, the Extremely Online crowd drifts together in very correlated ways, the hype cycle has a very 'it's so over/we're so back' extremism, optimization pressure from social platforms on juicing engagement and the related way that creator monetization works, other companies have astroturfed us so i'm extra sensitive to it, and a bunch more (including probably some bots)."

[...] Altman also throws a dig at the incentives when social media sites and creators rely on engagement to make money. Fair enough. But then Altman confesses that one of the reasons he thinks the pro-OpenAI posts in this subreddit might be bots is because OpenAI has also been "astroturfed." That typically involves posts by people or bots paid for by the competitor, or paid by some third-degree contractor, giving the competitor plausible deniability. [...] Altman surmises, "The net effect is somehow AI twitter/AI Reddit feels very fake in a way it really didn't a year or two ago." If that's true, whose fault is it? GPT has led models to become so good at writing that LLMs have become a plague not just to social media sites (which have always had a bot problem) but to schools, journalism, and the courts.

AI

AI Tool Usage 'Correlates Negatively' With Performance in CS Class, Estonian Study Finds (phys.org) 63

How do AI tools impact college students? 231 students in an object-oriented programming class participated in a study at Estonia's University of Tartu (conducted by an associate professor of informatics and a recently graduated master's student). They were asked how frequently they used AI tools and for what purposes. The data were analyzed using descriptive statistics, and Spearman's rank correlation analysis was performed to examine the strength of the relationships. The results showed that students mainly used AI assistance for solving programming tasks — for example, debugging code and understanding examples. A surprising finding, however, was that more frequent use of chatbots correlated with lower academic results. One possible explanation is that struggling students were more likely to turn to AI. Nevertheless, the finding suggests that unguided use of AI and over-reliance on it may in fact hinder learning.
The researchers say their report provides "quantitative evidence that frequent AI use does not necessarily translate into better academic outcomes in programming courses."
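For readers unfamiliar with the method: Spearman's rank correlation measures how consistently two variables rise or fall together, using ranks rather than raw values. A minimal sketch of that kind of analysis, with made-up numbers rather than the study's data:

    from scipy.stats import spearmanr

    # Hypothetical data, NOT from the Tartu study: weekly AI-assistant use
    # and final course score for ten students.
    ai_use_per_week = [0, 1, 1, 2, 3, 4, 5, 6, 7, 9]
    course_score    = [88, 91, 75, 82, 70, 74, 65, 68, 60, 55]

    rho, p_value = spearmanr(ai_use_per_week, course_score)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
    # A negative rho would mirror the study's finding that heavier AI use
    # correlated with lower results -- correlation, of course, not causation.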

Other results from the survey:
  • 47 respondents (20.3%) never used AI assistants in this course.
  • Only 3.9% of the students reported using AI assistants weekly, "suggesting that reliance on such tools is still relatively low."
  • "Few students feared plagiarism, suggesting students don't link AI use to it — raising academic concerns."

Open Source

Rust Foundation Announces 'Innovation Lab' to Support Impactful Rust Projects (webpronews.com) 30

Announced this week at RustConf 2025 in Seattle, the new Rust Innovation Lab will offer open source projects "the opportunity to receive fiscal sponsorship from the Rust Foundation, including governance, legal, networking, marketing, and administrative support."

And their first project will be the TLS library Rustls (for cryptographic security), which they say "demonstrates Rust's ability to deliver both security and performance in one of the most sensitive areas of modern software infrastructure." Choosing Rustls "underscores the lab's focus on infrastructure-critical tools, where reliability is paramount," explains WebProNews. But "Looking ahead, the foundation plans to expand the lab's portfolio, inviting applications from promising Rust initiatives. This could catalyze innovations in areas like embedded systems and blockchain, where Rust's efficiency shines."

Their article notes that the Rust Foundation "sees the lab as a way to accelerate innovation while mitigating the operational burdens that often hinder open-source development." [T]he Foundation aims to provide a stable, neutral environment for select Rust endeavors, complete with governance oversight, legal and administrative backing, and fiscal sponsorship... At its core, the Rust Innovation Lab addresses a growing need within the developer community for structured support amid Rust's rising adoption in sectors like systems programming and web infrastructure. By offering a "home" for projects that might otherwise struggle with sustainability, the lab ensures continuity and scalability. This comes at a time when Rust's memory safety features are drawing attention from major tech firms, including those in cloud computing and cybersecurity, as a counter to vulnerabilities plaguing languages like C++...

Industry observers note that such fiscal sponsorship could prove transformative, enabling projects to secure funding from diverse sources while maintaining independence. The Rust Foundation's involvement ensures compliance with best practices, potentially attracting more corporate backers wary of fragmented open-source efforts... By providing a neutral venue, the foundation aims to prevent the pitfalls seen in other ecosystems, such as project abandonment due to maintainer burnout or legal entanglements... For industry insiders, the Rust Innovation Lab represents a strategic evolution, potentially accelerating Rust's integration into mission-critical systems.

Medicine

Common Pesticide Linked To Widespread Brain Abnormalities In Children (sciencealert.com) 46

alternative_right shares a report from ScienceAlert: The insecticide chlorpyrifos is a powerful tool for controlling various pests, making it one of the most widely used pesticides during the latter half of the 20th century. Like many pesticides, however, chlorpyrifos lacks precision. In addition to harming non-target insects like bees, it has also been linked to health risks for much larger animals -- including us. Now, a new US study suggests those risks may begin before birth. Humans exposed to chlorpyrifos prenatally are more likely to exhibit structural brain abnormalities and reduced motor functions in childhood and adolescence.

Progressively higher prenatal exposure to chlorpyrifos was associated with incrementally greater deviations in brain structure, function, and metabolism in children and teens, the researchers found, along with poorer measures of motor speed and motor programming. [...] This supports previous research linking chlorpyrifos with impaired cognitive function and brain development, but these findings are the first evidence of widespread and long-lasting molecular, cellular, and metabolic effects in the brain.
"The disturbances in brain tissue and metabolism that we observed with prenatal exposure to this one pesticide were remarkably widespread throughout the brain," says first author Bradley Peterson, a developmental neuroscientist at the University of Southern California's Keck School of Medicine. Senior author Virginia Rauh added: "It is vitally important that we continue to monitor the levels of exposure in potentially vulnerable populations, especially in pregnant women in agricultural communities, as their infants continue to be at risk."

The report notes that the EPA banned residential use of chlorpyrifos in 2001 but the pesticide is still used in agriculture around the world.

The findings have been published in the journal JAMA Neurology.
Python

Python Surges in Popularity. And So Does Perl (techrepublic.com) 80

Last month, Python "reached the highest ranking a programming language ever had in the TIOBE index," according to TIOBE CEO Paul Jansen.

"We thought Python couldn't grow any further, but AI code assistants let Python take yet another step forward." According to recent studies of Stanford University (Yegor Denisov-Blanch), AI code assistants such as Microsoft Copilot, Cursor or Google Gemini Code Assist are 20% more effective if used for popular programming languages. The reason for this is obvious: there is more code for these languages available to train the underlying models. This trend is visible in the TIOBE index as well, where we see a consolidation of languages at the top. Why would you start to learn a new obscure language for which no AI assistance is available? This is the modern way of saying that you don't want to learn a new language that is hardly documented and/or has too few libraries that can help you.
TIOBE's "Programming Community Index" attempts to calculate the popularity of languages using the number of skilled engineers, courses, and third-party vendors. It nows gives Python a 26.14% rating, which TechRepublic notes "is well ahead of the next two programming languages on this month's leaderboard: C++ is at 9.18% and C is 9.03%." But the first top six languages haven't changed since last year...
  1. Python
  2. C++
  3. C
  4. Java
  5. C#
  6. JavaScript

Since August of 2024 SQL has dropped from its #7 rank down to #12 (meaning Visual Basic and Go each rise up one rank from their position a year ago, into the #7 and #8 positions).

In the last year Perl has risen from the #25 position to #9, beating out Delphi/Object Pascal at #10, and Fortran at #11 (last year's #10). TIOBE CEO Jansen "told TechRepublic in an email that many people were asking why Perl was becoming more popular, but he didn't have a definitive answer. He said he double-checked the underlying data and found the increase to be accurate, though the reason for the shift remains unclear."


Businesses

'Goodbye, $165,000 Tech Jobs. Student Coders Seek Work At Chipotle.' (nytimes.com) 178

theodp writes: The New York Times reports from the CS grad job-seeking trenches: Growing up near Silicon Valley, Manasi Mishra remembers seeing tech executives on social media urging students to study computer programming. "The rhetoric was, if you just learned to code, work hard and get a computer science degree, you can get six figures for your starting salary," Ms. Mishra, now 21, recalls hearing as she grew up in San Ramon, Calif.

Those golden industry promises helped spur Ms. Mishra to code her first website in elementary school, take advanced computing in high school and major in computer science in college. But after a year of hunting for tech jobs and internships, Ms. Mishra graduated from Purdue University in May without an offer. "I just graduated with a computer science degree, and the only company that has called me for an interview is Chipotle," Ms. Mishra said in a get-ready-with-me TikTok video this summer that has since racked up more than 147,000 views.

Some graduates described feeling caught in an A.I. "doom loop." Many job seekers now use specialized A.I. tools like Simplify to tailor their resumes to specific jobs and autofill application forms, enabling them to quickly apply to many jobs. At the same time, companies inundated with applicants are using A.I. systems to automatically scan resumes and reject candidates.

Businesses

GitHub No Longer Independent at Microsoft As CEO Steps Down (axios.com) 28

GitHub CEO Thomas Dohmke announced Monday he will step down to pursue entrepreneurial endeavors, with Microsoft restructuring the subsidiary's leadership rather than appointing a direct replacement.

Microsoft developer division head Julia Liuson will oversee GitHub's revenue, engineering and support operations, while chief product officer Mario Rodriguez will report to Microsoft AI platform VP Asha Sharma.
Programming

Rust's Annual Tech Report: Trusted Publishing for Packages and a C++/Rust Interop Strategy (rustfoundation.org) 25

Thursday saw the release of Rust 1.89.0. But this week the Rust Foundation also released its second comprehensive annual technology report.

A Rust Foundation announcement shares some highlights:

- Trusted Publishing [GitHub Actions authentication using cryptographically signed tokens] fully launched on crates.io, enhancing supply chain security and streamlining workflows for maintainers.

- Major progress on crate signing infrastructure using The Update Framework (TUF), including three full repository implementations and stakeholder consensus.

- Integration of the Ferrocene Language Specification (FLS) into the Rust Project, marking a critical step toward a formal Rust language specification [and "laying the groundwork for broader safety certification and formal tooling."]

- 75% reduction in CI infrastructure costs while maintaining contributor workflow stability. ["All Rust repositories are now managed through Infrastructure-as-Code, improving maintainability and security."]

- Expansion of the Safety-Critical Rust Consortium, with multiple international meetings and advances on coding guidelines aligned with safety standards like MISRA. ["The consortium is developing practical coding guidelines, aligned tooling, and reference materials to support regulated industries — including automotive, aerospace, and medical devices — adopting Rust."]

- Direct engagement with ISO C++ standards bodies and collaborative Rust-C++ exploration... The Foundation finalized its strategic roadmap, participated in ISO WG21 meetings, and initiated cross-language tooling and documentation planning. These efforts aim to unlock Rust adoption across legacy C++ environments without sacrificing safety.

The Rust Foundation also acknowledges continued funding from OpenSSF's Alpha-Omega Project and "generous infrastructure donations from organizations like AWS, GitHub, and Mullvad VPN" to the Foundation's Security Initiative, which enabled advances like adding GitHub Secret Scanning and automated incident response to "Trusted Publishing" and integrating vulnerability-surfacing capabilities into crates.io.

There was another announcement this week. In November AWS and the Rust Foundation crowdsourced "an effort to verify the Rust standard library" — and it's now resulted in a new formal verification tool called "Efficient SMT-based Context-Bounded Model Checker" (or ESBMC). This winning contribution adds ESBMC — a state-of-the-art bounded model checker — to the suite of tools used to analyze and verify Rust's standard library. By integrating through Goto-Transcoder, they enabled ESBMC to operate seamlessly in the Rust verification workflow, significantly expanding the scope and flexibility of verification efforts...

This achievement builds on years of ongoing collaboration across the Rust and formal verification communities... The collaboration has since expanded. In addition to verifying the Rust standard library, the team is exploring the use of formal methods to validate automated C-to-Rust translations, with support from AWS. This direction, highlighted by AWS Senior Principal Scientist Baris Coskun and celebrated by the ESBMC team in a recent LinkedIn post, represents an exciting new frontier for Rust safety and verification tooling.

Programming

'Hour of Code' Announces It's Now Evolving Into 'Hour of AI' (hourofcode.com) 35

Last month Microsoft pledged $4 billion (in cash and AI/cloud technology) to "advance" AI education in K-12 schools, community and technical colleges, and nonprofits (according to a blog post by Microsoft President Brad Smith). But in the launch event video, Smith also says it's time to "switch hats" from coding to AI, adding that "the last 12 years have been about the Hour of Code, but the future involves the Hour of AI."

Long-time Slashdot reader theodp writes: This sets the stage for Code.org CEO Hadi Partovi's announcement that his tech-backed nonprofit's [annual educational event] Hour of Code is being renamed to the Hour of AI... Explaining the pivot, Partovi says: "Computer science for the last 50 years has had a focal point around coding that's been — sort of like you learn computer science so that you create code. There's other things you learn, like data science and algorithms and cybersecurity, but the focal point has been coding.

"And we're now in a world where the focal point of computer science is shifting to AI... We all know that AI can write much of the code. You don't need to worry about where did the semicolons go, or did I close the parentheses or whatnot. The busy work of computer science is going to be done by the computer itself.

"The creativity, the thinking, the systems design, the engineering, the algorithm planning, the security concerns, privacy concerns, ethical concerns — those parts of computer science are going to be what remains with a focal point around AI. And what's going to be important is to make sure in education we give students the tools so they don't just become passive users of AI, but so that they learn how AI works."

Speaking to Microsoft's Smith, Partovi vows to redouble the nonprofit's policy work to "make this [AI literacy] a high school graduation requirement so that no student graduates school without at least a basic understanding of what's going to be part of the new liberal arts background [...] As you showed with your hat, we are renaming the Hour of Code to an Hour of AI."

Movies

Roku Launches Cheap, Ad-Free Streaming Service 'Howdy' (cnbc.com) 11

Roku has launched Howdy, a new ad-free streaming service that costs $2.99 a month. The streaming platform says it offers 10,000 hours of content from Lionsgate, Warner Bros. Discovery and FilmRise, as well as its own, exclusive programming known as Roku Originals. CNBC reports: The service is available across the U.S. beginning Tuesday. [...] The new service runs alongside the Roku Channel, which will remain free. Howdy will initially be available on the Roku platform, and will later be rolled out on mobile and other platforms, the company said. "Priced at less than a cup of coffee, Howdy is ad-free and designed to complement, not compete with, premium services," said Roku founder and CEO Anthony Wood in the release.
Programming

Winners Announced in 2025's 'International Obfuscated C Code Competition' (ioccc.org) 48

Started in 1984, it's been described as the internet's longest-running contest. And yesterday 2025's International Obfuscated C Code Contest concluded — with 23 new winners announced in a special four-and-a-half-hour livestreamed ceremony!

Programmers submitted their funniest programs showcasing C's unusual/obscure subtleties while having some fun. (And demonstrating the importance of clarity and style by setting some very bad examples...) Among this year's winners were an OpenRISC 32-bit CPU emulator, a virtual machine capable of running Doom, and some kind of salmon recipe that makes clever use of C's U"string" literal prefix...

But yes, every entry's source code is ridiculously obfuscated. ("Before you set off on your adventure to decode this program's logic, make sure you have enough food, ammo, clothes, oxen, and programming supplies," read the judge's remarks on the winner of this year's "diabolical logistics" prize. "You'll be driving for 2170 miles through a wild wilderness inspired by Oregon Trail...") And one entrant also struggled mightily in adapting a rough port of their program's old Atari 2600 version, but was never gonna give it up...

Thanks to long-time Slashdot reader achowe for bringing the news (who has submitted winning entries in four different decades, starting in 1991 and continuing through 2024)...

Including a 2004 award for the best abuse of the contest's guidelines. ("We are not exactly sure how many organisations will be upset with this entry, but we are considering starting an IOCCC standards body just to reign in the likes of Mr Howe....")
Programming

The Toughest Programming Question for High School Students on This Year's CS Exam: Arrays 65

America's nonprofit College Board lets high school students take college-level classes — including a computer programming course that culminates with a 90-minute test. But students did better on questions about If-Then statements than they did on questions about arrays, according to the head of the program. Long-time Slashdot reader theodp explains: Students exhibited "strong performance on primitive types, Boolean expressions, and If statements; 44% of students earned 7-8 of these 8 points," says program head Trevor Packard. But students were challenged by "questions on Arrays, ArrayLists, and 2D Arrays; 17% of students earned 11-12 of these 12 points."

"The most challenging AP Computer Science A free-response question was #4, the 2D array number puzzle; 19% of students earned 8-9 of the 9 points possible."

You can see that question here. ("You will write the constructor and one method of the SumOrSameGame class... Array elements are initialized with random integers between 1 and 9, inclusive, each with an equal chance of being assigned to each element of puzzle...") Although to be fair, it was the last question on the test — appearing on page 16 — so maybe some students just didn't get to it.
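For a sense of what the question's setup involves (a square grid where every cell is filled with a random integer from 1 to 9, inclusive), here is a rough sketch. The actual exam answers are written in Java, and the function name below is illustrative, not the College Board's:

    import random

    def make_puzzle(size):
        # Build a size-by-size grid of random integers from 1 to 9, inclusive.
        return [[random.randint(1, 9) for _ in range(size)] for _ in range(size)]

    puzzle = make_puzzle(4)
    for row in puzzle:
        print(row)  # e.g. [3, 9, 1, 7]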

theodp shares a sample Java solution and an Excel VBA solution (which includes a visual presentation).

There are tests in 38 subjects — but CS and Statistics are the subjects where the highest number of students earned the test's lowest-possible score (1 out of 5). That end of the graph also includes notoriously difficult subjects like Latin, Japanese Language, and Physics.

There's also a table showing scores for the last 23 years, with fewer than 67% of students achieving a passing grade (3+) for the first 11 years. But in 2013 and 2017, more than 67% of students achieved that passing grade, and the percentage has stayed above that line ever since (except for 2021), vacillating between 67% and 70.4%.

2018: 67.8%
2019: 69.6%
2020: 70.4%
2021: 65.1%
2022: 67.6%
2023: 68.0%
2024: 67.2%
2025: 67.0%
AI

5 Million People Tried Microsoft's AI Coding Tool 'GitHub Copilot' in the Last 3 Months (techcrunch.com) 41

Microsoft's AI coding assistant "GitHub Copilot" has now had 20 million "all-time users," a GitHub spokesperson told TechCrunch. That means 5 million people have tried out GitHub Copilot for the first time in the last three months — the company reported in April the tool had reached 15 million users.

Microsoft and GitHub don't report how many of these 20 million people have continued to use the AI coding tool on a monthly or daily basis — though those metrics are likely far lower.

Microsoft also reported that GitHub Copilot, which is among the most popular AI coding tools offered today, is used by 90% of the Fortune 100. The product's use among enterprise customers has also grown about 75% compared to last quarter, according to the company... In 2024, Nadella said GitHub Copilot was a larger business than all of GitHub was when Microsoft acquired it in 2018. In the year since, it seems GitHub Copilot's growth rate has continued in a positive direction.

Programming

Fiverr Ad Mocks Vibe Coding - with a Singing Overripe Avocado (creativebloq.com) 59

It's a cultural milestone. Fiverr just released an ad mocking vibe coding.

The video features what its description calls a "clueless entrepreneur" building an app to tell if an avocado is ripe — who soon ends up blissfully singing with an avocado to the tune of the cheesy 1987 song "Nothing's Gonna Stop Us Now." The avocado sings joyously of "a new app on the rise in a no-code world that's too good to be true" (rhyming that with "So close. Just not tested through...")

"Let them say we're crazy. I don't care about bugs!" the entrepreneur sings back. "Built you in a minute, now I'm so high off this buzz..."

But despite her singing to the overripe avocado that "I don't need a backend if I've got the spark!" and that they can "build this app together, vibe-coding forever. Nothing's going to stop us now!" — the build suddenly fails. (And it turns out that avocado really was overripe...) Fiverr then suggests viewers instead hire one of their experts for building their apps...

The art/design site Creative Bloq acknowledges Fiverr "flip-flopping between scepticism and pro-AI marketing." (They point out a Fiverr ad last November had ended with the tagline "Nobody cares that you use AI! They care about the results — for the best ones, hire Fiverr experts who've mastered every digital skill including AI.") But the site calls this new ad "a step in the right direction towards mindful AI usage." Just like an avocado that looks perfect on the outside, once you inspect the insides, AI-generated code can be deceptively unripe.
Fiverr might be feeling the impact of vibe coding itself. The freelancing site's share price fell over 14% this week, with one Yahoo! Finance site saying this week's quarterly results revealed Fiverr's active buyers dropped 10.9% compared to last year — a decline to 3.4 million buyers which "overshadowed a 9.8% increase in spending per buyer."

Even when issuing a buy recommendation, Seeking Alpha called it "a short-term rebound play, as the company faces longer-term risks from AI and active buyer churn."
AI

Would AI Perform Better If We Simulated Guilt? (sciencenews.org) 35

Remember, it's all synthesized "anthropomorphizing". But with that caveat, Science News reports: In populations of simple software agents (like characters in "The Sims" but much, much simpler), having "guilt" can be a stable strategy that benefits them and increases cooperation, researchers report July 30 in Journal of the Royal Society Interface... When we harm someone, we often feel compelled to pay a penance, perhaps as a signal to others that we won't offend again. This drive for self-punishment can be called guilt, and it's how the researchers programmed it into their agents. The question was whether those that had it would be outcompeted by those that didn't, say Theodor Cimpeanu, a computer scientist at the University of Stirling in Scotland, and colleagues.
Science News spoke to a game-theory lecturer from Australia who points out it's hard to map simulations to real-world situations — and that they end up embodying many assumptions. Here researchers were simulating The Prisoner's Dilemma, programming one AI agent that "felt guilt (lost points) only if it received information that its partner was also paying a guilt price after defecting." And that turned out to be the most successful strategy.
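To make the mechanic concrete, here is a minimal sketch of a single round of the Prisoner's Dilemma with that rule: a guilt-prone agent deducts a self-punishment cost after defecting, but only if it can see its partner paying a guilt price too. The payoff matrix and penalty value are illustrative, not the paper's parameters:

    # Classic Prisoner's Dilemma payoffs: (my_points, partner_points)
    PAYOFFS = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (0, 5),
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),
    }
    GUILT_COST = 2  # illustrative self-punishment, not the paper's value

    def play_round(my_move, partner_move, i_feel_guilt, partner_pays_guilt):
        # My score for one round under the "social" guilt rule: I pay a guilt
        # cost after defecting only if my partner is also paying a guilt price.
        score, _ = PAYOFFS[(my_move, partner_move)]
        if i_feel_guilt and my_move == "defect" and partner_pays_guilt:
            score -= GUILT_COST
        return score

    # Mutual defection between two guilt-prone agents: each gets 1 - 2 = -1,
    # a penalty that (per the study) pushes the population back toward cooperation.
    print(play_round("defect", "defect", i_feel_guilt=True, partner_pays_guilt=True))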

One of the paper's authors then raises the possibility that an evolving population of AIs "could comprehend the cold logic to human warmth."

Thanks to Slashdot reader silverjacket for sharing the article.
AI

Anthropic Revokes OpenAI's Access To Claude Over Terms of Service Violation 10

An anonymous reader quotes a report from Wired: Anthropic revoked OpenAI's API access to its models on Tuesday, multiple sources familiar with the matter tell WIRED. OpenAI was informed that its access was cut off due to violating the terms of service. "Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5," Anthropic spokesperson Christopher Nulty said in a statement to WIRED. "Unfortunately, this is a direct violation of our terms of service." According to Anthropic's commercial terms of service, customers are barred from using the service to "build a competing product or service, including to train competing AI models" or "reverse engineer or duplicate" the services. This change in OpenAI's access to Claude comes as the ChatGPT-maker is reportedly preparing to release a new AI model, GPT-5, which is rumored to be better at coding.

OpenAI was plugging Claude into its own internal tools using special developer access (APIs), instead of using the regular chat interface, according to sources. This allowed the company to run tests to evaluate Claude's capabilities in things like coding and creative writing against its own AI models, and check how Claude responded to safety-related prompts involving categories like CSAM, self-harm, and defamation, the sources say. The results help OpenAI compare its own models' behavior under similar conditions and make adjustments as needed. "It's industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic's decision to cut off our API access, it's disappointing considering our API remains available to them," OpenAI's chief communications officer Hannah Wong said in a statement to WIRED. Nulty says that Anthropic will "continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry."
Programming

Stack Overflow Data Reveals the Hidden Productivity Tax of 'Almost Right' AI Code (venturebeat.com) 77

Developers are growing increasingly frustrated with AI coding tools that produce deceptively flawed solutions, according to Stack Overflow's latest survey of over 49,000 programmers worldwide. The 2025 survey exposes a widening gap between AI adoption and satisfaction: while 84% of developers now use or plan to use AI tools, their trust has cratered.

Only 33% trust AI accuracy today, down from 43% last year. The core problem isn't broken code that developers can easily spot and discard. Instead, two-thirds report wrestling with AI solutions that appear correct but contain subtle errors requiring significant debugging time. Nearly half say fixing AI-generated code takes longer than expected, undermining the productivity gains these tools promise to deliver.
