Programming

GitHub Announces 'Agent HQ', Letting Copilot Subscribers Run and Manage Coding Agents from Multiple Vendors (venturebeat.com) 4

"AI isn't just a tool anymore; it's an integral part of the development experience," argues GitHub's blog. So "Agents shouldn't be bolted on. They should work the way you already work..."

So this week GitHub announced "Agent HQ," which CNBC describes as a "mission control" interface "that will allow software developers to manage coding agents from multiple vendors on a single platform." Developers have a range of new capabilities at their fingertips because of these agents, but it can require a lot of effort to keep track of them all individually, said GitHub COO Kyle Daigle. Developers will now be able to manage agents from GitHub, OpenAI, Google, Anthropic, xAI and Cognition in one place with Agent HQ. "We want to bring a little bit of order to the chaos of innovation," Daigle told CNBC in an interview. "With so many different agents, there's so many different ways of kicking off these asynchronous tasks, and so our big opportunity here is to bring this all together." Agent HQ users will be able to access a command center where they can assign, steer and monitor the work of multiple agents...

The third-party agents will begin rolling out to GitHub Copilot subscribers in the coming months, but Copilot Pro+ users will be able to access OpenAI Codex in VS Code Insiders this week, the company said.

"We're into this wave two era," GitHub's COO Mario Rodriguez told VentureBeat, an era that's "going to be multimodal, it's going to be agentic and it's going to have these new experiences that will feel AI native...."

Or, as VentureBeat sees it, GitHub "is positioning itself as the essential orchestration layer beneath them all..." Just as the company transformed Git, pull requests and CI/CD into collaborative workflows, it's now trying to do the same with a fragmented AI coding landscape...

The technical architecture addresses a critical enterprise concern: Security. Unlike standalone agent implementations where users must grant broad repository access, GitHub's Agent HQ implements granular controls at the platform level... Agents operating through Agent HQ can only commit to designated branches. They run within sandboxed GitHub Actions environments with firewall protections. They operate under strict identity controls. [GitHub chief product officer] Rodriguez explained that even if an agent goes rogue, the firewall prevents it from accessing external networks or exfiltrating data unless those protections are explicitly disabled.

Beyond managing third-party agents, GitHub is introducing two technical capabilities that set Agent HQ apart from alternative approaches like Cursor's standalone editor or Anthropic's Claude integration.

Custom agents via AGENTS.md files: Enterprises can now create source-controlled configuration files that define specific rules, tools and guardrails for how Copilot behaves. For example, a company could specify "prefer this logger" or "use table-driven tests for all handlers." This permanently encodes organizational standards without requiring developers to re-prompt every time...

Native Model Context Protocol (MCP) support: VS Code now includes a GitHub MCP Registry. Developers can discover, install and enable MCP servers with a single click. They can then create custom agents that combine these tools with specific system prompts. This positions GitHub as the integration point between the emerging MCP ecosystem and actual developer workflows. MCP, introduced by Anthropic but rapidly gaining industry support, is becoming a de facto standard for agent-to-tool communication. By supporting the full specification, GitHub can orchestrate agents that need access to external services without each agent implementing its own integration logic.

GitHub is also shipping new capabilities within VS Code itself. Plan Mode allows developers to collaborate with Copilot on building step-by-step project approaches. The AI asks clarifying questions before any code is written. Once approved, the plan can be executed either locally in VS Code or by cloud-based agents. The feature addresses a common failure mode in AI coding: Beginning implementation before requirements are fully understood. By forcing an explicit planning phase, GitHub aims to reduce wasted effort and improve output quality.

More significantly, GitHub's code review feature is becoming agentic. The new implementation will use GitHub's CodeQL engine, which previously focused largely on security vulnerabilities, to also identify bugs and maintainability issues. The code review agent will automatically scan agent-generated pull requests before human review. This creates a two-stage quality gate.

"Don't let this little bit of news float past you like all those self-satisfied marketing pitches we semi-hear and ignore," writes ZDNet: If it works and remains reliable, this is actually a very big deal... Tech companies, especially the giant ones, often like to talk "open" but then do their level best to engineer lock-in to their solution and their solution alone. Sure, most of them offer some sort of export tool, but the barrier to moving from one tool to another is often huge... [T]he idea that you can continue to use your favorite agent or agents in GitHub, fully integrated into the GitHub tool path, is powerful. It means there's a chance developers might not have to suffer the walled garden effect that so many companies have strived for to lock in their customers.
Programming

Cloudflare Raves About Performance Gains After Rust Rewrite (cloudflare.com) 48

"We've spent the last year rebuilding major components of our system," Cloudflare announced this week, "and we've just slashed the latency of traffic passing through our network for millions of our customers," (There's a 10ms cut in the median time to respond, plus a 25% performance boost as measured by CDN performance tests.) They replaced a 15-year-old system named FL (where they run security and performance features), and "At the same time, we've made our system more secure, and we've reduced the time it takes for us to build and release new products."

And yes, Rust was involved: We write a lot of Rust, and we've gotten pretty good at it... We built FL2 in Rust, on Oxy [Cloudflare's Rust-based next generation proxy framework], and built a strict module framework to structure all the logic in FL2... Built in Rust, [Oxy] eliminates entire classes of bugs that plagued our Nginx/LuaJIT-based FL1, like memory safety issues and data races, while delivering C-level performance. At Cloudflare's scale, those guarantees aren't nice-to-haves, they're essential. Every microsecond saved per request translates into tangible improvements in user experience, and every crash or edge case avoided keeps the Internet running smoothly. Rust's strict compile-time guarantees also pair perfectly with FL2's modular architecture, where we enforce clear contracts between product modules and their inputs and outputs...
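
Cloudflare hasn't published Oxy's module API, but the "clear contracts" idea maps naturally onto a Rust trait with typed inputs and outputs. The sketch below is purely illustrative; every name in it (Module, RateLimiter, Verdict) is hypothetical rather than taken from FL2:

```rust
// Hypothetical sketch of a strict module contract. Cloudflare's actual
// Oxy/FL2 API is not public; all names and types here are invented.

/// Each product module declares exactly what it consumes and produces.
trait Module {
    type Input;
    type Output;

    /// Typed entry point; the framework owns ordering, I/O, and scheduling.
    fn run(&self, input: Self::Input) -> Self::Output;
}

/// Example module: decides whether a request should be throttled.
struct RateLimiter {
    max_requests_per_minute: u32,
}

struct RequestInfo {
    requests_this_minute: u32,
}

enum Verdict {
    Allow,
    Throttle,
}

impl Module for RateLimiter {
    type Input = RequestInfo;
    type Output = Verdict;

    fn run(&self, input: RequestInfo) -> Verdict {
        if input.requests_this_minute > self.max_requests_per_minute {
            Verdict::Throttle
        } else {
            Verdict::Allow
        }
    }
}

fn main() {
    let limiter = RateLimiter { max_requests_per_minute: 600 };
    let verdict = limiter.run(RequestInfo { requests_this_minute: 900 });
    assert!(matches!(verdict, Verdict::Throttle));
}
```

The point of a contract like this is that a module can only touch the data it declares, which is what lets the framework reorder, parallelize, or skip modules without surprises.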

Rebuilding product logic in Rust is a big enough distraction from shipping products to customers. Asking all our teams to maintain two versions of their product logic, and to reimplement every change a second time until we finished our migration, was too much. So we implemented a layer in our old NGINX- and OpenResty-based FL which allowed the new modules to be run. Instead of maintaining a parallel implementation, teams could implement their logic in Rust and replace their old Lua logic with it, without waiting for the full replacement of the old system.

Over 100 engineers worked on FL2 — and there was extensive testing, plus a fallback-to-FL1 procedure. But "We started running customer traffic through FL2 early in 2025, and have been progressively increasing the amount of traffic served throughout the year...." As we described at the start of this post, FL2 is substantially faster than FL1. The biggest reason for this is simply that FL2 performs less work [thanks to filters controlling whether modules need to run]... Another huge reason for better performance is that FL2 is a single codebase, implemented in a performance-focused language. In comparison, FL1 was based on NGINX (which is written in C), combined with LuaJIT (Lua, and C interface layers), and also contained plenty of Rust modules. In FL1, we spent a lot of time and memory converting data from the representation needed by one language, to the representation needed by another. As a result, our internal measures show that FL2 uses less than half the CPU of FL1, and much less than half the memory. That's a huge bonus — we can spend the CPU on delivering more and more features for our customers!
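
The post credits most of FL2's speedup to modules simply not running when they don't need to. A minimal sketch of what such a filter gate might look like follows; the module name and the check are entirely made up for illustration:

```rust
// Hypothetical illustration of "filters controlling whether modules need
// to run"; this is not Cloudflare's real API. The framework evaluates a
// cheap predicate first and skips the module's full logic when it fails.

struct RequestContext {
    customer_has_bot_management: bool,
    path: String,
}

struct BotManagement;

impl BotManagement {
    /// Cheap check the framework runs before doing any real work.
    fn filter(&self, ctx: &RequestContext) -> bool {
        ctx.customer_has_bot_management
    }

    /// The expensive part, only reached when the filter passes.
    fn run(&self, ctx: &RequestContext) {
        println!("scoring request to {} for bot likelihood", ctx.path);
    }
}

fn main() {
    let module = BotManagement;
    let ctx = RequestContext {
        customer_has_bot_management: false,
        path: "/login".into(),
    };
    // For this request the module is skipped entirely, costing one branch.
    if module.filter(&ctx) {
        module.run(&ctx);
    }
}
```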

Using our own tools and independent benchmarks like CDNPerf, we measured the impact of FL2 as we rolled it out across the network. The results are clear: websites are responding 10 ms faster at the median, a 25% performance boost. FL2 is also more secure by design than FL1. No software system is perfect, but the Rust language brings us huge benefits over LuaJIT. Rust has strong compile-time memory checks and a type system that avoids large classes of errors. Combine that with our rigid module system, and we can make most changes with high confidence...
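
The post doesn't enumerate which error classes Rust rules out, but the canonical one is unsynchronized shared mutable state. As a generic illustration (not code from FL2), the threaded counter below compiles only because the shared value is wrapped in Arc<Mutex<...>>; handing the threads a plain mutable reference would be rejected at compile time rather than crashing in production:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared counter: the type system forces explicit synchronization.
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    // A data race is impossible here: the lock must be held.
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4_000);
}
```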

We have long followed a policy that any unexplained crash of our systems needs to be investigated as a high priority. We won't be relaxing that policy, though the main cause of novel crashes in FL2 so far has been due to hardware failure. The massively reduced rates of such crashes will give us time to do a good job of such investigations. We're spending the rest of 2025 completing the migration from FL1 to FL2, and will turn off FL1 in early 2026. We're already seeing the benefits in terms of customer performance and speed of development, and we're looking forward to giving these to all our customers.

After that, when everything is modular, in Rust and tested and scaled, we can really start to optimize...!

Thanks to long-time Slashdot reader Beeftopia for sharing the article.
Ubuntu

Ubuntu Will Use Rust For Dozens of Core Linux Utilities (zdnet.com) 79

Ubuntu "is adopting the memory-safe Rust language," reports ZDNet, citing remarks at this year's Ubuntu Summit from Jon Seager, Canonical's VP of engineering for Ubuntu: . Seager said the engineering team is focused on replacing key system components with Rust-based alternatives to enhance safety and resilience, starting with Ubuntu 25.10. He stressed that resilience and memory safety, not just performance, are the principal drivers: "It's the enhanced resilience and safety that is more easily achieved with Rust ports that are most attractive to me". This move is echoed in Ubuntu's adoption of sudo-rs, the Rust implementation of sudo, with fallback and opt-out mechanisms for users who want to use the old-school sudo command.

In addition to sudo-rs, Ubuntu 26.04 will use the Rust-based uutils/coreutils for Linux's default core utilities. This setup includes ls, cp, mv, and dozens of other basic Unix command-line tools. This Rust reimplementation aims for functional parity with GNU coreutils, with improved safety and maintainability.

On the desktop front, Ubuntu 26.04 will also bring seamless TPM-backed full disk encryption. If this approach reminds you of Windows BitLocker or macOS FileVault, it should. That's the idea.

In other news, Canonical CEO Mark Shuttleworth said "I'm a believer in the potential of Linux to deliver a desktop that could have wider and universal appeal." (Although he also thinks "the open-source community needs to understand that building desktops for people who aren't engineers is different. We need to understand that the 'simple and just works' is also really important.")

Shuttleworth answered questions from Slashdot's readers in 2005 and 2012.
Programming

TypeScript Overtakes Python and JavaScript To Claim Top Spot on GitHub (github.blog) 37

TypeScript overtook Python and JavaScript in August 2025 to become the most used language on GitHub. The shift marked the most significant language change in more than a decade. The language grew by over 1 million contributors in 2025, a 66% increase year over year, and finished August with 2,636,006 monthly contributors.

Nearly every major frontend framework now scaffolds projects in TypeScript by default. Next.js 15, Astro 3, SvelteKit 2, Qwik, SolidStart, Angular 18, and Remix all generate TypeScript codebases when developers create new projects. Type systems reduce ambiguity and catch errors from large language models before production. A 2025 academic study found 94% of LLM-generated compilation errors were type-check failures. Tooling like Vite, ts-node, Bun, and IDE autoconfiguration hides boilerplate setup. Among new repositories created in the past twelve months, TypeScript accounted for 5,394,256 projects. That represented a 78% increase from the prior year.
Python

Python Foundation Rejects Government Grant Over DEI Restrictions (theregister.com) 258

The Python Software Foundation rejected a $1.5 million U.S. government grant because it required the foundation to renounce all diversity, equity, and inclusion initiatives. "The non-profit would've used the funding to help prevent supply chain attacks; create a new automated, proactive review process for new PyPI packages; and make the project's work easily transferable to other open-source package managers," reports The Register. From the report: The programming non-profit's deputy executive director Loren Crary said in a blog post today that the National Science Foundation (NSF) had offered $1.5 million to address structural vulnerabilities in Python and the Python Package Index (PyPI), but the Foundation quickly became dispirited with the terms (PDF) of the grant it would have to follow. "These terms included affirming the statement that we 'do not, and will not during the term of this financial assistance award, operate any programs that advance or promote DEI [diversity, equity, and inclusion], or discriminatory equity ideology in violation of Federal anti-discrimination laws,'" Crary noted. "This restriction would apply not only to the security work directly funded by the grant, but to any and all activity of the PSF as a whole."

To make matters worse, the terms included a provision that if the PSF was found to have violated that anti-DEI diktat, the NSF reserved the right to claw back any previously disbursed funds, Crary explained. "This would create a situation where money we'd already spent could be taken back, which would be an enormous, open-ended financial risk," the PSF director added. The PSF's mission statement enshrines a commitment to supporting and growing "a diverse and international community of Python programmers," and the Foundation ultimately decided it wasn't willing to compromise on that position, even for what would have been a solid financial boost for the organization. "The PSF is a relatively small organization, operating with an annual budget of around $5 million per year, with a staff of just 14," Crary added, noting that the $1.5 million would have been the largest grant the Foundation had ever received - but it wasn't worth it if the conditions undermined the PSF's mission. The PSF board voted unanimously to withdraw its grant application.

Programming

Does Generative AI Threaten the Open Source Ecosystem? (zdnet.com) 47

"Snippets of proprietary or copyleft reciprocal code can enter AI-generated outputs, contaminating codebases with material that developers can't realistically audit or license properly."

That's the warning from Sean O'Brien, who founded the Yale Privacy Lab at Yale Law School. ZDNet reports: Open software has always counted on its code being regularly replenished. As part of the process of using it, users modify it to improve it. They add features and help to guarantee usability across generations of technology. At the same time, users improve security and patch holes that might put everyone at risk. But O'Brien says, "When generative AI systems ingest thousands of FOSS projects and regurgitate fragments without any provenance, the cycle of reciprocity collapses. The generated snippet appears originless, stripped of its license, author, and context." This means the developer downstream can't meaningfully comply with reciprocal licensing terms because the output cuts the human link between coder and code. Even if an engineer suspects that a block of AI-generated code originated under an open source license, there's no feasible way to identify the source project. The training data has been abstracted into billions of statistical weights, the legal equivalent of a black hole.

The result is what O'Brien calls "license amnesia." He says, "Code floats free of its social contract and developers can't give back because they don't know where to send their contributions...."

"Once AI training sets subsume the collective work of decades of open collaboration, the global commons idea, substantiated into repos and code all over the world, risks becoming a nonrenewable resource, mined and never replenished," says O'Brien. "The damage isn't limited to legal uncertainty. If FOSS projects can't rely upon the energy and labor of contributors to help them fix and improve their code, let alone patch security issues, fundamentally important components of the software the world relies upon are at risk."

O'Brien says, "The commons was never just about free code. It was about freedom to build together." That freedom, and the critical infrastructure that underlies almost all of modern society, is at risk because attribution, ownership, and reciprocity are blurred when AIs siphon up everything on the Internet and launder it (the analogy of money laundering is apt), so that all that code's provenance is obscured.

Microsoft

28 Years After 'Clippy', Microsoft Upgrades Copilot With Cartoon Assistant 'Mico' (apnews.com) 19

"Clippy, the animated paper clip that annoyed Microsoft Office users nearly three decades ago, might have just been ahead of its time," writes the Associated Press: Microsoft introduced a new artificial intelligence character called Mico (pronounced MEE'koh) on Thursday, a floating cartoon face shaped like a blob or flame that will embody the software giant's Copilot virtual assistant and marks the latest attempt by tech companies to imbue their AI chatbots with more of a personality... "When you talk about something sad, you can see Mico's face change. You can see it dance around and move as it gets excited with you," said Jacob Andreou, corporate vice president of product and growth for Microsoft AI, in an interview with The Associated Press. "It's in this effort of really landing this AI companion that you can really feel."

In the U.S. only so far, Copilot users on laptops and phone apps can speak to Mico, which changes colors, spins around and wears glasses when in "study" mode. It's also easy to shut off, which is a big difference from Microsoft's Clippit, better known as Clippy and infamous for its persistence in offering advice on word processing tools when it first appeared on desktop screens in 1997. "It was not well-attuned to user needs at the time," said Bryan Reimer, a research scientist at the Massachusetts Institute of Technology. "Microsoft pushed it, we resisted it and they got rid of it. I think we're much more ready for things like that today..."

Microsoft's product releases Thursday include a new option to invite Copilot into a group chat, an idea that resembles how AI has been integrated into social media platforms like Snapchat, where Andreou used to work, or Meta's WhatsApp and Instagram. But Andreou said those interactions have often involved bringing in AI as a joke to "troll your friends," in contrast to Microsoft's designs for an "intensely collaborative" AI-assisted workplace.

AI

Fedora Approves AI-Assisted Contributions 15

The Fedora Council has approved a new policy allowing AI-assisted code contributions, provided contributors fully disclose and take responsibility for any AI-generated work. Phoronix reports: AI-assisted code contributions can be used but the contributor must take responsibility for that contribution, it must be transparent in disclosing the use of AI such as with the "Assisted-by" tag, and that AI can help in assisting human reviewers/evaluation but must not be the sole or final arbiter. This AI policy also doesn't cover large-scale initiatives which will need to be handled individually with the Fedora Council. [...] The Fedora Council does expect that this policy will need to be updated over time for staying current with AI technologies.
Transportation

A SiriusXM Update Sent Some Audi Screens Into a Forced-Reboot Loop For Months (thedrive.com) 29

An anonymous reader quotes a report from The Drive: This week, a reader wrote to us sharing that the infotainment in their 2020 Audi A4 had been "rebooting every five minutes all year." It looks like the problem was caused by a compatibility issue with a SiriusXM app update. Audi tells us the situation's been rectified, but it illustrates a serious pain point in modern cars -- myriad apps interacting with a diverse population of in-car software systems. Our reader was not the only Audi owner affected. "Randomly restarting" Audi infotainment screens have been discussed on Reddit, the Audiworld forum, and elsewhere, going back many months. Audi's recall notice and related service action only went out this summer.

It looks like this particular problem was caused when the satellite radio app pushed an update that was supposed to work on the latest version of Audi's infotainment software, but not all cars were running that. Then SiriusXM reverted, which, I guess, did not solve the problem for every owner. Audi now states that the problem has been fixed and originated with the SiriusXM app, but really, the automaker bears more than a little blame, too. [...] I dropped our own contacts at Audi a note about how and why this might have happened, and they added this clarification: "At the beginning of the year, SiriusXM did a programming update which was addressed via a software update to the MMI. However, as not all customers had their cars updated and SiriusXM then reverted back to the previous category numbering. Nonetheless, a MMI update is recommended as the two versions do seem to cause the issue."

Television

Apple Inks $750 Million For US Formula 1 Streaming Coverage (variety.com) 21

Apple has struck a five-year, $750 million deal to become the exclusive U.S. home for Formula 1 starting in 2026. "Apple is paying a significant premium over the $90 million per year currently paid by ESPN, whose F1 broadcast deal expires at the end of 2025 after holding the rights in the U.S. since 2018," notes Variety. From the report: According to Apple, it will deliver the Formula 1 programming with a "more dynamic and elevated viewing experience," and both parties expressed optimism that the deal will attract new motorsports fans in America in the years ahead. The company is rebranding the video-streaming service, which launched in 2019 as Apple TV+, to remove the plus sign.

It's another big move into sports by Apple, which also has streaming deals with MLB and Major League Soccer. The F1 agreement follows Apple's partnership with Formula 1 for the original film "F1 The Movie," starring Brad Pitt, which raked in $629 million worldwide at the box office this year -- the highest-grossing sports movie of all time and Pitt's highest-grossing feature to date. "F1 The Movie" will debut on Apple TV on Dec. 12, 2025.

Television

TiVo Exiting Legacy DVR Business (mediaplaynews.com) 67

TiVo, the digital video recording pioneer, has moved on from its legacy DVR technology, focusing instead on its branded operating-system software for smart televisions, which promotes third-party content search and recommendations, including free ad-supported streaming options and more. From a report: "As of Oct. 1, 2025, TiVo has stopped selling Edge DVR hardware products," the company said in an AI-based message. The recording said that the company and its associates no longer manufacture DVR hardware, "and our remaining inventory is now depleted." TiVo said it remains "committed to providing support for our DVR customers and will continue to provide support for the foreseeable future." TiVo in 1999 created the first set-top device enabling users to record television programming and skip ads.
AI

What If Vibe Coding Creates More Programming Jobs? (msn.com) 82

Vibe coding tools "are transforming the job experience for many tech workers," writes the Los Angeles Times. But Gartner analyst Philip Walsh said the research firm's position is that AI won't replace software engineers and will actually create a need for more. "There's so much software that isn't created today because we can't prioritize it," Walsh said. "So it's going to drive demand for more software creation, and that's going to drive demand for highly skilled software engineers who can do it..." The idea that non-technical people in an organization can "vibe-code" business-ready software is a misunderstanding [Walsh said]... "That's simply not happening. The quality is not there. The robustness is not there. The scalability and security of the code is not there," Walsh said. "These tools reward highly skilled technical professionals who already know what 'good' looks like."
"Economists, however, are also beginning to worry that AI is taking jobs that would otherwise have gone to young or entry-level workers," the article points out. "In a report last month, researchers at Stanford University found "substantial declines in employment for early-career workers'' — ages 22-25 — in fields most exposed to AI. Stanford researchers also found that AI tools by 2024 were able to solve nearly 72% of coding problems, up from just over 4% a year earlier."

And yet Cat Wu, project manager of Anthropic's Claude Code, doesn't even use the term vibe coding. "We definitely want to make it very clear that the responsibility, at the end of the day, is in the hands of the engineers." Wu said she's told her younger sister, who's still in college, that software engineering is still a great career and worth studying. "When I talk with her about this, I tell her AI will make you a lot faster, but it's still really important to understand the building blocks because the AI doesn't always make the right decisions," Wu said. "A lot of times the human intuition is really important."
AI

NYT Podcast On Job Market For Recent CS Grads Raises Ire of Code.org (geekwire.com) 71

Longtime Slashdot reader theodp writes: Big Tech Told Kids to Code. The Jobs Didn't Follow, a New York Times podcast episode discussing how the promise of a six-figure salary for those who study computer science is turning out to be an empty one for recent grads in the age of AI, drew the ire of the co-founders of nonprofit Code.org, which -- ironically -- is pivoting to AI itself with the encouragement of, and millions from, its tech-giant backers.

In a LinkedIn post, Code.org CEO and co-founder Hadi Partovi said the paper and its Monday episode of "The Daily" podcast were cherrypicking anecdotes "to stoke populist fears about tech corporations and AI." He also took to X, tweeting: "Today the NYTimes (falsely) claimed CS majors can't find work. The data tells the opposite story: CS grads have the highest median wage and the fifth-lowest underemployment across all majors. [...] Journalism is broken. Do better NYTimes." To which Code.org co-founder Ali Partovi (Hadi's twin) replied: "I agree 100%. That NYTimes Daily piece was deplorable -- an embarrassment for journalism."

Programming

New Claude Model Runs 30-Hour Marathon To Create 11,000-Line Slack Clone (theverge.com) 61

Anthropic's Claude Sonnet 4.5 ran autonomously for 30 hours to build a chat application similar to Slack or Teams, generating approximately 11,000 lines of code before stopping upon task completion. The model, announced today, marks a significant leap from the company's Opus 4 model, which ran for seven hours in May.

Claude Sonnet 4.5 performs three times better at browser navigation and computer use than Anthropic's October technology. Beta-tester Canva deployed the model for complex engineering tasks in its codebase and product features. Anthropic paired the release with virtual machines, memory, context management, and multi-agent support tools, enabling developers to build their own AI agents using the same building blocks that power Claude Code.
AI

Professor Warns CS Graduates are Struggling to Find Jobs (yahoo.com) 77

"Computer science went from a future-proof career to an industry in upheaval in a shockingly small amount of time," writes Business Insider, citing remarks from UC Berkeley professor Hany Farid said during a recent episode of Nova's "Particles of Thought" podcast.

"Our students typically had five internship offers throughout their first four years of college," Farid said. "They would graduate with exceedingly high salaries, multiple offers. They had the run of the place. That is not happening today. They're happy to get one job offer...." It's too easy to just blame AI, though, Farid said. "Something is happening in the industry," he said. "I think it's a confluence of many things. I think AI is part of it. I think there's a thinning of the ranks that's happening, that's part of it, but something is brewing..."

Farid, one of the world's experts on deepfake videos, said he is often asked for advice. He said what he tells students has changed... "Now, I think I'm telling people to be good at a lot of different things because we don't know what the future holds."

Like many in the AI space, Farid said that those who use breakthrough technologies will outlast those who don't. "I don't think AI is going to put lawyers out of business, but I think lawyers who use AI will put those who don't use AI out of business," he said. "And I think you can say that about every profession."

Programming

Will AI Bring an End to Top Programming Language Rankings? (ieee.org) 51

IEEE Spectrum ranks the popularity of programming languages — but is there a problem? Programmers "are turning away from many of these public expressions of interest. Rather than page through a book or search a website like Stack Exchange for answers to their questions, they'll chat with an LLM like Claude or ChatGPT in a private conversation." And with an AI assistant like Cursor helping to write code, the need to pose questions in the first place is significantly decreased. For example, across the total set of languages evaluated in the Top Programming Languages, the number of questions we saw posted per week on Stack Exchange in 2025 was just 22% of what it was in 2024...

However, an even more fundamental problem is looming in the wings... In the same way most developers today don't pay much attention to the instruction sets and other hardware idiosyncrasies of the CPUs that their code runs on, which language a program is vibe coded in ultimately becomes a minor detail... [T]he popularity of different computer languages could become as obscure a topic as the relative popularity of railway track gauges... But if an AI is soothing our irritations with today's languages, will any new ones ever reach the kind of critical mass needed to make an impact? Will the popularity of today's languages remain frozen in time?

That's ultimately the larger question: "[H]ow much abstraction and anti-foot-shooting structure will a sufficiently-advanced coding AI really need...?" [C]ould we get our AIs to go straight from prompt to an intermediate language that could be fed into the interpreter or compiler of our choice? Do we need high-level languages at all in that future? True, this would turn programs into inscrutable black boxes, but they could still be divided into modular testable units for sanity and quality checks. And instead of trying to read or maintain source code, programmers would just tweak their prompts and generate software afresh.

What's the role of the programmer in a future without source code? Architecture design and algorithm selection would remain vital skills... How should a piece of software be interfaced with a larger system? How should new hardware be exploited? In this scenario, computer science degrees, with their emphasis on fundamentals over the details of programming languages, rise in value over coding boot camps.

Will there be a Top Programming Language in 2026? Right now, programming is going through the biggest transformation since compilers broke onto the scene in the early 1950s. Even if the predictions that much of AI is a bubble about to burst come true, the thing about tech bubbles is that there's always some residual technology that survives. It's likely that using LLMs to write and assist with code is something that's going to stick. So we're going to be spending the next 12 months figuring out what popularity means in this new age, and what metrics might be useful to measure.

Having said that, IEEE Spectrum still ranks programming language popularity three ways — based on use among working programmers, demand from employers, and "trending" in the zeitgeist — using seven different metrics.

Their results? Among programmers, "we see that once again Python has the top spot, with the biggest change in the top five being JavaScript's drop from third place last year to sixth place this year. As JavaScript is often used to create web pages, and vibe coding is often used to create websites, this drop in the apparent popularity may be due to the effects of AI... In the 'Jobs' ranking, which looks exclusively at what skills employers are looking for, we see that Python has also taken 1st place, up from second place last year, though SQL expertise remains an incredibly valuable skill to have on your resume."
Programming

Bundler's Lead Maintainer Asserts Trademark in Ongoing Struggle with Ruby Central (arko.net) 7

After the nonprofit Ruby Central removed all RubyGems' maintainers from its GitHub repository, André Arko — who helped build Bundler — wrote a new blog post on Thursday "detailing Bundler's relationship with Ruby Central," according to this update from The New Stack. "In the last few weeks, Ruby Central has suddenly asserted that they alone own Bundler," he wrote. "That simply isn't true. In order to defend the reputation of the team of maintainers who have given so much time and energy to the project, I have registered my existing trademark on the Bundler project."

He adds that trademarks do not affect copyright, which stays with the original contributors unchanged. "Trademarks only impact one thing: Who is allowed to say that what they make is named 'Bundler,'" he wrote. "Ruby Central is welcome to the code, just like everyone else. They are not welcome to the project name that the Bundler maintainers have painstakingly created over the last 15 years."

He is, however, not seeking the trademark for himself, noting that the "idea of Bundler belongs to the Ruby community." "Once there is a Ruby organization that is accountable to the maintainers, and accountable to the community, with openly and democratically elected board members, I commit to transfer my trademark to that organization," he said. "I will not license the trademark, and will instead transfer ownership entirely. Bundler should belong to the community, and I want to make sure that is true for as long as Bundler exists."

The blog It's FOSS also has an update on Spinel, the new worker-owned collective founded by Arko, Samuel Giddins [who led RubyGems security efforts], and Kasper Timm Hansen (who served on the Rails core team from 2016 to 2022 and was one of its top contributors): These guys aren't newcomers but some of the architects behind Ruby's foundational infrastructure. Their flagship offering is rv ["the Ruby swiss army knife"], a tool that aims to replace the fragmented Ruby tooling ecosystem. It promises to [in the future] handle everything from rvm, rbenv, chruby, bundler, rubygems, and others — all at once while redefining how Ruby development tools should work... Spinel operates on retainer agreements with companies needing Ruby expertise instead of depending on sponsors who can withdraw support or demand control. This model maintains independence while ensuring sustainability for the maintainers.
The Register had reported Thursday: Spinel's 'rv' project aims to supplant elements of RubyGems and Bundler with a more modular, version-aware manager. Some in the Ruby community have already accused core Rails figures of positioning Spinel as a threat. For example, Rafael França of Shopify commented that admins of the new project should not be trusted to avoid "sabotaging rubygems or bundler."
Ruby

Open Source Turmoil: RubyGems Maintainers Kicked Off GitHub 75

Ruby Central, a non-profit organization committed to "driving innovation and building community within the Ruby programming ecosystem since 2001," removed all RubyGems maintainers from the project's GitHub repository on September 18, granting administrative access exclusively to its employees and contractors following alleged pressure from Shopify, one of its biggest backers, according to Ruby developer Joel Drapper. The nonprofit organization, which operates RubyConf and RailsConf, cited fiduciary responsibility and supply chain security concerns following a recent audit.

The controversy began September 9 when HSBT (Hiroshi Shibata), a Ruby infrastructure maintainer, renamed the RubyGems GitHub enterprise to "Ruby Central" and added Director of Open Source Marty Haught as owner while demoting other maintainers. The action allegedly followed Shopify's threat to cut funding unless Ruby Central assumed full ownership of RubyGems and Bundler. Ruby Central had reportedly become financially dependent on Shopify after Sidekiq withdrew its $250,000 annual sponsorship over the organization platforming Rails creator DHH at RailsConf 2025. André Arko, a veteran contributor on-call for RubyGems.org at the time, was among those removed.

Maintainer Ellen Dash has characterized the action as a "hostile takeover" and also resigned. Executive Director Shan Cureton acknowledged poor communication in a YouTube video Monday, stating removals were temporary while finalizing operator agreements. Arko and others are launching Spinel, an alternative Ruby tooling project, though Shopify's Rafael Franca commented that Spinel admins shouldn't be trusted to avoid "sabotaging rubygems or bundler."
Programming

Dedicated Mobile Apps For Vibe Coding Have So Far Failed To Gain Traction (techcrunch.com) 15

An anonymous reader quotes a report from TechCrunch: While many vibe-coding startups have become unicorns, with valuations in the billions, one area where AI-assisted coding has not yet taken off is on mobile devices. Despite the numerous apps now available that offer vibe-coding tools on mobile platforms, none are gaining noticeable downloads, and few are generating any revenue at all. According to an analysis of global app store trends by the app intelligence provider Appfigures, only a small handful of mobile apps offering vibe-coding tools have seen any downloads, let alone generated revenue.

The largest of these is Instance: AI App Builder, which has seen only 16,000 downloads and $1,000 in consumer spending. The next largest app, Vibe Studio, has pulled in just 4,000 downloads but has made no money. This situation could still change, of course. The market is young, and vibe-coding apps continue to improve and work out the bugs. New apps in this space are arriving all the time, too. This year, a startup called Vibecode launched with $9.4 million in seed funding from Reddit co-founder Alexis Ohanian's Seven Seven Six. The company's service allows users to create mobile apps using AI within its own iOS app. Vibecode is so new, Appfigures doesn't yet have data on it. For now, most people who want to toy around with vibe-coding technology are doing so on the desktop.

Education

Why One Computer Science Professor is 'Feeling Cranky About AI' in Education (acm.org) 64

Long-time Slashdot reader theodp writes: Over at the Communications of the ACM, Bard College CS Prof Valerie Barr explains why she's Feeling Cranky About AI and CS Education. Having seen CS education go through a number of we-have-to-teach-this moments over the decades — introductory programming languages, the Web, Data Science, etc. — Barr turns her attention to the next hand-wringing "what will we do" CS education moment with AI.

"We're jumping through hoops without stopping first to question the run-away train," Barr writes...

Barr calls for stepping back from "the industry assertion that the ship has sailed, every student needs to use AI early and often, and there is no future application that isn't going to use AI in some way" and instead thoughtfully "articulate what sort of future problem solvers and software developers we want to graduate from our programs, and determine ways in which the incorporation of AI can help us get there."

From the article: In much discussion about CS education:

a.) There's little interest in interrogating the downsides of generative AI, such as the environmental impact, the data theft impact, the treatment and exploitation of data workers.

b.) There's little interest in considering the extent to which, by incorporating generative AI into our teaching, we end up supporting a handful of companies that are burning billions in a vain attempt to each achieve performance that is a scintilla better than everyone else's.

c.) There's little interest in thinking about what's going to happen when the LLM companies decide that they have plateaued, that there's no more money to burn/spend, and a bunch of them fold—but we've perturbed education to such an extent that our students can no longer function without their AI helpers.
