×
IT

Some San Francisco Tech Workers are Renting Cheap 'Bed Pods' (sfgate.com) 85

An anonymous reader shared this report from SFGate: Late last year, tales of tech workers paying $700 a month for tiny "bed pods" in downtown San Francisco went viral. The story provided a perfect distillation of SF's wild (and wildly expensive) housing market — and inspired schadenfreude when the city deemed the situation illegal. But the provocative living situation wasn't an anomaly, according to a city official.

"We've definitely seen an uptick of these 'pod'-type complaints," Kelly Wong, a planner with San Francisco's code enforcement and zoning and compliance team, told SFGATE... Wong stressed that it's not that San Francisco is inherently against bed pod-type arrangements, but that the city is responsible for making sure these spaces are safe and legally zoned.


So Brownstone Shared Housing is still renting one bed pod location — but not accepting new tenants — after citations for failing to get proper permits and having a lock on the front door that required a key to exit.

And SFGate also spoke to Alex Akel, general manager of Olive Rooms, which opened up a co-living and co-working space in SoMa earlier this year (and also faced "a flurry of complaints.") "Unfortunately, we had complaints from neighbors because of foot traffic and noise, and since then we cut the number of people to fit the ordinance by the city," Akel wrote. Olive Rooms describes its space as targeted at "tech founders from Central Asia, giving them opportunities to get involved in the current AI boom." Akel added that its residents are "bringing new energy to SF," but that the program "will not accept new residents before we clarify the status with the city."

In April, the city also received a complaint about a group called Let's Be Buds, which rents out 14 pods in a loft on Divisadero Street that start at $575 per month for an upper bunk.

While this recent burst of complaints is new, bed pods in San Francisco have been catching flak for years... a company called PodShare, which rents — you guessed it — bed pods, squared itself away with the city and has operated in SF since 2019.

Brownstone's CEO told SFGate "A lot of people want to be here for AI, or for school, or different opportunities." He argues that "it's literally impossible without a product like ours," and that their residents had said the option "positively changed the trajectory of their lives."
AI

AI-Operated F-16 Jet Carries Air Force Official Into 550-MPH Aerial Combat Test (apnews.com) 31

The Associated Press reports that an F-16 performing aerial combat tests at 550 miles per hour was "controlled by artificial intelligence, not a human pilot."

And riding in the front seat was the U.S. Secretary of the Air Force... AI marks one of the biggest advances in military aviation since the introduction of stealth in the early 1990s, and the Air Force has aggressively leaned in. Even though the technology is not fully developed, the service is planning for an AI-enabled fleet of more than 1,000 unmanned warplanes, the first of them operating by 2028.

It was fitting that the dogfight took place at [California's] Edwards Air Force Base, a vast desert facility where Chuck Yeager broke the speed of sound and the military has incubated its most secret aerospace advances. Inside classified simulators and buildings with layers of shielding against surveillance, a new test-pilot generation is training AI agents to fly in war. [U.S. Secretary of the Air Force] Frank Kendall traveled here to see AI fly in real time and make a public statement of confidence in its future role in air combat.

"It's a security risk not to have it. At this point, we have to have it," Kendall said in an interview with The Associated Press after he landed... At the end of the hourlong flight, Kendall climbed out of the cockpit grinning. He said he'd seen enough during his flight that he'd trust this still-learning AI with the ability to decide whether or not to launch weapons in war... [T]he software first learns on millions of data points in a simulator, then tests its conclusions during actual flights. That real-world performance data is then put back into the simulator where the AI then processes it to learn more.

"Kendall said there will always be human oversight in the system when weapons are used," the article notes.

But he also said looked for to the cost-savings of smaller and cheaper AI-controlled unmanned jets.

Slashdot reader fjo3 shared a link to this video. (More photos at Sky.com.)
AI

Microsoft Details How It's Developing AI Responsibly (theverge.com) 32

Thursday the Verge reported that a new report from Microsoft "outlines the steps the company took to release responsible AI platforms last year." Microsoft says in the report that it created 30 responsible AI tools in the past year, grew its responsible AI team, and required teams making generative AI applications to measure and map risks throughout the development cycle. The company notes that it added Content Credentials to its image generation platforms, which puts a watermark on a photo, tagging it as made by an AI model.

The company says it's given Azure AI customers access to tools that detect problematic content like hate speech, sexual content, and self-harm, as well as tools to evaluate security risks. This includes new jailbreak detection methods, which were expanded in March this year to include indirect prompt injections where the malicious instructions are part of data ingested by the AI model.

It's also expanding its red-teaming efforts, including both in-house red teams that deliberately try to bypass safety features in its AI models as well as red-teaming applications to allow third-party testing before releasing new models.

Microsoft's chief Responsible AI officer told the Washington Post this week that "We work with our engineering teams from the earliest stages of conceiving of new features that they are building." "The first step in our processes is to do an impact assessment, where we're asking the team to think deeply about the benefits and the potential harms of the system. And that sets them on a course to appropriately measure and manage those risks downstream. And the process by which we review the systems has checkpoints along the way as the teams are moving through different stages of their release cycles...

"When we do have situations where people work around our guardrails, we've already built the systems in a way that we can understand that that is happening and respond to that very quickly. So taking those learnings from a system like Bing Image Creator and building them into our overall approach is core to the governance systems that we're focused on in this report."

They also said " it would be very constructive to make sure that there were clear rules about the disclosure of when content is synthetically generated," and "there's an urgent need for privacy legislation as a foundational element of AI regulatory infrastructure."
Government

The US Just Mandated Automated Emergency Braking Systems By 2029 (caranddriver.com) 197

Come 2029, all cars sold in the U.S. "must be able to stop and avoid contact with a vehicle in front of them at speeds up to 62 mph," reports Car and Driver.

"Additionally, the system must be able to detect pedestrians in both daylight and darkness. As a final parameter, the federal standard will require the system to apply the brakes automatically up to 90 mph when a collision is imminent, and up to 45 mph when a pedestrian is detected." Notably, the federal standardization of automated emergency braking systems includes pedestrian-identifying emergency braking, too. Once implemented, the NHTSA projects that this standard will save at least 360 lives a year and prevent at least 24,000 injuries annually. Specifically, the federal agency claims that rear-end collisions and pedestrian injuries will both go down significantly...

"Automatic emergency braking is proven to save lives and reduce serious injuries from frontal crashes, and this technology is now mature enough to require it in all new cars and light trucks. In fact, this technology is now so advanced that we're requiring these systems to be even more effective at higher speeds and to detect pedestrians," said NHTSA deputy administrator Sophie Shulman.

Thanks to long-time Slashdot reader sinij for sharing the article.
AI

AI-Powered 'HorseGPT' Fails to Predict This Year's Kentucky Derby Winner (decrypt.co) 40

In 2016, an online "swarm intelligence" platform generated a correct prediction for the Kentucky Derby — naming all four top finishers, in order. (But the next year their predictions weren't even close, with TechRepublic suggesting 2016's race had an unusual cluster of just a few top racehorses.)

So this year Decrypt.co tried crafting their own system "that can be called up when the next Kentucky Derby draws near. There are a variety of ways to enlist artificial intelligence in horse racing. You could process reams of data based on your own methodology, trust a third-party pre-trained model, or even build a bespoke solution from the ground up. We decided to build a GPT we named HorseGPT to crunch the numbers and make the picks for us...

We carefully curated prompts to instill HorseGPT with expertise in data science specific to horse racing: how weather affects times, the role of jockeys and riding styles, the importance of post positions, and so on. We then fed it a mix of research papers and blogs covering the theoretical aspects of wagering, and layered on practical knowledge: how to read racing forms, what the statistics mean, which factors are most predictive, expert betting strategies, and more. Finally, we gave HorseGPT a wealth of historical Kentucky Derby data, arming it with the raw information needed to put its freshly imparted skills to use.

We unleashed HorseGPT on official racing forms for this year's Derby. We asked HorseGPT to carefully analyze each race's form, identify the top contenders, and recommend wager types and strategies based on deep background knowledge derived from race statistics.

So how did it do? HorseGPT picked two horses to win — both of which failed to do so. (Sierra Leone did finish second — in a rare three-way photo finish. But Fierceness finished... 15th.) It also recommended the same two horses if you were trying to pick the top two finishers in the correct order — a losing bet, since, again, Fierceness finished 15th.

But even worse, HorseGPT recommended betting on Just a Touch to finish in either first or second place. When the race was over, that horse finished dead last. (And when asked to pick the top three finishers in correct order, HorseGPT stuck with its choices for the top two — which finished #2 and #15 — and, again, Just a Touch, who came in last.)

When Google Gemini was asked to pick the winner by The Athletic, it first chose Catching Freedom (who finished 4th). But it then gave an entirely different answer when asked to predict the winner "with an Italian accent."

"The winner of the Kentucky Derby will be... Just a Touch! Si, that's-a right, the underdog! There will be much-a celebrating in the piazzas, thatta-a I guarantee!"
Again, Just a Touch came in last.

Decrypt noticed the same thing. "Interestingly enough, our HorseGPT AI agent and the other out-of-the-box chatbots seemed to agree with each other," the site notes, adding that HorseGPT also seemed to agree "with many expert analysts cited by the official Kentucky Derby website."

But there was one glimmer of insight into the 20-horse race. When asked to choose the top four finishers in order, HorseGPT repeated those same losing picks — which finished #2, #15, and #20. But then it added two more underdogs for fourth place finishers, "based on their potential to outperform expectations under muddy conditions." One of those two horses — Domestic Product — finished in 13th place.

But the other of the two horses was Mystik Dan — who came in first.

Mystik Dan appeared in only one of the six "Top 10 Finishers" lists (created by humans) at the official Kentucky Derby site... in the #10 position.
The Military

US Official Urges China, Russia To Declare AI Will Not Control Nuclear Weapons 79

Senior Department arms control official Paul Dean on Thursday urged China and Russia to declare that artificial intelligence would never make decisions on deploying nuclear weapons. Washington had made a "clear and strong commitment" that humans had total control over nuclear weapons, said Dean. Britain and France have made similar commitments. Reuters reports: "We would welcome a similar statement by China and the Russian Federation," said Dean, principal deputy assistant secretary in the Bureau of Arms Control, Deterrence and Stability. "We think it is an extremely important norm of responsible behaviour and we think it is something that would be very welcome in a P5 context," he said, referring to the five permanent members of the United Nations Security Council.
The Internet

Humans Now Share the Web Equally With Bots, Report Warns (independent.co.uk) 31

An anonymous reader quotes a report from The Independent, published last month: Humans now share the web equally with bots, according to a major new report -- as some fear that the internet is dying. In recent months, the so-called "dead internet theory" has gained new popularity. It suggests that much of the content online is in fact automatically generated, and that the number of humans on the web is dwindling in comparison with bot accounts. Now a new report from cyber security company Imperva suggests that it is increasingly becoming true. Nearly half, 49.6 per cent, of all internet traffic came from bots last year, its "Bad Bot Report" indicates. That is up 2 percent in comparison with last year, and is the highest number ever seen since the report began in 2013. In some countries, the picture is worse. In Ireland, 71 per cent of internet traffic is automated, it said.

Some of that rise is the result of the adoption of generative artificial intelligence and large language models. Companies that build those systems use bots scrape the internet and gather data that can then be used to train them. Some of those bots are becoming increasingly sophisticated, Imperva warned. More and more of them come from residential internet connections, which makes them look more legitimate. "Automated bots will soon surpass the proportion of internet traffic coming from humans, changing the way that organizations approach building and protecting their websites and applications," said Nanhi Singh, general manager for application security at Imperva. "As more AI-enabled tools are introduced, bots will become omnipresent."

AI

AI Engineers Report Burnout, Rushed Rollouts As 'Rat Race' To Stay Competitive Hits Tech Industry (cnbc.com) 36

An anonymous reader quotes a report from CNBC: Late last year, an artificial intelligence engineer at Amazon was wrapping up the work week and getting ready to spend time with some friends visiting from out of town. Then, a Slack message popped up. He suddenly had a deadline to deliver a project by 6 a.m. on Monday. There went the weekend. The AI engineer bailed on his friends, who had traveled from the East Coast to the Seattle area. Instead, he worked day and night to finish the job. But it was all for nothing. The project was ultimately "deprioritized," the engineer told CNBC. He said it was a familiar result. AI specialists, he said, commonly sprint to build new features that are often suddenly shelved in favor of a hectic pivot to another AI project.

The engineer, who requested anonymity out of fear of retaliation, said he had to write thousands of lines of code for new AI features in an environment with zero testing for mistakes. Since code can break if the required tests are postponed, the Amazon engineer recalled periods when team members would have to call one another in the middle of the night to fix aspects of the AI feature's software. AI workers at other Big Tech companies, including Google and Microsoft, told CNBC about the pressure they are similarly under to roll out tools at breakneck speeds due to the internal fear of falling behind the competition in a technology that, according to Nvidia CEO Jensen Huang, is having its "iPhone moment."

AI

Nurses Say Hospital Adoption of Half-Cooked 'AI' Is Reckless (techdirt.com) 97

An anonymous reader quotes a report from Techdirt: Last week, hundreds of nurses protested the implementation of sloppy AI into hospital systems in front of Kaiser Permanente. Their primary concern: that systems incapable of empathy are being integrated into an already dysfunctional sector without much thought toward patient care: "No computer, no AI can replace a human touch," said Amy Grewal, a registered nurse. "It cannot hold your loved one's hand. You cannot teach a computer how to have empathy."

There are certainly roles automation can play in easing strain on a sector full of burnout after COVID, particularly when it comes to administrative tasks. The concern, as with other industries dominated by executives with poor judgement, is that this is being used as a justification by for-profit hospital systems to cut corners further. From a National Nurses United blog post (spotted by 404 Media): "Nurses are not against scientific or technological advancement, but we will not accept algorithms replacing the expertise, experience, holistic, and hands-on approach we bring to patient care," they added.

Kaiser Permanente, for its part, insists it's simply leveraging "state-of-the-art tools and technologies that support our mission of providing high-quality, affordable health care to best meet our members' and patients' needs." The company claims its "Advance Alert" AI monitoring system -- which algorithmically analyzes patient data every hour -- has the potential to save upwards of 500 lives a year. The problem is that healthcare giants' primary obligation no longer appears to reside with patients, but with their financial results. And, that's even true in non-profit healthcare providers. That is seen in the form of cut corners, worse service, and an assault on already over-taxed labor via lower pay and higher workload (curiously, it never seems to impact outsized high-level executive compensation).

AI

Microsoft Bans US Police Departments From Using Enterprise AI Tool 49

An anonymous reader quotes a report from TechCrunch: Microsoft has changed its policy to ban U.S. police departments from using generative AI through the Azure OpenAI Service, the company's fully managed, enterprise-focused wrapper around OpenAI technologies. Language added Wednesday to the terms of service for Azure OpenAI Service prohibits integrations with Azure OpenAI Service from being used "by or for" police departments in the U.S., including integrations with OpenAI's text- and speech-analyzing models. A separate new bullet point covers "any law enforcement globally," and explicitly bars the use of "real-time facial recognition technology" on mobile cameras, like body cameras and dashcams, to attempt to identify a person in "uncontrolled, in-the-wild" environments. [...]

The new terms leave wiggle room for Microsoft. The complete ban on Azure OpenAI Service usage pertains only to U.S., not international, police. And it doesn't cover facial recognition performed with stationary cameras in controlled environments, like a back office (although the terms prohibit any use of facial recognition by U.S. police). That tracks with Microsoft's and close partner OpenAI's recent approach to AI-related law enforcement and defense contracts.
Last week, taser company Axon announced a new tool that uses AI built on OpenAI's GPT-4 Turbo model to transcribe audio from body cameras and automatically turn it into a police report. It's unclear if Microsoft's updated policy is in response to Axon's product launch.
AI

Microsoft To Invest $2.2 Billion In Cloud and AI Services In Malaysia (reuters.com) 8

An anonymous reader quotes a report from Reuters: Microsoft said on Thursday it will invest $2.2 billion over the next four years in Malaysia to expand cloud and artificial intelligence (AI) services in the company's latest push to promote its generative AI technology in Asia. The investment, the largest in Microsoft's 32-year history in Malaysia, will include building cloud and AI infrastructure, creating AI-skilling opportunities for 200,000 people, and supporting the country's developers, the company said.

Microsoft will also work with the Malaysian government to establish a national AI Centre of Excellence and enhance the nation's cybersecurity capabilities, the company said in a statement. Prime Minister Anwar Ibrahim, who met Nadella on Thursday, said the investment supported Malaysia's efforts in developing its AI capabilities. Microsoft is trying to expand its support for the development of AI globally. Nadella this week announced a $1.7 billion investment in neighboring Indonesia and said Microsoft would open its first regional data centre in Thailand.
"We want to make sure we have world class infrastructure right here in the country so that every organization and start-up can benefit," Microsoft Chief Executive Satya Nadella said during a visit to Kuala Lumpur.
AI

Anthropic Brings Claude AI To the iPhone and iPad (9to5mac.com) 16

Anthropic has released its Claude AI chatbot on the App Store, bringing the company's ChatGPT competitor to the masses. Compared to OpenAI's chatbot, Claude is built with a focus on reducing harmful outputs and promoting safety, with a goal of making interactions more reliable and ethically aware. You can give it a try here. 9to5Mac reports: Anthropic highlights three launch features for Claude on iPhone:

Seamless syncing with web chats: Pick up where you left off across devices.
Vision capabilities: Use photos from your library, take new photos, or upload files so you can have real-time image analysis, contextual understanding, and mobile-centric use cases on the go.
Open access: Users across all plans, including Pro and Team, can download the app free of charge.

The app is also capable of analyzing things that you show it like objects, images, and your environment.

AI

National Archives Bans Employee Use of ChatGPT (404media.co) 10

The National Archives and Records Administration (NARA) told employees Wednesday that it is blocking access to ChatGPT on agency-issued laptops to "protect our data from security threats associated with use of ChatGPT," 404 Media reported Wednesday. From the report: "NARA will block access to commercial ChatGPT on NARANet [an internal network] and on NARA issued laptops, tablets, desktop computers, and mobile phones beginning May 6, 2024," an email sent to all employees, and seen by 404 Media, reads. "NARA is taking this action to protect our data from security threats associated with use of ChatGPT."

The move is particularly notable considering that this directive is coming from, well, the National Archives, whose job is to keep an accurate historical record. The email explaining the ban says the agency is particularly concerned with internal government data being incorporated into ChatGPT and leaking through its services. "ChatGPT, in particular, actively incorporates information that is input by its users in other responses, with no limitations. Like other federal agencies, NARA has determined that ChatGPT's unrestricted approach to reusing input data poses an unacceptable risk to NARA data security," the email reads. The email goes on to explain that "If sensitive, non-public NARA data is entered into ChatGPT, our data will become part of the living data set without the ability to have it removed or purged."

Google

Google Urges US To Update Immigration Rules To Attract More AI Talent (theverge.com) 98

The US could lose out on valuable AI and tech talent if some of its immigration policies are not modernized, Google says in a letter sent to the Department of Labor. From a report: Google says policies like Schedule A, a list of occupations the government "pre-certified" as not having enough American workers, have to be more flexible and move faster to meet demand in technologies like AI and cybersecurity. The company says the government must update Schedule A to include AI and cybersecurity and do so more regularly.

"There's wide recognition that there is a global shortage of talent in AI, but the fact remains that the US is one of the harder places to bring talent from abroad, and we risk losing out on some of the most highly sought-after people in the world," Karan Bhatia, head of government affairs and public policy at Google, tells The Verge. He noted that the occupations in Schedule A have not been updated in 20 years.

Companies can apply for permanent residencies, colloquially known as green cards, for employees. The Department of Labor requires companies to get a permanent labor certification (PERM) proving there is a shortage of workers in that role. That process may take time, so the government "pre-certified" some jobs through Schedule A. The US Citizenship and Immigration Services lists Schedule A occupations as physical therapists, professional nurses, or "immigrants of exceptional ability in the sciences or arts." While the wait time for a green card isn't reduced, Google says Schedule A cuts down the processing time by about a year.

Microsoft

Microsoft Concern Over Google's Lead Drove OpenAI Investment (yahoo.com) 10

Microsoft's motivation for investing heavily and partnering with OpenAI came from a sense of falling badly behind Google, according to an internal email released Tuesday as part of the Justice Department's antitrust case against the search giant. Bloomberg: The Windows software maker's chief technology officer, Kevin Scott, was "very, very worried" when he looked at the AI model-training capability gap between Alphabet's efforts and Microsoft's, he wrote in a 2019 message to Chief Executive Officer Satya Nadella and co-founder Bill Gates. The exchange shows how the company's top executives privately acknowledged they lacked the infrastructure and development speed to catch up to the likes of OpenAI and Google's DeepMind.

[...] Scott, who also serves as executive vice president of artificial intelligence at Microsoft, observed that Google's search product had improved on competitive metrics because of the Alphabet company's advancements in AI. The Microsoft executive wrote that he made a mistake by dismissing some of the earlier AI efforts of its competitors. "We are multiple years behind the competition in terms of machine learning scale," Scott said in the email. Significant portions of the message, titled 'Thoughts on OpenAI,' remain redacted. Nadella endorsed Scott's email, forwarding it to Chief Financial Officer Amy Hood and saying it explains "why I want us to do this."

AI

Mysterious 'gpt2-chatbot' AI Model Appears Suddenly, Confuses Experts (arstechnica.com) 12

An anonymous reader quotes a report from Ars Technica: On Sunday, word began to spread on social media about a new mystery chatbot named "gpt2-chatbot" that appeared in the LMSYS Chatbot Arena. Some people speculate that it may be a secret test version of OpenAI's upcoming GPT-4.5 or GPT-5 large language model (LLM). The paid version of ChatGPT is currently powered by GPT-4 Turbo. Currently, the new model is only available for use through the Chatbot Arena website, although in a limited way. In the site's "side-by-side" arena mode where users can purposely select the model, gpt2-chatbot has a rate limit of eight queries per day -- dramatically limiting people's ability to test it in detail. [...] On Monday evening, OpenAI CEO Sam Altman seemingly dropped a hint by tweeting, "i do have a soft spot for gpt2." [...]

OpenAI's fingerprints seem to be all over the new bot. "I think it may well be an OpenAI stealth preview of something," AI researcher Simon Willison told Ars Technica. But what "gpt2" is exactly, he doesn't know. After surveying online speculation, it seems that no one apart from its creator knows precisely what the model is, either. Willison has uncovered the system prompt for the AI model, which claims it is based on GPT-4 and made by OpenAI. But as Willison noted in a tweet, that's no guarantee of provenance because "the goal of a system prompt is to influence the model to behave in certain ways, not to give it truthful information about itself."

Open Source

Bruce Perens Emits Draft Post-Open Zero Cost License (theregister.com) 73

After convincing the world to buy open source and give up the Morse Code test for ham radio licenses, Bruce Perens has a new gambit: develop a license that ensures software developers receive compensation from large corporations using their work. The new Post-Open Zero Cost License seeks to address the financial disparities in open source software use and includes provisions against using content to train AI models, aligning its enforcement with non-profit performing rights organizations like ASCAP. Here's an excerpt from an interview The Register conducted with Perens: The license is one component among several -- the paid license needs to be hammered out -- that he hopes will support his proposed Post-Open paradigm to help software developers get paid when their work gets used by large corporations. "There are two paradigms that you can use for this," he explains in an interview. "One is Spotify and the other is ASCAP, BMI, and SESAC. The difference is that Spotify is a for-profit corporation. And they have to distribute profits to their stockholders before they pay the musicians. And as a result, the musicians complain that they're not getting very much at all."

"There are two paradigms that you can use for this," he explains in an interview. "One is Spotify and the other is ASCAP, BMI, and SESAC. The difference is that Spotify is a for-profit corporation. And they have to distribute profits to their stockholders before they pay the musicians. And as a result, the musicians complain that they're not getting very much at all." Perens wants his new license -- intended to complement open source licensing rather than replace it -- to be administered by a 501(c)(6) non-profit. This entity would handle payments to developers. He points to the music performing rights organizations as a template, although among ASCAP, BMI, SECAC, and GMR, only ASCAP remains non-profit. [...]

The basic idea is companies making more than $5 million annually by using Post-Open software in a paid-for product would be required to pay 1 percent of their revenue back to this administrative organization, which would distribute the funds to the maintainers of the participating open source project(s). That would cover all Post-Open software used by the organization. "The license that I have written is long -- about as long as the Affero GPL 3, which is now 17 years old, and had to deal with a lot more problems than the early licenses," Perens explains. "So, at least my license isn't excessively long. It handles all of the abuses of developers that I'm conscious of, including things I was involved in directly like Open Source Security v. Perens, and Jacobsen v. Katzer."

"It also makes compliance easier for companies than it is today, and probably cheaper even if they do have to pay. It creates an entity that can sue infringers on behalf of any developer and gets the funding to do it, but I'm planning the infringement process to forgive companies that admit the problem and cure the infringement, so most won't ever go to court. It requires more infrastructure than open source developers are used to. There's a central organization for Post-Open (or it could be three organizations if we divided all of the purposes: apportioning money to developers, running licensing, and enforcing compliance), and an outside CPA firm, and all of that has to be structured so that developers can trust it."
You can read the full interview here.
Microsoft

Bill Gates Is Still Pulling the Strings At Microsoft (businessinsider.com) 46

theodp writes: Reports of the death of Bill Gates' influence at Microsoft have been greatly exaggerated: "Publicly, [Bill] Gates has been almost entirely out of the picture at Microsoft since 2021, following allegations that he had behaved inappropriately toward female employees. In fact, Business Insider has learned, Gates has been quietly orchestrating much of Microsoft's AI revolution from behind the scenes. Current and former executives say Gates remains intimately involved in the company's operations -- advising on strategy, reviewing products, recruiting high-level executives, and nurturing Microsoft's crucial relationship with Sam Altman, the cofounder and CEO of OpenAI.

In early 2023, when Microsoft debuted a version of its search engine Bing turbocharged by the same technology as ChatGPT, throwing down the gauntlet against competitors like Google, Gates, executives said, was pivotal in setting the plan in motion. While Nadella might be the public face of the company's AI success [...] Gates has been the man behind the curtain."[...] "Today, Gates remains close with Altman, who visits his home a few times a year, and OpenAI seeks his counsel on developments. There's a 'tight coupling' between Gates and OpenAI, a person familiar with the relationship said. 'Sam and Bill are good friends. OpenAI takes his opinion and consult overall seriously.' OpenAI spokesperson Kayla Wood confirmed OpenAI continues to meet with Gates."

Microsoft

Major US Newspapers Sue OpenAI, Microsoft For Copyright Infringement (axios.com) 74

Eight prominent U.S. newspapers owned by investment giant Alden Global Capital are suing OpenAI and Microsoft for copyright infringement, in a complaint filed Tuesday in the Southern District of New York. From a report: Until now, the Times was the only major newspaper to take legal action against AI firms for copyright infringement. Many other news publishers, including the Financial Times, the Associated Press and Axel Springer, have instead opted to strike paid deals with AI companies for millions of dollars annually, undermining the Times' argument that it should be compensated billions of dollars in damages.

The lawsuit is being filed on behalf of some of the most prominent regional daily newspapers in the Alden portfolio, including the New York Daily News, Chicago Tribune, Orlando Sentinel, South Florida Sun Sentinel, San Jose Mercury News, Denver Post, Orange County Register and St. Paul Pioneer Press.

Google

Apple Targets Google Staff To Build AI Team 4

Apple has poached dozens of AI experts from Google and has created a secretive European laboratory in Zurich, as the tech giant builds a team to battle rivals in developing new AI models and products. From a report: According to a Financial Times analysis of hundreds of LinkedIn profiles as well as public job postings and research papers, the $2.7tn company has undertaken a hiring spree over recent years to expand its global AI and machine learning team. The iPhone maker has particularly targeted workers from Google, attracting at least 36 specialists from its rival since it poached John Giannandrea to be its top AI executive in 2018.

While the majority of Apple's AI team work from offices in California and Seattle, the tech group has also expanded a significant outpost in Zurich. Professor Luc Van Gool from Swiss university ETH Zurich said Apple's acquisitions of two local AI start-ups -- virtual reality group FaceShift and image recognition company Fashwell -- led Apple to build a research laboratory, known as its "Vision Lab," in the city.

Slashdot Top Deals