Crime

Was the Arrest of Telegram's CEO Inevitable? (platformer.news) 174

Casey Newton, former senior editor at the Verge, weighs in on Platformer about the arrest of Telegram CEO Pavel Durov.

"Fending off onerous speech regulations and overzealous prosecutors requires that platform builders act responsibly. Telegram never even pretended to." Officially, Telegram's terms of service prohibit users from posting illegal pornographic content or promotions of violence on public channels. But as the Stanford Internet Observatory noted last year in an analysis of how CSAM spreads online, these terms implicitly permit users to share CSAM in private channels as much as they want. "There's illegal content on Telegram. How do I take it down?" asks a question on Telegram's FAQ page. The company declares that it will not intervene in any circumstances: "All Telegram chats and group chats are private amongst their participants," it states. "We do not process any requests related to them...."

Telegram can look at the contents of private messages, making it vulnerable to law enforcement requests for that data. Anticipating these requests, Telegram created a kind of jurisdictional obstacle course for law enforcement that (it says) none of them have successfully navigated so far. From the FAQ again:

To protect the data that is not covered by end-to-end encryption, Telegram uses a distributed infrastructure. Cloud chat data is stored in multiple data centers around the globe that are controlled by different legal entities spread across different jurisdictions. The relevant decryption keys are split into parts and are never kept in the same place as the data they protect. As a result, several court orders from different jurisdictions are required to force us to give up any data. [...] To this day, we have disclosed 0 bytes of user data to third parties, including governments.
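The split-key arrangement the FAQ describes resembles simple secret sharing, where a key is divided into parts such that every part is required to reconstruct it. Below is a minimal sketch of one way that could work, using XOR-based sharing; this is an illustration only, not Telegram's actual scheme, and all function names are invented for the sketch.

```python
import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list[bytes]:
    # n-1 shares are pure randomness; the final share is the key
    # XORed with all of them, so every single share is needed
    # (and any subset of fewer than n shares reveals nothing).
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, key))
    return shares

def combine(shares: list[bytes]) -> bytes:
    # XOR of all shares cancels the randomness, leaving the key.
    return reduce(xor_bytes, shares)

key = os.urandom(32)
parts = split_key(key, 3)   # e.g. one part per jurisdiction
assert combine(parts) == key
```

In a scheme like this, a court order against any single jurisdiction yields only a random-looking share, which is the property Telegram's FAQ is claiming.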

As a result, investigation after investigation finds that Telegram is a significant vector for the spread of CSAM.... The company's refusal to answer almost any law enforcement request, no matter how dire, has enabled some truly vile behavior. "Telegram is another level," Brian Fishman, Meta's former anti-terrorism chief, wrote in a post on Threads. "It has been the key hub for ISIS for a decade. It tolerates CSAM. It's ignored reasonable [law enforcement] engagement for YEARS. It's not 'light' content moderation; it's a different approach entirely."

The article asks whether France's action "will embolden countries around the world to prosecute platform CEOs criminally for failing to turn over user data." On the other hand, Telegram really does seem to be actively enabling a staggering amount of abuse. And while it's disturbing to see state power used indiscriminately to snoop on private conversations, it's equally disturbing to see a private company declare itself to be above the law.

Given its behavior, a legal intervention into Telegram's business practices was inevitable. But the end of private conversation, and end-to-end encryption, need not be.

Open Source

Open Source Redis Fork 'Valkey' Has Momentum, Improvements, and Speed, Says Dirk Hohndel (thenewstack.io) 16

"Dirk Hohndel, a Linux kernel developer and long-time open source leader, wanted his audience at KubeCon + CloudNativeCon + Open Source Summit China 2024 to know he's not a Valkey developer," writes Steven J. Vaughan-Nichols. "He's a Valkey user and fan." [Hohndel] opened his speech by recalling how the open source, high-performance key/value datastore Valkey had been forked from Redis... Hohndel emphasized that "forks are good. Forks are one of the key things that open source licenses are for. So, if the maintainer starts doing things you don't like, you can fork the code under the same license and do better..." In this case, though, Redis had done a "bait-and-switch" with the Redis code, Hohndel argued. This was because they had made an all-too-common business failure: They hadn't realized that "open source is not a business model...."

While the licensing change is what prompted the fork, Hohndel sees leadership and technical reasons why the Valkey fork is likely to succeed. First, two-thirds of the formerly top Redis maintainers and developers have switched to Valkey. In addition, AWS, Google Cloud, and Oracle, under the Linux Foundation's auspices, all support Valkey. When both the technical and money people agree, good things can happen.

The other reason is that Valkey already looks like it will be the better technical choice. That's because the recently announced Valkey 8.0, which builds upon the last open source version of Redis, 7.2.4, introduces serious speed improvements and new features that Redis users have wanted for some time. As [AWS principal engineer Madelyn] Olson said at Open Source Summit North America earlier this year, "Redis really didn't want to break anything." Valkey wants to move a bit faster. How much faster? A lot. Valkey 8.0 overhauls Redis's single-threaded event loop threading model with a more sophisticated multithreaded approach to I/O operations. Hohndel reported that on his small Valkey-powered aircraft tracking system, "I see roughly a threefold improvement in performance, and I stream a lot of data, 60 million data points a day."
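The threading change described above can be pictured with a toy sketch: command execution stays on a single thread (so the keyspace needs no locks), while request parsing and reply serialization fan out to a pool of I/O threads. Everything here is invented for illustration; it is not Valkey's code, just the shape of the design.

```python
from concurrent.futures import ThreadPoolExecutor

store = {}  # the keyspace, touched only by the serial execution step

def parse(raw: str):
    # I/O-thread work: decode the "wire" format into a command tuple
    op, *args = raw.split()
    return op.upper(), args

def execute(cmd):
    # single-threaded: no locks needed on the keyspace
    op, args = cmd
    if op == "SET":
        store[args[0]] = args[1]
        return "OK"
    if op == "GET":
        return store.get(args[0], "(nil)")
    return "ERR unknown command"

def serve(requests, io_threads=3):
    with ThreadPoolExecutor(io_threads) as pool:
        parsed = pool.map(parse, requests)            # parse in parallel
        replies = [execute(c) for c in parsed]        # execute serially
        return list(pool.map(str.encode, replies))    # serialize in parallel

print(serve(["SET color blue", "GET color", "GET missing"]))
# -> [b'OK', b'blue', b'(nil)']
```

The payoff is that parsing and serialization, which dominate a busy server's CPU time, scale with core count while the data structures themselves stay free of synchronization overhead.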

The article notes that Valkey is already being supported by major Linux distros including AlmaLinux, Fedora, and Alpine.
IT

How Not To Hire a North Korean IT Spy (csoonline.com) 17

CSO Online reports that North Korea "is actively infiltrating Western companies using skilled IT workers who use fake identities to pose as remote workers with foreign companies, typically but not exclusively in the U.S."

Slashdot reader snydeq shares their report, which urges information security officers "to carry out tighter vetting of new hires to ward off potential 'moles' — who are increasingly finding their way onto company payrolls and into their IT systems." The schemes are part of illicit revenue generation efforts by the North Korean regime, which faces financial sanctions over its nuclear weapons program, as well as a component of the country's cyberespionage activities.

The U.S. Treasury department first warned about the tactic in 2022. Thousands of highly skilled IT workers are taking advantage of the demand for software developers to obtain freelance contracts from clients around the world, including in North America, Europe, and East Asia. "Although DPRK [North Korean] IT workers normally engage in IT work distinct from malicious cyber activity, they have used the privileged access gained as contractors to enable the DPRK's malicious cyber intrusions," the Treasury department warned... North Korean IT workers present themselves as South Korean, Chinese, Japanese, or Eastern European, and as U.S.-based teleworkers. In some cases, DPRK IT workers further obfuscate their identities by creating arrangements with third-party subcontractors.

Christina Chapman, a resident of Arizona, faces fraud charges over an elaborate scheme that allegedly allowed North Korean IT workers to pose as U.S. citizens and residents using stolen identities to obtain jobs at more than 300 U.S. companies. U.S. payment platforms and online job site accounts were abused to secure jobs at more than 300 companies, including a major TV network, a car manufacturer, a Silicon Valley technology firm, and an aerospace company... According to a U.S. Department of Justice indictment, unsealed in May 2024, Chapman ran a "laptop farm," hosting the overseas IT workers' computers inside her home so it appeared that the computers were located in the U.S. The 49-year-old received and forged payroll checks, and she laundered direct debit payments for salaries through bank accounts under her control. Many of the overseas workers in her cell were from North Korea, according to prosecutors. An estimated $6.8 million was paid for the work, much of which was falsely reported to tax authorities under the names of 60 real U.S. citizens whose identities were either stolen or borrowed...

Ukrainian national Oleksandr Didenko, 27, of Kyiv, was separately charged over a years-long scheme to create fake accounts at U.S. IT job search platforms and with U.S.-based money service transmitters. "Didenko sold the accounts to overseas IT workers, some of whom he believed were North Korean, and the overseas IT workers used the false identities to apply for jobs with unsuspecting companies," according to the U.S. Department of Justice. Didenko, who was arrested in Poland in May, faces U.S. extradition proceedings...

How this type of malfeasance plays out from the perspective of a targeted firm was revealed by security awareness vendor KnowBe4's candid admission in July that it unknowingly hired a North Korean IT spy... A growing and substantial body of evidence suggests KnowBe4 is but one of many organizations targeted by illicit North Korean IT workers. Last November security vendor Palo Alto Networks reported that North Korean threat actors are actively seeking employment with organizations based in the U.S. and other parts of the world...

Mandiant, the Google-owned threat intel firm, reported last year that "thousands of highly skilled IT workers from North Korea" are hunting work. More recently, CrowdStrike reported that a North Korean group it dubbed "Famous Chollima" infiltrated more than 100 companies with imposter IT pros.

The article notes the infiltrators use chatbots to tailor the perfect resume "and further leverage AI-created deepfakes to pose as real people." And the article includes this quote from a former intelligence analyst for the U.S. Air Force turned cybersecurity strategist at Sysdig. "In some cases, they may try to get jobs at tech companies in order to steal their intellectual property before using it to create their own knock-off technologies."

The article closes with its suggested "countermeasures," including live video-chats with prospective remote-work applicants — and confirming an applicant's home address.
Android

Google Play Store Can Finally Update Multiple Apps At Once 22

The Google Play Store is now rolling out support for downloading up to three Android app updates simultaneously, addressing a long-standing limitation where apps could only be downloaded one at a time. 9to5Google reports: We're seeing simultaneous app update downloads working in the Google Play Store today across multiple devices, and a few of our readers are seeing the same behavior this week as well. It's unclear if this is a server-side change on Google's part or an update to the Play Store itself, but the functionality is much appreciated. As far as we can tell, you can download up to three app updates at once through the Play Store. The apps will start to download, with only anything beyond three showing the "Pending" status that we're all so used to seeing in the Play Store. This matches the App Store on iOS which can also download up to three apps at once. The same limit of three also now applies to new app installs, which was previously limited to two at a time.
Google

Google is Developing AI That Can Hear If You're Sick (qz.com) 29

A new AI model being developed by Google could make diagnosing tuberculosis and other respiratory ailments as easy as recording a voice note. From a report: Google is training one of its foundational AI models to listen for signs of disease using sound signals, like coughing, sneezing, and sniffling. This tech, which would work using people's smartphone microphones, could revolutionize diagnoses for communities where advanced diagnostic tools are difficult to come by.

The tech giant is collaborating with Indian respiratory health care AI startup, Salcit Technologies. The tech, which was introduced earlier this year as Health Acoustic Representations, or HeAR, is what's known as a bioacoustic foundation model. HeAR was trained on 300 million pieces of audio data, including 100 million cough sounds, to learn to pick out patterns in the sounds. Salcit is now using this AI model, in combination with its own product Swaasa, which uses AI to analyze cough sounds and assess lung health, to help research and improve early detection of TB based solely on cough sounds.

Encryption

Feds Bust Alaska Man With 10,000+ CSAM Images Despite His Many Encrypted Apps (arstechnica.com) 209

A recent indictment (PDF) of an Alaska man stands out due to the sophisticated use of multiple encrypted communication tools, privacy-focused apps, and dark web technology. "I've never seen anyone who, when arrested, had three Samsung Galaxy phones filled with 'tens of thousands of videos and images' depicting CSAM, all of it hidden behind a secrecy-focused, password-protected app called 'Calculator Photo Vault,'" writes Ars Technica's Nate Anderson. "Nor have I seen anyone arrested for CSAM having used all of the following: [Potato Chat, Enigma, nandbox, Telegram, TOR, Mega NZ, and web-based generative AI tools/chatbots]." An anonymous reader shares the report: According to the government, Seth Herrera not only used all of these tools to store and download CSAM, but he also created his own -- and in two disturbing varieties. First, he allegedly recorded nude minor children himself and later "zoomed in on and enhanced those images using AI-powered technology." Secondly, he took this imagery he had created and then "turned to AI chatbots to ensure these minor victims would be depicted as if they had engaged in the type of sexual contact he wanted to see." In other words, he created fake AI CSAM -- but using imagery of real kids.

The material was allegedly stored behind password protection on his phone(s) but also on Mega and on Telegram, where Herrera is said to have "created his own public Telegram group to store his CSAM." He also joined "multiple CSAM-related Enigma groups" and frequented dark websites with taglines like "The Only Child Porn Site you need!" Despite all the precautions, Herrera's home was searched and his phones were seized by Homeland Security Investigations; he was eventually arrested on August 23. In a court filing that day, a government attorney noted that Herrera "was arrested this morning with another smartphone -- the same make and model as one of his previously seized devices."

The government is cagey about how, exactly, this criminal activity was unearthed, noting only that Herrera "tried to access a link containing apparent CSAM." Presumably, this "apparent" CSAM was a government honeypot file or web-based redirect that logged the IP address and any other relevant information of anyone who clicked on it. In the end, given that fatal click, none of the "I'll hide it behind an encrypted app that looks like a calculator!" technical sophistication accomplished much. Forensic reviews of Herrera's three phones now form the primary basis for the charges against him, and Herrera himself allegedly "admitted to seeing CSAM online for the past year and a half" in an interview with the feds.

Security

Russian Government Hackers Found Using Exploits Made By Spyware Companies NSO and Intellexa (techcrunch.com) 44

Google says it has evidence that Russian government hackers are using exploits that are "identical or strikingly similar" to those previously made by spyware makers Intellexa and NSO Group. From a report: In a blog post on Thursday, Google said it is not sure how the Russian government acquired the exploits, but said this is an example of how exploits developed by spyware makers can end up in the hands of "dangerous threat actors." In this case, Google says the threat actors are APT29, a group of hackers widely attributed to Russia's Foreign Intelligence Service, or the SVR. APT29 is a highly capable group of hackers, known for its long-running and persistent campaigns aimed at conducting espionage and data theft against a range of targets, including tech giants Microsoft and SolarWinds, as well as foreign governments.

Google said it found the hidden exploit code embedded on Mongolian government websites between November 2023 and July 2024. During this time, anyone who visited these sites using an iPhone or Android device could have had their phone hacked and data stolen, including passwords, in what is known as a "watering hole" attack. The exploits took advantage of vulnerabilities in the iPhone's Safari browser and Google Chrome on Android that had already been fixed at the time of the suspected Russian campaign. Still, those exploits could be effective in compromising unpatched devices.

The Courts

Yelp Sues Google For Antitrust Violations (theverge.com) 23

Yelp has filed an antitrust lawsuit against Google, accusing the search giant of maintaining its local search monopoly by preferencing its own services over competitors, harming competition and reducing quality. "Yelp claims that the way Google directs users toward its own local search vertical from its general search engine results page should be considered illegal tying of separate products to keep rivals from reaching scale," adds The Verge. From the report: Yelp wants the court to order Google to stop the allegedly anticompetitive conduct and to pay it damages. It demanded a jury trial and filed the suit in the Northern District of California, where a different jury found that Google had an illegal monopoly through its app store in its fight against Epic Games.

The company was emboldened to bring its own lawsuit against Google after the DOJ's win in its antitrust case about the company's allegedly exclusionary practices around the distribution of search services. Yelp CEO Jeremy Stoppelman told The New York Times that following that decision, "the winds on antitrust have shifted dramatically." Previously, he told the Times, he'd hesitated to bring a suit because of the resources it would require and because he saw it as the government's job to enforce the antitrust laws.
"Yelp's claims are not new," Google spokesperson Peter Schottenfels said in a statement. "Similar claims were thrown out years ago by the FTC, and recently by the judge in the DOJ's case. On the other aspects of the decision to which Yelp refers, we are appealing. Google will vigorously defend against Yelp's meritless claims."
Google

Google To Relaunch Tool For Creating AI-Generated Images of People 35

Google announced that it will reintroduce AI image generation capabilities through its Gemini tool, with early access to the new Imagen 3 generator available for select users in the coming days. The company pulled the feature shortly after it launched in February when users discovered historical inaccuracies and questionable responses. CNBC reports: "We've worked to make technical improvements to the product, as well as improved evaluation sets, red-teaming exercises and clear product principles," [wrote Dave Citron, a senior director of product on Gemini, in a blog post]. Red-teaming refers to a practice companies use to test products for vulnerabilities.

Citron said Imagen 3 doesn't support photorealistic identifiable individuals, depictions of minors or excessively gory, violent or sexual scenes. "Of course, as with any generative AI tool, not every image Gemini creates will be perfect, but we'll continue to listen to feedback from early users as we keep improving," Citron wrote. "We'll gradually roll this out, aiming to bring it to more users and languages soon."
Open Source

How Do You Define 'Open Source AI'? (arstechnica.com) 37

An anonymous reader quotes a report from Ars Technica: The Open Source Initiative (OSI) recently unveiled its latest draft definition for "open source AI," aiming to clarify the ambiguous use of the term in the fast-moving field. The move comes as some companies like Meta release trained AI language model weights and code with usage restrictions while using the "open source" label. This has sparked intense debates among free-software advocates about what truly constitutes "open source" in the context of AI. For instance, Meta's Llama 3 model, while freely available, doesn't meet the traditional open source criteria as defined by the OSI for software because it imposes license restrictions on usage due to company size or what type of content is produced with the model. The AI image generator Flux is another "open" model that is not truly open source. Because of this type of ambiguity, we've typically described AI models that include code or weights with restrictions or lack accompanying training data with alternative terms like "open-weights" or "source-available."

To address the issue formally, the OSI -- which is well-known for its advocacy for open software standards -- has assembled a group of about 70 participants, including researchers, lawyers, policymakers, and activists. Representatives from major tech companies like Meta, Google, and Amazon also joined the effort. The group's current draft (version 0.0.9) definition of open source AI emphasizes "four fundamental freedoms" reminiscent of those defining free software: giving users of the AI system the freedom to use it for any purpose without having to ask for permission, study how it works, modify it for any purpose, and share it with or without modifications. [...] OSI's project timeline indicates that a stable version of the "open source AI" definition is expected to be announced in October at the All Things Open 2024 event in Raleigh, North Carolina.

Google

Ex-Googlers Discover That Startups Are Hard 61

Dozens of former AI researchers from Google who struck out on their own are learning that startups are tricky. The Information: The latest example is French AI agent developer H, which lost three of its five cofounders (four of whom are ex-Googlers) just months after announcing they had raised $220 million from investors in a "seed" round, as our colleagues reported Friday. The founders had "operational and business disagreements," one of them told us.

The drama at H follows the quasi-acquisitions of Inflection, Adept and Character, which were each less than three years old and founded mostly by ex-Google AI researchers. Reka, another AI developer in this category, was in talks to be acquired by Snowflake earlier this year. Those talks, which could have valued the company at $1 billion, fell apart in May. AI image developer Ideogram, also cofounded by four ex-Googlers, has spoken with at least one later-stage tech startup about potential sale opportunities, though the talks didn't seem to go anywhere, according to someone involved in the discussions.

Cohere, whose CEO co-authored a seminal Google research paper about transformers with Noam Shazeer, the ex-CEO of Character, has also faced growing questions about its relatively meager revenue compared to its rivals. For now, though, it has a lot of money in the bank. Has someone put a curse on startups founded by ex-Google AI researchers?
Businesses

Internal AWS Sales Guidelines Spread Doubt About OpenAI's Capabilities (businessinsider.com) 14

An anonymous reader shares a report: OpenAI lacks advanced security and customer support. It's just a research company, not an established cloud provider. The ChatGPT-maker is not focused enough on corporate customers. These are just some of the talking points Amazon Web Services' salespeople are told to follow when dealing with customers using, or close to buying, OpenAI's products, according to internal sales guidelines obtained by Business Insider. Other talking points from the documents include OpenAI's lack of access to third-party AI models and weak enterprise-level contracts. AWS salespeople should dispel the hype around AI chatbots like ChatGPT, and steer the conversation toward AWS's strength of running the cloud infrastructure behind popular AI services, the guidelines added.

[...] The effort to criticize OpenAI is also unusual for Amazon, which often says it's so customer-obsessed that it pays little attention to competitors. This is the latest sign that suggests Amazon knows it has work to do to catch up in the AI race. OpenAI, Microsoft, and Google have taken an early lead and could become the main platforms where developers build new AI products and tools. Though Amazon created a new AGI team last year, the company's existing AI models are considered less powerful than those made by its biggest competitors. Instead, Amazon has prioritized selling AI tools like Bedrock, which gives customers access to third-party AI models. AWS also offers cloud access to in-house AI chips that compete with Nvidia GPUs, with mixed results so far.

Google

'Don't Trust Google for Customer Service Numbers. It Might Be a Scam.' (msn.com) 52

Google may be the most successful company in the world. But a Washington Post reporter argues that Google "makes you largely responsible for dodging the criminals who are hurting legitimate businesses and swindling people." On Monday, I found what appeared to be impostors of customer service for Delta and Coinbase, the cryptocurrency company, in the "People also ask" section high up in Google. A group of people experienced in Google's intricacies also said this week that it took about 22 minutes to fool Google into highlighting a bogus business phone number in a prominent spot in search results...

If you look at the two impostor phone numbers in Google for Delta and Coinbase, there are red flags. There are odd fonts and a website below the bogus numbers that wasn't for either company. (I notified Google about the apparent scams on Monday and I still saw them 24 hours later.) The correct customer help numbers did appear at the very top, and Google says businesses have clear instructions to make their customer service information visible to people searching Google.

The larger issue is "a persistent pattern of bad guys finding ways to trick Google into showing scammers' numbers for airlines, hotels, local repair companies, banks or other businesses." The toll can be devastating when people are duped by these bogus business numbers. Fortune recently reported on a man who called what a Google listing said was Coinbase customer support, and instead it was an impostor who Fortune said tricked the man and stole $100,000...

Most of the time, you will find correct customer service numbers by Googling. But the company doesn't say how often people are tricked out of time and money by bogus listings — nor why Google can't stop the scams from recurring.

The article notes Google's response: when the company identifies listings that violate its rules, it says it moves quickly against them.

AI

AI To Go Nuclear? Data Center Deals Say It's Inevitable (cio.com) 90

To build the massive datacenters generative AI requires, major companies like Amazon and Microsoft "are going nuclear," reports CIO magazine. AWS: Earlier this year, AWS paid $650 million to purchase Talen Energy's Cumulus Data Assets, a 960-megawatt nuclear-powered data center on site at Talen's Susquehanna, Pennsylvania, nuclear plant, with additional data centers planned — pending approval by the Nuclear Regulatory Commission... In addition to its purchase of the Cumulus data center, AWS will have access to nuclear energy as part of a 10-year Power Purchase Agreement (PPA) from the Susquehanna site.
Microsoft: Last year, Constellation signed a deal giving Microsoft the rights to receive up to 35% of its power from nuclear sources in addition to its existing solar and wind purchases from Constellation for Microsoft's Boydton, Va., data center. Microsoft has also signed a nuclear carbon credits deal with Ontario Power Generation for its operations in Canada.
The broader industry: Many of the deals under discussion are with existing nuclear power providers for hyperscalers [large-scale datacenters] to access energy or to employ small modular reactors (SMRs) with smaller carbon footprints that will be annexed to existing nuclear power plants. Nucor, Oklo, Rolls-Royce SMR, Westinghouse Electric, Moltex Energy, Terrestrial Energy, GE Hitachi Nuclear Energy, and X-energy are among the roster of companies with SMRs under development to meet the growing needs of AI data centers...

One energy analyst does not expect nuclear SMRs to be operational until 2030, yet he and many others acknowledge the need for sustainable, carbon-free energy sources beyond wind and solar is very pressing. "Today's electric grids are struggling to keep up with demand, even as datacenter companies are planning huge new additions to their fleets to power generative AI applications. As a result, companies like Google, Amazon, and Microsoft are increasingly taking matters into their own hands and getting creative. They are now looking at on-site nuclear-based SMRs, and even fusion reactors," says Peter Kelly-Detwiler, principal of Northbridge Energy Partners. "This global arms race for power arose pretty quickly, and it's like nothing we have ever seen before."

Thanks to Slashdot reader snydeq for sharing the news.
Social Networks

How Reddit Challenges Google and Meta with Ads Based on Topics - Not User Data (yahoo.com) 47

Six months after going public, Reddit "is winning over advertisers," reports Bloomberg, "by showing that it's different than other internet platforms, which often rely on users' identities and personal information to target ads." Instead, Reddit is targeting people based on their interests, relying on the site's [100,000+] deeply detailed communities — called subreddits — to match advertisers with potential customers... Early returns on that strategy have been promising. The text-based site easily surpassed expectations in its first two earnings reports this year, disclosing strong sales and better-than-expected projected growth. The stock is up 66% from its $34 initial public offering price in March.

Beyond targeting subreddits, the company also can use specific keywords to sell what it calls conversation ads. If a Redditor in r/HydroHomies — a community about the benefits of drinking water that has more than 1.2 million users — asks for advice about a specific brand of water bottle, an ad for that exact product could appear next to that user's post. These conversation ads are the fastest-growing ad format on the platform, the company said. They also give marketers a chance to appear in subreddits where customers are already talking about them...

Despite being around for close to 20 years, Reddit only started investing heavily in its advertising business in 2018, and is now hoping that marketers and investors are ready to acknowledge the site has grown up. Executives often point to its unique form of content moderation as proof that it's a safer place for brands than other sites. Reddit largely relies on a group of more than 60,000 human moderators — users who volunteer to serve as a sort of content police — to flag or take down unsavory content. On top of that, the site has a voting system so users can rate the quality of content. "From everything we're seeing, they have a level of brand safety and content safety for advertisers that is very comparable to most other social platforms," said Jack Johnston, senior social innovation director at performance marketing agency Tinuiti, which buys ads on Meta, Pinterest, X and Reddit. "That wasn't necessarily the case a couple years ago."

Those improvements have paid dividends. Reddit recently signed new content partnerships with major sports leagues, including the NFL, NBA and MLB, and the majority of Reddit's advertising revenue comes from Fortune 500 companies. Last year, the site made close to $800 million in ad sales, and counts marquee brands like Toyota, Disney, Samsung and Ulta Beauty among its advertisers. This year, analysts expect Reddit's overall advertising business to eclipse $1.1 billion in revenue and see the company reaching $2 billion in sales as soon as 2027, according to data compiled by Bloomberg. To get there, Reddit will need to court smaller marketers, too. The company makes more than 25% of its revenue from just 10 advertisers, meaning any unexpected pullback from a key partner could have a significant impact on the company's business, said Dan Salmon, lead analyst at New Street Research. "This army of small businesses — that's the most important thing for all of those platforms, for Reddit, for Pinterest, for X," he said...

Advertisers large and small say they're already planning to spend more on Reddit in the coming quarters.

The article points out that more than 90 million people visit Reddit each day.
Security

'Invasive' Iranian Intelligence Group Believed to Be The Ones Who Breached Trump's Campaign (reuters.com) 98

Reuters reports that the Iranian hacking team which compromised the campaign of U.S. presidential candidate Donald Trump "is known for placing surveillance software on the mobile phones of its victims, enabling them to record calls, steal texts and silently turn on cameras and microphones, according to researchers and experts who follow the group." Known as APT42 or CharmingKitten by the cybersecurity research community, the accused Iranian hackers are widely believed to be associated with an intelligence division inside Iran's military, known as the Intelligence Organization of the Islamic Revolutionary Guard Corps or IRGC-IO. Their appearance in the U.S. election is noteworthy, sources told Reuters, because of their invasive espionage approach against high-value targets in Washington and Israel. "What makes (APT42) incredibly dangerous is this idea that they are an organization that has a history of physically targeting people of interest," said John Hultquist, chief analyst with U.S. cybersecurity firm Mandiant, who referenced past research that found the group surveilling the cell phones of Iranian activists and protesters... Hultquist said the hackers commonly use mobile malware that allows them to "record phone calls, room audio recordings, pilfer SMS (text) inboxes, take images off of a machine," and gather geolocation data...

APT42 also commonly impersonates journalists and Washington think tanks in complex, email-based social engineering operations that aim to lure their targets into opening booby-trapped messages, which let the hackers take over systems. The group's "credential phishing campaigns are highly targeted and well-researched; the group typically targets a small number of individuals," said Josh Miller, a threat analyst with email security company Proofpoint. They often target anti-Iran activists, reporters with access to sources inside Iran, Middle Eastern academics and foreign-policy advisers. This has included the hacking of western government officials and American defense contractors. For example, in 2018, the hackers targeted nuclear workers and U.S. Treasury department officials around the time the United States formally withdrew from the Joint Comprehensive Plan of Action (JCPOA), said Allison Wikoff, a senior cyber intelligence analyst with professional services company PricewaterhouseCoopers.

"APT42 is still actively targeting campaign officials and former Trump administration figures critical of Iran, according to a blog post by Google's cybersecurity research team."
The Military

Workers at Google DeepMind Push Company to Drop Military Contracts (time.com) 143

Nearly 200 Google DeepMind workers signed a letter urging Google to cease its military contracts, expressing concerns that the AI technology they develop is being used in warfare, which they believe violates Google's own AI ethics principles. "The letter is a sign of a growing dispute within Google between at least some workers in its AI division -- which has pledged to never work on military technology -- and its Cloud business, which has contracts to sell Google services, including AI developed inside DeepMind, to several governments and militaries including those of Israel and the United States," reports TIME Magazine. "The signatures represent some 5% of DeepMind's overall headcount -- a small portion to be sure, but a significant level of worker unease for an industry where top machine learning talent is in high demand." From the report: The DeepMind letter, dated May 16 of this year, begins by stating that workers are "concerned by recent reports of Google's contracts with military organizations." It does not refer to any specific militaries by name -- saying "we emphasize that this letter is not about the geopolitics of any particular conflict." But it links out to an April report in TIME which revealed that Google has a direct contract to supply cloud computing and AI services to the Israeli Ministry of Defense, under a wider contract with Israel called Project Nimbus. The letter also links to other stories alleging that the Israeli military uses AI to carry out mass surveillance and target selection for its bombing campaign in Gaza, and that Israeli weapons firms are required by the government to buy cloud services from Google and Amazon.

"Any involvement with military and weapon manufacturing impacts our position as leaders in ethical and responsible AI, and goes against our mission statement and stated AI Principles," the letter that circulated inside Google DeepMind says. (Those principles state the company will not pursue applications of AI that are likely to cause "overall harm," contribute to weapons or other technologies whose "principal purpose or implementation" is to cause injury, or build technologies "whose purpose contravenes widely accepted principles of international law and human rights.") The letter says its signatories are concerned with "ensuring that Google's AI Principles are upheld," and adds: "We believe [DeepMind's] leadership shares our concerns." [...]

The letter calls on DeepMind's leaders to investigate allegations that militaries and weapons manufacturers are Google Cloud users; terminate access to DeepMind technology for military users; and set up a new governance body responsible for preventing DeepMind technology from being used by military clients in the future. Three months on from the letter's circulation, Google has done none of those things, according to four people with knowledge of the matter. "We have received no meaningful response from leadership," one said, "and we are growing increasingly frustrated."

Google

Google is Shoving Its Apps Onto New Windows Laptops (theverge.com) 25

Google is making a new desktop app called Essentials that packages a few Google services, like Messages and Photos, and includes links to download many others. The app will be included with many new Windows laptops, with the first ones coming from HP. From a report: The Essentials app lets you "discover and install many of our best Google services," according to Google's announcement, and lets you browse Google Photos as well as send and receive Google Messages in the app. A full list of apps has not yet been announced, but Google's announcement art showcases icons including Google Sheets, Google Drive, Nearby Share, and Google One (a two-month free trial is offered through Essentials for new subscribers).

HP will start including Google Essentials across its computer brands, like Envy, Pavilion, Omen, and more. Google says you're "in control of your experience" and can uninstall any part of Essentials or the whole thing.

Android

Google Play Will No Longer Pay To Discover Vulnerabilities In Popular Android Apps (androidauthority.com) 19

Android Authority's Mishaal Rahman reports: Security vulnerabilities are lurking in most of the apps you use on a day-to-day basis; there's just no way for most companies to preemptively fix every possible security issue because of human error, deadlines, lack of resources, and a multitude of other factors. That's why many organizations run bug bounty programs to get external help with fixing these issues. The Google Play Security Reward Program (GPSRP) is an example of a bug bounty program that paid security researchers to find vulnerabilities in popular Android apps, but it's being shut down later this month. Google announced the Google Play Security Reward Program back in October 2017 as a way to incentivize security researchers to find and, most importantly, responsibly disclose vulnerabilities in popular Android apps distributed through the Google Play Store. [...]

The purpose of the Google Play Security Reward Program was simple: Google wanted to make the Play Store a more secure destination for Android apps. According to the company, vulnerability data they collected from the program was used to help create automated checks that scanned all apps available in Google Play for similar vulnerabilities. In 2019, Google said these automated checks helped more than 300,000 developers fix more than 1,000,000 apps on Google Play. Thus, the downstream effect of the GPSRP is that fewer vulnerable apps are distributed to Android users.

However, Google has now decided to wind down the Google Play Security Reward Program. In an email to participating developers, such as Sean Pesce, the company announced that the GPSRP will end on August 31st. The reason Google gave is that the program has seen a decrease in the number of actionable vulnerabilities reported. The company credits this success to the "overall increase in the Android OS security posture and feature hardening efforts."

Google

Google Agrees To $250 Million Deal To Fund California Newsrooms, AI (politico.com) 33

Google has reached a groundbreaking deal with California lawmakers to contribute millions to local newsrooms, aiming to support journalism amid its decline as readers migrate online and advertising dollars evaporate. The agreement also includes a controversial provision for artificial intelligence funding. Politico reports: California emulated a strategy that other countries like Canada have used to try and reverse the journalism industry's decline as readership migrated online and advertising dollars evaporated. [...] Under the deal, the details of which were first reported by POLITICO on Monday, Google and the state of California would jointly contribute a minimum of $125 million over five years to support local newsrooms through a nonprofit public charity housed at UC Berkeley's journalism school. Google would contribute at least $55 million, and state officials would kick in at least $70 million. The search giant would also commit $50 million over five years to unspecified "existing journalism programs."

The deal would also steer millions in tax-exempt private dollars toward an artificial intelligence initiative that people familiar with the negotiations described as an effort to cultivate tech industry buy-in. Funding for artificial intelligence was not included in the bill at the core of negotiations, authored by Assemblymember Buffy Wicks. The agreement has drawn criticism from a journalists' union that had so far championed Wicks' effort. Media Guild of the West President Matt Pearce in an email to union members Sunday evening said such a deal would entrench "Google's monopoly power over our newsrooms."
"This public-private partnership builds on our long history of working with journalism and the local news ecosystem in our home state, while developing a national center of excellence on AI policy," said Kent Walker, chief legal officer for Alphabet, the parent company of Google.

Pearce wasn't so chipper. He criticized the plan in emails with union members, calling it a "total rout of the state's attempts to check Google's stranglehold over our newsrooms."
