Welcome back
Curated from 200+ sources across AI & machine learning

Fresh attention on Comcast (CMCSA) is being driven by its recent push into AI-powered edge computing with NVIDIA, new cybersecurity bundled plans for small businesses, and continued multi-state network expansions that are widening its high-speed internet reach. See our latest analysis for Comcast. Despite a stream of new AI partnerships, cybersecurity bundles, and multi-state network builds, Comcast’s recent share price performance has been weak, with a 30-day share price return of 9.93% and...



OpenAI has acquired tech talk show TBPN. The show will supposedly remain editorially independent but report to OpenAI's communications department. That's as contradictory as it sounds. So what's OpenAI really after? The article OpenAI decides the best way to fight critical AI coverage is to own a newsroom appeared first on The Decoder.

Google in November announced Project Suncatcher, with plans to launch prototype satellites to test AI hardware in 2027.

Article URL: https://efficienist.com/netflix-just-release-its-first-public-ai-model-because-why-not/ Comments URL: https://news.ycombinator.com/item?id=47625451 Points: 2 # Comments: 0

Article URL: https://fortune.com/2026/04/02/mercor-ai-startup-security-incident-10-billion/ Comments URL: https://news.ycombinator.com/item?id=47624961 Points: 5 # Comments: 1

OpenAI is acquiring TBPN, a business talk show that’s popular among Silicon Valley elites, as it continues to battle its negative public image.

TBPN, Silicon Valley's cult-favorite tech podcast, will operate independently, even as it's overseen by chief political operative Chris Lehane.
![[AINews] Gemma 4: The best small Multimodal Open Models, dramatically better than Gemma 3 in every way](https://zmstgxtziqmvvwzllahg.supabase.co/storage/v1/object/public/article-images/latent-space/fa720f60-8788-4875-b929-bbce28b94862.jpg)
A welcome update from Google!

Six months after the group's formation, MAI released models that can transcribe voice into text as well as generate audio and images.

Gemma 4 brings the first major update to Google's open models in a year.

Also: All the news and watercooler chat from Fortune.

Article URL: https://github.com/hamtun24/openuma Comments URL: https://news.ycombinator.com/item?id=47624865 Points: 2 # Comments: 0

Microsoft on Wednesday launched three new foundational AI models it built entirely in-house — a state-of-the-art speech transcription system, a voice generation engine, and an upgraded image creator — marking the most concrete evidence yet that the $3 trillion software giant intends to compete directly with OpenAI, Google, and other frontier labs on model development, not just distribution. The trio of models — MAI-Transcribe-1, MAI-Voice-1, and MAI-Image-2 — are available immediately through Microsoft Foundry and a new MAI Playground. They span three of the most commercially valuable modalities in enterprise AI: converting speech to text, generating realistic human voice, and creating images. Together, they represent the opening salvo from Microsoft's superintelligence team, which Mustafa Suleyman formed just six months ago to pursue what he calls "AI self-sufficiency." "I'm very excited that we've now got the first models out, which are the very best in the world for transcription," Suleyman

OpenAI acquires TBPN to accelerate global conversations around AI and support independent media, expanding dialogue with builders, businesses, and the broader tech community.

Part museum, part performance venue, the ambitious Museum of Narratives aims to make Shinagawa a cultural hub.

Teams talk about NLP – natural language processing – in test automation, yet many still ask what it truly means. They hear about tools that turn plain language into test scripts, but they want clear facts. This topic matters now because software teams face tight release cycles and constant change. NLP in test automation means […]

arXiv:2604.01305v1 Announce Type: new Abstract: Reconstructing high-dimensional spatiotemporal fields from sparse sensor measurements is critical in a wide range of scientific applications. The SHallow REcurrent Decoder (SHRED) is a recent state-of-the-art architecture that reconstructs high-quality spatial fields from hyper-sparse sensor measurement streams. An important limitation of SHRED is that in complex, data-scarce, high-frequency, or stochastic systems, portions of the spatiotemporal field must be modeled with valid uncertainty estimation. We introduce UQ-SHRED, a distributional learning framework for sparse sensing problems that provides uncertainty quantification through a neural network-based distributional regression called engression. UQ-SHRED models the uncertainty by learning the predictive distribution of the spatial state conditioned on the sensor history. By injecting stochastic noise into sensor inputs and training with an energy score loss, UQ-SHRED p
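
To make the energy-score training objective concrete, here is a minimal Monte Carlo sketch (hypothetical shapes, not the paper's implementation): given samples drawn from the predictive distribution, the score rewards samples close to the observation while penalizing samples that collapse onto each other, which is what makes it a proper loss for distributional regression.

```python
import numpy as np

def energy_score(samples, y):
    """Monte Carlo estimate of the energy score.

    samples: (m, d) array of draws from the predictive distribution
    y:       (d,) observed state
    Lower is better; zero iff all samples equal the observation.
    """
    # Closeness of samples to the observation
    term1 = np.mean(np.linalg.norm(samples - y, axis=1))
    # Spread among samples (discourages a collapsed, overconfident predictor)
    diffs = samples[:, None, :] - samples[None, :, :]
    term2 = np.mean(np.linalg.norm(diffs, axis=2))
    return term1 - 0.5 * term2
```

In an engression-style setup, the samples would come from repeated forward passes with noise injected at the inputs, and this score would be minimized over training batches.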
Version 3 of the AI coding tool Cursor introduces a completely redesigned interface built to move developers from manual code editing to running multiple AI agents in parallel. The article New Cursor 3 ditches the classic IDE layout for an "agent-first" interface built around parallel AI fleets appeared first on The Decoder.

For the past two years, enterprises evaluating open-weight models have faced an awkward trade-off. Google's Gemma line consistently delivered strong performance, but its custom license — with usage restrictions and terms Google could update at will — pushed many teams toward Mistral or Alibaba's Qwen instead. Legal review added friction. Compliance teams flagged edge cases. And capable as Gemma 3 was, "open" with asterisks isn't the same as open. Gemma 4 eliminates that friction entirely. Google DeepMind's newest open model family ships under a standard Apache 2.0 license — the same permissive terms used by Qwen, Mistral, Arcee, and most of the open-weight ecosystem. No custom clauses, no "Harmful Use" carve-outs that required legal interpretation, no restrictions on redistribution or commercial deployment. For enterprise teams that had been waiting for Google to play on the same licensing terms as the rest of the field, the wait is over. The timing is notable. As some Chinese AI lab

As Cursor launches the next generation of its product, the AI coding startup has to compete with OpenAI and Anthropic more directly than ever.

Article URL: https://arxiv.org/abs/2604.01202 Comments URL: https://news.ycombinator.com/item?id=47622971 Points: 1 # Comments: 0

arXiv:2604.01322v1 Announce Type: new Abstract: Trampoline gymnastics involves extreme human poses and uncommon viewpoints, on which state-of-the-art pose estimation models tend to underperform. We demonstrate that this problem can be addressed by fine-tuning a pose estimation model on a dataset of synthetic trampoline poses (STP). STP is generated from motion capture recordings of trampoline routines. We develop a pipeline to fit noisy motion capture data to a parametric human model, then generate multiview realistic images. We use this data to fine-tune a ViTPose model, and test it on real multi-view trampoline images. The resulting model exhibits accuracy improvements in 2D which translate to improved 3D triangulation. In 2D, we obtain state-of-the-art results on such challenging data, bridging the performance gap between common and extreme poses. In 3D, we reduce the MPJPE by 12.5 mm with our best model, which represents an improvement of 19.6% compared to the pretrained ViTPose
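
The 3D step in a multi-view pipeline like this is typically a linear triangulation of each joint from calibrated views. A minimal sketch using the direct linear transform (hypothetical helper, not the authors' code; assumes known 3x4 projection matrices):

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """DLT triangulation of one joint from multiple calibrated views.

    proj_mats:  list of 3x4 camera projection matrices
    points_2d:  list of (u, v) image coordinates, one per view
    Returns the 3D point in world coordinates.
    """
    A = []
    for P, (u, v) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the homogeneous point
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    A = np.asarray(A)
    # The solution is the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Better 2D detections shrink the residual of this linear system directly, which is why the 2D gains reported above translate into lower MPJPE after triangulation.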

The baton of open source AI models has been passed between several companies in the years since ChatGPT debuted in late 2022, from Meta with its Llama family to Chinese labs like Qwen and z.ai. But lately, Chinese companies have started pivoting back towards proprietary models even as some U.S. labs like Cursor and Nvidia release their own variants of the Chinese models, leaving a question mark over who will carry this branch of the technology forward. One answer: Arcee, a San Francisco-based lab, which this week released Trinity-Large-Thinking, a 399-billion-parameter text-only reasoning model under the uncompromisingly open Apache 2.0 license, allowing full customizability and commercial usage by anyone from indie developers to large enterprises. The release represents more than just a new set of weights on the AI code-sharing community Hugging Face; it is a strategic bet that "American Open Weights" can provide a sovereign alternative to the increasingly clo

Anthropic has announced a new feature for its AI assistant Claude: the ability to directly operate a user's computer, handling tasks people would normally do themselves at their desk. The article Claude Code and Cowork now let Anthropic's AI take control of your Mac or Windows desktop appeared first on The Decoder.

Article URL: https://www.tomshardware.com/tech-industry/artificial-intelligence/the-largest-programming-community-on-reddit-just-banned-all-content-related-to-ai-llms-r-programming-is-prioritizing-only-high-quality-discussions-about-ai Comments URL: https://news.ycombinator.com/item?id=47625204 Points: 2 # Comments: 1

Mercor confirmed it was hit by a supply-chain attack targeting LiteLLM, a widely used AI developer tool. Extortion gang Lapsus$ claims to have walked away with four terabytes of data.

We’re mostly focused on research and writing for our next big scenario, but we’re also continuing to think about AI timelines and takeoff speeds, monitoring the evidence as it comes in, and adjusting our expectations accordingly. We’re tentatively planning on making quarterly updates to our timelines and takeoff forecasts. Since we published the AI Futures Model 3 months ago, we’ve updated towards shorter timelines. Daniel’s Automated Coder (AC) median has moved from late 2029 to mid 2028, and Eli’s forecast has moved a similar amount. The AC milestone is the point at which an AGI company would rather lay off all of their human software engineers than stop using AIs for software engineering. The reasons behind this change include:

- We switched to METR Time Horizon version 1.1.
- We included data from newly evaluated models (Gemini 3, GPT-5.2, and Claude Opus 4.6).
- Daniel and Eli revised their estimates for the present doubling time of the METR time horizon to be faster, from a
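
As a back-of-the-envelope illustration of why a faster estimated doubling time pulls forecasts earlier (all numbers here are hypothetical, not the model's actual parameters): if the METR time horizon grows exponentially, the months until it reaches a given target length scale linearly with the doubling time.

```python
from math import log2

def months_until(target_hours, current_hours, doubling_months):
    """Months until an exponentially doubling time horizon reaches a target length."""
    return doubling_months * log2(target_hours / current_hours)

# Hypothetical: a 1-hour horizon doubling every 4 months reaches 8 hours in 12 months;
# revise the doubling time down to 3 months and the same milestone arrives in 9.
```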

arXiv:2604.00016v1 Announce Type: new Abstract: The validity of online behavioral research relies on study participants being human rather than machine. In the past, it was possible to detect machines by posing simple challenges that were easily solved by humans but not by machines. General-purpose agents based on large language models (LLMs) can now solve many of these challenges, threatening the validity of online behavioral research. Here we explore the idea of detecting humanness by using tasks that machines can solve too well to be human. Specifically, we probe for the existence of an established human cognitive constraint: limited working memory capacity. We show that cognitive modeling on a standard serial recall task can be used to distinguish online participants from LLMs even when the latter are specifically instructed to mimic human working memory constraints. Our results demonstrate that it is viable to use well-established cognitive phenomena to distinguish LLMs from humans.
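
The core of a serial-recall check can be sketched with simple strict-position scoring (hypothetical helper names; the paper's actual cognitive modeling is more involved): humans reliably show primacy and recency effects with a dip at mid-list positions, so near-ceiling mid-list accuracy on lists well beyond typical span is a machine signature.

```python
def serial_position_accuracy(presented, recalled):
    """Strict serial scoring: fraction of trials with the correct item at each position."""
    n = len(presented[0])
    trials = len(presented)
    correct = [0] * n
    for items, recall in zip(presented, recalled):
        for i in range(n):
            if i < len(recall) and recall[i] == items[i]:
                correct[i] += 1
    return [c / trials for c in correct]

def looks_superhuman(accuracy, threshold=0.95):
    """Flag near-ceiling accuracy at mid-list positions, where humans reliably dip."""
    mid = accuracy[len(accuracy) // 3 : 2 * len(accuracy) // 3]
    return bool(mid) and all(a >= threshold for a in mid)
```

On supraspan lists (say, nine items), a participant whose mid-list accuracy stays near ceiling across trials is behaving unlike a working-memory-limited human, even if instructed to fake errors.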