Welcome back
Curated from 200+ sources across AI & machine learning

In recent weeks, IREN has been highlighted as a pure‑play data center operator after securing a five‑year, US$9.70 billion agreement with Microsoft, underpinning its push into AI‑focused infrastructure alongside its existing Bitcoin mining operations. This long‑term hyperscale contract, combined with management’s US$3.40 billion annualized revenue goal for 2026, underscores both the opportunity in AI data centers and the heavy capital commitments required to build out capacity.

![AINews: ImageGen is on the Path to AGI](https://zmstgxtziqmvvwzllahg.supabase.co/storage/v1/object/public/article-images/latent-space/05d8d561-0cf8-4ea9-ab70-d3c43ce05ce3.jpg)
Article URL: https://avc.com/2016/04/an-ai-first-world/ Comments URL: https://news.ycombinator.com/item?id=47929692 Points: 3 # Comments: 0

The move comes as the trial for Elon Musk’s lawsuit against OpenAI kicks off in federal court in Oakland.

Article URL: https://techcrunch.com/2026/04/27/deepminds-david-silver-just-raised-1-1b-to-build-an-ai-that-learns-without-human-data/ Comments URL: https://news.ycombinator.com/item?id=47927804 Points: 1 # Comments: 0

On Monday, the courtroom battle between Elon Musk and Sam Altman over alleged broken promises at OpenAI started, as usual, with jury selection. The only tricky part? A lot of the prospective jurors already have an opinion about Elon Musk, and it's not a good one. The Verge reporter Elizabeth Lopatto, who was there at the courthouse, quoted statements from some of the juror questionnaires: "Elon Musk is a greedy, racist, homophobic piece of garbage." "Elon Musk is a world-class jerk." "I very much dislike Tesla. As a woman of color, I am very aware of the damaging statements and actions Elon Musk has enacted and been a part of." … Read the full story at The Verge.

The AI chipmaker has rebounded from its recent correction, but is there additional upside ahead?

Microsoft and OpenAI on Monday announced a sweeping overhaul of the partnership that has defined the commercial AI era, dismantling key pillars of exclusivity and revenue-sharing that bound the two companies together for years and replacing them with a looser, time-limited arrangement that gives both sides far more freedom to pursue rival relationships. The amended agreement, disclosed simultaneously in blog posts from both companies, marks the most significant restructuring since Microsoft first invested $1 billion in OpenAI in 2019 — and it transforms what was once the most consequential exclusive technology alliance in a generation into something that more closely resembles a strategic but arm's-length commercial relationship. Under the new terms, Microsoft will no longer pay any revenue share to OpenAI when customers access OpenAI models through Azure. OpenAI, meanwhile, will continue paying a revenue share to Microsoft through 2030 — at the same 20 percent rate — but that obligation …

Musk’s lawsuit challenges OpenAI’s evolution under Sam Altman. But during jury selection, several potential jurors voiced negative views of Musk himself.

Amended agreement clears the way for OpenAI models to run on Amazon Bedrock.

After a yearslong legal feud, Elon Musk and OpenAI CEO Sam Altman are heading to trial this week in Northern California in a case that could have sweeping consequences. Ahead of OpenAI’s highly anticipated IPO, the court could rule on whether the company is allowed to exist as a for-profit enterprise and might even oust…

Google is trying out an AI Mode-like search experience for YouTube. The company is now testing "a new way to search on YouTube that feels more like a conversation," with results pulling in things like longform videos, YouTube Shorts, and text about what you're searching for. The "experiment" is now available if you're a YouTube Premium subscriber in the US who is 18 or older. I turned it on for my account. Now, in the search bar, I see an "Ask YouTube" button, and clicking the search bar shows prompts to ask like "funny baby elephant playing clips," "summary of the rules of volleyball," and "short history of the Apollo 11 moon landing." … Read the full story at The Verge.

The billionaire said the SaaS giant would hire recent grads to help build its AI platforms.

“You don’t just want to be able to code. You want to be able to have a conversation, form relationships and be able to think critically, because at the end of the day, that’s the thing that AI can’t replace,” said Josephine Timperman, a student at Miami University in Ohio.

Takaichi argues the solution is investment — with Aida proposing a "high-pressure economy" where demand outstrips supply and the government breaks the deadlock.
Article URL: https://sci-bot.ru/ Comments URL: https://news.ycombinator.com/item?id=47918570 Points: 1 # Comments: 0

Skye's new AI app attracted investors before it even launched — a sign of interest in a more AI-aware iPhone.
Article URL: https://theaidigest.org/time-horizons Comments URL: https://news.ycombinator.com/item?id=47930595 Points: 1 # Comments: 0

Article URL: https://www.scmp.com/tech/tech-trends/article/3351595/chinas-deepseek-prices-new-v4-ai-model-97-below-openais-gpt-55 Comments URL: https://news.ycombinator.com/item?id=47928644 Points: 4 # Comments: 0

E-commerce and cloud computing giant Amazon (AMZN) is scheduled to announce its first-quarter results after the market closes on Wednesday, April 29. AMZN stock has risen over 32% in the past month, driven by accelerating growth in the company’s AWS (Amazon Web Services) cloud unit, a deal with Meta Platforms (META) to power agentic AI on AWS’ Graviton chips, and other strategic partnerships, including a new deal to invest up to $25 billion in Anthropic.

Microsoft (MSFT) and OpenAI (OPAI.PVT) have amended their agreement to allow the private AI developer to partner with Microsoft's competitors. Baird senior cloud software research analyst William Power joins Market Domination to discuss what the change could mean for Microsoft, which holds a 27% stake in OpenAI, and for its access to OpenAI's large language model products.

AI R&D runs on a cycle of hypothesis, experiment, and analysis — each step demanding substantial manual engineering effort. A new framework called ASI-EVOLVE, developed by researchers at the Generative Artificial Intelligence Research Lab (SII-GAIR), aims to close that bottleneck by automating the full optimization loop for training data, model architectures, and learning algorithms. Designed as an agentic system for AI-for-AI research, it uses a continuous "learn-design-experiment-analyze" cycle to automate optimization of the foundational AI stack. In experiments, this self-improvement loop autonomously discovered novel designs that significantly outperformed state-of-the-art human baselines: it generated novel language model architectures, improved pretraining data pipelines to boost benchmark scores by over 18 points, and designed highly efficient reinforcement learning algorithms.
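The "learn-design-experiment-analyze" cycle described above can be sketched as a simple search loop. This is an illustrative stand-in, not ASI-EVOLVE's actual algorithm: the candidate space, mutation rule, and scoring function below are toy placeholders chosen to keep the example self-contained.

```python
import random

def propose(design, rng):
    """Design step: mutate one hyperparameter of the current candidate."""
    new = dict(design)
    key = rng.choice(list(new))
    new[key] = new[key] * rng.choice([0.5, 2.0])
    return new

def evaluate(design):
    """Experiment step: toy benchmark score, peaked at lr=0.1, layers=8."""
    return -abs(design["lr"] - 0.1) - abs(design["layers"] - 8)

def optimize(initial, steps=200, seed=0):
    """Run the learn-design-experiment-analyze loop as greedy hill climbing."""
    rng = random.Random(seed)
    best, best_score = initial, evaluate(initial)
    for _ in range(steps):
        candidate = propose(best, rng)           # design
        score = evaluate(candidate)              # experiment
        if score > best_score:                   # analyze
            best, best_score = candidate, score  # learn
    return best, best_score

best, best_score = optimize({"lr": 0.8, "layers": 2})
print(best, best_score)
```

A real agentic system would replace `propose` with an LLM suggesting architecture or pipeline changes and `evaluate` with actual training runs, but the control flow is the same closed loop.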
arXiv:2604.23993v1 Announce Type: cross Abstract: Product mapping, the task of deciding whether two e-commerce listings refer to the same product, is a core problem for price monitoring and channel visibility. In real marketplaces, however, sellers frequently inject promotional keywords, platform-specific tags, and bundle descriptions into titles, causing the same product to appear under many different names. Recent LLM-based and multi-agent frameworks improve robustness and interpretability on such hard cases, but they often rely on expensive external APIs, repeated retrieval, and complex inference-time orchestration, making large-scale deployment costly and difficult in privacy-sensitive enterprise settings. To address these issues, we present EPM-RL, a reinforcement-learning-based framework for building an accurate and efficient on-premise e-commerce product mapping model. Our central idea is to distill high-cost agentic reasoning into a trainable in-house model. …
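To make the product-mapping task concrete, here is a minimal baseline sketch of the problem the abstract describes: deciding whether two noisy listing titles refer to the same product. The promo-word list, normalization, and Jaccard threshold are illustrative assumptions, not components of EPM-RL itself, which trains a model for this decision.

```python
import re

# Illustrative stop-list of promotional filler tokens sellers inject into titles.
PROMO_WORDS = {"sale", "hot", "free", "shipping", "official", "new", "2024"}

def normalize(title: str) -> set[str]:
    """Lowercase, strip punctuation, and drop promotional filler tokens."""
    tokens = re.findall(r"[a-z0-9]+", title.lower())
    return {t for t in tokens if t not in PROMO_WORDS}

def same_product(title_a: str, title_b: str, threshold: float = 0.6) -> bool:
    """Decide via token-set Jaccard similarity over normalized titles."""
    a, b = normalize(title_a), normalize(title_b)
    if not a or not b:
        return False
    jaccard = len(a & b) / len(a | b)
    return jaccard >= threshold

# Two noisy variants of the same listing match after normalization:
print(same_product(
    "HOT SALE!! Acme X100 Wireless Mouse - FREE Shipping",
    "Acme X100 Wireless Mouse (Official 2024)",
))  # → True
```

A rule-based matcher like this fails on the hard cases (bundles, paraphrased titles), which is exactly where the paper argues for distilling agentic LLM reasoning into a trainable in-house model.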

Xiaomi, the Chinese firm best known for its smartphones and electric vehicles, has lately been shipping some incredibly affordable and high-powered open source AI large language models. The trend continued today with the release of Xiaomi MiMo-V2.5 and Xiaomi MiMo-V2.5-Pro, both available under the permissive, enterprise-friendly MIT License, making them suitable for use in production in commercial applications. Enterprises and individual/independent developers can now download either of the models (and more Xiaomi open source options) directly from Hugging Face, modify them as needed, and run them locally or on virtual private clouds as they see fit. The most notable attribute of these models besides the open source licensing is that, according to Xiaomi's published benchmarks, they are among the most efficient available for agentic "claw" tasks, that is, powering systems such as OpenClaw, NanoClaw and Hermes Agent, in which users can communicate with them directly over third-party …

One of the most popular Linux distributions is about to get an influx of AI features. As reported by Phoronix, Jon Seager, VP of engineering at Ubuntu developer Canonical, shared a blog post on Monday detailing plans to add AI features to the Linux distro over the next year. As the post states, the AI features "will come in two forms: first as a means of enhancing existing OS functionality with AI models in the background, and latterly in the form of 'AI native' features and workflows for those who want them." These features will range from accessibility tools like improved speech-to-text and text-to-speech to agentic AI features for tasks … Read the full story at The Verge.

Google is bringing back its 5-Day AI Agents Intensive Course with Kaggle and registration is open.
Source article excerpt: With a single PCIe card — powered by six HTX301 chips and 384 GB of memory — enterprises can now run 700B-parameter model inference locally at just ~240W per card. The card targets the memory-bandwidth-intensive token-generation phase that dominates real-world inference latency: existing GPUs handle the compute-dense prefill, while HTX301 cards handle decode, each piece of silicon matched to its phase. This is a really interesting approach: the GPU handles only the prefill stage, while everything else, including the model weights and decoding, runs entirely on the card. That way, you can run huge models without needing to chase graphics cards with massive VRAM. As for how the actual product will perform in real life, we'll have to wait until early June at Computex to find out.
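The prefill/decode split rests on a simple cost asymmetry, sketched below with a toy cost model. The byte and FLOP figures are rough rules of thumb for dense transformer inference, not HTX301 specifications: prefill cost scales with compute over the whole prompt at once, while decode cost is dominated by re-streaming all model weights for every generated token.

```python
# Toy cost model for why prefill suits a GPU and decode suits a
# high-memory-bandwidth card. Numbers are illustrative assumptions.

PARAMS = 700e9          # 700B parameters, as in the article
WEIGHT_BYTES = PARAMS   # ~1 byte per weight (e.g. 8-bit quantized), assumed

def prefill_flops(prompt_tokens: int) -> float:
    """Compute-bound phase: roughly 2 * params FLOPs per prompt token,
    processed in one large, parallel pass (a GPU's strength)."""
    return 2 * PARAMS * prompt_tokens

def decode_bytes(new_tokens: int) -> float:
    """Bandwidth-bound phase: each generated token streams every weight
    from memory once, serially (what a high-bandwidth card is built for)."""
    return WEIGHT_BYTES * new_tokens

print(f"prefill: {prefill_flops(2048):.2e} FLOPs in one parallel pass")
print(f"decode:  {decode_bytes(256):.2e} bytes streamed, token by token")
```

Because decode throughput is bounded by memory bandwidth rather than FLOPs, dedicating 384 GB of on-card memory to weights and KV state lets the card serve the serial generation loop while the GPU is freed for the parallel prefill matmuls.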

Sereact has raised a $110 million Series B round led by Headline, with participation from Bullhound Capital, Daphni, and Felix Capital. Existing investors Air Street Capital, Creandum (who led the company’s 2025 Series A round), and Point Nine once again invested in Sereact. The round funds two priorities: scaling Cortex 2 and entering the United […]

SquareMind, an AI and robotics company developing solutions for dermatology, has announced $18 million in funding, including previously undisclosed pre-Series A financing, to enable high-quality, consistent skin exams and make them accessible at scale. The round was led by Sonder Capital, a California-based venture fund co-founded by medical robotics pioneer and Intuitive Surgical founder Fred […]