Welcome back
Curated from 200+ sources across AI & machine learning

Aichi Prefecture aims to put buses with Level 4 autonomous driving, in which a vehicle can operate without a driver under specific conditions, into practical use in fiscal 2027.

The AI arms race among Big Tech shows no signs of slowing. Companies continue to pour tens of billions of dollars into data centers, talent, and compute power, all chasing the next leap in reasoning, multimodality, and real-world usefulness. For everyday investors watching their portfolios, the stakes feel personal: Will this massive spending deliver revenue ...
Meta Platforms Finally Releases Muse Spark. Is the AI Model Worth the Wait?

Apollo Global Management (NYSE:APO) has taken part in a major funding round for SiFive, a RISC-V chip designer working with Nvidia on AI data center solutions. The firm is also involved in the completed $7.4b acquisition of Air Lease, now operating as Sumisho Air Lease. These moves expand Apollo's reach into both AI chip technology and aviation leasing, adding new angles to its alternative asset focus. At a share price of $104.28, Apollo Global Management gives investors exposure to a...

The report is particularly surprising since the Department of Defense recently declared Anthropic a supply-chain risk.

Amazon.com (NasdaqGS:AMZN) used its latest annual shareholder letter to highlight a revenue run rate of over $20b from its in-house AI chips, including Graviton and Trainium. The company reported triple-digit year-over-year growth in this AI chip segment and is assessing whether to sell these chips directly to external customers. Amazon also outlined plans for around $200b of capital expenditures in 2026, supported by large customer commitments in AWS. For investors, this puts Amazon's AI chip...

Article URL: https://mythosai.cloud/ Comments URL: https://news.ycombinator.com/item?id=47740401 Points: 1 # Comments: 0

The OpenAI CEO's new blog post responds to both an apparent attack on his home and an in-depth New Yorker profile raising questions about his trustworthiness.

One is the clear AI leader.

Japan's temples experiment with artificial intelligence as questions of faith, presence and care grow more urgent.

OpenAI CEO Sam Altman linked an attack on his home to a moment of “great anxiety” around AI.

I live with five friends in a big house, and two things I’ve done in it on this particular Sunday are hide 156 easter eggs all around, and reach a tentative joint decision on the allocation of four of its rooms. These tasks are delightful to me for a reason they have in common, and from which I hope to gesture at extremely far-reaching conclusions.

Easter eggs

A room usually seems like a simple thing to me—a big box, with some smaller mostly boxish objects and holes in it. Each of those things also usually seems simple: a cupboard is a box-shaped hole, with a movable thin-box-shaped front, which has hinges (the most complicated part, but in this picture their only qualities are letting flat surfaces rotate around fixed edges). Sometimes a cupboard has shelves, which are like planes breaking up the space. In this picture, hiding easter eggs well is hard! Like, I could put one in the cupboard? On the top shelf? Or the bottom shelf! They’ll never find it there! These are not good hiding p

The bears worry about Meta's spending habits, while the bulls are excited about its core business and the potential of AI. Who is right?

Data drift happens when the statistical properties of a machine learning (ML) model's input data change over time, eventually rendering its predictions less accurate. Cybersecurity professionals who rely on ML for tasks like malware detection and network threat analysis find that undetected data drift can create vulnerabilities. A model trained on old attack patterns may fail to see today's sophisticated threats. Recognizing the early signs of data drift is the first step in maintaining reliable and efficient security systems.

Why data drift compromises security models

ML models are trained on a snapshot of historical data. When live data no longer resembles this snapshot, the model's performance degrades, creating a critical cybersecurity risk. A threat detection model may generate more false negatives by missing real breaches, or more false positives, leading to alert fatigue for security teams. Adversaries actively exploit this weakness. In 2024, attackers used echo-spoofing t
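
As a concrete illustration of catching drift early, here is a minimal sketch of a single-feature drift check using a two-sample Kolmogorov-Smirnov test. It assumes scipy is installed, and the synthetic arrays stand in for a real training-time feature column and its production counterpart.

```python
# Minimal data-drift check: compare a live feature's distribution against
# the training baseline with a two-sample Kolmogorov-Smirnov test.
# The data here is synthetic; in practice `baseline` would be a feature
# column from the training set and `live` the same feature in production.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature
live = rng.normal(loc=0.4, scale=1.2, size=5_000)      # shifted production feature

stat, p_value = ks_2samp(baseline, live)
ALPHA = 0.01  # significance threshold for declaring drift

if p_value < ALPHA:
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}): consider retraining")
else:
    print(f"No significant drift (KS={stat:.3f}, p={p_value:.2e})")
```

In a production pipeline a check like this would typically run per feature on a schedule, with multiple-testing correction before alerting, but the core idea is the same: compare live distributions against the training snapshot before accuracy quietly erodes.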

Engineers from SoftBank and Tokyo-based AI developer Preferred Networks Inc. are expected to participate in the development.

Article URL: https://www.reuters.com/technology/meta-transfers-top-engineers-into-new-ai-tooling-team-2026-04-09/ Comments URL: https://news.ycombinator.com/item?id=47731801 Points: 4 # Comments: 1

Article URL: https://plantthevillage.com/ Comments URL: https://news.ycombinator.com/item?id=47729407 Points: 3 # Comments: 0

arXiv:2604.08578v1 Announce Type: new Abstract: High-quality labeled data is critical for training reliable machine learning and deep learning models, yet manual annotation remains costly and error-prone. Programmatic labeling addresses this challenge by using label functions (LFs), i.e., heuristic rules that automatically generate weak labels for training datasets. However, existing automated LF generation methods either rely on large language models (LLMs) to synthesize surface-level heuristics or employ model-based synthesis over hand-crafted primitives. These approaches often result in limited coverage and unreliable label quality. In this paper, we introduce EXPONA, an automated framework for programmatic labeling that formulates LF generation as a principled process balancing diversity and reliability. EXPONA systematically explores multi-level LFs, spanning surface, structural, and semantic perspectives. EXPONA further applies reliability-aware mechanisms to suppress noisy or r
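
To make the idea of label functions concrete, here is a toy Python sketch of a few keyword-style LFs combined by majority vote. The rules, label names, and aggregation are illustrative assumptions for a spam task, not EXPONA's actual heuristics or its reliability-aware mechanism.

```python
# Toy programmatic labeling with label functions (LFs): each LF is a
# heuristic that votes SPAM, HAM, or abstains; votes are combined by
# simple majority. Everything here is hypothetical, for illustration only.
from collections import Counter

SPAM, HAM, ABSTAIN = 1, 0, -1

def lf_contains_urgent(text: str) -> int:    # surface-level heuristic
    return SPAM if "urgent" in text.lower() else ABSTAIN

def lf_many_exclamations(text: str) -> int:  # structural heuristic
    return SPAM if text.count("!") >= 3 else ABSTAIN

def lf_greeting(text: str) -> int:           # weak signal for ham
    return HAM if text.lower().startswith(("hi", "hello")) else ABSTAIN

LFS = [lf_contains_urgent, lf_many_exclamations, lf_greeting]

def weak_label(text: str) -> int:
    """Majority vote over non-abstaining LFs; abstain if no LF fires."""
    votes = [v for v in (lf(text) for lf in LFS) if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return Counter(votes).most_common(1)[0][0]

print(weak_label("URGENT!!! claim your prize now!!!"))    # -> 1 (SPAM)
print(weak_label("Hello team, notes from today's sync"))  # -> 0 (HAM)
```
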
arXiv:2604.08987v1 Announce Type: new Abstract: As Large Language Models (LLMs) advance toward embodied AI agents operating in physical environments, a fundamental question emerges: can models trained on text corpora reliably reason about complex physics while adhering to safety constraints? We address this through PilotBench, a benchmark evaluating LLMs on safety-critical flight trajectory and attitude prediction. Built from 708 real-world general aviation trajectories spanning nine operationally distinct flight phases with synchronized 34-channel telemetry, PilotBench systematically probes the intersection of semantic understanding and physics-governed prediction through comparative analysis of LLMs and traditional forecasters. We introduce Pilot-Score, a composite metric balancing 60% regression accuracy with 40% instruction adherence and safety compliance. Comparative evaluation across 41 models uncovers a Precision-Controllability Dichotomy: traditional forecasters achieve superi
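
For intuition about what a 60/40 composite metric could look like, here is a hedged sketch; the component definitions, the normalization to [0, 1], and the equal split between adherence and safety are assumptions for illustration, not the paper's exact Pilot-Score formulation.

```python
# Hedged sketch of a 60/40 composite score in the spirit of Pilot-Score:
# 60% weight on regression accuracy, 40% on instruction adherence and
# safety compliance. Component definitions are illustrative assumptions.
def composite_score(regression_acc: float,
                    adherence: float,
                    safety: float) -> float:
    """All inputs normalized to [0, 1]; higher is better."""
    compliance = 0.5 * adherence + 0.5 * safety  # assumed equal split
    return 0.6 * regression_acc + 0.4 * compliance

print(composite_score(regression_acc=0.82, adherence=0.9, safety=0.95))
# -> approximately 0.862
```
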
arXiv:2604.09035v1 Announce Type: new Abstract: Model-based reinforcement learning (MBRL) with autoregressive world models suffers from compounding errors, whereas diffusion world models mitigate this by generating trajectory segments jointly. However, existing diffusion guides are either policy-only, discarding value information, or reward-based, which becomes myopic when the diffusion horizon is short. We introduce Advantage-Guided Diffusion for MBRL (AGD-MBRL), which steers the reverse diffusion process using the agent's advantage estimates so that sampling concentrates on trajectories expected to yield higher long-term return beyond the generated window. We develop two guides: (i) Sigmoid Advantage Guidance (SAG) and (ii) Exponential Advantage Guidance (EAG). We prove that a diffusion model guided through SAG or EAG allows us to perform reweighted sampling of trajectories with weights increasing in state-action advantage, implying policy improvement under standard assumptions. Addi
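
As a loose illustration of advantage-based reweighting in the spirit of SAG and EAG, the toy sketch below assigns sampling weights that increase with each candidate trajectory's estimated advantage. It reweights already-generated candidates rather than steering the reverse diffusion process itself, which is a simplification of the paper's method; all names and the temperature parameter are assumptions.

```python
# Toy advantage-weighted trajectory sampling: candidates with higher
# estimated advantage get proportionally higher sampling probability.
# This is a simplified stand-in for SAG/EAG-guided diffusion sampling.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid_weights(adv: np.ndarray, temp: float = 1.0) -> np.ndarray:
    """SAG-style weights: monotone increasing in advantage, bounded in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-adv / temp))

def exponential_weights(adv: np.ndarray, temp: float = 1.0) -> np.ndarray:
    """EAG-style weights: exp(A / temp), sharper preference for high advantage."""
    return np.exp((adv - adv.max()) / temp)  # subtract max for numerical stability

advantages = rng.normal(size=8)  # advantage estimate per candidate trajectory
w = exponential_weights(advantages)
probs = w / w.sum()              # normalized sampling distribution
chosen = rng.choice(len(advantages), p=probs)
print(f"advantages={np.round(advantages, 2)}")
print(f"probs={np.round(probs, 2)}, chosen trajectory index={chosen}")
```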

US Government blacklists Anthropic as Iran bombs AWS data centers. Plus: $19B revenue in weeks, industrial-scale distillation wars, and an mRNA dog cancer vaccine designed by ChatGPT.

Article URL: https://github.com/rufus-SD/lrts Comments URL: https://news.ycombinator.com/item?id=47739332 Points: 1 # Comments: 0

OpenAI recently added a $100 plan to its lineup, but confusing labels on the pricing page left users guessing about actual usage limits. An OpenAI employee tried to clear things up. The article appeared first on The Decoder.

...explained from scratch!

Anthropic was the star of the show at San Francisco's AI-centric conference.

The rise of AI has brought an avalanche of new terms and slang. Here is a glossary with definitions of some of the most important words and phrases you might encounter.

This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on the AI coding and vibe-coding booms, follow David Pierce. The Stepback arrives in our subscribers' inboxes at 8AM ET. Opt in for The Stepback here. How it started Writing code was a killer app for AI even before anyone was really talking about AI. In the spring of 2021, 18 months before the world knew the word "ChatGPT," Microsoft debuted the very first product of a partnership with a nonprofit called OpenAI: a tool called GitHub Copilot that watched developers as they wrote code and tried to autocomplete snippets and lines for them … Read the full story at The Verge.

Article URL: https://web.archive.org/web/20260310175721if_/https://s3.documentcloud.org/documents/27777984/nbc-news-march-2026-poll-03-08-2024-release-final.pdf?t=1772898915520 Comments URL: https://news.ycombinator.com/item?id=47731392 Points: 3 # Comments: 1

Generalist AI has introduced a new robotics model, GEN-1, which the company says marks a significant step toward general-purpose artificial intelligence for physical tasks. The model is designed as an “embodied foundation model” – a type of AI system that can perceive, reason and act in the physical world – and is trained on large-scale […]