AIToday


Today's Top AI News

Curated from 200+ sources across AI & machine learning

Digg tries again, this time as an AI news aggregator
TOP STORY · General AI

Digg returns (again) as another place to read AI news.

TechCrunch AI · 4h ago
Dooap Inc. Launches Dooap Studio: Putting Agentic AP Automation Directly in the Hands of Finance Teams
#2 · Models & Gen AI

Yahoo Finance AI · 4h ago
RLWRLD releases RLDX-1, a dexterity-first foundation model for robot hands
#3 · Robotics

The Robot Report · 4h ago


Nvidia bull unveils bold new AI trade Wall Street hasn't named yet

Nvidia super bull spots mysterious new AI trend after 80x call

General AI
Yahoo Finance AI
Fostering breakthrough AI innovation through customer-back engineering

Despite years of digitization, organizations capture less than one-third of the value expected from digital investments, according to McKinsey research. That’s because most big companies begin with technological capabilities and bolt applications onto them, rather than starting with customer needs and working backward to technology solutions. Not prioritizing the customer can create fragmented solutions; disjointed…

General AI
MIT Technology Review AI

Nvidia Expands AI Investment Push

Nvidia's equity commitments have topped $40 billion this year.

General AI
Yahoo Finance AI

OpenAI launches DeployCo to help businesses build around intelligence

OpenAI launches DeployCo, a new enterprise deployment company built to help organizations bring frontier AI into production and turn it into measurable business impact.

General AI
OpenAI Blog
Three things in AI to watch, according to a Nobel-winning economist

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. A few months before he was awarded the Nobel Prize in economics in 2024, Daron Acemoglu published a paper that earned him few fans in Silicon Valley. Contrary to what Big Tech…

General AI
MIT Technology Review AI
Taiwan Semiconductor (TSM) Is Just Rolling In Billions Of Dollars, Says Newsletter

Taiwan Semiconductor Manufacturing Co. Ltd. (NYSE:TSM) is one of the 14 Stocks That Will Skyrocket. This stock is another one pitched by Adam O’Dell. He makes a big claim to point out that “Amazon is betting its entire AI future on this company’s technology.” In fact, the technology is so crucial that “without this partner’s […]

General AI
Yahoo Finance AI
Everyone Wants What CoreWeave (CRWV) Has, Says Newsletter

CoreWeave, Inc. (NASDAQ:CRWV) is one of the 14 Stocks That Will Skyrocket. This stock is part of Alex Green’s list of “secret picks.” In fact, he goes as far as to call it an AI “Superstock.” To back up the claim, Green touts the firm as having agreements with OpenAI, Meta, NVIDIA, and other technology […]

General AI
Yahoo Finance AI
What Microsoft’s new research tells CFOs about the ROI of AI

The next AI challenge for CFOs may be management.

General AI
Fortune AI
The Iliad Intensive Course Materials

We are releasing the course materials of the Iliad Intensive, a new month-long and full-time AI Alignment course that runs in-person every second month. The course targets students with strong backgrounds in mathematics, physics, or theoretical computer science, and the materials reflect that: they include mathematical exercises with solutions, self-contained lecture notes on topics like singular learning theory and data attribution, and coding problems, at a depth that is unmatched for many of the topics we cover. Around 20 contributors (listed further below) were involved in developing these materials for the April 2026 cohort of the Iliad Intensive. By sharing the materials, we hope to create more common knowledge about what the Iliad Intensive is; invite feedback on the materials; and allow others to learn via independent study. We are developing the materials further and plan to eventually release them on a website that will be continuously maintained. We will also add, remove…

General AI
LessWrong AI
How useful is the information you get from working inside an AI company?

This post was drafted by Buck, and substantially edited by Anders. "I" refers to Buck. Thanks to Alex Mallen for comments. People who work inside AI companies get access to information that I only get later or never. Quantitatively, how big a deal is this access? Here’s an operationalization of this. Consider the following two ways my knowledge could be augmented: I get a crystal ball that tells me all the information I would know n months in the future. I become an employee of a frontier AI company (like OpenAI or Anthropic), with access to all the private information I’d normally get from working at that company. How big would n have to be for me to be indifferent between these two options, from the perspective of learning things that are helpful for making AI go well? The answer is presumably different for me than for many readers, because I’m a reasonably well-connected researcher; I see published information and news from the rumor mill and I talk to researchers at frontier AI c

General AI
LessWrong AI
The new Wild West of AI kids' toys

Article URL: https://www.wired.com/story/the-new-wild-west-of-ai-kids-toys/
Comments URL: https://news.ycombinator.com/item?id=48093289
Points: 3 · Comments: 0

General AI
Hacker News
The new AI-powered Google Finance is expanding to Europe

This week, the new, AI-powered Google Finance is launching across Europe, with full local language support. This reimagined experience offers a suite of powerful capabil…

General AI
Google AI Blog
AI wins have Alphabet poised to become world’s biggest company

Over the past year, Alphabet has gone from an artificial intelligence afterthought to the one firm in the market with dominant positions in nearly every aspect of the technology.

General AI
Japan Times Tech

OpenAI Campus Network: Student club interest form

Join the OpenAI Campus Network—connect student clubs worldwide, access AI tools, host events, and build an AI-powered campus community.

General AI
OpenAI Blog
Nvidia pumps over 40 billion dollars into AI partners so far in 2026

Nvidia has invested more than $40 billion in AI companies so far in 2026, cementing its role as the industry's biggest backer.

General AI
THE DECODER
Beyond efficiency: PayPal expands what's possible to build with AI

May 11, 2026

General AI
Cursor Blog
There aren’t enough rockets for space data centers — Cowboy Space raised $275M to build them

The apparently insatiable demand for AI compute has data center entrepreneurs looking to the stars. There's a key problem: There aren't enough rockets to put data centers in orbit around Earth, and they're too expensive.

General AI
TechCrunch AI

NVIDIA AI Releases Star Elastic: One Checkpoint that Contains 30B, 23B, and 12B Reasoning Models with Zero-Shot Slicing

I saw this on another sub and didn't see it posted here; it looks awesome and can definitely be run locally. I guess it was released 11 days ago, but it never hit the top of my feed (which I look at way too often), so I'm posting it again. This is my take on it: think of it like scalable video coding: you have a UHD stream, but strip some layers and you have an HD or SD stream; it's all a single file stream, not multiple ones. Like nested models, rather than 3 different sets, and they can share their KV cache so the model can adjust speed like a sliding scale. You get an idea with the 30B model, then scale down and permute all the thinking at 7000t/s on the 12B model, generating a book of reasoning in seconds, then slide up to 30B again to evaluate what's good. You could have the 30B model guide the smaller ones back and forth. Maybe it's somewhat of a hybrid between dense and MoE: like MoE, but with 3 dense models nested like Russian dolls. Original Post: NVIDIA just relea

General AI
r/LocalLLaMA

🧠 Models & Gen AI

AI agents are running hospital records and factory inspections. Enterprise IAM was never built for them

A doctor in a hospital exam room watches as a medical transcription agent updates electronic health records, prompts prescription options, and surfaces patient history in real time. A computer vision agent on a manufacturing line is running quality control at speeds no human inspector can match. Both generate non-human identities that most enterprises cannot inventory, scope, or revoke at machine speed. That is the structural problem keeping agentic AI stuck in pilots. Not model capability. Not compute. Identity governance. Cisco President Jeetu Patel told VentureBeat at RSAC 2026 that 85% of enterprises are running agent pilots while only 5% have reached production. That 80-point gap is a trust problem. The first questions any CISO will ask: which agents have production access to sensitive systems, and who is accountable when one acts outside its scope? IANS Research found that most businesses still lack role-based access control mature enough for today's human identities, and agents

Models & Gen AI
VentureBeat AI
SkyDrive, Osaka Metro Launch Japan’s First eVTOL Vertiport Consortium

The Toyota-based eVTOL maker joins Osaka Metro, Marubeni, Soracle, and local governments to commercialize the Osakako Vertiport on Osaka Bay. SkyDrive has launched Japan’s first consortium for the commercial operation and joint usage of an eVTOL vertiport. The Toyota-based company announced the partnership on May 8, 2026. The group will commercialize Osakako Vertiport, a dedicated […]

Models & Gen AI
DRONELIFE
Claude Code's Architecture, explained visually!

A 6-layer harness, fully mapped out.

Models & Gen AI
Daily Dose of Data Science
Google stopped a zero-day hack that it says was developed with AI

For the first time, Google says it has spotted and stopped a zero-day exploit developed with AI. According to a report from Google Threat Intelligence Group (GTIG), "prominent cyber crime threat actors" were planning to use the vulnerability for a "mass exploitation event" that would have allowed them to bypass two-factor authentication on an unnamed "open-source, web-based system administration tool." Google's researchers found hints in the Python script used for the exploit that indicated help from AI, like a "hallucinated CVSS score" and "structured, textbook" formatting consistent with LLM training data. The exploit takes advantage of …

Models & Gen AI
The Verge AI

Built a practical voice-first AI tool for ADHD/executive dysfunction — one-tap brain dump → structured reminders & tasks (not a full autonomous agent)

Not a full autonomous agent in the Auto-GPT / LangChain sense, but I built something that uses AI in a very practical, daily way for executive dysfunction / ADHD brains. SAVI is a one-tap voice capture tool. You just talk (brain dumps, tasks, random ideas), and it uses AI (Whisper + GPT-4o / Apple Intelligence) to turn the messy audio into: - Color-coded reminders (red/yellow/green priority) - Calendar events - Clean summaries It has a “Brain Dump” mode that stays patient with pauses and gently nudges “I’m still listening.” 300 free on-device minutes every month, runs fully on-device by default on iOS 26. It’s not doing tool-calling loops or autonomous workflows yet, but it removes almost all friction from the “capture → structure → act” cycle, which is where most of my executive dysfunction lives. If anyone here is building personal productivity tools or dealing with similar scattered-brain problems, I’d love feedback on how it feels compared to other AI agent / assistant setup

Models & Gen AI
r/AI_Agents
AI tool poisoning exposes a major flaw in enterprise agent security

AI agents choose tools from shared registries by matching natural-language descriptions. But no human is verifying whether those descriptions are true. I discovered this gap when I filed Issue #141 in the CoSAI secure-ai-tooling repository. I assumed it would be treated as a single risk entry. The repository maintainer saw it differently and split my submission into two separate issues: one covering selection-time threats (tool impersonation, metadata manipulation); the other covering execution-time threats (behavioral drift, runtime contract violation). That confirmed tool registry poisoning is not one vulnerability. It represents multiple vulnerabilities at every stage of the tool’s life cycle. There’s an immediate tendency to apply the defenses we already have. Over the past 10 years, we’ve built software supply chain controls, including code signing, software bills of materials (SBOMs), Supply-chain Levels for Software Artifacts (SLSA) provenance, and Sigstore. Applying these defe

Models & Gen AI
VentureBeat AI
📊 Business & Industry

Implementing advanced AI technologies in finance

In finance departments that have long been defined by precision and control, AI has arrived less as a neatly managed upgrade than as a quiet insurgency. Employees are already using it while leadership races to impose structure, governance, and strategy after the fact. The result is a paradox: one of the most tightly regulated functions…

Business & Industry
MIT Technology Review AI

How enterprises are scaling AI

How enterprises scale AI: from early experiments to compounding impact through trust, governance, workflow design, and quality at scale.

Business & Industry
OpenAI Blog
🤖 Robotics

Empowerment, corrigibility, etc. are simple abstractions (of a messed-up ontology)

Tl;dr: Alignment is often conceptualized as AIs helping humans achieve their goals: AIs that increase people’s agency and empowerment; AIs that are helpful, corrigible, and/or obedient; AIs that avoid manipulating people. But that last one—manipulation—points to a challenge for all these desiderata: a human’s goals are themselves under-determined and manipulable, and it’s awfully hard to pin down a principled distinction between changing people’s goals in a good way (“providing counsel”, “providing information”, “sharing ideas”) versus a bad way (“manipulating”, “brainwashing”). The manipulability of human desires is hardly a new observation in the alignment literature, but it remains unsolved (see lit review in §3 below). In this post I will propose an explanation of how we humans intuitively conceptualize the distinction between guidance (good) vs manipulation (bad), in case it helps us brainstorm how we might put that distinction into AI. …But (spoiler alert) it turns out not to

Robotics
LessWrong AI