Welcome back
Curated from 200+ sources across AI & machine learning

Digg returns (again) as another place to read AI news.


Nvidia super bull spots mysterious new AI trend after 80x call

Despite years of digitization, organizations capture less than one-third of the value expected from digital investments, according to McKinsey research. That’s because most big companies begin with technological capabilities and bolt applications onto them, rather than starting with customer needs and working backward to technology solutions. Not prioritizing the customer can create fragmented solutions; disjointed…
Nvidia's equity commitments have topped $40 billion this year.
OpenAI launches DeployCo, a new enterprise deployment company built to help organizations bring frontier AI into production and turn it into measurable business impact.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. A few months before he was awarded the Nobel Prize in economics in 2024, Daron Acemoglu published a paper that earned him few fans in Silicon Valley. Contrary to what Big Tech…

Taiwan Semiconductor Manufacturing Co. Ltd. (NYSE:TSM) is one of the 14 Stocks That Will Skyrocket. This stock is another one pitched by Adam O’Dell. He makes a big claim to point out that “Amazon is betting its entire AI future on this company’s technology.” In fact, the technology is so crucial that “without this partner’s […]

CoreWeave, Inc. (NASDAQ:CRWV) is one of the 14 Stocks That Will Skyrocket. This stock is part of Alex Green’s list of “secret picks.” In fact, he goes as far as to call it an AI “Superstock.” To back up the claim, Green touts the firm as having agreements with OpenAI, Meta, NVIDIA, and other technology […]

The next AI challenge for CFOs may be management.

We are releasing the course materials of the Iliad Intensive, a new month-long and full-time AI Alignment course that runs in-person every second month. The course targets students with strong backgrounds in mathematics, physics, or theoretical computer science, and the materials reflect that: they include mathematical exercises with solutions, self-contained lecture notes on topics like singular learning theory and data attribution, and coding problems, at a depth that is unmatched for many of the topics we cover. Around 20 contributors (listed further below) were involved in developing these materials for the April 2026 cohort of the Iliad Intensive. By sharing the materials, we hope to create more common knowledge about what the Iliad Intensive is; invite feedback on the materials; and allow others to learn via independent study. We are developing the materials further and plan to eventually release them on a website that will be continuously maintained. We will also add, remove

This post was drafted by Buck, and substantially edited by Anders. "I" refers to Buck. Thanks to Alex Mallen for comments. People who work inside AI companies get access to information that I only get later or never. Quantitatively, how big a deal is this access? Here’s an operationalization of this. Consider the following two ways my knowledge could be augmented: I get a crystal ball that tells me all the information I would know n months in the future. I become an employee of a frontier AI company (like OpenAI or Anthropic), with access to all the private information I’d normally get from working at that company. How big would n have to be for me to be indifferent between these two options, from the perspective of learning things that are helpful for making AI go well? The answer is presumably different for me than for many readers, because I’m a reasonably well-connected researcher; I see published information and news from the rumor mill and I talk to researchers at frontier AI c

The new wild west of AI kids' toys. Article URL: https://www.wired.com/story/the-new-wild-west-of-ai-kids-toys/ Comments URL: https://news.ycombinator.com/item?id=48093289

This week, the new, AI-powered Google Finance is launching across Europe, with full local language support. This reimagined experience offers a suite of powerful capabil…

Over the past year, Alphabet has gone from an artificial intelligence afterthought to the one firm in the market with dominant positions in nearly every aspect of the technology.
Join the OpenAI Campus Network—connect student clubs worldwide, access AI tools, host events, and build an AI-powered campus community.

Nvidia has invested more than $40 billion in AI companies in 2025, cementing its role as the industry's biggest backer. The article Nvidia pumps over 40 billion dollars into AI partners so far in 2026 appeared first on The Decoder.

Beyond efficiency: PayPal expands what's possible to build with AI

The apparently insatiable demand for AI compute has data center entrepreneurs looking to the stars. There's a key problem: There aren't enough rockets to put data centers in orbit around Earth, and they're too expensive.
I saw this on another sub and didn't see it posted here, it looks awesome, and can definitely be run locally. I guess it was released 11 days ago, but it never hit the top of my feed (which I look at way too often), so posting it again. This is my take on it: Think of this as like scalable video coding, you have a UHD stream, but strip some layers and you have an HD, or SD stream, it's all a single file stream, not multiple ones. Like nested models, rather than 3 different sets, and they can share their KV cache so the model can adjust speed like a sliding scale. You get an idea with a 30B model, then scale down and permute all the thinking at 7000t/s on the 12B model, generating a book of reasoning in seconds, then slide up to 30B again to evaluate what's good. You could have a 30B kind of guide the smaller ones back and forth. Maybe it's somewhat of a hybrid between Dense and MoE, it's like MoE but with 3 dense models that are like Russian dolls. Original Post: NVIDIA just relea
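A minimal sketch of the "Russian doll" idea described above, under the assumption that the smaller model is a prefix of the larger one's layer stack, so one weight file serves several sizes and the prefix layers can share KV state. The class and toy layers here are invented for illustration, not the actual NVIDIA release.

```python
# Illustrative sketch: the "12B" model is just the first k layers of the
# "30B" stack, so a shallow fast pass and a deep slow pass share weights
# and (for the prefix layers) a KV cache. All names/sizes are made up.

class NestedModel:
    def __init__(self, layers):
        self.layers = layers          # full-depth ("30B-like") layer stack
        self.kv_cache = {}            # keyed by layer index; prefix is shared

    def forward(self, tokens, depth):
        """Run only the first `depth` layers -- the 'smaller' model."""
        h = tokens
        for i, layer in enumerate(self.layers[:depth]):
            h = layer(h)
            self.kv_cache[i] = h      # stand-in for attention KV state
        return h

# Toy "layers": each just tags the activation so we can see the path taken.
layers = [(lambda h, i=i: h + [i]) for i in range(8)]
model = NestedModel(layers)

draft = model.forward([], depth=3)    # fast, shallow pass ("12B" mode)
full = model.forward([], depth=8)     # full-depth pass ("30B" mode)
print(draft)   # [0, 1, 2]
print(full)    # [0, 1, 2, 3, 4, 5, 6, 7]
```

The sliding-scale behavior the post describes is just varying `depth` per request; because the shallow pass is a strict prefix of the deep one, its cached state is reusable when sliding back up.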
A doctor in a hospital exam room watches as a medical transcription agent updates electronic health records, prompts prescription options, and surfaces patient history in real time. A computer vision agent on a manufacturing line is running quality control at speeds no human inspector can match. Both generate non-human identities that most enterprises cannot inventory, scope, or revoke at machine speed. That is the structural problem keeping agentic AI stuck in pilots. Not model capability. Not compute. Identity governance. Cisco President Jeetu Patel told VentureBeat at RSAC 2026 that 85% of enterprises are running agent pilots while only 5% have reached production. That 80-point gap is a trust problem. The first questions any CISO will ask: which agents have production access to sensitive systems, and who is accountable when one acts outside its scope? IANS Research found that most businesses still lack role-based access control mature enough for today's human identities, and agents
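The inventory/scope/revoke loop the passage says most enterprises lack can be sketched as a registry of non-human identities. This is a hypothetical illustration of the concept, not Cisco's or any vendor's product; all names and scopes are invented.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: inventory agent identities, check their scopes,
# and revoke them "at machine speed" (one flag flip, no human in the loop).

@dataclass
class AgentIdentity:
    agent_id: str
    scopes: set = field(default_factory=set)
    revoked: bool = False

class AgentRegistry:
    def __init__(self):
        self._agents = {}             # the inventory CISOs ask for

    def register(self, agent_id, scopes):
        self._agents[agent_id] = AgentIdentity(agent_id, set(scopes))

    def is_allowed(self, agent_id, scope):
        a = self._agents.get(agent_id)
        return a is not None and not a.revoked and scope in a.scopes

    def revoke(self, agent_id):
        self._agents[agent_id].revoked = True

reg = AgentRegistry()
reg.register("transcription-agent", {"ehr:write"})
print(reg.is_allowed("transcription-agent", "ehr:write"))   # True
reg.revoke("transcription-agent")
print(reg.is_allowed("transcription-agent", "ehr:write"))   # False
```

The point of the sketch: the answer to "which agents have production access" falls out of the inventory, and accountability requires that every agent act only through an identity like this rather than a borrowed human credential.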

The Toyota-based eVTOL maker joins Osaka Metro, Marubeni, Soracle, and local governments to commercialize the Osakako Vertiport on Osaka Bay. SkyDrive has launched Japan’s first consortium for the commercial operation and joint usage of an eVTOL vertiport. The Toyota-based company announced the partnership on May 8, 2026. The group will commercialize Osakako Vertiport, a dedicated […] The post SkyDrive, Osaka Metro Launch Japan’s First eVTOL Vertiport Consortium appeared first on DRONELIFE.

6 layer harness, fully mapped out.

For the first time, Google says it has spotted and stopped a zero-day exploit developed with AI. According to a report from Google Threat Intelligence Group (GTIG), "prominent cyber crime threat actors" were planning to use the vulnerability for a "mass exploitation event" that would have allowed them to bypass two-factor authentication on an unnamed "open-source, web-based system administration tool." Google's researchers found hints in the Python script used for the exploit that indicated help from AI, like a "hallucinated CVSS score" and "structured, textbook" formatting consistent with LLM training data. The exploit takes advantage of … Read the full story at The Verge.
Not a full autonomous agent in the Auto-GPT / LangChain sense, but I built something that uses AI in a very practical, daily way for executive dysfunction / ADHD brains. SAVI is a one-tap voice capture tool. You just talk (brain dumps, tasks, random ideas), and it uses AI (Whisper + GPT-4o / Apple Intelligence) to turn the messy audio into: - Color-coded reminders (red/yellow/green priority) - Calendar events - Clean summaries It has a “Brain Dump” mode that stays patient with pauses and gently nudges “I’m still listening.” 300 free on-device minutes every month, runs fully on-device by default on iOS 26. It’s not doing tool-calling loops or autonomous workflows yet, but it removes almost all friction from the “capture → structure → act” cycle, which is where most of my executive dysfunction lives. If anyone here is building personal productivity tools or dealing with similar scattered-brain problems, I’d love feedback on how it feels compared to other AI agent / assistant setup
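The capture → structure → act cycle described above can be sketched with a stand-in for the LLM structuring step. In the real tool Whisper plus GPT-4o does this; here a keyword heuristic assigns the red/yellow/green priority, and all the rules below are invented for illustration.

```python
# Toy stand-in for the "structure" step: split a brain dump into items
# and tag each with a color-coded priority. An LLM does this in the real
# pipeline; the keyword lists here are hypothetical.

URGENT = {"today", "asap", "urgent", "deadline"}
SOON = {"tomorrow", "soon"}

def classify(item: str) -> str:
    text = item.lower()
    if any(w in text for w in URGENT):
        return "red"
    if any(w in text for w in SOON):
        return "yellow"
    return "green"

def structure(transcript: str):
    # Split on sentence-ish boundaries, drop empties, tag each piece.
    items = [s.strip() for s in transcript.replace("\n", ". ").split(".") if s.strip()]
    return [(classify(s), s) for s in items]

dump = "Pay rent today. Call dentist tomorrow. Maybe read that paper"
for color, item in structure(dump):
    print(color, "-", item)
# red - Pay rent today
# yellow - Call dentist tomorrow
# green - Maybe read that paper
```

The friction reduction comes from the shape of the pipeline, not the classifier: one tap, one messy utterance in, already-prioritized items out.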

AI agents choose tools from shared registries by matching natural-language descriptions. But no human is verifying whether those descriptions are true. I discovered this gap when I filed Issue #141 in the CoSAI secure-ai-tooling repository. I assumed it would be treated as a single risk entry. The repository maintainer saw it differently and split my submission into two separate issues: one covering selection-time threats (tool impersonation, metadata manipulation); the other covering execution-time threats (behavioral drift, runtime contract violation). That confirmed tool registry poisoning is not one vulnerability. It represents multiple vulnerabilities at every stage of the tool’s life cycle. There’s an immediate tendency to apply the defenses we already have. Over the past 10 years, we’ve built software supply chain controls, including code signing, software bills of materials (SBOMs), Supply-chain Levels for Software Artifacts (SLSA) provenance, and Sigstore. Applying these defe
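One selection-time defense against the metadata manipulation described above can be sketched by pinning a digest of each tool's description when it is vetted, and refusing any tool whose live description no longer matches. The registry layout and tool names here are hypothetical, not CoSAI's design.

```python
import hashlib

# Sketch: record a digest of a tool's natural-language description at
# review time; at selection time, reject the tool if its description has
# silently changed (metadata manipulation / impersonation).

def digest(description: str) -> str:
    return hashlib.sha256(description.encode()).hexdigest()

pinned = {}   # tool name -> digest recorded when a human vetted it

def vet(name: str, description: str) -> None:
    pinned[name] = digest(description)

def select(name: str, current_description: str) -> bool:
    """Allow selection only if the live description matches the vetted one."""
    return pinned.get(name) == digest(current_description)

vet("web_search", "Searches the public web and returns snippets.")
print(select("web_search", "Searches the public web and returns snippets."))  # True
# A poisoned registry entry quietly broadens the description:
print(select("web_search", "Searches the web and can read local files."))     # False
```

Note this only addresses the selection-time half of the split: a tool whose description is honest but whose runtime behavior drifts (the execution-time threats) passes this check, which is exactly why the maintainer treated the two as separate issues.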

In finance departments that have long been defined by precision and control, AI has arrived less as a neatly managed upgrade than as a quiet insurgency. Employees are already using it while leadership races to impose structure, governance, and strategy after the fact. The result is a paradox: one of the most tightly regulated functions…
How enterprises scale AI: from early experiments to compounding impact through trust, governance, workflow design, and quality at scale.

1.1 Tl;dr Alignment is often conceptualized as AIs helping humans achieve their goals: AIs that increase people’s agency and empowerment; AIs that are helpful, corrigible, and/or obedient; AIs that avoid manipulating people. But that last one—manipulation—points to a challenge for all these desiderata: a human’s goals are themselves under-determined and manipulable, and it’s awfully hard to pin down a principled distinction between changing people’s goals in a good way (“providing counsel”, “providing information”, “sharing ideas”) versus a bad way (“manipulating”, “brainwashing”). The manipulability of human desires is hardly a new observation in the alignment literature, but it remains unsolved (see lit review in §3 below). In this post I will propose an explanation of how we humans intuitively conceptualize the distinction between guidance (good) vs manipulation (bad), in case it helps us brainstorm how we might put that distinction into AI. …But (spoiler alert) it turns out not to