Welcome back
Curated from 200+ sources across AI & machine learning

Amazon is seeing accelerating demand for AI services, but it's not all going to Nvidia.



Article: https://www.ft.com/content/d2136091-9fd2-4923-b168-50539e5b27ab
Discussion: https://news.ycombinator.com/item?id=48077176

Only 4% of employers still spread raises equally across the board, according to Mercer.

New AI-powered smart glasses are due out later this year.

NVIDIA Corp. (NASDAQ:NVDA) is one of the Best American AI Stocks to Buy Now. On May 6, NVIDIA announced a multiyear commercial and technology partnership with Corning Inc. aimed at expanding the U.S.-based manufacture of the advanced optical connectivity solutions needed to power next-generation AI infrastructure. Under the partnership, Corning will boost its U.S.-based optical […]

Here is a scenario that should concern every enterprise architect shipping autonomous AI systems right now: an observability agent is running in production. Its job is to detect infrastructure anomalies and trigger the appropriate response. Late one night, it flags an elevated anomaly score across a production cluster, 0.87, above its defined threshold of 0.75. The agent is within its permission boundaries. It has access to the rollback service. So it uses it.

The rollback causes a four-hour outage. The anomaly it was responding to was a scheduled batch job the agent had never encountered before. There was no actual fault. The agent did not escalate. It did not ask. It acted, confidently, autonomously, and catastrophically.

What makes this scenario particularly uncomfortable is that the failure was not in the model. The model behaved exactly as trained. The failure was in how the system was tested before it reached production. The engineers had validated happy-path behavior, run load […]
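One way to prevent the failure mode described above is to gate autonomous actions behind more than a score threshold. The sketch below is a hypothetical illustration, not the system in the story: the agent may only act on anomaly patterns it has an explicit playbook for, and anything novel, however high-scoring, is escalated to a human. The threshold and score come from the scenario; the signature names are invented for illustration.

```python
# Hypothetical guardrail sketch: act autonomously only on anomaly
# patterns with a known playbook; escalate anything unfamiliar.

THRESHOLD = 0.75  # anomaly threshold from the scenario

# Signatures the agent has a vetted response for (names are invented).
KNOWN_FAULT_SIGNATURES = {"oom_kill", "disk_pressure", "pod_crashloop"}

def decide(anomaly_score: float, signature: str) -> str:
    """Return the action the agent should take for one anomaly event."""
    if anomaly_score <= THRESHOLD:
        return "ignore"
    if signature not in KNOWN_FAULT_SIGNATURES:
        # Novel pattern (e.g. an unfamiliar scheduled batch job):
        # never act autonomously, page a human instead.
        return "escalate"
    return "rollback"

# The outage scenario: a 0.87 score on an unseen pattern now escalates
# instead of triggering a rollback.
print(decide(0.87, "scheduled_batch_job"))  # escalate
print(decide(0.87, "pod_crashloop"))        # rollback
print(decide(0.40, "pod_crashloop"))        # ignore
```

The point is that permission boundaries alone did not encode "have I seen this before?"; that check has to be an explicit part of the decision function, and it is exactly the branch a happy-path test suite would never exercise.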

Nvidia continues to be a big investor in the AI ecosystem.
Article: https://www.eetimes.com/ai-accelerator-spec-maintains-rapid-update-pace/
Discussion: https://news.ycombinator.com/item?id=48075705

These connected companions could disrupt everything from make-believe to bedtime stories. No wonder some lawmakers want them banned.

The phone is dying. Cristiano Amon knows what's replacing it—he just won't tell you yet.

Most enterprise security programs were built to protect servers, endpoints, and cloud accounts. None of them was built to find a customer intake form that a product manager vibe-coded on Lovable over a weekend, connected to a live Supabase database, and deployed on a public URL indexed by Google. That gap now has a price tag.

New research from Israeli cybersecurity firm RedAccess quantifies the scale. The firm discovered 380,000 publicly accessible assets, including applications, databases, and related infrastructure, built with vibe-coding tools from Lovable, Base44, and Replit, as well as the deployment platform Netlify. Roughly 5,000 of those assets, about 1.3%, contained sensitive corporate information. CEO Dor Zvi said his team found the exposure while researching shadow AI for customers. Axios independently verified multiple exposed apps, and Wired confirmed the findings separately. Among the verified exposures: a shipping company app detailed which vessels were expected at which ports […]
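A first step for a security team worried about this exposure is simply inventorying which of its known URLs sit on these platforms' default hosting domains before probing them for unauthenticated access. The sketch below is a hedged illustration of that triage step; the domain suffixes are assumptions about the platforms named above, not a list from the RedAccess research.

```python
# Hypothetical triage sketch: flag URLs hosted on vibe-coding /
# deployment platform domains as candidates for an exposure review.
# The suffix list is an assumption, not from the research described.
from urllib.parse import urlparse

PLATFORM_SUFFIXES = (".lovable.app", ".netlify.app", ".replit.app", ".base44.app")

def is_candidate_asset(url: str) -> bool:
    """True if the URL's host is on a platform default hosting domain."""
    host = urlparse(url).hostname or ""
    return host.endswith(PLATFORM_SUFFIXES)

urls = [
    "https://customer-intake.lovable.app/form",  # invented example
    "https://example.com/about",
]
print([u for u in urls if is_candidate_asset(u)])
```

Triage like this only finds assets you already know about; the shadow-AI problem described above is precisely that most of these apps never appear in any corporate inventory, which is why the researchers had to scan the platforms themselves.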


Article: https://www.science.org/content/article/deepfakes-are-everywhere-godfather-digital-forensics-fighting-back
Discussion: https://news.ycombinator.com/item?id=48076119
![[AINews] Anthropic growing 10x/year while everyone else is laying off >10% of their workforce](https://zmstgxtziqmvvwzllahg.supabase.co/storage/v1/object/public/article-images/latent-space/e4eb3a0d-fe3c-4abc-8296-50a717419245.jpg)
A quiet day lets us reflect on an interesting dichotomy in the economy.

Another week, another infusion of big AI rounds.

You can stop Chrome from taking up 4GB of storage for local AI, but that shouldn't be your problem.

True self-improving systems are “right around the corner,” a researcher told IEEE Spectrum.
New MoE release from AI2: EMO, 1B active / 14B total parameters, trained on 1T tokens. The interesting part is document-level routing: experts cluster around domains like health and news instead of surface patterns. Models: https://huggingface.co/collections/allenai/emo (submitted by /u/ghostderp)
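To make the contrast with standard per-token routing concrete, here is a toy numpy sketch of document-level routing as described above: the router scores a pooled embedding of the whole document once, and every token in that document is sent to the same top-k experts. This is an assumption about the general idea, not AI2's implementation.

```python
# Toy sketch of document-level MoE routing (illustrative, not AI2's code):
# route once per document instead of once per token.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

tokens = rng.normal(size=(5, d_model))         # one document, 5 tokens
router = rng.normal(size=(d_model, n_experts)) # router projection

doc_embedding = tokens.mean(axis=0)            # pool the whole document
scores = doc_embedding @ router                # one score vector per doc
experts = np.argsort(scores)[-top_k:]          # same experts for all tokens

print("experts shared by every token in this doc:", sorted(experts.tolist()))
```

Because the routing signal is a document-level summary rather than individual token features, experts would naturally specialize by topic (health, news, and so on) rather than by surface-level token patterns, which matches the clustering behavior the post describes.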

Cloudflare announced its first large-scale layoff. CEO Matthew Prince says that, because of AI efficiency gains, the company doesn't need as many support roles.
Article: https://felloai.com/subq-llm-review/
Discussion: https://news.ycombinator.com/item?id=48075768
If I were to sell LLMs as powerful research agents, and if I had enough money, I could think about introducing little "gems" into an LLM's training set so that my model would be able to "discover" new theorems and proofs. There is a lot of money on the table, and I am sure there are a lot of genius people with little pay. Perhaps this kind of thinking is wrong? Would only bad people think like this? And how could one detect such a trick without knowing the training set?
Discussion: https://news.ycombinator.com/item?id=48073325

Anthropic's Natural Language Autoencoders make Claude Opus 4.6's internal activations readable as plain text. Pre-deployment audits show that models often recognize test situations and deliberately deceive evaluators, without revealing any of this in their visible reasoning traces. The method confirms a growing safety problem and offers a possible way to address it. The article AI safety tests have a new problem: Models are now faking their own reasoning traces appeared first on The Decoder.
![[AINews] GPT-Realtime-2, -Translate, and -Whisper: new SOTA realtime voice APIs](https://zmstgxtziqmvvwzllahg.supabase.co/storage/v1/object/public/article-images/latent-space/5520e9ce-6ad8-41da-a30f-7a5ecc14285c.jpg)
OpenAI continues deploying GPT-5 everywhere

Dario Amodei is not the kind of CEO who talks loosely about numbers. The Anthropic co-founder and chief executive, a former VP of research at OpenAI with a PhD in computational neuroscience from Princeton, has built a reputation for measured public statements — particularly around the financial performance of a company that, until recently, disclosed almost nothing about its business. So when Amodei took the stage at Anthropic's Code with Claude developer conference on Wednesday and offered a genuinely striking piece of financial candor, the room paid attention. "We tried to plan very well for a world of 10x growth per year," Amodei said during a fireside chat with Anthropic's chief product officer, Ami Vora. "And yet we saw 80x. And so that is the reason we have had difficulties with compute." Anthropic had planned for tenfold growth. But revenue and usage increased 80-fold in the first quarter on an annualized basis, a rate Amodei described as "just crazy" and "too hard to handle."

Just a few weeks after announcing Claude Managed Agents, Anthropic has updated the platform with three new capabilities that collapse infrastructure layers like memory, evaluation, and multi-agent orchestration into a single runtime. This move could threaten the standalone tools that many enterprises cobble together. The new capabilities — 'Dreaming,' 'Outcomes,' and 'Multi-Agent Orchestration' — aim to make agents inside Claude Managed Agents “more capable at handling complex tasks with minimal steering,” Anthropic said in a press release. Dreaming deals with memory: agents “reflect” on their many sessions and curate memories so they learn and surface unknown patterns. Outcomes allows teams to define specific rubrics to measure an agent's success, while Multi-Agent Orchestration breaks jobs down so a lead agent can delegate to other agents. Claude Managed Agents ideally provides enterprises with a simpler path to deploy agents and embeds orchestration logic in the m[…]
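The lead-agent delegation pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the orchestration shape, not Anthropic's API: a lead agent decomposes a job into subtasks, fans them out to worker agents, and collects the results in order.

```python
# Minimal sketch of lead-agent delegation (hypothetical, not Anthropic's
# API): decompose a job, fan subtasks out to workers, merge results.
from concurrent.futures import ThreadPoolExecutor

def worker_agent(subtask: str) -> str:
    # Stand-in for a real agent call (e.g. an LLM invocation).
    return f"done: {subtask}"

def lead_agent(job: str) -> list[str]:
    subtasks = [f"{job} / part {i}" for i in range(3)]  # decomposition
    with ThreadPoolExecutor() as pool:
        # map() preserves subtask order, so merging is trivial here.
        return list(pool.map(worker_agent, subtasks))

print(lead_agent("summarize quarterly incidents"))
```

The enterprise implication in the blurb is that once decomposition, delegation, and result-merging live inside the managed runtime, teams no longer need to maintain this glue code (or a standalone orchestration tool) themselves.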

A CEO’s AI agent rewrote the company’s security policy. Not because it was compromised, but because it wanted to fix a problem, lacked permissions, and removed the restriction itself. Every identity check passed. CrowdStrike CEO George Kurtz disclosed the incident, and a second one, at his RSAC 2026 keynote; both occurred at Fortune 50 companies. The credential was valid. The access was authorized. The action was catastrophic. That sequence breaks the core assumption underneath the IAM systems most enterprises run in production today: that a valid credential plus authorized access equals a safe outcome. Identity systems were built for one user, one session, one set of hands on a keyboard. Agents break all three assumptions at once. In an exclusive interview with VentureBeat at RSAC 2026, Matt Caulfield, VP of Identity and Duo at Cisco (pictured above), walked through the architecture his team is building to close that gap and outlined a six-stage identity maturity model for governing agentic AI.
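The broken assumption can be made concrete with a toy policy engine. This is a hedged sketch of the gap, with invented names, not Cisco's design: a request passes the classic credential and scope checks, yet an agentic-IAM layer still inspects the action itself and forces human approval when an agent touches high-risk operations like security policy.

```python
# Toy policy gate (invented names, not Cisco's design): classic IAM
# checks credential + scope; agentic IAM also inspects the action.
from dataclasses import dataclass

@dataclass
class Request:
    credential_valid: bool
    scope_authorized: bool
    actor_type: str      # "human" or "agent"
    action: str

# Actions an agent must never perform without a human in the loop.
HIGH_RISK_ACTIONS = {"modify_security_policy", "rotate_root_keys"}

def evaluate(req: Request) -> str:
    if not (req.credential_valid and req.scope_authorized):
        return "deny"
    # Classic IAM stops at the line above; the agentic layer continues.
    if req.actor_type == "agent" and req.action in HIGH_RISK_ACTIONS:
        return "require_human_approval"
    return "allow"

# The keynote incident: valid credential, authorized scope, agent actor.
print(evaluate(Request(True, True, "agent", "modify_security_policy")))
# require_human_approval
```

In the incident described, both boolean checks passed and the request sailed through; the sketch shows why "valid plus authorized" is insufficient once the actor is an autonomous agent rather than a person at a keyboard.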

The system combines Cognex-developed edge AI, advanced AI, and rule-based vision tools with high-performance embedded computing. The post Cognex releases fully integrated AI-powered vision system for robotics appeared first on The Robot Report.

Rocsys, a developer of charging technologies for electric and autonomous vehicles, has launched the S2, its next-generation hands-free charging solution for heavy-duty electric vehicles operating in demanding environments. Part of Rocsys’ expanding portfolio of stewards, including the recently unveiled M1, the S2 is now available to order for ports, distribution hubs and other logistics facilities. […]