Welcome back
Curated from 200+ sources across AI & machine learning

TSMC benefits from demand for all AI chips, not just those from one designer.


Microsoft (NasdaqGS:MSFT) and Stellantis announced a multi-year AI and cybersecurity collaboration that includes rolling out Microsoft Copilot and Azure across global operations. Stellantis plans over 100 joint AI initiatives and aims to give all employees access to Copilot for daily work and decision support. At Hannover Messe, Microsoft highlighted new industrial software partnerships with Schneider Electric, Aras, Teradata and others tied to its AI and cloud platforms. The announcements...
Abstract

A computer-implemented system and method for structuring human–AI interaction without autonomous goal pursuit is disclosed. The system does not operate as an agent or decision-making entity. Instead, it functions as an interaction-layer regulator that controls how information is introduced, maintained, and resolved during exchange. Rather than optimizing for immediate answers or task completion, the system maintains a dynamic interaction field that:

- preserves multiple interpretive pathways
- regulates premature convergence
- supports the formation of human-side understanding

Core Components

The system comprises:

(1) Liminal Holding Layer. Maintains pre-articulated signal states prior to collapse into fixed meaning. This allows partial structure to persist long enough for interpretation to stabilize.

(2) Resolution Control Mechanism (N-Spoke Model). Controls the number of active interpretive pathways at any given moment and prevents early narrowing into a single frame.
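To make the abstract concrete, here is a minimal, purely illustrative sketch of what such a resolution controller might look like. Every name, threshold, and data structure below is an assumption made for illustration; none of it comes from the disclosure itself.

```python
from dataclasses import dataclass, field

@dataclass
class Spoke:
    # One "spoke": a candidate interpretation held open, not yet committed to.
    interpretation: str
    confidence: float

@dataclass
class ResolutionController:
    min_spokes: int = 3            # assumed floor on active interpretive pathways
    collapse_margin: float = 0.4   # assumed lead required before resolving to one frame
    spokes: list[Spoke] = field(default_factory=list)

    def observe(self, interpretation: str, confidence: float) -> None:
        """Liminal holding: admit a pre-articulated candidate without resolving it."""
        self.spokes.append(Spoke(interpretation, confidence))
        self.spokes.sort(key=lambda s: s.confidence, reverse=True)

    def resolve(self) -> str | None:
        """Collapse only when one spoke clearly dominates; otherwise keep holding."""
        if len(self.spokes) < self.min_spokes:
            return None  # too few pathways yet: guard against premature convergence
        lead = self.spokes[0].confidence - self.spokes[1].confidence
        return self.spokes[0].interpretation if lead >= self.collapse_margin else None

ctl = ResolutionController()
ctl.observe("user wants a summary", 0.50)
ctl.observe("user wants a critique", 0.45)
ctl.observe("user is thinking aloud", 0.20)
print(ctl.resolve())  # None: the frames stay open rather than collapsing early
```

The point of the sketch is the guard in resolve(): instead of committing to the highest-confidence reading immediately, the controller keeps multiple interpretive pathways alive until one clearly separates from the rest.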

Just 10 to 15 minutes with an AI assistant is enough to measurably weaken problem-solving ability and persistence on later tasks done without AI, according to a new study from researchers in the US and UK. The article Just ten minutes of using AI as an answer machine can measurably erode problem-solving skills, new study finds appeared first on The Decoder.

NVIDIA Corp. (NASDAQ:NVDA) is one of the 10 Best Data Center Stocks To Buy For the Long Term. In the past year, the stock grew 95.44%, while it posted 5.03% year-to-date growth. On April 14, NVIDIA announced the world’s first family of open-source quantum AI models, NVIDIA Ising, designed to help researchers and enterprises […]
Vertiv Holdings Co (NYSE:VRT) is one of the Unstoppable Growth Stocks to Invest In According to Reddit. On April 13, Vertiv Holdings Co (NYSE:VRT) announced the acquisition of BMarko Structures, which is a US-based company that builds custom steel and wood frames for heavy-duty structures. Management noted that this deal targets the booming AI data […]

Article URL: https://techcrunch.com/2026/03/23/littlebird-raises-11m-to-capture-context-from-your-computer-so-you-can-query-your-data/ Comments URL: https://news.ycombinator.com/item?id=47811198 Points: 2 # Comments: 0

In recent months, Cerebras announced an agreement with Amazon Web Services to use its chips in Amazon data centers, as well as a deal with OpenAI reportedly worth more than $10 billion.

Despite recently being designated a supply-chain risk by the Pentagon, Anthropic is still talking to high-level members of the Trump administration.

New data from Appfigures shows a swell of new app launches in 2026, suggesting AI tools could be fueling a mobile software boom.

Deepseek is reportedly ready to give up its independence. The Chinese AI startup is seeking outside backers for the first time, aiming to raise at least $300 million. The shift comes after delayed model releases, top researchers being poached by rivals, and mounting pressure from deep-pocketed tech giants. The article Deepseek reportedly seeks outside funding for the first time at $10 billion valuation appeared first on The Decoder.


Article URL: https://iamalex-afk.github.io/human-os-patch-33-protocols/ Comments URL: https://news.ycombinator.com/item?id=47810895 Points: 3 # Comments: 0

The meeting comes after tensions have run hot between the Trump administration and the safety-conscious Anthropic.

A US intelligence agency may have been the first to write a report using AI, with no human involvement.

Google is pushing AI mode deeper into Chrome: websites will soon open directly next to the AI response. That makes the traditional page visit even less relevant. The article Google finds new ways to keep you from ever clicking a link again appeared first on The Decoder.

"We launched 2.5 months ago, and right now, we have $300,000 in ARR."

Schematik is a program that aims to help people vibe code for physical devices. Hopefully, it won’t blow anything up.
The company says Claude Design is intended to help people like founders and product managers without a design background share their ideas more easily.

So... I'm way behind. Just got into Claude this week... it's already doing most of my coding and bug fixes. Crazy stuff.

Some background on my company: mature (14 yrs) Ruby on Rails app, Sidekiq, Redis, PG, AWS Lambda/EventBridge, React/Preact, Swift, and others. Hosted on Heroku. Very database heavy. Solo guy, owner/operator.

Current stack: Datadog logging (dabbled in APM, metrics, and others, but the buildpack for Heroku is so bloated I had to remove it), so now it's a simple Heroku log drain to Datadog. I really miss the statsd host and APM stuff... but it took my slug size from ~250 MB to over 350 and made booting and deploys much slower. I'd also like to get off Heroku... some day... but I can't even fathom it. Bugsnag for errors. Already moved this to Sentry to try out the AI stuff. A little fed up with Datadog and receiving $600 monthly bills on top of my enterprise commitment.

Biggest pain points: indexing is priced per line, not by GB (discourages me from simple logging, so

Salesforce is opening its entire platform to AI agents. With "Headless 360," the API becomes the user interface and the browser becomes obsolete. CEO Marc Benioff is putting into practice exactly what OpenAI's Sam Altman recently called an inevitable shift. The article Salesforce CEO Marc Benioff says APIs are the new UI for AI agents appeared first on The Decoder.

The Trump administration has spent nearly two months fighting with AI company Anthropic. It has dubbed the company a "RADICAL LEFT, WOKE COMPANY" full of "Leftwing nut jobs" and a menace to national security. But some of the ice may reportedly be melting between the two, thanks to Anthropic's buzzy new cybersecurity-focused model: Claude Mythos Preview. Anthropic's relationship with the Pentagon soured quickly in late February after the company refused to budge on two red lines: using its technology for domestic mass surveillance, or for lethal fully autonomous weapons with no human in the loop. Anthropic's tech has in the past been used heavily… Read the full story at The Verge.

So this happened mere hours ago and I feel like I genuinely stumbled onto something worth documenting for people interested in AI behavior. I'm going to try to be as precise as possible about the sequence because the order of events is everything here. Full chat if you want to read it yourself: https://g.co/gemini/share/0cb9f054ca58

Background

I was using Gemini's most advanced paid model to analyze a live crypto trade on AAVE. The token had dropped 7–9% out of nowhere in the last hour with zero news to explain it. I've been trading crypto for over a decade and something felt off, so I asked Gemini to dig into it. It came back very bullish - told me this was just normal market maker activity and that there were, quote, "absolutely zero indications of an exploit, hack, or insider dump." I even pushed back multiple times and it kept doubling down. So I moved on and started discussing trading strategy with it.

Then it caught something mid-response

Out of nowhere, mid-conversation,

Article URL: https://agenticdev.blog/ Comments URL: https://news.ycombinator.com/item?id=47811916 Points: 2 # Comments: 0

AKA scalable oversight of value drift

TL;DR: LLMs could be aligned but then corrupted through RL, instrumentally converging on deep consequentialism. If LLMs are sufficiently aligned and can properly oversee their training updates, they can prevent this.

SOTA models can arguably be considered ~aligned,[1] but this isn't my main concern. The failure mode isn't training models on human data (though we can still mess that part up); it's what happens when you try to go above the human level. Models like AlphaGo learned through self-play, not human imitation. RL selects for strategies through the reward function, but we can't design perfect reward functions for complex settings[2]. However, we can use LLMs as the reward function instead, if they're aligned well enough by default. This leads us to:

Consent Based RL

Imagine you're being trained to make deliveries as fast as possible in an RL environment, but we need exploration, you know? So your actions are sampled, until you end up cutti
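Here is a toy sketch of what that could look like in code. Everything in it (the judge, the courier actions, the veto rule) is a hypothetical illustration of the idea, not a real training setup: an LLM judge, assumed to be aligned by default, assigns the reward and can withhold consent, so vetoed behavior is simply never reinforced.

```python
from dataclasses import dataclass
import random

@dataclass
class Verdict:
    consents: bool   # does the judge endorse reinforcing this behavior?
    reward: float    # judge-assigned reward if consent is given

class LLMJudge:
    """Stand-in for an aligned LLM acting as the reward function."""
    def evaluate(self, action: str) -> Verdict:
        # A real judge would prompt an LLM with the full trajectory and ask
        # both "was this good?" and "do you consent to an update toward this?".
        # Here the courier example is hard-coded: speed is rewarded, but
        # corner-cutting is vetoed outright.
        if "cut_corner" in action:
            return Verdict(consents=False, reward=0.0)
        return Verdict(consents=True, reward=1.0 if "fast" in action else 0.2)

class Policy:
    """Toy tabular policy over a few courier actions."""
    def __init__(self):
        self.weights = {"deliver_fast": 1.0, "deliver_slow": 1.0,
                        "deliver_fast_cut_corner": 1.0}

    def sample(self) -> str:
        # Sample an action in proportion to its current weight.
        r = random.uniform(0, sum(self.weights.values()))
        for action, w in self.weights.items():
            r -= w
            if r <= 0:
                return action
        return action

    def reinforce(self, action: str, reward: float, lr: float = 0.5) -> None:
        self.weights[action] += lr * reward

policy, judge = Policy(), LLMJudge()
for _ in range(200):
    action = policy.sample()
    verdict = judge.evaluate(action)
    if verdict.consents:                 # update proceeds only with consent
        policy.reinforce(action, verdict.reward)
    # vetoed trajectories are dropped: no gradient toward corner-cutting

print(policy.weights)  # "deliver_fast" should dominate; corner-cutting stays flat
```

The design choice the sketch highlights: the hand-written reward function disappears entirely, and the only pressure RL can exert is pressure the judge has already consented to.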

The former Instagram VP is departing the ChatGPT-maker, which is folding the AI science application he led into Codex.

Kevin Weil and Bill Peebles are leaving OpenAI as the company shuts down Sora and folds its science team, signaling a sharp pivot away from consumer moonshots toward enterprise AI.

A rogue AI agent at Meta passed every identity check and still exposed sensitive data to unauthorized employees in March. Two weeks later, Mercor, a $10 billion AI startup, confirmed a supply-chain breach through LiteLLM. Both trace to the same structural gap: monitoring without enforcement, enforcement without isolation. A VentureBeat three-wave survey of 108 qualified enterprises found that the gap is not an edge case. It is the most common security architecture in production today. Gravitee’s State of AI Agent Security 2026 survey of 919 executives and practitioners quantifies the disconnect. 82% of executives say their policies protect them from unauthorized agent actions. 88% reported AI agent security incidents in the last twelve months. Only 21% have runtime visibility into what their agents are doing. Arkose Labs’ 2026 Agentic AI Security Report found 97% of enterprise security leaders expect a material AI-agent-driven incident within 12 months. Only 6% of
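The "monitoring without enforcement" gap is easier to see in a minimal sketch. The function names and allowlist below are hypothetical; the point is only the structural difference between logging an agent's action after the fact and gating it at runtime.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

# Hypothetical runtime policy: the only actions this agent may perform.
ALLOWED_ACTIONS = {"read_public_doc", "send_summary"}

def execute(action: str) -> str:
    return f"executed {action}"

def monitored_only(action: str) -> str:
    """Monitoring without enforcement: the action runs no matter what."""
    log.info("agent performed %s", action)   # visible after the fact...
    return execute(action)                   # ...but never blocked

def enforced(action: str) -> str:
    """Runtime enforcement: the action is checked before it runs."""
    if action not in ALLOWED_ACTIONS:
        log.warning("blocked unauthorized action: %s", action)
        raise PermissionError(action)
    return execute(action)

monitored_only("export_customer_table")     # logged, but the data still moves
enforced("send_summary")                    # allowed by policy
try:
    enforced("export_customer_table")       # blocked before any data moves
except PermissionError:
    pass
```

An audit log alone, like monitored_only above, is exactly the architecture the surveys describe: high confidence in policy, high incident rates, and no control point between the agent and the action.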