Welcome back
Curated from 200+ sources across AI & machine learning

Beth Kindig of the I/O Fund has Nvidia reaching a $20 trillion valuation by 2030.



Article URL: https://www.theregister.com/2026/04/18/atlassians_new_data_collection_policy/ Comments URL: https://news.ycombinator.com/item?id=47823679 Points: 2 # Comments: 0

TSMC benefits from demand for all AI chips -- not just those from one designer.

AMD and Oracle are important players in the AI infrastructure space, and they are likely to clock impressive earnings growth through the end of the decade.
Article URL: https://www.stb.gov.sg/about-stb/media-publications/media-centre/singapore-tourism-board-launches-ai-powered-robodog-guides-at-sentosa-and-the-mandai-wildlife-reserve-in-partnership-with-mafengwo/ Comments URL: https://news.ycombinator.com/item?id=47819808 Points: 1 # Comments: 0

Microsoft (NasdaqGS:MSFT) and Stellantis announced a multi-year AI and cybersecurity collaboration that includes rolling out Microsoft Copilot and Azure across global operations. Stellantis plans over 100 joint AI initiatives and aims to give all employees access to Copilot for daily work and decision support. At Hannover Messe, Microsoft highlighted new industrial software partnerships with Schneider Electric, Aras, Teradata and others tied to its AI and cloud platforms. The announcements...
Abstract: A computer-implemented system and method for structuring human–AI interaction without autonomous goal pursuit is disclosed. The system does not operate as an agent or decision-making entity. Instead, it functions as an interaction-layer regulator that controls how information is introduced, maintained, and resolved during an exchange. Rather than optimizing for immediate answers or task completion, the system maintains a dynamic interaction field that:
- preserves multiple interpretive pathways
- regulates premature convergence
- supports the formation of human-side understanding

Core Components. The system comprises:
(1) Liminal Holding Layer: maintains pre-articulated signal states prior to collapse into fixed meaning, allowing partial structure to persist long enough for interpretation to stabilize.
(2) Resolution Control Mechanism (N-Spoke Model): controls the number of active interpretive pathways at any given moment and prevents early narrowing into a single frame.

NVIDIA Corp. (NASDAQ:NVDA) is one of the 10 Best Data Center Stocks To Buy For the Long Term. In the past year, the stock grew 95.44%, while it posted 5.03% year-to-date growth. On April 14, NVIDIA announced the world’s first family of open source quantum AI models, NVIDIA Ising, designed to help researchers and enterprises […]

Article URL: https://techcrunch.com/2026/03/23/littlebird-raises-11m-to-capture-context-from-your-computer-so-you-can-query-your-data/ Comments URL: https://news.ycombinator.com/item?id=47811198 Points: 2 # Comments: 0

In recent months, the company announced an agreement with Amazon Web Services to use Cerebras chips in Amazon data centers, as well as a deal with OpenAI reportedly worth more than $10 billion.

Despite recently being designated a supply-chain risk by the Pentagon, Anthropic is still talking to high-level members of the Trump administration.


The meeting comes after tensions have run hot between the Trump administration and the safety-conscious Anthropic.

The US intelligence agency may have been the first to write a report using AI, with no human involvement.

"We launched 2.5 months ago, and right now, we have $300,000 in ARR."

Schematik is a program that aims to help people vibe code for physical devices. Hopefully, it won’t blow anything up.

New data from Appfigures shows a swell of new app launches in 2026, suggesting AI tools could be fueling a mobile software boom.
Anthropic's Opus 4.7 matches its predecessor's per-token price, but each request ends up costing significantly more. The reason: a new tokenizer that breaks the same text into up to 47 percent more tokens. Early measurements show what that shift means in practice for Claude Code users. The article "First token counts reveal Opus 4.7 costs significantly more than 4.6 despite Anthropic's flat pricing" appeared first on The Decoder.
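The cost math here is straightforward: at a flat per-token price, a request's cost scales directly with its token count, so roughly 47 percent more tokens means roughly 47 percent higher cost for the same text. A minimal sketch, using made-up prices and token counts rather than Anthropic's actual figures:

```python
# Toy cost comparison under a flat per-token price and the reported ~47%
# token-count increase. Prices and counts are illustrative assumptions.

def request_cost(token_count: int, price_per_million: float) -> float:
    """Cost of a single request at a flat per-token price."""
    return token_count / 1_000_000 * price_per_million

PRICE = 15.0          # hypothetical $/million tokens, same for both models
OLD_TOKENS = 10_000   # tokens the old tokenizer produced for some text
NEW_TOKENS = int(OLD_TOKENS * 1.47)  # same text, ~47% more tokens

old_cost = request_cost(OLD_TOKENS, PRICE)
new_cost = request_cost(NEW_TOKENS, PRICE)
print(f"old: ${old_cost:.4f}, new: ${new_cost:.4f}, "
      f"increase: {new_cost / old_cost - 1:.0%}")
```

The takeaway: "same per-token price" does not mean "same per-request price" once the tokenizer changes.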

Article URL: https://track-hacker-news.com/reports/llm-launches Comments URL: https://news.ycombinator.com/item?id=47823438 Points: 2 # Comments: 0

Google's A2UI 0.9 is a framework-agnostic standard that lets AI agents generate UI elements on the fly, tapping into an app's existing components across web, mobile, and other platforms. The article "Google launches generative UI standard for AI agents" appeared first on The Decoder.

The company says Claude Design is intended to help people like founders and product managers without a design background share their ideas more easily.

So... I'm way behind. Just got into Claude this week... it's already doing most of my coding and bug fixes. Crazy stuff. Some background on my company: Mature (14 yrs) Ruby on Rails app, Sidekiq, Redis, PG, AWS Lambda/EventBridge, React/Preact, Swift, and others. Hosted on Heroku. Very database heavy. Solo guy, owner/operator. Current stack: Datadog logging (dabbled in APM, metrics, and others, but the buildpack for Heroku is so bloated I had to remove it), so now it's a simple Heroku log drain to Datadog. I really miss the statsd host and APM stuff... but it took my slug size from ~250 MB to over 350 and made booting and deploys much slower. I'd also like to get off Heroku... some day... but I can't even fathom it. Bugsnag for errors. Already moved this to Sentry to try out the AI stuff. A little fed up with Datadog and receiving $600 monthly bills on top of my enterprise commitment. Biggest pain points: indexing is priced per line, not by GB (discourages me from simple logging, so

The Trump administration has spent nearly two months fighting with AI company Anthropic. It's dubbed the company a "RADICAL LEFT, WOKE COMPANY" full of "Leftwing nut jobs" and a menace to national security. But some of the ice may reportedly be melting between the two, thanks to Anthropic's buzzy new cybersecurity-focused model: Claude Mythos Preview. Anthropic's relationship with the Pentagon soured quickly in late February after the company refused to budge on two red lines: using its technology for domestic mass surveillance or lethal fully autonomous weapons with no human in the loop. Anthropic's tech has in the past been used heavily b … Read the full story at The Verge.

So this happened mere hours ago, and I feel like I genuinely stumbled onto something worth documenting for people interested in AI behavior. I'm going to try to be as precise as possible about the sequence, because the order of events is everything here. Full chat if you want to read it yourself: https://g.co/gemini/share/0cb9f054ca58

Background

I was using Gemini's paid, most advanced model to analyze a live crypto trade on AAVE. The token had dropped 7–9% out of nowhere in the last hour with zero news to explain it. I've been trading crypto for over a decade and something felt off, so I asked Gemini to dig into it. It came back very bullish: it told me this was just normal market-maker activity and that there were, quote, "absolutely zero indications of an exploit, hack, or insider dump." I even pushed back multiple times and it kept doubling down. So I moved on and started discussing trading strategy with it.

Then it caught something mid-response

Out of nowhere, mid-conversation,

Salesforce is opening its entire platform to AI agents. With "Headless 360," the API becomes the user interface and the browser becomes obsolete. CEO Marc Benioff is putting into practice exactly what OpenAI's Sam Altman recently called an inevitable shift. The article "Salesforce CEO Marc Benioff says APIs are the new UI for AI agents" appeared first on The Decoder.

AKA scalable oversight of value drift

TL;DR: LLMs could be aligned but then corrupted through RL, instrumentally converging on deep consequentialism. If LLMs are sufficiently aligned and can properly oversee their training updates, they can prevent this.

SOTA models can arguably be considered ~aligned,[1] but this isn't my main concern. The problem isn't models being trained on human data (I mean, we can still mess that part up); it's when you try to go above the human level. Models like AlphaGo learned through self-play, not human imitation. RL selects for strategies through the reward function, but we can't design perfect reward functions for complex settings[2]. However, we can use LLMs as the reward function instead, if they're aligned well enough by default. This leads us to:

Consent Based RL

Imagine you're being trained to make deliveries as fast as possible in an RL environment, but we need exploration, you know? So your actions are sampled, until you end up cutti
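The "consent" idea can be caricatured as gating exploration: sampled actions are shown to an overseer model (standing in for an aligned LLM) before any reward update is applied, so vetoed behavior is never reinforced. A toy sketch under my own assumptions, not the author's actual proposal:

```python
import random

# Toy sketch of consent-gated RL: an "overseer" (standing in for an
# aligned LLM) must approve each sampled action before its reward is
# allowed to update the policy. The overseer, the blocklist veto rule,
# and the bandit-style update are illustrative assumptions only.

def overseer_consents(action: str) -> bool:
    # Stand-in for an aligned LLM judging the action; here, a blocklist.
    return "cut_corners" not in action

def train_step(policy: dict, actions: list, rewards: dict) -> None:
    """Sample an action; apply the reward update only with overseer consent."""
    action = random.choice(actions)
    if not overseer_consents(action):
        return  # vetoed exploration never reinforces the behavior
    policy[action] = policy.get(action, 0.0) + 0.1 * rewards[action]

policy = {}
actions = ["deliver_safely", "cut_corners"]
rewards = {"deliver_safely": 1.0, "cut_corners": 2.0}  # misaligned reward
random.seed(0)
for _ in range(100):
    train_step(policy, actions, rewards)
print(policy)
```

The point of the sketch: even though the (mis-specified) reward pays more for cutting corners, the gated update means that behavior never accumulates value in the policy.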

The former Instagram VP is departing the ChatGPT-maker, which is folding the AI science application he led into Codex.

Kevin Weil and Bill Peebles are leaving OpenAI as the company shuts down Sora and folds its science team, signaling a sharp pivot away from consumer moonshots toward enterprise AI.