Welcome back
Curated from 200+ sources across AI & machine learning

A Vantage data center in Loudoun County — dubbed ‘Data Center Alley’ — could contribute to 33 premature deaths over five years, researchers say.



Article URL: https://www.theregister.com/2026/04/18/atlassians_new_data_collection_policy/
Comments URL: https://news.ycombinator.com/item?id=47823679

Beth Kindig of the I/O Fund has Nvidia reaching a $20 trillion valuation by 2030.

The RealChart2Code benchmark puts 14 leading AI models to the test on complex visualizations built from real-world datasets. Even the top proprietary models lose nearly half their performance compared to simpler tests. The article Even the best AI models lose about half their performance when charts get complicated, new benchmark finds appeared first on The Decoder.

TSMC benefits from demand for all AI chips -- not just those from one designer.

AMD and Oracle are important players in the AI infrastructure space, and they are likely to clock impressive earnings growth through the end of the decade.

Article URL: https://www.stb.gov.sg/about-stb/media-publications/media-centre/singapore-tourism-board-launches-ai-powered-robodog-guides-at-sentosa-and-the-mandai-wildlife-reserve-in-partnership-with-mafengwo/
Comments URL: https://news.ycombinator.com/item?id=47819808

Article URL: https://techcrunch.com/2026/03/23/littlebird-raises-11m-to-capture-context-from-your-computer-so-you-can-query-your-data/
Comments URL: https://news.ycombinator.com/item?id=47811198

Abstract

A computer-implemented system and method for structuring human–AI interaction without autonomous goal pursuit is disclosed. The system does not operate as an agent or decision-making entity. Instead, it functions as an interaction-layer regulator that controls how information is introduced, maintained, and resolved during exchange. Rather than optimizing for immediate answers or task completion, the system maintains a dynamic interaction field that:

- preserves multiple interpretive pathways
- regulates premature convergence
- supports the formation of human-side understanding

Core Components

The system comprises:

(1) Liminal Holding Layer: Maintains pre-articulated signal states prior to collapse into fixed meaning. This allows partial structure to persist long enough for interpretation to stabilize.

(2) Resolution Control Mechanism (N-Spoke Model): Controls the number of active interpretive pathways at any given moment. Prevents early narrowing into a single frame
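The abstract leaves the mechanics unspecified, but the "N-Spoke" idea could be sketched as a small controller that keeps several candidate interpretations alive and refuses to collapse to one until the evidence gap is decisive. Everything below is a hypothetical illustration: the class name, the spoke cap, and the collapse margin are assumptions, not details from the filing.

```python
from dataclasses import dataclass, field

@dataclass
class ResolutionController:
    """Hypothetical sketch of the abstract's N-Spoke resolution control:
    keep several interpretive pathways active and collapse to a single
    frame only once one clearly outscores the rest."""
    max_spokes: int = 4           # assumed cap on active pathways
    collapse_margin: float = 0.3  # assumed confidence gap required to resolve
    spokes: dict = field(default_factory=dict)  # interpretation -> score

    def observe(self, interpretation: str, evidence: float) -> None:
        # Accumulate support for a pathway; evict the weakest if over cap.
        self.spokes[interpretation] = self.spokes.get(interpretation, 0.0) + evidence
        if len(self.spokes) > self.max_spokes:
            weakest = min(self.spokes, key=self.spokes.get)
            del self.spokes[weakest]

    def resolve(self):
        # Collapse only when the leader clearly outscores the runner-up;
        # otherwise keep the field open (return None), which is the
        # "regulates premature convergence" behavior the abstract names.
        if not self.spokes:
            return None
        ranked = sorted(self.spokes.items(), key=lambda kv: kv[1], reverse=True)
        if len(ranked) == 1 or ranked[0][1] - ranked[1][1] >= self.collapse_margin:
            return ranked[0][0]
        return None
```

With two closely scored interpretations, `resolve()` returns `None` and the field stays open; only after one pathway pulls ahead by the margin does it commit.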

In recent months, the company announced an agreement with Amazon Web Services to use Cerebras chips in Amazon data centers, as well as a deal with OpenAI reportedly worth more than $10 billion.


The meeting comes after tensions have run hot between the Trump administration and the safety-conscious Anthropic.

The US intelligence agency may have been the first to write a report using AI, with no human involvement.

Despite recently being designated a supply-chain risk by the Pentagon, Anthropic is still talking to high-level members of the Trump administration.

"We launched 2.5 months ago, and right now, we have $300,000 in ARR."

Schematik is a program that aims to help people vibe code for physical devices. Hopefully, it won’t blow anything up.

New data from Appfigures shows a swell of new app launches in 2026, suggesting AI tools could be fueling a mobile software boom.
Article URL: https://track-hacker-news.com/reports/llm-launches
Comments URL: https://news.ycombinator.com/item?id=47823438

Google's A2UI 0.9 is a framework-agnostic standard that lets AI agents generate UI elements on the fly, tapping into an app's existing components across web, mobile, and other platforms. The article Google launches generative UI standard for AI agents appeared first on The Decoder.

A research team developed an OpenClaw agent for smart glasses to find out how continuously perceiving AI changes the way people use agentic AI systems. The article Always-on Ray-Ban Meta glasses powered by OpenClaw speed up everyday tasks in new study appeared first on The Decoder.

Anthropic's Opus 4.7 matches its predecessor's per-token price, but each request ends up costing significantly more. The reason: a new tokenizer that breaks the same text into up to 47 percent more tokens. Early measurements show what that shift means in practice for Claude Code users. The article First token counts reveal Opus 4.7 costs significantly more than 4.6 despite Anthropic's flat pricing appeared first on The Decoder.
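The claim is simple arithmetic: a flat per-token price times up to 47 percent more tokens means up to 47 percent higher cost for the same text. A back-of-envelope check, using an illustrative price (not Anthropic's actual rate):

```python
# Same per-token price, but the new tokenizer emits up to 47% more
# tokens for identical text. Figures are illustrative, not real prices.
price_per_million_tokens = 15.00      # assumed flat $/1M tokens for both models
tokens_old = 1_000_000                # tokens for some workload under Opus 4.6
tokens_new = int(tokens_old * 1.47)   # same text, worst-case 47% more tokens

cost_old = tokens_old / 1_000_000 * price_per_million_tokens
cost_new = tokens_new / 1_000_000 * price_per_million_tokens
print(f"old: ${cost_old:.2f}, new: ${cost_new:.2f} (+{(cost_new / cost_old - 1):.0%})")
# → old: $15.00, new: $22.05 (+47%)
```

The percentage increase in cost tracks the token inflation exactly because the per-token price is unchanged, which is why flat pricing can still mean a bigger bill.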

So... I'm way behind. Just got into Claude this week... it's already doing most of my coding and bug fixes. Crazy stuff.

Some background on my company: mature (14 yrs) Ruby on Rails app, Sidekiq, Redis, PG, AWS Lambda/EventBridge, React/Preact, Swift, and others. Hosted on Heroku. Very database heavy. Solo guy, owner/operator.

Current stack: Datadog logging (dabbled in APM, metrics, and others, but the buildpack for Heroku is so bloated I had to remove it), so now it's a simple Heroku log drain to Datadog. I really miss the statsd host and APM stuff... but it took my slug size from ~250MB to over 350 and made booting and deploys much slower. I'd also like to get off Heroku.... some day... but I can't even fathom it. Bugsnag for errors. Already moved this to Sentry to try out the AI stuff. A little fed up with Datadog and receiving $600 monthly bills on top of my enterprise commitment.

Biggest pain points: indexing is priced per line, not by GB (discourages me from simple logging, so

So this happened mere hours ago and I feel like I genuinely stumbled onto something worth documenting for people interested in AI behavior. I'm going to try to be as precise as possible about the sequence, because the order of events is everything here. Full chat if you want to read it yourself: https://g.co/gemini/share/0cb9f054ca58

Background: I was using Gemini's most advanced paid model to analyze a live crypto trade on AAVE. The token had dropped 7–9% out of nowhere in the last hour with zero news to explain it. I've been trading crypto for over a decade and something felt off, so I asked Gemini to dig into it. It came back very bullish, telling me this was just normal market maker activity and that there were, quote, "absolutely zero indications of an exploit, hack, or insider dump." I even pushed back multiple times and it kept doubling down. So I moved on and started discussing trading strategy with it.

Then it caught something mid-response. Out of nowhere, mid-conversation,

The Trump administration has spent nearly two months fighting with AI company Anthropic. It's dubbed the company a "RADICAL LEFT, WOKE COMPANY" full of "Leftwing nut jobs" and a menace to national security. But some of the ice may reportedly be melting between the two, thanks to Anthropic's buzzy new cybersecurity-focused model: Claude Mythos Preview. Anthropic's relationship with the Pentagon soured quickly in late February after the company refused to budge on two red lines: using its technology for domestic mass surveillance or lethal fully autonomous weapons with no human in the loop. Anthropic's tech has in the past been used heavily b … Read the full story at The Verge.

Salesforce is opening its entire platform to AI agents. With "Headless 360," the API becomes the user interface and the browser becomes obsolete. CEO Marc Benioff is putting into practice exactly what OpenAI's Sam Altman recently called an inevitable shift. The article Salesforce CEO Marc Benioff says APIs are the new UI for AI agents appeared first on The Decoder.

AKA scalable oversight of value drift. TL;DR: LLMs could be aligned but then corrupted through RL, instrumentally converging on deep consequentialism. If LLMs are sufficiently aligned and can properly oversee their training updates, they can prevent this. SOTA models can arguably be considered ~aligned,[1] but this isn't my main concern. The danger isn't the phase where models are trained on human data (I mean, we can still mess that part up); it's when you try to go above the human level. Models like AlphaGo learned through self-play, not human imitation. RL selects for strategies through the reward function, but we can't design perfect reward functions for complex settings[2]. However, we can use LLMs as the reward function instead, if they're aligned well enough by default. This leads us to: Consent Based RL. Imagine you're being trained to make deliveries as fast as possible in an RL environment, but we need exploration, you know? So your actions are sampled, until you end up cutti
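The "LLM as the reward function" idea can be sketched as a gate in front of the policy update: the overseer model judges each sampled trajectory, and only consented behavior contributes reward. This is a toy illustration, not the post's actual proposal in full; the function names, the rule-based stand-in for an LLM judge, and the learning rate are all assumptions.

```python
def llm_approves(trajectory: list) -> bool:
    """Stand-in for an aligned LLM overseer judging a sampled trajectory.
    A real implementation would query a model; here a hard-coded rule
    vetoes the 'cutting corners' behavior the delivery example gestures at."""
    return "cut_corner" not in trajectory

def consent_based_update(policy_score: float, trajectory: list, reward: float) -> float:
    # Apply the RL update only if the overseer consents to the behavior
    # that produced the reward; vetoed trajectories yield no update, so
    # RL cannot select for strategies the overseer disapproves of.
    if llm_approves(trajectory):
        return policy_score + 0.1 * reward  # assumed learning rate of 0.1
    return policy_score
```

The point of the gate is that the selection pressure itself passes through the (hopefully aligned) judge: a fast-but-corner-cutting trajectory earns nothing, so the policy never drifts toward it even if the raw environment reward would have favored it.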

The former Instagram VP is departing the ChatGPT-maker, which is folding the AI science application he led into Codex.

Kevin Weil and Bill Peebles are leaving OpenAI as the company shuts down Sora and folds its science team, signaling a sharp pivot away from consumer moonshots toward enterprise AI.