Welcome back
Curated from 200+ sources across AI & machine learning

Microsoft Corp's Fairwater data center in Mount Pleasant, Wisconsin, is going live earlier than expected, CEO Satya Nadella announced on Thursday. Nadella shared the news on X, calling the facility "the world's most powerful AI datacenter," one that will connect "hundreds of thousands of GB200s into a single seamless cluster." "Congrats to all the teams who made this possible!" he added.

On the latest episode of Equity, we discuss OpenAI's latest acquisitions and whether they address "two big existential problems" for the company.

Vercel, a major development platform that hosts and deploys web apps, was compromised, and the hackers are attempting to sell stolen data. A person claiming to be a member of ShinyHunters, the group behind the recent hack of Rockstar Games, posted some data online, including employee names, email addresses, and activity timestamps. Vercel confirmed in a post on X that a "security incident" had occurred and that it impacted a "limited subset" of its customers. Vercel said a compromised third-party AI tool was the avenue for attack, though it did not specify which third party was involved. Read the full story at The Verge.

These five under-the-radar infrastructure plays could be your second chance to invest in the backbone of the AI boom.

A Vantage data center in Loudoun County — dubbed ‘Data Center Alley’ — could contribute to 33 premature deaths over five years, researchers say.

A lot of AI startups exist partly because the foundation models haven't expanded into their category yet. As many jokingly acknowledge, that won't last forever.

Welcome back to TechCrunch Mobility, your hub for the future of transportation and now, more than ever, how AI is playing a part.

Alphabet's Google is in talks with Marvell Technology to develop two new chips aimed at running AI models more efficiently, The Information reported on Sunday, citing two people with knowledge of the discussions. One of the chips is a memory processing unit designed to work with Google's tensor processing unit (TPU), and the other is a new TPU built specifically for running AI models, the report said. Google has been pushing to make its TPUs a viable alternative to Nvidia's dominant GPUs. TPU sales have become a key driver of growth in Google's cloud revenue as the company aims to show investors that its AI investments are generating returns.

The RealChart2Code benchmark puts 14 leading AI models to the test on complex visualizations built from real-world datasets. Even the top proprietary models lose nearly half their performance compared to simpler tests. The article Even the best AI models lose about half their performance when charts get complicated, new benchmark finds appeared first on The Decoder.

Article URL: https://www.theregister.com/2026/04/18/atlassians_new_data_collection_policy/ Comments URL: https://news.ycombinator.com/item?id=47823679 Points: 2 # Comments: 0

Abstract: A computer-implemented system and method for structuring human–AI interaction without autonomous goal pursuit is disclosed. The system does not operate as an agent or decision-making entity. Instead, it functions as an interaction-layer regulator that controls how information is introduced, maintained, and resolved during an exchange. Rather than optimizing for immediate answers or task completion, the system maintains a dynamic interaction field that:

- preserves multiple interpretive pathways
- regulates premature convergence
- supports the formation of human-side understanding

Core Components. The system comprises:

(1) Liminal Holding Layer — maintains pre-articulated signal states prior to collapse into fixed meaning, allowing partial structure to persist long enough for interpretation to stabilize.

(2) Resolution Control Mechanism (N-Spoke Model) — controls the number of active interpretive pathways at any given moment and prevents early narrowing into a single frame.
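The abstract describes the N-Spoke mechanism only in the vaguest terms. As a purely illustrative sketch — every name, the confidence scoring, and the capping logic below are my own assumptions, not anything from the filing — it might amount to keeping a bounded set of candidate interpretations alive and refusing to collapse to a single one until some pathway clearly dominates:

```python
from dataclasses import dataclass, field

@dataclass
class Interpretation:
    label: str
    confidence: float  # 0.0-1.0, however the host system happens to score it

@dataclass
class NSpokeRegulator:
    """Toy resolution controller: keep up to n_spokes interpretations alive."""
    n_spokes: int = 3
    resolve_threshold: float = 0.9
    spokes: list = field(default_factory=list)

    def offer(self, interp: Interpretation) -> None:
        # Keep only the strongest n_spokes candidates, but never force
        # the set down to a single interpretation ("regulates convergence").
        self.spokes.append(interp)
        self.spokes.sort(key=lambda i: i.confidence, reverse=True)
        del self.spokes[self.n_spokes:]

    def resolve(self):
        # Collapse to one frame only when a pathway clearly dominates;
        # otherwise stay in the multi-pathway ("liminal") state.
        if self.spokes and self.spokes[0].confidence >= self.resolve_threshold:
            return self.spokes[0]
        return None

reg = NSpokeRegulator()
for label, conf in [("literal", 0.4), ("ironic", 0.5), ("metaphor", 0.3), ("typo", 0.2)]:
    reg.offer(Interpretation(label, conf))
assert len(reg.spokes) == 3   # bounded, but not narrowed to one
assert reg.resolve() is None  # no pathway dominates yet
reg.offer(Interpretation("ironic", 0.95))
assert reg.resolve().label == "ironic"
```

Whether the claimed system does anything like this is unknowable from the abstract; the sketch only makes the stated behavior ("controls the number of active interpretive pathways", "prevents early narrowing") concrete enough to reason about.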

In recent months, the company announced an agreement with Amazon Web Services to use Cerebras chips in Amazon data centers, as well as a deal with OpenAI reportedly worth more than $10 billion.

Despite recently being designated a supply-chain risk by the Pentagon, Anthropic is still talking to high-level members of the Trump administration.

"We launched 2.5 months ago, and right now, we have $300,000 in ARR."

Schematik is a program that aims to help people vibe code for physical devices. Hopefully, it won’t blow anything up.

The complaint does not say exactly how the wigs and do-rag helped Luther Davis assume three different identities and get, at one point, a $4 million loan.
A couple of early-to-mid-stage startups I'm consulting with are asking the same question: their AI/ML team wants production Postgres data, and nobody's quite sure how to give it to them.

I've handled this before for BI teams — read replica with a generous `max_standby_streaming_delay` and `hot_standby_feedback` on, accepting the occasional bloat on the primary. Worked fine. But the AI/ML ask feels different in ways I can't fully articulate yet, which is part of why I'm asking.

A few things I'm trying to calibrate:

- Where does the agent actually connect? Primary with RLS, read replica, warehouse (Snowflake/BigQuery/Redshift), lakehouse (Iceberg/Delta on S3), or something else?
- If you're not doing this — is it compliance, cost fear, bad experiences (runaway queries, PII in prompts), or something else?
- And the one I'm most curious about: does this actually feel different from giving BI tools DB access, or is it the same problem wearing new clothes?

Not looking for product recommendations.
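For readers unfamiliar with the two settings the poster mentions, the BI-era replica setup usually comes down to a couple of lines on the standby. The values here are illustrative placeholders, not recommendations:

```
# postgresql.conf on the read replica (illustrative values, tune for your workload)
hot_standby = on
hot_standby_feedback = on            # stops long replica queries being cancelled by
                                     # vacuum, at the cost of some bloat on the primary
max_standby_streaming_delay = 300s   # let analytical queries run up to 5 minutes
                                     # before WAL replay cancels them
```

The trade-off is exactly the one the poster names: `hot_standby_feedback` exports the replica's long-running-query problem back to the primary as table bloat, while a large `max_standby_streaming_delay` lets the replica fall behind instead.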

Article URL: https://github.com/moeen-mahmud/remen Comments URL: https://news.ycombinator.com/item?id=47825712 Points: 1 # Comments: 0
The link will be in the comments. Please give me advice — anything, really — if anyone has experience with this. I am super excited to get into this world. I don't know if Friday is allowed; it's a total rip-off, but oh well, lol.

Google's A2UI 0.9 is a framework-agnostic standard that lets AI agents generate UI elements on the fly, tapping into an app's existing components across web, mobile, and other platforms. The article Google launches generative UI standard for AI agents appeared first on The Decoder.

This week: AI-enabled market entry, vision intelligence, true chemical operations autonomy, LNG tankers, Gemini embodied reasoning, smaller/cheaper/recycled EVs

A research team developed an OpenClaw agent for smart glasses to find out how continuously perceiving AI changes the way people use agentic AI systems. The article Always-on Ray-Ban Meta glasses powered by OpenClaw speed up everyday tasks in new study appeared first on The Decoder.

Anthropic's Opus 4.7 matches its predecessor's per-token price, but each request ends up costing significantly more. The reason: a new tokenizer that breaks the same text into up to 47 percent more tokens. Early measurements show what that shift means in practice for Claude Code users. The article First token counts reveal Opus 4.7 costs significantly more than 4.6 despite Anthropic's flat pricing appeared first on The Decoder.
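The pricing math behind that claim is simple but easy to miss: with a flat per-token price, a tokenizer that emits more tokens for the same text raises the effective per-request cost proportionally. A quick sketch using the article's 47 percent figure — the per-token rate below is a made-up placeholder, not Anthropic's actual pricing:

```python
# Flat per-token pricing: cost scales with token count, not with text length.
PRICE_PER_TOKEN = 0.00002  # placeholder rate, NOT Anthropic's real pricing

def request_cost(n_tokens: int, price: float = PRICE_PER_TOKEN) -> float:
    return n_tokens * price

old_tokens = 1000                    # the same prompt text...
new_tokens = int(old_tokens * 1.47)  # ...split into 47% more tokens

old_cost = request_cost(old_tokens)
new_cost = request_cost(new_tokens)
print(f"effective cost ratio: {new_cost / old_cost:.2f}x")
```

In other words, "flat pricing" holds per token but not per request: the same prompt now costs up to 1.47x what it did under the old tokenizer.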

Article URL: https://track-hacker-news.com/reports/llm-launches Comments URL: https://news.ycombinator.com/item?id=47823438 Points: 2 # Comments: 0

Salesforce is opening its entire platform to AI agents. With "Headless 360," the API becomes the user interface and the browser becomes obsolete. CEO Marc Benioff is putting into practice exactly what OpenAI's Sam Altman recently called an inevitable shift. The article Salesforce CEO Marc Benioff says APIs are the new UI for AI agents appeared first on The Decoder.

So this happened mere hours ago, and I feel like I genuinely stumbled onto something worth documenting for people interested in AI behavior. I'm going to try to be as precise as possible about the sequence, because the order of events is everything here. Full chat if you want to read it yourself: https://g.co/gemini/share/0cb9f054ca58

Background: I was using Gemini's paid, most advanced model to analyze a live crypto trade on AAVE. The token had dropped 7–9% out of nowhere in the last hour with zero news to explain it. I've been trading crypto for over a decade and something felt off, so I asked Gemini to dig into it. It came back very bullish — told me this was just normal market-maker activity and that there were, quote, "absolutely zero indications of an exploit, hack, or insider dump." I even pushed back multiple times and it kept doubling down. So I moved on and started discussing trading strategy with it.

Then it caught something mid-response: out of nowhere, mid-conversation,

A learning-oriented workflow for understanding new open-weight model releases