Welcome back
Curated from 200+ sources across AI & machine learning

Article URL: https://gizmodo.com/salesforce-announces-huge-ai-initiative-and-calls-it-headless-360-2000748243 Comments URL: https://news.ycombinator.com/item?id=47829523 Points: 4 # Comments: 0



Microsoft Corp's Fairwater data center in Mount Pleasant, Wisconsin, is going live earlier than expected, CEO Satya Nadella announced on Thursday. Nadella announced the development on X, calling it "the world's most powerful AI datacenter," one that will connect "hundreds of thousands of GB200s into a single seamless cluster." "Congrats to all the teams who made this possible!" he added.

Article URL: https://9to5mac.com/2026/04/19/apple-local-ai-server-hosting-new-business-model/ Comments URL: https://news.ycombinator.com/item?id=47827682 Points: 2 # Comments: 0

These five under-the-radar infrastructure plays could be your second chance to invest in the backbone of the AI boom.

On the latest episode of Equity, we discuss OpenAI's latest acquisitions and whether they address "two big existential problems" for the company.

A lot of AI startups exist partly because the foundation models haven't expanded into their category yet. As many jokingly acknowledge, that won't last forever.

Vercel, a major development platform that hosts and deploys web apps, was compromised, and the hackers are attempting to sell stolen data. A person claiming to be a member of ShinyHunters, the group behind the recent hack of Rockstar Games, posted some data online, including employee names, email addresses, and activity timestamps. Vercel confirmed in a post on X that a "security incident" had occurred and that it impacted a "limited subset" of its customers. Vercel said that a compromised third-party AI tool was the avenue of attack, though it did not specify which third party was involved. Read the full story at The Verge.

A Vantage data center in Loudoun County — dubbed ‘Data Center Alley’ — could contribute to 33 premature deaths over five years, researchers say.

The RealChart2Code benchmark puts 14 leading AI models to the test on complex visualizations built from real-world datasets. Even the top proprietary models lose nearly half their performance compared to simpler tests. The article "Even the best AI models lose about half their performance when charts get complicated, new benchmark finds" appeared first on The Decoder.
submitted by /u/BousWakebo

Welcome back to TechCrunch Mobility, your hub for the future of transportation and now, more than ever, how AI is playing a part.
I am thinking about becoming a research engineer, and I want to ask your advice on how realistic that is and which strategies make sense in my situation. About myself: I am in the US, have extensive experience as a software engineer (including a Staff+ position at one of the top companies), have a math-heavy CS degree, and have taken additional ML courses from one of the schools that offers them to outsiders. I also did applied ML work some time ago, but I didn't like it (that's why I am considering a research engineer position, not a fine-tuner or prompt engineer role). I am also a bit over 40, which I feel might be a problem for some companies/positions. What are organizations hiring for these positions looking for? What kind of experience is required? Which strategies could I use? P.S. It's realistic for me to invest in unpaid/lower-paid positions at least part time, where I could get the required experience. UPD1: I thought about getting a master's degree, but I don't see what it will get me

Abstract: A computer-implemented system and method for structuring human–AI interaction without autonomous goal pursuit is disclosed. The system does not operate as an agent or decision-making entity. Instead, it functions as an interaction-layer regulator that controls how information is introduced, maintained, and resolved during exchange. Rather than optimizing for immediate answers or task completion, the system maintains a dynamic interaction field that: preserves multiple interpretive pathways; regulates premature convergence; and supports the formation of human-side understanding. Core Components: The system comprises (1) a Liminal Holding Layer, which maintains pre-articulated signal states prior to collapse into fixed meaning, allowing partial structure to persist long enough for interpretation to stabilize; and (2) a Resolution Control Mechanism (N-Spoke Model), which controls the number of active interpretive pathways at any given moment and prevents early narrowing into a single frame

In recent months, the company announced an agreement with Amazon Web Services to use Cerebras chips in Amazon data centers, as well as a deal with OpenAI reportedly worth more than $10 billion.

Despite recently being designated a supply-chain risk by the Pentagon, Anthropic is still talking to high-level members of the Trump administration.

"We launched 2.5 months ago, and right now, we have $300,000 in ARR."
arXiv:2604.15646v1 Announce Type: new Abstract: Clinicians exploring oncology trial repositories often need ad-hoc, multi-constraint queries over biomarkers, endpoints, interventions, and time, yet writing SQL requires schema expertise. We demo FD-NL2SQL, a feedback-driven clinical NL2SQL assistant for SQLite-based oncology databases. Given a natural-language question, a schema-aware LLM decomposes it into predicate-level sub-questions, retrieves semantically similar expert-verified NL2SQL exemplars via sentence embeddings, and synthesizes executable SQL conditioned on the decomposition, retrieved exemplars, and schema, with post-processing validity checks. To improve with use, FD-NL2SQL incorporates two update signals: (i) clinician edits of generated SQL are approved and added to the exemplar bank; and (ii) lightweight logic-based SQL augmentation applies a single atomic mutation (e.g., operator or column change), retaining variants only if they return non-empty results. A second LL
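The exemplar-retrieval step the abstract describes, finding stored NL2SQL pairs whose question embeddings are most similar to a new query, can be sketched with plain cosine similarity. This is not the paper's code: the exemplar bank, the SQL strings, and the 3-d embedding vectors below are toy assumptions standing in for a real sentence-embedding model.

```python
import numpy as np

# Hypothetical exemplar bank: expert-verified (question, SQL) pairs.
bank = [
    ("trials with EGFR biomarker", "SELECT * FROM trials WHERE biomarker = 'EGFR'"),
    ("trials started after 2020", "SELECT * FROM trials WHERE start_year > 2020"),
]
# Toy 3-d embeddings for the two bank questions; a real system would
# compute these with a sentence-embedding encoder.
bank_emb = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])

def retrieve(query_emb, k=1):
    """Return the k exemplars most cosine-similar to the query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    b = bank_emb / np.linalg.norm(bank_emb, axis=1, keepdims=True)
    sims = b @ q                      # cosine similarity to each exemplar
    top = np.argsort(-sims)[:k]       # indices of the k highest scores
    return [bank[i] for i in top]

# A query embedding close to the first exemplar retrieves its SQL pair.
result = retrieve(np.array([0.9, 0.1, 0.0]))
```

The retrieved pairs would then be placed in the LLM prompt alongside the schema and the decomposed sub-questions, per the pipeline in the abstract.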

Article URL: https://github.com/moeen-mahmud/remen Comments URL: https://news.ycombinator.com/item?id=47825712 Points: 1 # Comments: 0
A couple of early-to-mid-stage startups I'm consulting with are asking the same question: their AI/ML team wants production Postgres data, and nobody's quite sure how to give it to them. I've handled this before for BI teams — read replica with a generous `max_standby_streaming_delay` and `hot_standby_feedback` on, accepting the occasional bloat on the primary. Worked fine. But the AI/ML ask feels different in ways I can't fully articulate yet, which is part of why I'm asking. A few things I'm trying to calibrate: Where does the agent actually connect? Primary with RLS, read replica, warehouse (Snowflake/BigQuery/Redshift), lakehouse (Iceberg/Delta on S3), or something else? If you're not doing this — is it compliance, cost fear, bad experiences (runaway queries, PII in prompts), or something else? And the one I'm most curious about: does this actually feel different from giving BI tools DB access, or is it the same problem wearing new clothes? Not looking for product recommendations.
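The read-replica setup the author mentions boils down to two real Postgres settings; the values below are illustrative assumptions, not recommendations (a minimal sketch of a replica's postgresql.conf):

```
# On the replica (postgresql.conf); values are hypothetical
hot_standby = on
hot_standby_feedback = on            # replica reports its oldest query to the
                                     # primary, reducing query cancellations at
                                     # the cost of some bloat on the primary
max_standby_streaming_delay = 30min  # let long analytics queries run this long
                                     # before WAL replay is allowed to cancel them
```

The trade-off is exactly the one described above: `hot_standby_feedback` shifts the pain to the primary as table bloat, while a generous `max_standby_streaming_delay` lets the replica lag during long queries.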

Google's A2UI 0.9 is a framework-agnostic standard that lets AI agents generate UI elements on the fly, tapping into an app's existing components across web, mobile, and other platforms. The article "Google launches generative UI standard for AI agents" appeared first on The Decoder.

This week: AI-enabled market entry, vision intelligence, true chemical operations autonomy, LNG tankers, Gemini embodied reasoning, smaller/cheaper/recycled EVs

A research team developed an OpenClaw agent for smart glasses to find out how continuously perceiving AI changes the way people use agentic AI systems. The article "Always-on Ray-Ban Meta glasses powered by OpenClaw speed up everyday tasks in new study" appeared first on The Decoder.

Anthropic's Opus 4.7 matches its predecessor's per-token price, but each request ends up costing significantly more. The reason: a new tokenizer that breaks the same text into up to 47 percent more tokens. Early measurements show what that shift means in practice for Claude Code users. The article "First token counts reveal Opus 4.7 costs significantly more than 4.6 despite Anthropic's flat pricing" appeared first on The Decoder.
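The arithmetic behind this item is simple but easy to gloss over: with a flat per-token price, cost scales directly with token count, so 47 percent more tokens for the same text means 47 percent higher cost. The price and corpus size below are illustrative assumptions, not Anthropic's actual figures:

```python
price_per_mtok = 15.0                 # hypothetical $/million tokens, unchanged across versions
old_tokens = 1_000_000                # tokens the old tokenizer produced for some corpus
new_tokens = int(old_tokens * 1.47)   # same text, 47% more tokens under the new tokenizer

old_cost = old_tokens / 1e6 * price_per_mtok
new_cost = new_tokens / 1e6 * price_per_mtok
# With these numbers: cost rises ~47%, from $15.00 to about $22.05,
# even though the per-token price never moved.
```

This is why "flat pricing" and "flat cost" are not the same claim when the tokenizer changes underneath.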

Article URL: https://track-hacker-news.com/reports/llm-launches Comments URL: https://news.ycombinator.com/item?id=47823438 Points: 2 # Comments: 0
The link will be in the comments. Please give me advice and everything if anyone has experience with this. I am super excited to get into this world. Idk if Friday is allowed; it's a total rip off, but oh well lol. submitted by /u/Time_Appeal2458
Hey guys, I've built a workflow that I want to give to my client. It goes as follows: they have hundreds of freelancers who work with kids that need special care. The freelancers fill out forms by hand, and the kids do the same. I've built a script that reads this handwriting, combines it into an Excel sheet, and matches the freelancers with the kids. Since I have Anthropic API costs, I want to price the service monthly, but I don't know how much to charge. I'm thinking of offering this for $350/month. What do you guys think? What is fair but still acceptable? submitted by /u/lukaszadam_com

Salesforce is opening its entire platform to AI agents. With "Headless 360," the API becomes the user interface and the browser becomes obsolete. CEO Marc Benioff is putting into practice exactly what OpenAI's Sam Altman recently called an inevitable shift. The article "Salesforce CEO Marc Benioff says APIs are the new UI for AI agents" appeared first on The Decoder.

So this happened mere hours ago, and I feel like I genuinely stumbled onto something worth documenting for people interested in AI behavior. I'm going to try to be as precise as possible about the sequence, because the order of events is everything here. Full chat if you want to read it yourself: https://g.co/gemini/share/0cb9f054ca58 Background: I was using Gemini's most advanced paid model to analyze a live crypto trade on AAVE. The token had dropped 7–9% out of nowhere in the last hour with zero news to explain it. I've been trading crypto for over a decade and something felt off, so I asked Gemini to dig into it. It came back very bullish and told me this was just normal market maker activity and that there were, quote, "absolutely zero indications of an exploit, hack, or insider dump." I even pushed back multiple times and it kept doubling down. So I moved on and started discussing trading strategy with it. Then it caught something mid-response: Out of nowhere, mid-conversation,