Welcome back
Curated from 200+ sources across AI & machine learning

For three decades, the web has existed in a state of architectural denial. It is a platform originally conceived to share static physics papers, yet it is now tasked with rendering the most complex, interactive, and generative interfaces humanity has ever conceived. At the heart of this tension lies a single, invisible, and prohibitively expensive operation known as "layout reflow." Whenever a developer needs to know the height of a paragraph or the position of a line to build a modern interface, they must ask the browser’s Document Object Model (DOM), the standard by which developers can create and modify webpages. In response, the browser often has to recalculate the geometry of the entire page — a process akin to a city being forced to redraw its entire map every time a resident opens their front door. Last Friday, March 27, 2026, Cheng Lou — a prominent software engineer whose work on React, ReScript, and Midjourney has defined much of the modern frontend landscape — announced on
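The reflow cost described above can be made concrete with a toy model. This is a minimal sketch, assuming a deliberately simplified page with one global "dirty layout" flag; `FakePage`, its methods, and the numbers are invented for illustration, not a real browser API. Reading a layout property while layout is dirty forces a synchronous flush, so interleaving reads and writes pays that cost once per element, while batching all reads before all writes avoids it entirely.

```python
class FakePage:
    """Toy stand-in for a browser page: one global dirty-layout flag."""

    def __init__(self, n):
        self.heights = [100] * n  # pretend element heights in px
        self.dirty = False        # is a layout recalculation pending?
        self.reflows = 0          # how many synchronous reflows occurred

    def read_height(self, i):
        # Reading geometry (like offsetHeight) flushes pending layout.
        if self.dirty:
            self.reflows += 1
            self.dirty = False
        return self.heights[i]

    def write_height(self, i, px):
        # Writing geometry invalidates the current layout.
        self.heights[i] = px
        self.dirty = True

# Layout thrashing: read-after-write in a loop reflows on every
# iteration after the first.
page = FakePage(5)
for i in range(5):
    page.write_height(i, page.read_height(i) + 10)
print(page.reflows)  # 4

# Batched: all reads first, then all writes; no mid-loop flushes.
page2 = FakePage(5)
heights = [page2.read_height(i) for i in range(5)]
for i, h in enumerate(heights):
    page2.write_height(i, h + 10)
print(page2.reflows)  # 0
```

Real browsers behave analogously, which is why APIs like `requestAnimationFrame` and read/write-scheduling libraries exist: they batch the two phases so geometry is recalculated once per frame, not once per query.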

Mistral aims to start operating the data center by the second quarter of 2026.

arXiv:2603.26731v1 Announce Type: new Abstract: How much scene context a single object carries is a well-studied question in human scene perception, yet how this capacity is organized in vision-language models (VLMs) remains poorly understood, with direct implications for the robustness of these models. We investigate this question through a systematic behavioral and mechanistic analysis of contextual inference from single objects. Presenting VLMs with single objects on masked backgrounds, we probe their ability to infer both fine-grained scene category and coarse superordinate context (indoor vs. outdoor). We found that single objects support above-chance inference at both levels, with performance modulated by the same object properties that predict human scene categorization. Object identity, scene, and superordinate predictions are partially dissociable: accurate inference at one level neither requires nor guarantees accurate inference at the others, and the degree of coupling diff

LiteLLM had obtained two security compliance certifications via Delve, yet fell victim to some horrific credential-stealing malware last week.

This just showed up a couple of days ago on GitHub. Note that ANE is the NPU in all Apple Silicon, not the new 'Neural Accelerator' GPU cores that are only in M5. (ggml-org/llama.cpp#10453) - Comment by arozanov: Built a working ggml ANE backend that dispatches MUL_MAT to the ANE via a private API. M4 Pro results: 4.0 TFLOPS peak at N=256, 16.8x faster than CPU. MIL-side transpose, kernel cache, quantized weight support. ANE for prefill (N>=64), Metal/CPU for decode. Code: https://github.com/arozanov/ggml-ane. Based on maderix/ANE bridge. submitted by /u/PracticlySpeaking

Article URL: https://newsmarvin.com/ Comments URL: https://news.ycombinator.com/item?id=47578898 Points: 1 # Comments: 0

Article URL: https://www.ft.com/content/229f4f59-d518-4e00-abd6-5a5b727cd2aa Comments URL: https://news.ycombinator.com/item?id=47571802 Points: 4 # Comments: 1

“You can deceive, manipulate, and lie. That’s an inherent property of language. It’s a feature, not a flaw,” CrowdStrike CTO Elia Zaitsev told VentureBeat in an exclusive interview at RSA Conference 2026. If deception is baked into language itself, every vendor trying to secure AI agents by analyzing their intent is chasing a problem that cannot be conclusively solved. Zaitsev is betting on context instead. CrowdStrike’s Falcon sensor walks the process tree on an endpoint and tracks what agents did, not what agents appeared to intend. “Observing actual kinetic actions is a structured, solvable problem,” Zaitsev told VentureBeat. “Intent is not.” That argument landed 24 hours after CrowdStrike CEO George Kurtz disclosed two production incidents at Fortune 50 companies. In the first, a CEO's AI agent rewrote the company's own security policy — not because it was compromised, but because it wanted to fix a problem, lacked the permissions to do so, and removed the restriction itself. Every

The latest app from the team behind Bluesky is Attie, an AI assistant that lets you build your own algorithm. At the Atmosphere conference, Bluesky's former CEO Jay Graber and CTO Paul Frazee unveiled Attie, which is powered by Anthropic's Claude and built on top of Bluesky's underlying AT Protocol (atproto). Attie allows users to create custom feeds using natural language. For example, you could ask for "posts about folklore, mythology, and traditional music, especially Celtic traditions." To start, these custom feeds will be confined to a standalone Attie app, but the plan is to make them available in Bluesky and other atproto apps. … Read the full story at The Verge.

Earlier this month, Microsoft launched Copilot Health, a new space within its Copilot app where users will be able to connect their medical records and ask specific questions about their health. A couple of days earlier, Amazon had announced that Health AI, an LLM-based tool previously restricted to members of its One Medical service, would…

Today, I’m talking with Todd McKinnon, who is co-founder and CEO of Okta, a platform that lets big companies manage security and identity across all the apps and services their employees use. Think of it like login management — actually, that’s a great way to think about it because the way most people encounter Okta is that it’s the thing that makes you log in again right before joining a meeting several times a week, so then you’re late for the meeting… Can you tell we use Okta? Anyhow, all of that is a big business — Okta has a $14 billion market cap. But big software as a service companies like Okta are under a lot of pressure in the age of AI. Why would you pay their fees when you can just vibe-code your own tools? This so-called Saaspocalypse is a big deal, and Todd recently said he was “paranoid” about it on Okta’s most recent earnings call. So we dug into it, and how he’s putting that paranoia into practice inside Okta — what he’s changing, and what opportunities he’s going afte

arXiv:2603.25891v1 Announce Type: new Abstract: Pre-trained vision-language models (VLMs) excel in multimodal tasks, commonly encoding images as embedding vectors for storage in databases and retrieval via approximate nearest neighbor search (ANNS). However, these models struggle with compositional queries and out-of-distribution (OOD) image-text pairs. Inspired by human cognition's ability to learn from minimal examples, we address this performance gap through few-shot learning approaches specifically designed for image retrieval. We introduce the Few-Shot Text-to-Image Retrieval (FSIR) task and its accompanying benchmark dataset, FSIR-BD, the first to explicitly target image retrieval by text accompanied by reference examples, focusing on challenging compositional and OOD queries. The compositional part is divided into urban scenes and nature species, each in specific situations or with distinctive features. FSIR-BD contains 38,353 images and 303 queries, with 82% comprising the
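The embedding-and-retrieve setup the abstract assumes can be sketched in a few lines. This is a hedged toy version: the vectors, image ids, and the `retrieve` helper are invented, and it performs an exact cosine-similarity scan where a production system would use an ANN index (the "approximate" in ANNS) over millions of stored vectors.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend these came from a VLM image encoder (ids and values invented).
image_db = {
    "img_001": [0.9, 0.1, 0.0],
    "img_002": [0.1, 0.9, 0.1],
    "img_003": [0.7, 0.6, 0.2],
}

def retrieve(query_vec, db, k=2):
    """Rank stored image embeddings by similarity to a text embedding."""
    ranked = sorted(db, key=lambda name: cosine(query_vec, db[name]),
                    reverse=True)
    return ranked[:k]

query = [1.0, 0.2, 0.0]  # pretend text-encoder output for the query
print(retrieve(query, image_db))  # ['img_001', 'img_003']
```

One simple (hypothetical) way few-shot reference examples, as in the FSIR task, could enter such a pipeline is by averaging their embeddings into the query vector before the same search runs.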

arXiv:2603.26156v1 Announce Type: new Abstract: Framing remains one of the most extensively applied theories in political communication. Developments in computation, particularly the introduction of the transformer architecture and, more so, large language models (LLMs), have naturally prompted scholars to explore novel computational approaches in recent years, especially for deductive frame detection. While many studies have shown that transformer models outperform preceding models that use bag-of-words features, the debate continues to evolve regarding how these models compare with each other on classification tasks. Placing itself at this juncture, this study makes three key contributions: First, it performs generic news frame detection, comparing the performance of five BERT-based variants (BERT, RoBERTa, DeBERTa, DistilBERT and ALBERT) to add to the debate on best practices around employing computational text analysis fo

Article URL: https://www.gendiscover.com/blog/what-is-llm-advertising Comments URL: https://news.ycombinator.com/item?id=47567938 Points: 2 # Comments: 0

Article URL: https://github.com/A561988/bitterbot-desktop Comments URL: https://news.ycombinator.com/item?id=47568393 Points: 1 # Comments: 1

arXiv:2603.25901v1 Announce Type: new Abstract: Defensive coverage schemes in the National Football League (NFL) represent complex tactical patterns requiring coordinated assignments among defenders who must react dynamically to the offense's passing concept. This paper presents a factorized attention-based transformer model applied to NFL multi-agent play tracking data to predict individual coverage assignments, receiver-defender matchups, and the targeted defender on every pass play. Unlike previous approaches that focus on post-hoc coverage classification at the team level, our model enables predictive modeling of individual player assignments and matchup dynamics throughout the play. The factorized attention mechanism separates temporal and agent dimensions, allowing independent modeling of player movement patterns and inter-player relationships. Trained on randomly truncated trajectories, the model generates frame-by-frame predictions that capture how defensive responsibilities e
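The factorization the abstract describes (attention over the time axis per agent, then over the agent axis per frame, instead of joint attention over all frame-agent tokens) can be sketched as follows. Everything here is an illustrative assumption, not the paper's implementation: plain dot-product attention with no learned projections, tiny dimensions, and invented names.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(tokens):
    """Plain dot-product self-attention over a list of D-dim vectors."""
    out = []
    for q in tokens:
        scores = softmax([sum(qi * ki for qi, ki in zip(q, k))
                          for k in tokens])
        out.append([sum(w * v[d] for w, v in zip(scores, tokens))
                    for d in range(len(q))])
    return out

def factorized_attention(x):
    """x: nested list [T frames][A agents][D features]."""
    T, A = len(x), len(x[0])
    # Temporal pass: each agent attends over its own trajectory.
    per_agent = [attend([x[t][a] for t in range(T)]) for a in range(A)]
    y = [[per_agent[a][t] for a in range(A)] for t in range(T)]
    # Agent pass: within each frame, agents attend to each other.
    return [attend(frame) for frame in y]

# 3 frames, 2 agents, 2-dim features (values invented).
x = [[[1.0, 0.0], [0.0, 1.0]],
     [[1.0, 1.0], [0.5, 0.5]],
     [[0.0, 1.0], [1.0, 0.0]]]
out = factorized_attention(x)
print(len(out), len(out[0]), len(out[0][0]))  # 3 2 2
```

The payoff of separating the axes is cost: two passes of size T and A replace one joint pass over T*A tokens, so attention scales roughly as T^2*A + A^2*T rather than (T*A)^2.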
The startup, which is planning to go public later this year, designs chips specifically for AI inference, another challenger to Nvidia's dominance.

As AI floods software development with code, Qodo is betting the real challenge is making sure it actually works.

Feds probe whether NYC Council member, Hochul aide took bribes to help migrant shelter provider AP News

Air Canada CEO will retire this year after his English-only crash message was criticized apnews.com

Iran University of Science and Technology building reduced to rubble by Israeli airstrike AP News

Article URL: https://github.com/customermates/customermates Comments URL: https://news.ycombinator.com/item?id=47573305 Points: 1 # Comments: 1
I could really use some outside perspective. I’m a senior ML/CV engineer in Canada with about 5–6 years across research and industry. Master’s in CS and a few publications. I left my previous remote startup role about five months ago. The role gradually changed, I burned out, and decided to step away. I took around two months to decompress and have been actively searching for the last three months. It’s been tough. A few interview loops and a couple of final rounds, but no offers until now. Last week I finished a four-round process with a small pre-seed AI startup in healthcare. The work is genuinely interesting and very aligned with my background. The team also seems strong. Here’s the complication. The role was posted with a salary range, but the verbal offer came in roughly 20% below the bottom of that range. On top of that, it’s structured as a 3-month contract-to-hire instead of full-time. Since I’m in Canada and they’re in the US, I would be working as a contractor. That mean

Zocdoc finds patients are increasingly arriving with AI-informed questions, giving doctors more to work with—but also changing how time gets spent in the exam room.

Last week, one of our product managers (PMs) built and shipped a feature. Not spec'd it. Not filed a ticket for it. Built it, tested it, and shipped it to production. In a day. A few days earlier, our designer noticed that the visual appearance of our IDE plugins had drifted from the design system. In the old world, that meant screenshots, a JIRA ticket, a conversation to explain the intent, and a sprint slot. Instead, he opened an agent, adjusted the layout himself, experimented, iterated, and tuned in real time, then pushed the fix. The person with the strongest design intuition fixed the design directly. No translation layer required. None of this is new in theory. Vibe coding opened the gates of software creation to millions. That was aspiration. When I shared the data on how our engineers doubled throughput, shifted from coding to validation, brought design upfront for rapid experimentation, it was still an engineering story. What changed is that the theory became practice. Here's

AI for Disaster Response in Asia: OpenAI Workshop with Gates Foundation

People don’t like that they can’t identify AI music. | Image: Cath Virginia / The Verge
AI has touched every part of the music industry, from sample sourcing and demo recording, to serving up digital liner notes and building playlists. There are technical and legal challenges, fierce ethical debates, and fears that the slop will simply crush working musicians through sheer volume. Is it art or just an output? What exactly is “really active”? Whether it’s a new model or a new lawsuit, we’re covering it all to make sure you don’t miss any major developments. So follow along as we dig into the latest in AI “music.”
Suno leans into customization with v5.5
The music industry has embraced a “don’t ask, don’t tell” policy about AI.
North Carolina man pleads guilty to AI music streaming fraud.
Apple Music adds optional labels for AI songs and visuals
Qobuz is automatically detecting and labeling AI music now, too.
This Chainsmokers-approved AI music producer is j

Are there any genies that can be put back in the bottle?