Welcome back
Curated from 200+ sources across AI & machine learning

Estimates for total Claude consumer users are all over the map (we've seen figures ranging from 18 million to 30 million). Anthropic hasn't disclosed this data, but a spokesperson did tell TechCrunch that Claude paid subscriptions have more than doubled this year.



Article URL: https://cloud.google.com/blog/topics/developers-practitioners/how-uc-berkeley-students-use-ai-as-a-learning-partner | Comments URL: https://news.ycombinator.com/item?id=47551762 | Points: 2 | Comments: 1

Article URL: https://www.theatlantic.com/international/2026/03/netanyahu-not-dead-israel-ai/686593/ | Comments URL: https://news.ycombinator.com/item?id=47551232 | Points: 3 | Comments: 0

Article URL: https://curryguinncspb.github.io/programming-after-programmers/ | Comments URL: https://news.ycombinator.com/item?id=47550475 | Points: 2 | Comments: 0

Article URL: https://fortune.com/2026/03/27/anthropic-leaked-ai-mythos-cybersecurity-risk/ | Comments URL: https://news.ycombinator.com/item?id=47549888 | Points: 1 | Comments: 0

Things are moving fast, and competitors have offered something similar for a while.

Iran says it will ‘facilitate and expedite’ humanitarian aid through the Strait of Hormuz AP News

Panicked travelers hear a new message from airports: Don’t get here so early AP News

"The AI Doc: Or How I Became an Apocaloptimist" seeks the middle ground on a polarizing technology, and ends up letting tech execs like Sam Altman off the hook.

Microsoft takes over a Texas AI data center expansion after OpenAI backs away AP News

A policy change announced by NeurIPS, the world’s leading AI research conference, drew widespread backlash from Chinese researchers this week and then was quickly reversed.


Massive new data centers are the physical foundation for tech companies’ hopes and dreams for AI. But the rush to expand warehouses full of energy-hungry servers has also kicked up fights across the world over their impact on power grids, utility bills, nearby communities, and the environment. From audacious plans to launch data centers into space to the latest legal battles over pollution, The Verge has the biggest news and reporting surrounding data centers.

Senators are pushing to find out how much electricity data centers actually use
How the spiraling Iran conflict could affect data centers and electricity costs
Seven tech giants signed Trump’s pledge to keep electricity costs from spiking around data centers
Trump claims tech companies will sign deals next week to pay for their own power supply
Anthropic says it’ll try to keep its data centers from raising electricity costs
How an ‘icepocalypse’ raises more questions about Meta’s biggest data center

3 killed in tourist helicopter crash off the coast of the Hawaiian island of Kauai AP News

AI's arrival complicates Big Tech climate goals, and some worry it's locking in more fossil fuels AP News

Everything you need to know before you reach the office this morning.

Raising funds to plow into the AI boom proved to be no problem for the firm.

AI company Anthropic is testing a previously undisclosed AI model called 'Mythos' that is significantly more capable than anything it has previously built, according to a draft blog post left publicly accessible.

A bombshell investigation into Meta's AI training pipeline found overseas contractors watching footage from the smart glasses.
On Tuesday morning, everything was business as usual at OpenAI. By the end of the day, the company had announced that it would scrap its video-generation app, Sora, and reverse plans for video generation inside ChatGPT; it would wind down a $1 billion Disney deal; it would shuffle the role of a high-level executive; and it would raise an additional $10 billion from investors, adding up to more than $120 billion total for its latest funding round. OpenAI is now in a frenzy to turn a profit, or at least lose less money. Since its launch, Sora seems to have taken up a massive amount of compute without the financial return to justify it. Indus … Read the full story at The Verge.

Learn how STADLER uses ChatGPT to transform knowledge work, saving time and accelerating productivity across 650 employees.

OK so you know how last time I said LLMs seem to think in a universal language? I went deeper.

Part 1: https://www.reddit.com/r/LocalLLaMA/comments/1rpxpsa/how_i_topped_the_open_llm_leaderboard_using_2x/
Part 2: https://www.reddit.com/r/LocalLLaMA/comments/1s1t5ot/rys_ii_repeated_layers_with_qwen35_27b_and_some/

TL;DR for those who (I know) won't read the blog: I expanded the experiment from 2 languages to 8 (EN, ZH, AR, RU, JA, KO, HI, FR) across 4 different models (Qwen3.5-27B, MiniMax M2.5, GLM-4.7, GPT-OSS-120B). All four show the same thing. In the middle layers, a sentence about photosynthesis in Hindi is closer to photosynthesis in Japanese than it is to cooking in Hindi. Language identity basically vanishes.

Then I did the harder test: English descriptions, Python functions (single-letter variables only — no cheating), and LaTeX equations for the same concepts. ½mv², 0.5 * m * v ** 2, and "half the mass times velocity squared" converge to the same region in the model's i
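A minimal sketch of the comparison the post describes, using synthetic vectors rather than real model activations (the concept/language decomposition, the weights, and every vector here are illustrative assumptions, not the author's code): mean-pooled mid-layer states are compared by cosine similarity, and when the concept component dominates the language component, same-concept cross-language pairs score higher than same-language cross-concept pairs.

```python
import math
import random

random.seed(0)
DIM = 256

def rand_vec():
    return [random.gauss(0, 1) for _ in range(DIM)]

# Hypothetical fixed directions: one per concept, one per language.
concepts = {c: rand_vec() for c in ["photosynthesis", "cooking"]}
languages = {l: rand_vec() for l in ["hi", "ja"]}

def mid_layer_state(concept, language, concept_weight=3.0):
    """Stand-in for a mean-pooled middle-layer hidden state.

    In the real experiment this would come from a model; here we assume
    the state is a strong concept direction plus a weaker language
    direction plus noise."""
    return [concept_weight * c + l + 0.1 * random.gauss(0, 1)
            for c, l in zip(concepts[concept], languages[language])]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

photo_hi = mid_layer_state("photosynthesis", "hi")
photo_ja = mid_layer_state("photosynthesis", "ja")
cook_hi = mid_layer_state("cooking", "hi")

same_concept = cosine(photo_hi, photo_ja)   # same concept, different language
same_language = cosine(photo_hi, cook_hi)   # same language, different concept

print(f"same concept, different language: {same_concept:.3f}")
print(f"same language, different concept: {same_language:.3f}")
assert same_concept > same_language
```

With real models the states would be mean-pooled hidden states from a chosen middle layer; the toy decomposition just makes the claimed geometry concrete.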

Hey guys, it's been a week since we launched Unsloth Studio (Beta). Thanks so much for trying it out, and for the support and feedback! We shipped 50+ new features, updates and fixes.

New features / major improvements:
- Pre-compiled llama.cpp / mamba_ssm binaries for ~1min installs and -50% less size
- Auto-detection of existing models from LM Studio, Hugging Face etc.
- 20–30% faster inference, now similar to llama-server / llama.cpp speeds
- Tool calling: better parsing, better accuracy, faster execution, no raw tool markup in chat, plus a new Tool Outputs panel and timers
- New one-line uv install and update commands
- New Desktop app shortcuts that close properly
- Data Recipes now supports macOS, CPU and multi-file uploads
- Preliminary AMD support for Linux
- Inference token/s reporting fixed so it reflects actual inference speed instead of including startup time
- Revamped docs with detailed guides on uninstall, deleting models etc
- Lots of new settings added including context length, d

arXiv:2603.25051v1 Announce Type: new Abstract: This study presents a computational analysis of the Slovene historical newspapers Slovenec and Slovenski narod from the sPeriodika corpus, combining topic modelling, large language model (LLM)-based aspect-level sentiment analysis, entity-graph visualisation, and qualitative discourse analysis to examine how collective identities, political orientations, and national belonging were represented in public discourse at the turn of the twentieth century. Using BERTopic, we identify major thematic patterns and show both shared concerns and clear ideological differences between the two newspapers, reflecting their conservative-Catholic and liberal-progressive orientations. We further evaluate four instruction-following LLMs for targeted sentiment classification in OCR-degraded historical Slovene and select the Slovene-adapted GaMS3-12B-Instruct model as the most suitable for large-scale application, while also documenting imp

arXiv:2603.24621v1 Announce Type: new Abstract: We introduce ARC-AGI-3, an interactive benchmark for studying agentic intelligence through novel, abstract, turn-based environments in which agents must explore, infer goals, build internal models of environment dynamics, and plan effective action sequences without explicit instructions. Like its predecessors ARC-AGI-1 and 2, ARC-AGI-3 focuses entirely on evaluating fluid adaptive efficiency on novel tasks, while avoiding language and external knowledge. ARC-AGI-3 environments only leverage Core Knowledge priors and are difficulty-calibrated via extensive testing with human test-takers. Our testing shows humans can solve 100% of the environments, in contrast to frontier AI systems which, as of March 2026, score below 1%. In this paper, we present the benchmark design, its efficiency-based scoring framework grounded in human action baselines, and the methodology used to construct, validate, and calibrate the environments.

Today, we’re excited to announce that Amazon Bedrock is now available in the Asia Pacific (New Zealand) Region (ap-southeast-6). Customers in New Zealand can now access Anthropic Claude models (Claude Opus 4.5, Opus 4.6, Sonnet 4.5, Sonnet 4.6, and Haiku 4.5) and Amazon (Nova 2 Lite) models directly in the Auckland Region with cross region inference. In this post, we explore how cross-Region inference works from the New Zealand Region, the models available through geographic and global routing, and how to get started with your first API call. We
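As a rough sketch of what that first API call might look like, here is the shape of a Bedrock Converse-style request for a Claude model; the inference-profile ID below is an illustrative assumption (check the Bedrock console for the exact IDs available in ap-southeast-6), and the actual call, shown in a comment, requires boto3 and AWS credentials.

```python
import json

# Illustrative cross-Region inference-profile ID -- an assumption for
# this sketch, not a verified identifier.
MODEL_ID = "apac.anthropic.claude-haiku-4-5"

# Request in the Bedrock Converse API shape: a list of messages, each
# with a role and a list of content blocks, plus inference settings.
request = {
    "modelId": MODEL_ID,
    "messages": [
        {
            "role": "user",
            "content": [{"text": "Summarise what cross-Region inference does."}],
        }
    ],
    "inferenceConfig": {"maxTokens": 256, "temperature": 0.5},
}

print(json.dumps(request, indent=2))

# With boto3 installed and credentials configured, the call would be:
#   client = boto3.client("bedrock-runtime", region_name="ap-southeast-6")
#   response = client.converse(**request)
#   print(response["output"]["message"]["content"][0]["text"])
```

Geographic routing means the profile can serve the request from another Region in the same geography when Auckland capacity is constrained, without changing this client-side code.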

The major technical advances this week were in agentic coding, as covered yesterday. The major non-DoW political and alignment developments will be covered tomorrow. The DoW vs. Anthropic trial continues. Judge Lin was very not happy with the government’s case, which makes sense since the government has no case and was arguing a variety of Obvious Nonsense. The question now is how much preliminary relief Anthropic is entitled to. Assuming we find that out this week, I plan to cover that on Monday. Beyond that, we have new iterations of questions we’ve dealt with time and again. The debate on jobs gets another cycle. Anthropic asked over 80,000 people what they think about AI, and has published those findings, nothing shocking but interesting throughout. OpenAI is raising money again, although the terms raise some eyebrows. Elon Musk is announcing a grand chip project, but it was already kind of announced and it’s not like we should believe him when he says such things. I used this

TL;DR: v3 of my medical speech-to-text benchmark. 31 models now (up from 26 in v2). Microsoft VibeVoice-ASR 9B takes the open-source crown at 8.34% WER, nearly matching Gemini 2.5 Pro (8.15%). But it's 9B params, needs ~18GB VRAM (ran it on an H100 since I had easy access, but an L4 or similar would work too), and even on H100 it's slow — 97s per file vs 6s for Parakeet. Also found bugs in Whisper's text normalizer that were inflating WER by 2-3% across every model. All code + results are open-source.

Previous posts: v1 — 15 models | v2 — 26 models

What changed since v2

5 new models added (26 → 31):
- Microsoft VibeVoice-ASR 9B — new open-source leader (8.34% WER), but needs ~18GB VRAM (won't fit on T4). I ran it on H100 since I had access, but an L4 or A10 would work too. Even on H100 it's slow at 97s/file.
- ElevenLabs Scribe v2 — solid upgrade over v1 (9.72% vs 10.87%)
- NVIDIA Nemotron Speech Streaming 0.6B — decent edge option at 11.06% on T4
- Voxtral Mini 2602 via Transcription
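For readers unfamiliar with the metric, WER is word-level edit distance divided by reference length. A minimal sketch (not the benchmark's actual code, which additionally runs Whisper's text normalizer over both strings before scoring — exactly the step where the reported bugs inflated results):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard Levenshtein dynamic program over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution in a four-word reference -> 0.25
print(wer("patient denies chest pain", "patient denies chess pain"))  # 0.25
```

Because the metric is computed after normalization, a buggy normalizer penalizes every model the same way, which is why fixing it shifted all 31 scores by 2-3%.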