Welcome back
Curated from 200+ sources across AI & machine learning

Iran says it will ‘facilitate and expedite’ humanitarian aid through the Strait of Hormuz AP News



Massive new data centers are the physical foundation for tech companies’ hopes and dreams for AI. But the rush to expand warehouses full of energy-hungry servers has also kicked up fights across the world over their impact on power grids, utility bills, nearby communities, and the environment. From audacious plans to launch data centers into space to the latest legal battles over pollution, The Verge has the biggest news and reporting surrounding data centers.

- Senators are pushing to find out how much electricity data centers actually use
- How the spiraling Iran conflict could affect data centers and electricity costs
- Seven tech giants signed Trump’s pledge to keep electricity costs from spiking around data centers
- Trump claims tech companies will sign deals next week to pay for their own power supply
- Anthropic says it’ll try to keep its data centers from raising electricity costs
- How an ‘icepocalypse’ raises more questions about Meta’s biggest data ce

Panicked travelers hear a new message from airports: Don’t get here so early AP News

AI's arrival complicates Big Tech climate goals, and some worry it's locking in more fossil fuels AP News
3 killed in tourist helicopter crash off the coast of the Hawaiian island of Kauai AP News

The AI Doc: Or How I Became an Apocaloptimist seeks the middle ground on a polarizing technology—and ends up letting tech execs like Sam Altman off the hook.

A bombshell investigation into Meta's AI training pipeline found overseas contractors watching footage from the smart glasses.


As the tech giant turns 50, WIRED spoke to executives about how they plan to win in the AI era.

Conntour uses AI models to let security teams query camera feeds using natural language to find any object, person, or situation.

Deccan AI concentrates its workforce in India to manage quality in a fast-growing but fragmented AI training market.

Federal judge temporarily blocks the Pentagon from branding AI firm Anthropic a supply chain risk AP News

Everything you need to know before you reach the office this morning.

Raising funds to plow into the AI boom proved to be no problem for the firm.

Article: https://rayplayer.com/en (Hacker News discussion: https://news.ycombinator.com/item?id=47541616)

Article: https://github.com/AVADSA25/codec (Hacker News discussion: https://news.ycombinator.com/item?id=47541413)

AI company Anthropic is testing a previously undisclosed AI model called 'Mythos' that is significantly more capable than anything it has previously built, according to a draft blog post left publicly accessible.

Second-generation Ray-Ban Meta glasses. Photo by Amelia Holowaty Krales / The Verge

This is Lowpass by Janko Roettgers, a newsletter on the ever-evolving intersection of tech and entertainment, syndicated just for The Verge subscribers once a week. Meta and its AI glasses hardware partner EssilorLuxottica are getting ready to launch the next generation of their Ray-Ban AI glasses. That’s according to a series of FCC filings for two new Meta Ray-Ban models that were published by the agency earlier this month. The filings describe the tested devices as production units, suggesting that Meta may launch them soon. When the company unveiled its second-generation Ray-Bans in late 2023, it did so a little over a month after the … Read the full story at The Verge.

The startup says its “AI-native” model can deliver faster access to care while keeping clinicians in control of treatment decisions.

Hey guys, it’s been a week since we launched Unsloth Studio (Beta). Thanks so much for trying it out, and for the support and feedback! We shipped 50+ new features, updates, and fixes.

New features / major improvements:

- Pre-compiled llama.cpp / mamba_ssm binaries for ~1min installs and ~50% smaller size
- Auto-detection of existing models from LM Studio, Hugging Face, etc.
- 20–30% faster inference, now similar to llama-server / llama.cpp speeds
- Tool calling: better parsing, better accuracy, faster execution, no raw tool markup in chat, plus a new Tool Outputs panel and timers
- New one-line uv install and update commands
- New Desktop app shortcuts that close properly
- Data Recipes now supports macOS, CPU, and multi-file uploads
- Preliminary AMD support for Linux
- Inference token/s reporting fixed so it reflects actual inference speed instead of including startup time
- Revamped docs with detailed guides on uninstalling, deleting models, etc.
- Lots of new settings added, including context length, d
OK so you know how last time I said LLMs seem to think in a universal language? I went deeper.

Part 1: https://www.reddit.com/r/LocalLLaMA/comments/1rpxpsa/how_i_topped_the_open_llm_leaderboard_using_2x/
Part 2: https://www.reddit.com/r/LocalLLaMA/comments/1s1t5ot/rys_ii_repeated_layers_with_qwen35_27b_and_some/

TL;DR for those who (I know) won’t read the blog: I expanded the experiment from 2 languages to 8 (EN, ZH, AR, RU, JA, KO, HI, FR) across 4 different models (Qwen3.5-27B, MiniMax M2.5, GLM-4.7, GPT-OSS-120B). All four show the same thing. In the middle layers, a sentence about photosynthesis in Hindi is closer to photosynthesis in Japanese than it is to cooking in Hindi. Language identity basically vanishes. Then I did the harder test: English descriptions, Python functions (single-letter variables only — no cheating), and LaTeX equations for the same concepts. ½mv², 0.5 * m * v ** 2, and "half the mass times velocity squared" converge to the same region in the model's i
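The convergence claim above boils down to comparing hidden-state vectors across layers with cosine similarity. Here is a minimal sketch of that measurement; the mean-pooling choice and the toy inputs are assumptions for illustration, not the poster's exact setup:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two hidden-state vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def layer_similarity(hidden_a, hidden_b):
    """Per-layer cosine similarity between two mean-pooled
    hidden-state stacks of shape (layers, tokens, dim)."""
    return [cosine(la.mean(axis=0), lb.mean(axis=0))
            for la, lb in zip(hidden_a, hidden_b)]

# Toy stand-ins for two same-concept sentences: the second is the
# first plus small noise, so similarity should be near 1 at each layer.
rng = np.random.default_rng(0)
h_hi = rng.normal(size=(3, 4, 8))                    # 3 layers, 4 tokens, dim 8
h_ja = h_hi + 0.01 * rng.normal(size=(3, 4, 8))
print([round(s, 3) for s in layer_similarity(h_hi, h_ja)])
```

With real activations (e.g. from `output_hidden_states=True` in Hugging Face transformers), the post's finding predicts that same-concept pairs across languages score higher than same-language, different-concept pairs specifically in the middle layers.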

When an 82-year-old Kentucky woman was offered $26 million from an AI company that wanted to build a data center on her land, she said no. Sure, that same company can try to rezone 2,000 acres nearby anyway, but as AI infrastructure stretches further into the real world, the real world is starting to push back. That tension is everywhere […]


Mistral's new speech model can run on a smartwatch or a smartphone.

arXiv:2603.25051v1 Announce Type: new Abstract: This study presents a computational analysis of the Slovene historical newspapers \textit{Slovenec} and \textit{Slovenski narod} from the sPeriodika corpus, combining topic modelling, large language model (LLM)-based aspect-level sentiment analysis, entity-graph visualisation, and qualitative discourse analysis to examine how collective identities, political orientations, and national belonging were represented in public discourse at the turn of the twentieth century. Using BERTopic, we identify major thematic patterns and show both shared concerns and clear ideological differences between the two newspapers, reflecting their conservative-Catholic and liberal-progressive orientations. We further evaluate four instruction-following LLMs for targeted sentiment classification in OCR-degraded historical Slovene and select the Slovene-adapted GaMS3-12B-Instruct model as the most suitable for large-scale application, while also documenting imp

arXiv:2603.24621v1 Announce Type: new Abstract: We introduce ARC-AGI-3, an interactive benchmark for studying agentic intelligence through novel, abstract, turn-based environments in which agents must explore, infer goals, build internal models of environment dynamics, and plan effective action sequences without explicit instructions. Like its predecessors ARC-AGI-1 and 2, ARC-AGI-3 focuses entirely on evaluating fluid adaptive efficiency on novel tasks, while avoiding language and external knowledge. ARC-AGI-3 environments only leverage Core Knowledge priors and are difficulty-calibrated via extensive testing with human test-takers. Our testing shows humans can solve 100% of the environments, in contrast to frontier AI systems which, as of March 2026, score below 1%. In this paper, we present the benchmark design, its efficiency-based scoring framework grounded in human action baselines, and the methodology used to construct, validate, and calibrate the environments.

Intercom is taking an unusual gamble for a legacy software company: building its own AI model. The 15-year-old customer service platform announced Fin Apex 1.0 on Thursday, a small, purpose-built AI model that the company claims outperforms leading frontier models from OpenAI and Anthropic on the metrics that matter most for customer support. The model powers Intercom's existing Fin AI agent, which already handles over two million customer conversations weekly. According to benchmarks shared with VentureBeat, Fin Apex 1.0 achieves a 73.1% resolution rate — the percentage of customer issues fully resolved without human intervention — compared to 71.1% for both GPT-5.4 and Claude Opus 4.5, and 69.6% for Claude Sonnet 4.6. That roughly 2-percentage-point margin may sound modest, but it's wider than the typical gap between successive generations of frontier models. "If you're running large service operations at scale and you've got 10 million customers or a billion dollars in revenue,