Welcome back
Curated from 200+ sources across AI & machine learning

Vanyar launches today as a specialist firm focussed on Palantir Foundry and AIP. Its launch comes at a critical moment for the market: enterprise AI transformation programmes are becoming slower, more complex and increasingly difficult to deliver, and initiatives built on platforms like Palantir continue to fall short for many companies trying to get ahead.



X's AI-powered custom timelines are replacing Communities, with Grok-curated feeds...and new ad slots.

Google has introduced a host of new automated functions into Workspace, all of which are driven by Workspace Intelligence, its new AI system.

Google's newest TPUs are faster and cheaper than the previous versions. But the company is still embracing Nvidia in its cloud — for now.

(Bloomberg) -- Tencent Holdings Ltd. and Alibaba Group Holding Ltd. are in discussions to join a maiden round of financing for Chinese AI pioneer DeepSeek, marking a milestone for the country's artificial intelligence sector.
![[AINews] Tasteful Tokenmaxxing](https://zmstgxtziqmvvwzllahg.supabase.co/storage/v1/object/public/article-images/latent-space/f263705d-4436-49b6-b27e-ee6edba977d3.jpg)
a quiet day lets us reflect on the top conversation that AI leaders are having everywhere.

The search giant is vying for a bigger slice of the AI pie.

Allbirds (NASDAQ:BIRD) is no longer a shoe company. The San Francisco-based brand, once known for its sustainable wool sneakers, is making a hard pivot into the artificial intelligence sector. The company announced last week that it has executed a $50...

On April 22, 2026, Alphabet’s fresh AI hardware and alliances helped power a broad tech-led rebound across major U.S. stock benchmarks.

Mira Murati's Thinking Machines Lab has signed a multi-billion-dollar deal with Google Cloud for AI infrastructure powered by Nvidia's latest GB300 chips, TechCrunch has exclusively learned.

New AI lab, familiar face: former OpenAI researcher Jerry Tworek wants to push past the limits of today's AI architectures with a small team and new learning methods. The article Ex-OpenAI researcher Jerry Tworek launches Core Automation to build the most automated AI lab in the world appeared first on The Decoder.

At Cloud Next, Google unveiled three new AI imaging tools. Creatives can drop AI-generated images into real Street View locations, Google says city planners will be able to analyze satellite imagery in minutes instead of weeks, and developers get new models that can identify objects like bridges and power lines. The article Google's new AI tools put film scouting in Street View and promise to cut weeks of satellite analysis to minutes appeared first on The Decoder.

Checkr hires ZipRecruiter veteran Tim Yarbrough as its new CFO.

Schematic, a startup that aims to simplify pricing and packaging for software and AI companies, has raised $6.5 million in seed funding, it tells Crunchbase News exclusively.

Less than a year ago, Apple made headlines for a lack of AI announcements at its annual WWDC event. Ten months later, the company has announced that hardware executive John Ternus will succeed longtime CEO Tim Cook, and the official release doesn't mention AI once. Ternus, currently Apple's SVP of hardware engineering, will take over as CEO on September 1st, after Cook's decade and a half in the role. Ternus is a 25-year veteran of the company and the first Apple CEO in about 30 years to come from the hardware side. According to Apple, he's led hardware engineering work for every model of iPad, as well as the most rec … Read the full story at The Verge.
In a significant shift toward local-first privacy infrastructure, OpenAI has released Privacy Filter, a specialized open-source model designed to detect and redact personally identifiable information (PII) before it ever reaches a cloud-based server. Launched today on AI code sharing community Hugging Face under a permissive Apache 2.0 license, the tool addresses a growing industry bottleneck: the risk of sensitive data "leaking" into training sets or being exposed during high-throughput inference. By providing a 1.5-billion-parameter model that can run on a standard laptop or directly in a web browser, the company is effectively handing developers a "privacy-by-design" toolkit that functions as a sophisticated, context-aware digital shredder. Though OpenAI was founded with a focus on open source models such as this, the company shifted during the ChatGPT era to providing more proprietary ("closed source") models available only through its website, apps, and API — only to return to op

Every frontier AI lab right now is rationing two things: electricity and compute. Most of them buy their compute for model training from the same supplier, at the steep gross margins that have turned Nvidia into one of the most valuable companies in the world. Google does not. On Tuesday night, inside a private gathering at F1 Plaza in Las Vegas, Google previewed its eighth-generation Tensor Processing Units. The pitch: two custom silicon designs shipping later this year, each purpose-built for a different half of the modern AI workload. TPU 8t targets training for frontier models, and TPU 8i targets the low-latency, memory-hungry world of agentic inference and real-time sampling. Amin Vahdat, Google's SVP and chief technologist for AI and infrastructure (pictured above left), used his time onstage to make a point that matters more to enterprise buyers than any individual spec: Google designs every layer of its AI stack end-to-end, and that vertical integration is starting to show up i

Gemini Enterprise Agent Platform takes an interesting approach: It is geared for IT and technical users.

OpenAI is rolling out workspace agents in ChatGPT, an evolution of custom GPTs. Powered by Codex, the agents automate complex team workflows and keep running even when no one is watching. Existing custom GPTs will stick around for now, with a migration path coming later. The article OpenAI launches workspace agents that turn ChatGPT from a chatbot into a team automation platform appeared first on The Decoder.

OpenAI introduced a new paradigm and product today that is likely to have huge implications for enterprises seeking to adopt and control fleets of AI agent workers. Called "Workspace Agents," OpenAI's new offering essentially allows users on its ChatGPT Business ($20 per user per month) and variably priced Enterprise, Edu and Teachers subscription plans to design or select from pre-existing agent templates that can take on work tasks across third-party apps and data sources including Slack, Google Drive, Microsoft apps, Salesforce, Notion, Atlassian Rovo, and other popular enterprise applications. Put simply: these agents can be created and accessed from ChatGPT, but users can also add them to third-party apps like Slack, communicate with them across disparate channels, ask them to use information from the channel they're in and other third-party tools and apps, and the agents will go off and do work like drafting emails to the entire team, selected members, or pull data and make pres

Enterprise teams building multi-agent AI systems may be paying a compute premium for gains that don't hold up under equal-budget conditions. New Stanford University research finds that single-agent systems match or outperform multi-agent architectures on complex reasoning tasks when both are given the same thinking token budget. However, multi-agent systems come with the added baggage of computational overhead. Because they typically use longer reasoning traces and multiple interactions, it is often unclear whether their reported gains stem from architectural advantages or simply from consuming more resources. To isolate the true driver of performance, researchers at Stanford University compared single-agent systems against multi-agent architectures on complex multi-hop reasoning tasks under equal "thinking token" budgets. Their experiments show that in most cases, single-agent systems match or outperform multi-agent systems when compute is equal. Multi-agent systems gain a competitive
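The equal-budget comparison described above boils down to an accounting constraint: whatever thinking-token budget a single agent gets for its reasoning trace, a k-agent system must split across all agents' traces plus their coordination messages. A hypothetical sketch of that bookkeeping (the `TokenBudget` class and numbers are ours, not from the Stanford paper):

```python
from dataclasses import dataclass

@dataclass
class TokenBudget:
    """Shared 'thinking token' budget so single- and multi-agent
    runs consume the same compute. Illustrative only."""
    total: int
    spent: int = 0

    def charge(self, tokens: int) -> bool:
        """Debit tokens; return False once the budget is exhausted."""
        if self.spent + tokens > self.total:
            return False
        self.spent += tokens
        return True

# Single agent: one reasoning trace may use nearly the whole budget.
single = TokenBudget(total=8192)
single.charge(8000)

# Multi-agent: four traces plus inter-agent messages share the same
# budget, so each agent effectively reasons with far fewer tokens.
multi = TokenBudget(total=8192)
for _ in range(4):
    multi.charge(1800)   # per-agent reasoning trace
multi.charge(900)        # coordination / aggregation overhead
print(multi.total - multi.spent)  # → 92
```

Under this framing, any multi-agent gain must come from the architecture itself, not from quietly spending more tokens — which is the comparison the researchers say most prior evaluations failed to control for.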

The era of enterprises stitching together prompt chains and shadow agents is nearing its end as more options for orchestrating complex multi-agent systems emerge. As organizations move AI agents into production, one question remains: how will we manage them? Google and Amazon Web Services offer fundamentally different answers, illustrating a split in the AI stack: Google runs agentic management at the system layer, while AWS's harness-based approach sits at the execution layer. The debate over how to manage and control agents gained new energy this past month as competing companies released or updated their agent-builder platforms (Anthropic with the new Claude Managed Agents, OpenAI with enhancements to the Agents SDK), giving developer teams more options. AWS, with new capabilities added to Bedrock AgentCore, is optimizing for velocity, relying on harnesses to bring agents to production faster while still offering identity and tool management. Meanwhile, Google's Gemi

Google's new generation of Tensor AI chips is actually two chips, one for inference and one for training.
I'm a triathlete and the data for my training lives across seven (sometimes eight) apps: Garmin, Strava, WHOOP, Intervals.icu, Wahoo, Withings, Apple Health, sometimes Hevy. Every morning I'd eyeball a few of them and make a call on whether to do the planned session. For the past month I have been building a thing that does this for me, and got it to the point where I use it myself every day. It OAuths into whatever platforms you connect, reconciles the activities (tbh harder than it sounds — same ride shows up in Strava, Garmin, and Wahoo with different timestamps and rounding), computes daily load and readiness, and proactively messages you over Telegram or WhatsApp when something matters. Stack is straightforward: TypeScript all the way, Postgres, an agent loop running on Claude (via Bedrock) with tool access to all your data plus my computed metrics: zones, CTL/ATL/TSB, power/pace curves, anomaly detection on HRV and RHR, etc. Two things that were harder than expected: 1. Garmin's API only exposes the last 90 day
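The CTL/ATL/TSB metrics mentioned are standard exponentially weighted training-load averages: CTL approximates fitness over ~42 days, ATL approximates fatigue over ~7 days, and TSB (form) is their difference. A minimal sketch of the daily computation, assuming a chronological list of per-day training stress scores — the smoothing constants here are the conventional ones, not necessarily what this particular tool uses:

```python
def training_load(daily_tss, ctl_days=42, atl_days=7):
    """Compute CTL (fitness), ATL (fatigue) and TSB (form) from
    a chronological list of daily training stress scores."""
    ctl = atl = 0.0
    history = []
    for tss in daily_tss:
        # One common formulation: simple exponential smoothing,
        # where each day pulls the average toward today's load.
        ctl += (tss - ctl) / ctl_days
        atl += (tss - atl) / atl_days
        history.append({"ctl": round(ctl, 1),
                        "atl": round(atl, 1),
                        "tsb": round(ctl - atl, 1)})
    return history

# Two weeks of alternating hard/rest days: ATL rises much faster
# than CTL, so TSB goes negative during the loading block.
loads = training_load([100, 0] * 7)
print(loads[-1])
```

Because ATL's time constant is much shorter, a hard block drives TSB sharply negative (fatigue), which is exactly the kind of signal a "should I do today's planned session?" agent would key on.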
![[AINews] OpenAI launches GPT-Image-2](https://zmstgxtziqmvvwzllahg.supabase.co/storage/v1/object/public/article-images/latent-space/0cd91719-50be-4c20-921b-4e63319caa15.jpg)
with Cursor getting a $10B contract with xAI and a right to acquire for $60B.

As AI agents increasingly work alongside humans across organizations, companies could be inadvertently opening a new attack surface. Insecure agents can be manipulated to access sensitive systems and proprietary data, increasing enterprise risk. In some modern enterprises, non-human identities (NHI) are outpacing human identities, and that trend will explode with agentic AI. Solid governance and…

How Ars Technica uses, and doesn't use, generative AI.

Foxglove has launched “Data Search and Curation”, a new set of capabilities that helps robotics teams replace fragmented, manual data workflows with a unified platform to find and curate the mission-critical events, anomalies, and system behavior that matter most across growing volumes of operational data. The company also expanded the Foxglove Data Platform with Bring […]