Welcome back
Curated from 200+ sources across AI & machine learning

Alaska Air Group recently appointed Lindsay-Rae McIntyre as chief people officer, tasking her with leading talent strategy, culture, and HR operations as the company expands its international footprint and integrates Hawaiian Airlines. Her deep background in global workforce management and diversity at Microsoft and IBM adds experienced leadership at a time when Alaska is rolling out new international business class services and managing operational headwinds in key leisure markets.



On Thursday, Nvidia stock (NVDA) received a major vote of confidence from analyst Fang Boon Foo from DBS. Holding a prestigious 5-star analyst rating, Foo is known for his accurate forecasts in the tech sector. In his latest update, he reiterated his Buy rating and raised his price target for Nvidia to $220, up from his previous forecast of $180. In his report from Thursday, April 2, Foo highlighted that the company is currently benefiting from a "strong AI-led growth cycle."

The security industry has spent the last year talking about models, copilots, and agents, but a quieter shift is happening one layer below all of that: vendors are lining up around a shared way to describe security data. The Open Cybersecurity Schema Framework (OCSF) is emerging as one of the strongest candidates for that job. It gives vendors, enterprises, and practitioners a common way to represent security events, findings, objects, and context. That means less time rewriting field names and custom parsers, and more time correlating detections, running analytics, and building workflows that work across products. In a market where every security team is stitching together endpoint, identity, cloud, SaaS, and AI telemetry, a common infrastructure long felt like a pipe dream, and OCSF now puts it within reach.

OCSF in plain language

OCSF is an open-source framework for cybersecurity schemas. It's vendor neutral by design and deliberately agnostic to storage format, data collection, …
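To make "a common way to represent security events" concrete, here is a small sketch of normalizing a vendor-specific login record into an OCSF-shaped event. The class and attribute values are illustrative examples rather than authoritative mappings, and the vendor record fields are invented; the OCSF schema itself defines the exact classes and attributes.

```python
# Sketch: mapping a hypothetical vendor login record into an OCSF-style
# Authentication event. Field choices are examples; consult the OCSF
# schema for authoritative class and attribute definitions.
import json
import time

def to_ocsf_authentication(vendor_event: dict) -> dict:
    """Map a hypothetical vendor login record to an OCSF-style event."""
    return {
        "class_uid": 3002,     # Authentication class (example value)
        "category_uid": 3,     # Identity & Access Management category
        "activity_id": 1,      # Logon activity
        "severity_id": 1,      # Informational
        "time": vendor_event.get("ts", int(time.time() * 1000)),
        "user": {"name": vendor_event["username"]},
        "src_endpoint": {"ip": vendor_event["source_ip"]},
        "metadata": {"product": {"name": vendor_event["product"]}},
    }

event = to_ocsf_authentication({
    "username": "alice",
    "source_ip": "10.0.0.5",
    "product": "ExampleEDR",
    "ts": 1700000000000,
})
print(json.dumps(event, indent=2))
```

The payoff is that once endpoint, identity, and cloud products all emit records in this shape, one detection rule or analytics query can span all of them without per-vendor parsers.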

Murphy Campbell is at the center of a brewing storm around AI and a broken copyright system. | Image: Murphy Campbell In January, folk artist Murphy Campbell discovered several songs on her Spotify profile that did not belong there. They were songs that she had recorded, but she'd never uploaded them to Spotify, and something was off about the vocals. She quickly surmised that someone had pulled performances of the songs she posted to YouTube, created AI covers, and uploaded them to streaming platforms under her name. I ran one of the songs, "Four Marys," through two different AI detectors, and both said it was probably AI-generated, supporting her suspicions. Campbell was shocked, "I was kind of under the impression that we had a little b … Read the full story at The Verge.

Micron stock has soared nearly 300% over the last year as memory and storage chips become the latest must-haves for AI development.

After a run-in with reality, the market's starting to think more critically about these companies' actual potential.

With the midterms right around the corner, the new group is positioned to back candidates who support the AI company's policy agenda.

In addition to Lightcap's new role, OpenAI CMO Kate Rouch will be stepping away from the company to focus on cancer recovery, with a plan to return when her health allows.

One of the biggest differences between startup success and failure is knowing when to sell. TBPN's founders selling to OpenAI is a masterclass.

"This looks like AI." It's a phrase I dread seeing as a writer who dabbles in illustration and amateur photography. In a world where generative AI technology is increasingly adept at mimicking the work of humans, people are naturally skeptical when online platforms refuse to label even obvious AI content. This leads me to one conclusion: maybe we should start labeling human-made text, images, audio, and video with something akin to a universally recognized Fair Trade logo. The machines sure as hell aren't motivated to label their work, but the creators at risk of being displaced most definitely are. Fortunately, I'm not alone in my thinki … Read the full story at The Verge.

What first seemed like a challenge may turn out to be an opportunity.

Nearly 50% of data center projects delayed as China holds key to power infrastructure.

Glen Anderson, president of Rainmaker Securities, says the secondary market for private shares has never been more active — with Anthropic the hottest trade around, OpenAI losing ground, and SpaceX's looming IPO poised to reshape the landscape for everyone.

Major AI labs are investigating a security incident that impacted Mercor, a leading data vendor. The incident could have exposed key data about how they train AI models.

Deepseek v4 is expected to launch in the coming weeks and will run exclusively on Huawei chips. China's biggest tech companies have reportedly already ordered hundreds of thousands of units. Nvidia was shut out of early testing. The article Deepseek v4 will reportedly run entirely on Huawei chips in a major win for China's AI independence push appeared first on The Decoder.

OpenAI is undergoing another round of C-suite changes, according to an internal memo viewed by The Verge. Fidji Simo, OpenAI's CEO of AGI deployment - who was until recently the company's CEO of Applications - says in the memo that she will be stepping away on medical leave "for the next several weeks" due to a neuroimmune condition. While she's out, OpenAI president Greg Brockman will be in charge of product, including leading OpenAI's super app efforts. On the business side, CSO Jason Kwon, CFO Sarah Friar, and CRO Denise Dresser will take charge. OpenAI's CMO, Kate Rouch, has also decided to step down in order to focus on her health, … Read the full story at The Verge.

Recent graduates going into tech face a double bind: fewer entry-level openings and employers demanding experience that’s nearly impossible to get.
It’s about to become more expensive for Claude Code subscribers to use Anthropic’s coding assistant with OpenClaw and other third-party tools.

I built this after hitting the same wall repeatedly: no good way to enforce token budgets in application code. Provider caps are account-level and tell you what happened, not what is happening. Two ways to add it:

```python
# Direct client wrapper
client = tokencap.wrap(anthropic.Anthropic(), limit=50_000)

# LangChain, CrewAI, AutoGen, etc.
tokencap.patch(limit=50_000)
```

One design decision worth mentioning: tokencap tracks tokens, not dollars. Token counts come directly from the provider response and never drift with pricing changes. Happy to answer any questions. Comments URL: https://news.ycombinator.com/item?id=47639207
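The post doesn't show tokencap's internals, but a budget-enforcing wrapper in the spirit of its `wrap(client, limit=...)` call might look like the following. Everything here is a guess for illustration: the `BudgetedClient` class, the `create` method shape, and the `FakeClient` stub are all invented, not the real library.

```python
# Hypothetical sketch of a token-budget wrapper (not tokencap's actual code).
class TokenBudgetExceeded(RuntimeError):
    pass

class BudgetedClient:
    """Proxies an LLM client and tallies usage from provider responses."""

    def __init__(self, client, limit: int):
        self._client = client
        self._limit = limit
        self._used = 0

    @property
    def tokens_used(self) -> int:
        return self._used

    def create(self, **kwargs):
        # Refuse new calls once the budget is spent.
        if self._used >= self._limit:
            raise TokenBudgetExceeded(f"budget of {self._limit} tokens exhausted")
        response = self._client.create(**kwargs)
        # Counts come from the provider response, so the tally never
        # drifts with pricing changes (the post's design point).
        self._used += response["usage"]["total_tokens"]
        return response

def wrap(client, limit: int) -> BudgetedClient:
    return BudgetedClient(client, limit)

# Tiny stand-in for a real provider SDK, for demonstration only.
class FakeClient:
    def create(self, **kwargs):
        return {"text": "ok", "usage": {"total_tokens": 30_000}}

client = wrap(FakeClient(), limit=50_000)
client.create(prompt="first")    # tally: 30,000 (under budget)
client.create(prompt="second")   # tally: 60,000 (check runs before the call)
exceeded = False
try:
    client.create(prompt="third")  # budget now exhausted
except TokenBudgetExceeded:
    exceeded = True
```

Note the tradeoff in this sketch: the check happens before a call, so the budget can be overshot by at most one response's worth of tokens; a stricter design would estimate the request cost up front.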

Article URL: https://www.junupark.xyz/blog/posts/improving-llm-inference-with-continuous-batching-orca-through-tinyorca/ Comments URL: https://news.ycombinator.com/item?id=47639648

Article URL: https://github.com/SaschaDeforth/arp-protocol Comments URL: https://news.ycombinator.com/item?id=47639907

Plus: The FBI says a recent hack of its wiretap tools poses a national security risk, attackers stole Cisco source code as part of an ongoing supply chain hacking spree, and more.

The baton of open-source AI models has been passed between several companies in the years since ChatGPT debuted in late 2022, from Meta with its Llama family to Chinese labs like Qwen and z.ai. But lately, Chinese companies have started pivoting back toward proprietary models even as some U.S. labs like Cursor and Nvidia release their own variants of the Chinese models, leaving a question mark over who will originate this branch of technology going forward. One answer: Arcee, a San Francisco-based lab, which this week released AI Trinity-Large-Thinking, a 399-billion-parameter text-only reasoning model released under the uncompromisingly open Apache 2.0 license, allowing full customizability and commercial usage by anyone from indie developers to large enterprises. The release represents more than just a new set of weights on AI code-sharing community Hugging Face; it is a strategic bet that "American Open Weights" can provide a sovereign alternative to the increasingly closed …

Are you a subscriber to Anthropic's Claude Pro ($20 monthly) or Max ($100-$200 monthly) plans who uses its Claude AI models and products to power third-party AI agents like OpenClaw? If so, you're in for an unpleasant surprise. Anthropic announced a few hours ago that starting tomorrow, Saturday, April 4, 2026, at 12 pm PT/3 pm ET, it will no longer be possible for those Claude subscribers to use their subscriptions to hook Anthropic's Claude models up to third-party agentic tools, citing the strain such usage was placing on Anthropic's compute and engineering resources, and a desire to serve a large number of users reliably. "We've been working hard to meet the increase in demand for Claude, and our subscriptions weren't built for the usage patterns of these third-party tools," wrote Boris Cherny, Head of Claude Code at Anthropic, in a post on X. "Capacity is a resource we manage thoughtfully and we are prioritizing our customers using our products and API." The company also reportedly …

AI vibe coders have yet another reason to thank Andrej Karpathy, the coiner of the term. The former Director of AI at Tesla and co-founder of OpenAI, now running his own independent AI project, recently posted on X describing an "LLM Knowledge Bases" approach he's using to manage various topics of research interest. By building a persistent, LLM-maintained record of his projects, Karpathy is solving the core frustration of "stateless" AI development: the dreaded context-limit reset. As anyone who has vibe coded can attest, hitting a usage limit or ending a session often feels like a lobotomy for your project. You're forced to spend valuable tokens (and time) reconstructing context for the AI, hoping it "remembers" the architectural nuances you just established. Karpathy proposes something simpler and more loosely, messily elegant than the typical enterprise solution of a vector database and RAG pipeline. Instead, he outlines a system where the LLM itself acts as a full-time "researc …
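The post describes the approach only at a high level, but the mechanical core of a file-backed knowledge base is small: persist notes per topic, load them at session start, and append as decisions accumulate. The file layout and helper names below are my own illustration, not Karpathy's actual setup.

```python
# Hedged sketch of a file-backed "LLM knowledge base": one markdown file
# per topic, loaded into the prompt at session start and appended to as
# work proceeds. Layout and names are illustrative.
from pathlib import Path

KB_DIR = Path("knowledge")

def load_topic(topic: str) -> str:
    """Return persisted notes for a topic, or an empty string for a new one."""
    path = KB_DIR / f"{topic}.md"
    return path.read_text() if path.exists() else ""

def append_note(topic: str, note: str) -> None:
    """Record a decision so the next session doesn't start from zero."""
    KB_DIR.mkdir(exist_ok=True)
    with (KB_DIR / f"{topic}.md").open("a") as f:
        f.write(f"- {note}\n")

append_note("inference-server", "chose continuous batching over static batches")
context = load_topic("inference-server")
print(context)  # prepend this to the system prompt at the next session
```

The point of the design is that the expensive context reconstruction happens once, in writing, rather than being re-derived from scratch by the model every session.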

Using OpenClaw with Claude AI is about to get a lot more expensive, thanks to Anthropic's new policy changes. Beginning April 4th at 3PM ET, users will "no longer be able to use your Claude subscription limits for third-party harnesses including OpenClaw," according to an email sent to users on Friday evening. Instead, if users want to use OpenClaw with Claude, they'll have to use a "pay-as-you-go option" that will be billed separately from their Claude subscription. With OpenClaw creator Peter Steinberger now employed by OpenAI, Anthropic may also be encouraging subscribers to use more of its own tools, like Claude Cowork, instead. Steinber … Read the full story at The Verge.

I have been self-hosting LLMs since before Llama 3 was a thing, and Gemma 4 is the first model that actually has a 100% success rate in my tool-calling tests. My main use for LLMs is a custom-built voice assistant powered by N8N, with custom tools like websearch and custom MQTT tools in the backend. The big thing is that my household is multilingual: we use English, German, and Japanese. Based on the wake word used, the context, prompt, and tool descriptions switch to that language. My setup has 68 GB of VRAM (double 3090 + 20GB 3080), and I mainly use MoE models to minimize latency. I have previously used everything from the 30B MoEs, Qwen Next, and GPT-OSS to GLM Air, and so far the only model with a 100% success rate across all three languages in tool calling is Gemma 4 26BA4B. submitted by /u/MaruluVR
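The poster doesn't share the harness behind that 100% figure, but a minimal version of a multilingual tool-calling success-rate test could look like the sketch below. The test cases, tool names, and the `call_model` stub are all invented for illustration; a real setup would hit the self-hosted model's API and parse its tool-call output.

```python
# Illustrative harness: per-language prompts paired with the tool the
# model is expected to pick. `call_model` is a stub standing in for a
# real request to the self-hosted model.
CASES = [
    ("en", "What's the weather in Berlin?", "websearch"),
    ("de", "Schalte das Wohnzimmerlicht ein.", "mqtt_publish"),
    ("ja", "東京の天気を調べて。", "websearch"),
]

def call_model(language: str, prompt: str) -> str:
    # Stub: pretend the model always selects the right tool. A real
    # harness would send the prompt and extract the tool-call name.
    expected = {p: t for _, p, t in CASES}
    return expected[prompt]

def success_rate(cases) -> float:
    hits = sum(call_model(lang, prompt) == tool for lang, prompt, tool in cases)
    return hits / len(cases)

print(f"tool-call success rate: {success_rate(CASES):.0%}")
```

Scoring per language (rather than one aggregate number) would show exactly where a model like the ones the poster tried falls short.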

Anthropic has purchased the stealth biotech AI startup Coefficient Bio in a $400 million stock deal, according to The Information and Eric Newcomer.