Welcome back
Curated from 200+ sources across AI & machine learning

In late March 2026, Arm Holdings announced its first in-house AI data center chip, the Arm AGI CPU, extending its platform from licensing IP into production silicon for agentic AI workloads, in partnership with leading customers such as Meta, along with major OEMs and cloud providers. Days later, IBM revealed a collaboration with Arm to build dual-architecture hardware for AI and data-intensive enterprise workloads, signaling that Arm’s move into CPUs is being woven directly into mission-critical,...



Article URL: https://www.orchestra-research.com/perspectives/introducing-new-orchestra Comments URL: https://news.ycombinator.com/item?id=47644877 Points: 2 # Comments: 1

Imai threw 5⅔ scoreless innings with nine strikeouts to earn his first career major league win.

On Thursday, Nvidia stock (NVDA) received a major vote of confidence from DBS analyst Fang Boon Foo. Holding a prestigious 5-star analyst rating, Foo is known for his accurate forecasts in the tech sector. In his latest update, he reiterated his Buy rating and raised his price target for Nvidia to $220, up from his previous forecast of $180. In his report from Thursday, April 2, Foo highlighted that the company is currently benefiting from a “strong AI-led growth cycle.”

Google’s AI Overviews in search results have been linked to consumer fraud risks, with scammers inserting fake phone numbers and misleading details. The issues have prompted Google, part of Alphabet (NasdaqGS:GOOGL), to accelerate anti-spam updates to its AI search features. Concerns focus on the reliability of AI-generated summaries that sit prominently above traditional search results. For Alphabet, AI Overviews are a high-profile feature that sits at the center of its search experience...

Article URL: https://www.science.org/doi/10.1126/sciadv.aef5697 Comments URL: https://news.ycombinator.com/item?id=47646241 Points: 10 # Comments: 5

The tech giant is pursuing new research avenues that the company hopes will put its AI models on par with rivals.

With the midterms right around the corner, the new group is positioned to back candidates who support the AI company's policy agenda.

In addition to Lightcap's new role, OpenAI CMO Kate Rouch will be stepping away from the company to focus on cancer recovery, with a plan to return when her health allows.

The security industry has spent the last year talking about models, copilots, and agents, but a quieter shift is happening one layer below all of that: vendors are lining up around a shared way to describe security data. The Open Cybersecurity Schema Framework (OCSF) is emerging as one of the strongest candidates for that job. It gives vendors, enterprises, and practitioners a common way to represent security events, findings, objects, and context. That means less time rewriting field names and custom parsers, and more time correlating detections, running analytics, and building workflows that work across products. In a market where every security team is stitching together endpoint, identity, cloud, SaaS, and AI telemetry, a common infrastructure long felt like a pipe dream; OCSF now puts it within reach.

OCSF in plain language

OCSF is an open-source framework for cybersecurity schemas. It’s vendor-neutral by design and deliberately agnostic to storage format, data collection,
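To make "a common way to represent security events" concrete, here is a minimal sketch of normalizing a vendor login event into an OCSF-style Authentication record. The numeric IDs follow OCSF's published Authentication class, but the vendor payload and the mapping function are invented for illustration; consult the OCSF schema itself before relying on any field.

```python
# Minimal sketch: normalizing a hypothetical vendor login event into an
# OCSF-style record. The vendor payload and mapping below are illustrative,
# not taken from any real product; check the OCSF schema for the
# authoritative field set.

def to_ocsf_auth_event(vendor_event: dict) -> dict:
    """Map a made-up vendor login payload to an OCSF-like Authentication event."""
    return {
        "class_uid": 3002,            # Authentication class
        "category_uid": 3,            # Identity & Access Management
        "activity_id": 1,             # Logon
        "time": vendor_event["ts"],   # epoch milliseconds
        "severity_id": 1,             # Informational
        "user": {"name": vendor_event["login_name"]},
        "src_endpoint": {"ip": vendor_event["source_ip"]},
        "status_id": 1 if vendor_event["ok"] else 2,  # Success / Failure
        "metadata": {"product": {"vendor_name": vendor_event["vendor"]}},
    }

event = to_ocsf_auth_event({
    "ts": 1767225600000,
    "login_name": "alice",
    "source_ip": "203.0.113.7",
    "ok": True,
    "vendor": "ExampleCorp",
})
print(event["class_uid"], event["user"]["name"])  # 3002 alice
```

The payoff described above is exactly this: once every product emits records shaped like `event`, detections and analytics can key on `class_uid`, `user.name`, and `src_endpoint.ip` without per-vendor parsers.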

Murphy Campbell is at the center of a brewing storm around AI and a broken copyright system. In January, the folk artist discovered several songs on her Spotify profile that did not belong there. They were songs she had recorded, but she'd never uploaded them to Spotify, and something was off about the vocals. She quickly surmised that someone had pulled performances of the songs she posted to YouTube, created AI covers, and uploaded them to streaming platforms under her name. I ran one of the songs, "Four Marys", through two different AI detectors, and both said it was probably AI-generated, supporting her suspicions. Campbell was shocked: "I was kind of under the impression that we had a little b … Read the full story at The Verge.

"This looks like AI." It's a phrase I dread seeing as a writer who dabbles in illustration and amateur photography. In a world where generative AI technology is increasingly adept at mimicking the work of humans, people are naturally skeptical when online platforms refuse to label even obvious AI content. This leads me to one conclusion: maybe we should start labeling human-made text, images, audio, and video with something akin to a universally recognized Fair Trade logo. The machines sure as hell aren't motivated to label their work, but the creators at risk of being displaced most definitely are. Fortunately, I'm not alone in my thinki … Read the full story at The Verge.

Deepseek v4 is expected to launch in the coming weeks and will run exclusively on Huawei chips. China's biggest tech companies have reportedly already ordered hundreds of thousands of units. Nvidia was shut out of early testing. The article Deepseek v4 will reportedly run entirely on Huawei chips in a major win for China's AI independence push appeared first on The Decoder.

Nearly 50% of data center projects delayed as China holds key to power infrastructure.

Recent graduates going into tech face a double bind: fewer entry-level openings and employers demanding experience that’s nearly impossible to get.

Hugging Face (netflix/void-model): https://huggingface.co/netflix/void-model
Project page (GitHub): https://github.com/Netflix/void-model
Demo: https://huggingface.co/spaces/sam-motamed/VOID

I mean, I have 40GB of VRAM and I still cannot fit the entire Unsloth Gemma-4-31B-it-UD-Q8 (35GB) even at 2K context unless I quantize the KV cache to Q4. WTF? For comparison, I can fit the entire UD-Q8 Qwen3.5-27B at full context without KV quantization! If I have to run a Q4 Gemma-4-31B-it-UD with a Q8 KV cache, then I am better off just using Qwen3.5-27B; after all, the latter beats the former in basically all benchmarks. What's your experience with the Gemma-4 models so far?
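For readers wondering why KV-cache quantization changes the fit so dramatically, some back-of-the-envelope arithmetic helps. The layer, head, and dimension numbers below are illustrative placeholders, not Gemma-4's actual architecture; the point is the scaling, not the exact figure.

```python
# Back-of-the-envelope KV-cache sizing. The architecture numbers below are
# illustrative placeholders, NOT the real Gemma-4-31B configuration.

def kv_cache_bytes(layers, kv_heads, head_dim, ctx_len, bytes_per_val):
    # 2x covers the separate K and V tensors; one entry per layer,
    # KV head, context position, and head dimension.
    return 2 * layers * kv_heads * head_dim * ctx_len * bytes_per_val

layers, kv_heads, head_dim = 48, 16, 128   # hypothetical config
ctx = 32_768

fp16 = kv_cache_bytes(layers, kv_heads, head_dim, ctx, 2)    # 16-bit cache
q4   = kv_cache_bytes(layers, kv_heads, head_dim, ctx, 0.5)  # ~4-bit cache

print(f"fp16 KV cache: {fp16 / 2**30:.1f} GiB")  # 12.0 GiB
print(f"q4   KV cache: {q4 / 2**30:.1f} GiB")    # 3.0 GiB
```

Under these made-up numbers, an unquantized cache at long context costs several times what a 4-bit one does, which is why a 35GB model plus full-precision KV can blow past a 40GB budget while the same model with a Q4 cache squeaks in.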

OpenAI has acquired tech talk show TBPN. The show will supposedly remain editorially independent but report to OpenAI's communications department. That's as contradictory as it sounds. So what's OpenAI really after? The article OpenAI decides the best way to fight critical AI coverage is to own a newsroom appeared first on The Decoder.

One of the biggest differences between startup success and failure is knowing when to sell. TBPN's founders selling to OpenAI is a masterclass.
Article URL: https://www.marketwatch.com/press-release/memori-labs-launches-openclaw-plugin-bringing-persistent-ai-memory-to-multi-agent-gateways-c0d32116 Comments URL: https://news.ycombinator.com/item?id=47644665 Points: 2 # Comments: 0

Article URL: https://vibooks.ai/ Comments URL: https://news.ycombinator.com/item?id=47645019 Points: 2 # Comments: 1

Article URL: https://arxiv.org/abs/2305.11430 Comments URL: https://news.ycombinator.com/item?id=47646345 Points: 2 # Comments: 0

It’s about to become more expensive for Claude Code subscribers to use Anthropic’s coding assistant with OpenClaw and other third-party tools.

Plus: The FBI says a recent hack of its wiretap tools poses a national security risk, attackers stole Cisco source code as part of an ongoing supply chain hacking spree, and more.

I have been self-hosting LLMs since before Llama 3 was a thing, and Gemma 4 is the first model that actually has a 100% success rate in my tool-calling tests. My main use for LLMs is a custom-built voice assistant powered by N8N, with custom tools like websearch and custom MQTT tools in the backend. The big thing is that my household is multilingual: we use English, German, and Japanese. Based on the wake word used, the context, prompt, and tool descriptions switch to that language. My setup has 68 GB of VRAM (two 3090s + a 20GB 3080), and I mainly use MoE models to minimize latency. I previously used everything from the 30B MoEs, Qwen Next, and GPT-OSS to GLM Air, and so far the only model with a 100% tool-calling success rate across all three languages is Gemma 4 26BA4B.
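The wake-word-driven language switch described above can be sketched roughly as a per-wake-word profile lookup. The wake words, prompts, and tool descriptions here are invented placeholders, not the poster's actual N8N configuration.

```python
# Rough sketch of wake-word -> language dispatch for a voice assistant.
# Wake words, prompts, and tool descriptions are invented placeholders,
# not the actual N8N setup described in the post.

PROFILES = {
    "computer": {  # hypothetical English wake word
        "lang": "en",
        "system_prompt": "You are a helpful home assistant. Answer in English.",
        "tools": {"websearch": "Search the web for current information."},
    },
    "rechner": {   # hypothetical German wake word
        "lang": "de",
        "system_prompt": "Du bist ein hilfreicher Assistent. Antworte auf Deutsch.",
        "tools": {"websearch": "Durchsuche das Web nach aktuellen Informationen."},
    },
}

def build_request(wake_word: str, user_text: str) -> dict:
    profile = PROFILES[wake_word.lower()]
    # The same tools exist in every language; only their descriptions
    # (which steer the model's tool calls) change per profile.
    return {
        "lang": profile["lang"],
        "messages": [
            {"role": "system", "content": profile["system_prompt"]},
            {"role": "user", "content": user_text},
        ],
        "tools": profile["tools"],
    }

req = build_request("Rechner", "Wie ist das Wetter?")
print(req["lang"])  # de
```

Keeping the tool set identical across profiles while swapping only the natural-language descriptions is what lets a single backend serve all three languages from one workflow.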


Big moves
- OpenAI acquired TBPN, the founder-led tech/business talk show. Unusual media play — covered by TechCrunch, Ars Technica, and WSJ.
- Google released Gemma 4 under Apache 2.0. The license shift from their previous terms may matter more than the benchmarks. Direct shot at Chinese open-weights models.
- Microsoft unveiled three homegrown AI models for speech and image generation — clearly reducing dependence on OpenAI.

Security
- Claude Code source code leaked, triggering enterprise security concerns. VentureBeat published a 5-action checklist for security teams.
- Axios npm package was trojanized in a supply-chain attack. If your team uses it (most do), worth checking immediately.
- Granola notes are viewable by anyone with a link by default. PSA if you use it.

Product & research
- Google added Veo, Lyria, and directable AI avatars to Google Vids.
- Arcee launched Trinity-Large-Thinking — open source, U.S.-made, downloadable enterprise model.
- AI chatbots are now being use

Are you a subscriber to Anthropic's Claude Pro ($20 monthly) or Max ($100-$200 monthly) plans who uses its Claude AI models and products to power third-party AI agents like OpenClaw? If so, you're in for an unpleasant surprise. Anthropic announced a few hours ago that starting tomorrow, Saturday, April 4, 2026, at 12 pm PT/3 pm ET, it will no longer be possible for those Claude subscribers to use their subscriptions to hook Anthropic's Claude models up to third-party agentic tools, citing the strain such usage was placing on Anthropic's compute and engineering resources and a desire to serve a large number of users reliably. "We’ve been working hard to meet the increase in demand for Claude, and our subscriptions weren't built for the usage patterns of these third-party tools," wrote Boris Cherny, Head of Claude Code at Anthropic, in a post on X. "Capacity is a resource we manage thoughtfully and we are prioritizing our customers using our products and API." The company also reportedly