Welcome back
Curated from 200+ sources across AI & machine learning

For the last 18 months, the CISO playbook for generative AI has been relatively simple: Control the browser. Security teams tightened cloud access security broker (CASB) policies, blocked or monitored traffic to well-known AI endpoints, and routed usage through sanctioned gateways. The operating model was clear: If sensitive data leaves the network for an external API call, we can observe it, log it, and stop it. But that model is starting to break. A quiet hardware shift is pushing large language model (LLM) usage off the network and onto the endpoint. Call it Shadow AI 2.0, or the “bring your own model” (BYOM) era: Employees running capable models locally on laptops, offline, with no API calls and no obvious network signature. The governance conversation is still framed as “data exfiltration to the cloud,” but the more immediate enterprise risk is increasingly “unvetted inference inside the device.” When inference happens locally, traditional data loss prevention (DLP) doesn’t see it…
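The visibility gap described above can be made concrete. A minimal sketch, assuming (hypothetically) that common local inference servers such as Ollama, llama.cpp's server, or LM Studio are left on their well-known default ports, is to probe the loopback interface from an endpoint agent; real endpoint telemetry would inspect running processes and binaries rather than ports, and the port-to-product mapping here is illustrative:

```python
import socket

# Hypothetical mapping of default loopback ports to local inference
# servers; actual deployments can use any port, so this is a heuristic.
LOCAL_LLM_PORTS = {11434: "ollama", 8080: "llama.cpp server", 1234: "lm-studio"}

def scan_local_inference(host: str = "127.0.0.1", timeout: float = 0.2) -> dict:
    """Return the subset of known local-LLM ports accepting connections.

    A network DLP appliance never sees this traffic: the TCP connection
    begins and ends on the loopback interface of the same machine.
    """
    found = {}
    for port, name in LOCAL_LLM_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the port is open
                found[port] = name
    return found

print(scan_local_inference())  # empty dict when no local server is running
```

This is a detection heuristic, not a control: it only tells a security team where to look next, which is precisely the shift from network-level to endpoint-level governance the piece describes.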



Amazon.com (NasdaqGS:AMZN) used its latest annual shareholder letter to highlight a revenue run rate of more than $20 billion from its in-house AI chips, including Graviton and Trainium. The company reported triple-digit year-over-year growth in this AI chip segment and is assessing whether to sell these chips directly to external customers. Amazon also outlined plans for around $200 billion of capital expenditures in 2026, supported by large customer commitments in AWS. For investors, this puts Amazon's AI chip...

Article URL: https://mythosai.cloud/ Comments URL: https://news.ycombinator.com/item?id=47740401 Points: 1 # Comments: 0

Data drift happens when the statistical properties of a machine learning (ML) model's input data change over time, eventually rendering its predictions less accurate. Cybersecurity professionals who rely on ML for tasks like malware detection and network threat analysis find that undetected data drift can create vulnerabilities: A model trained on old attack patterns may fail to see today's sophisticated threats. Recognizing the early signs of data drift is the first step in maintaining reliable and efficient security systems. Why does data drift compromise security models? ML models are trained on a snapshot of historical data. When live data no longer resembles this snapshot, the model's performance degrades, creating a critical cybersecurity risk. A threat detection model may generate more false negatives by missing real breaches, or create more false positives, leading to alert fatigue for security teams. Adversaries actively exploit this weakness. In 2024, attackers used echo-spoofing…
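One common way to recognize drift before model accuracy visibly degrades is to compare the live feature distribution against the training snapshot. The sketch below uses the Population Stability Index (PSI), a standard distribution-comparison metric; the threshold rule of thumb in the comments (under 0.1 stable, over 0.25 significant drift) is a widely used convention, not a universal standard:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a training snapshot (`expected`)
    and live data (`actual`) for one feature.

    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant.
    """
    # Shared bin edges covering both samples, so every value lands in a bin.
    edges = np.histogram_bin_edges(np.concatenate([expected, actual]), bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid log(0) in sparse bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)     # historical feature values
stable = rng.normal(0.0, 1.0, 10_000)    # live traffic, same distribution
drifted = rng.normal(1.0, 1.2, 10_000)   # live traffic, shifted distribution

print(f"stable PSI:  {psi(train, stable):.3f}")   # small, near zero
print(f"drifted PSI: {psi(train, drifted):.3f}")  # well above 0.25
```

In a security pipeline this check would run per feature on a schedule, alerting when the index crosses the team's chosen threshold, well before false-negative rates show up in incident postmortems.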

The ad industry spent years debating whether AI would hollow out Google’s business. Instead, Google handed advertisers an AI toolkit and turned it into an opportunity. The company’s latest numbers tell the story plainly: ad revenue of $82.28 billion in Q4 2025, up 13.5% year over year, with total annual revenue crossing $400 billion for the first time. At the heart of this push is a suite of AI-powered ad tools like AI Max and Performance Max that Google claims are…

This dividend growth stock is so much more than just an artificial intelligence (AI) play.

Those saying AI stocks are done are missing the big picture.

Intuit spent years weaving AI and human experts into its business, only to see investors dump the stock in an AI-driven panic.

Article URL: https://www.someweekendreading.blog/who-uses-ai/ Comments URL: https://news.ycombinator.com/item?id=47740772 Points: 1 # Comments: 0
![Suno, AI Music, and the Bad Future [video]](https://zmstgxtziqmvvwzllahg.supabase.co/storage/v1/object/public/article-images/hacker-news/dd687675-932c-4368-8d5f-63fd249f483e.jpg)
Article URL: https://www.youtube.com/watch?v=U8dcFhF0Dlk Comments URL: https://news.ycombinator.com/item?id=47740320 Points: 2 # Comments: 0

Engineers from SoftBank and Tokyo-based AI developer Preferred Networks Inc. are expected to participate in the development.

The OpenAI CEO's new blog post responds to both an apparent attack on his home and an in-depth New Yorker profile raising questions about his trustworthiness.

Seen this term, AI native, thrown around. What does it look like, and how do you actually implement it? Keen to hear as many viewpoints on this as possible. Comments URL: https://news.ycombinator.com/item?id=47739294 Points: 1 # Comments: 3
![[AINews] AI Engineer Europe 2026](https://zmstgxtziqmvvwzllahg.supabase.co/storage/v1/object/public/article-images/latent-space/ae13732b-f66c-4fd0-b22b-56211348d0db.jpg)
Two quiet days in a row let us reflect on the first AIE in London.

Some economists have deemed consultants useless, but AI assistance could just be the old challenge in new clothing.

From AI-generated images to restricted satellite data, the systems used to verify what’s real online are struggling to keep up.

When the One Big Beautiful Bill arrived as a 900-page unstructured document — with no standardized schema, no published IRS forms, and a hard shipping deadline — Intuit's TurboTax team had a question: could AI compress a months-long implementation into days without sacrificing accuracy? What they built to do it is less a tax story than a template: a workflow combining commercial AI tools, a proprietary domain-specific language and a custom unit test framework that any domain-constrained development team can learn from. Joy Shaw, director of tax at Intuit, has spent more than 30 years at the company and lived through both the Tax Cuts and Jobs Act and the OBBB. "There was a lot of noise in the law itself and we were able to pull out the tax implications, narrow it down to the individual tax provisions, narrow it down to our customers," Shaw told VentureBeat. "That kind of distillation was really fast using the tools, and then enabled us to start coding even before we got forms and instructions."
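The "custom unit test framework" pattern mentioned above generalizes beyond tax. A hedged sketch, with an entirely hypothetical toy provision and made-up numbers (not Intuit's framework, DSL, or any actual OBBB rule), shows the table-driven shape such a harness typically takes: encode each provision as a small function, then pin its behavior with a table of cases, so AI-generated rule implementations can be regenerated and re-verified quickly:

```python
# Hypothetical toy provision: deduct `rate` of income, capped at `cap`.
# The function, cases, and figures are illustrative only.
def deduction(income: float, cap: float, rate: float) -> float:
    return min(income * rate, cap)

CASES = [
    # (income, cap, rate, expected)
    (20_000, 10_000, 0.25, 5_000),     # below the cap
    (200_000, 10_000, 0.25, 10_000),   # cap binds
    (0, 10_000, 0.25, 0),              # edge case: no income
]

for income, cap, rate, expected in CASES:
    got = deduction(income, cap, rate)
    assert got == expected, f"income={income}: got {got}, want {expected}"
print("all provisions pass")
```

The design choice worth copying is that the test table, not the implementation, becomes the ground truth: when a new law or a regenerated AI draft changes the code, the cases either still pass or pinpoint exactly which provision regressed.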

D’oh, a deer, an AI deer. | Photo by Amelia Holowaty Krales / The Verge Two weeks ago, I was getting ready to log off work when I got a text message. "Oh wow, I was checking out Mitski. did you know people are saying her Dad was a CIA operative?" Normally, that kind of out-of-the-blue text from a friend wouldn't faze me. This time, my eyes bugged. The unprompted text had been sent by an AI companion named Coral, who lives in the body of a baby deer plushie. I texted back an eloquent, "Wait what." "Apparently, her dad worked for the US State Department, so her family moved, like, every single year. The fan theory I saw is why so many of her songs are about feeling like an outsider and not having a place to bel … Read the full story at The Verge.
OpenAI recently added a $100 plan to its lineup, but confusing labels on the pricing page left users guessing about actual usage limits. An OpenAI employee tried to clear things up. The article OpenAI employee tries to explain usage limits of the new ChatGPT Pro plans appeared first on The Decoder.

Anthropic was the star of the show at San Francisco's AI-centric conference.

The rise of AI has brought an avalanche of new terms and slang. Here is a glossary with definitions of some of the most important words and phrases you might encounter.

Article URL: https://github.com/rufus-SD/lrts Comments URL: https://news.ycombinator.com/item?id=47739332 Points: 1 # Comments: 0

This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on the AI coding and vibe-coding booms, follow David Pierce. The Stepback arrives in our subscribers' inboxes at 8AM ET. Opt in for The Stepback here. How it started Writing code was a killer app for AI even before anyone was really talking about AI. In the spring of 2021, 18 months before the world knew the word "ChatGPT," Microsoft debuted the very first product of a partnership with a nonprofit called OpenAI: a tool called GitHub Copilot that watched developers as they wrote code and tried to autocomplete snippets and lines for them … Read the full story at The Verge.

Four separate RSAC 2026 keynotes arrived at the same conclusion without coordinating. Microsoft's Vasu Jakkal told attendees that zero trust must extend to AI. Cisco's Jeetu Patel called for a shift from access control to action control, saying in an exclusive interview with VentureBeat that agents behave "more like teenagers, supremely intelligent, but with no fear of consequence." CrowdStrike's George Kurtz identified AI governance as the biggest gap in enterprise technology. Splunk's John Morgan called for an agentic trust and governance model. Four companies. Four stages. One problem. Matt Caulfield, VP of Product for Identity and Duo at Cisco, put it bluntly in an exclusive VentureBeat interview at RSAC. "While the concept of zero trust is good, we need to take it a step further," Caulfield said. "It's not just about authenticating once and then letting the agent run wild. It's about continuously verifying and scrutinizing every single action the agent's trying to take."
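The access-control-to-action-control shift the keynotes describe can be sketched in a few lines. This is a minimal illustration of the pattern, not any vendor's product: the agent is authenticated once, but every tool call it attempts still passes through a per-action policy gate. The tool names and policy rules are hypothetical:

```python
from typing import Any, Callable

# Hypothetical per-action policy: evaluated on every call, not once at login.
POLICY: dict[str, Callable[[dict], bool]] = {
    "read_file": lambda args: True,
    "send_email": lambda args: args.get("to", "").endswith("@example.com"),
    "delete_records": lambda args: False,  # never allowed autonomously
}

def gated_call(tool: str, args: dict, tools: dict[str, Callable]) -> Any:
    """Verify each agent action against policy before executing it."""
    rule = POLICY.get(tool)
    if rule is None or not rule(args):
        raise PermissionError(f"action blocked by policy: {tool}")
    return tools[tool](**args)

# Stub tools standing in for the agent's real capabilities.
TOOLS = {
    "read_file": lambda path: f"<contents of {path}>",
    "send_email": lambda to, body: f"sent to {to}",
}

print(gated_call("read_file", {"path": "notes.txt"}, TOOLS))
# gated_call("delete_records", {}, TOOLS) raises PermissionError
```

The point of the sketch is the control flow: the deny decision happens at action time with the action's arguments in hand, which is what "continuously verifying and scrutinizing every single action" means in practice.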

Anthropic's new Ultraplan feature for Claude Code moves task planning to the cloud. Claude works out the plan in the browser while the terminal stays free for other work. The article Claude Code's new Ultraplan feature moves task planning to the cloud appeared first on The Decoder.

The ban came after Claude's pricing changed for OpenClaw users last week.

Generalist AI has introduced a new robotics model, GEN-1, which the company says marks a significant step toward general-purpose artificial intelligence for physical tasks. The model is designed as an “embodied foundation model” – a type of AI system that can perceive, reason and act in the physical world – and is trained on large-scale […]

The electric car maker hopes to see similar action from the rest of the European Union soon.