Welcome back
Curated from 200+ sources across AI & machine learning

arXiv:2603.25891v1 Announce Type: new Abstract: Pre-trained vision-language models (VLMs) excel in multimodal tasks, commonly encoding images as embedding vectors for storage in databases and retrieval via approximate nearest neighbor search (ANNS). However, these models struggle with compositional queries and out-of-distribution (OOD) image-text pairs. Inspired by human cognition's ability to learn from minimal examples, we address this performance gap through few-shot learning approaches specifically designed for image retrieval. We introduce the Few-Shot Text-to-Image Retrieval (FSIR) task and its accompanying benchmark dataset, FSIR-BD, the first to explicitly target image retrieval from text accompanied by reference examples, focusing on challenging compositional and OOD queries. The compositional part is divided into urban scenes and nature species, each in specific situations or with distinctive features. FSIR-BD contains 38,353 images and 303 queries, with 82% comprising the
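The retrieval setup the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's method: exact cosine search over a toy in-memory "database" stands in for a real ANNS index, and all embedding vectors are invented for the example (a real VLM would produce them).

```python
# Minimal sketch of text-to-image retrieval over stored embeddings.
# Assumes a VLM has already encoded images and the query text into a
# shared embedding space; the vectors here are made up for illustration.
import numpy as np

def cosine_top_k(query_vec, image_vecs, k=2):
    """Return indices of the k images most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    m = image_vecs / np.linalg.norm(image_vecs, axis=1, keepdims=True)
    scores = m @ q                        # cosine similarity per image
    return np.argsort(scores)[::-1][:k]  # best-scoring indices first

# Toy "database" of four image embeddings and one text-query embedding.
image_db = np.array([[0.9, 0.1, 0.0],
                     [0.1, 0.9, 0.0],
                     [0.8, 0.2, 0.1],
                     [0.0, 0.1, 0.9]])
query = np.array([1.0, 0.2, 0.0])
print(cosine_top_k(query, image_db))  # → [0 2]
```

In production this exhaustive scan is replaced by an approximate index (the ANNS the abstract mentions) so the search stays fast over millions of stored vectors.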


People don’t like that they can’t identify AI music. | Image: Cath Virginia / The Verge

AI has touched every part of the music industry, from sample sourcing and demo recording to serving up digital liner notes and building playlists. There are technical and legal challenges, fierce ethical debates, and fears that the slop will simply crush working musicians through sheer volume. Is it art or just an output? What exactly is “real”? Whether it’s a new model or a new lawsuit, we’re covering it all to make sure you don’t miss any major developments. So follow along as we dig into the latest in AI “music.”

Suno leans into customization with v5.5
The music industry has embraced a “don’t ask, don’t tell” policy about AI.
North Carolina man pleads guilty to AI music streaming fraud.
Apple Music adds optional labels for AI songs and visuals
Qobuz is automatically detecting and labeling AI music now, too.
This Chainsmokers-approved AI music producer is j

Article URL: https://news.cornell.edu/stories/2026/03/ai-deck-assessing-impact-mlbs-new-ball-strike-system Comments URL: https://news.ycombinator.com/item?id=47567041 Points: 1 # Comments: 0

Article URL: https://www.patreon.com/cw/BobDylan180 Comments URL: https://news.ycombinator.com/item?id=47566008 Points: 2 # Comments: 1

Last week, one of our product managers (PMs) built and shipped a feature. Not spec'd it. Not filed a ticket for it. Built it, tested it, and shipped it to production. In a day. A few days earlier, our designer noticed that the visual appearance of our IDE plugins had drifted from the design system. In the old world, that meant screenshots, a JIRA ticket, a conversation to explain the intent, and a sprint slot. Instead, he opened an agent, adjusted the layout himself, experimented, iterated, and tuned in real time, then pushed the fix. The person with the strongest design intuition fixed the design directly. No translation layer required. None of this is new in theory. Vibe coding opened the gates of software creation to millions. That was the aspiration. When I shared the data on how our engineers doubled throughput, shifted from coding to validation, and brought design upfront for rapid experimentation, it was still an engineering story. What changed is that the theory became practice. Here's

Geno Auriemma takes aim at the NCAA over the women's double-regional format in March Madness AP News

Bluesky’s new app Attie uses AI to help people build custom feeds on the open social networking protocol atproto.

While there’s been plenty of debate about AI sycophancy, a new study by Stanford computer scientists attempts to measure how harmful that tendency might be.

Many people have tried AI tools and walked away unimpressed. I get it — many demos promise magic, but in practice, the results can feel underwhelming. That’s why I want to write this not as a futurist prediction, but from lived experience. Over the past six months, I turned my engineering organization AI-first. I’ve shared before about the system behind that transformation — how we built the workflows, the metrics, and the guardrails. Today, I want to zoom out from the mechanics and talk about what I’ve learned from that experience — about where our profession is heading when software development itself turns inside out. Before I do, a couple of numbers to illustrate the scale of change. Subjectively, it feels like we are moving twice as fast. Objectively, here’s how the throughput evolved. Our total engineering team headcount drifted from 36 at the beginning of the year down to 30. So you get ~170% throughput on ~80% headcount, which matches the subjective ~2x. Zooming in, I picked a cou
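The per-capita claim above checks out arithmetically. A quick sketch, using only the numbers stated in the post (the underlying throughput metric is theirs, not reproduced here):

```python
# Quick arithmetic check of the throughput-per-head claim in the post.
throughput_ratio = 1.70    # ~170% of the old total throughput
headcount_ratio = 30 / 36  # team went from 36 down to 30 (~83%)

# Throughput per engineer relative to the start of the year.
per_capita_speedup = throughput_ratio / headcount_ratio
print(round(per_capita_speedup, 2))  # → 2.04, matching the subjective ~2x
```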

All but two of Musk's 11 xAI co-founders departed before this week.

Slop yourself. | Image: Suno Suno just released one of its biggest updates yet with v5.5 of its AI music model. Where previous updates focused mostly on improving fidelity and creating more natural vocals, v5.5 is about giving users more control. It includes three new features: Voices, My Taste, and Custom Models. In the release notes, Suno says that Voices is its most requested feature. It lets users train the vocal model on their own voice. They can upload clean a cappellas, finished tracks with backing music, or just sing directly into the mic on their phone or laptop. The cleaner and higher quality the recording, the less data is required. And to prevent someone fro … Read the full story at The Verge.

Samsung, like many companies using generative AI in their advertising, hasn’t placed an AI label on several videos shared through its TikTok accounts, and the fine print doesn’t always contain the answers. | Image by Samsung I've been struggling to tell whether the ads appearing in my TikTok feeds have been made with generative AI tools. As someone who spends a great deal of time scrutinizing images and videos for the usual "tells" that something was synthetically generated, some of the promotions I've seen have definitely sparked suspicion. For several weeks, I didn't see any examples with the AI disclosure required by TikTok's advertising policies, however, so I had no way of knowing for sure. What irks me is that someone knows for sure if the content is AI-generated. They're just not telling the rest of us. And if companies that claim to support AI-labelling … Read the full story at The Verge.

Rick Chorney was working long days but still had emails at night. "I went a little crazy," he said. "There came a day where I was just like, 'I am done.'"

AI isn’t the problem, says leadership expert Leena Rinne; what workplaces are missing is social connection and emotional intelligence.
The latest app from the team behind Bluesky is Attie, an AI assistant that lets you build your own algorithm. At the Atmosphere conference, Bluesky's former CEO, Jay Graber, and CTO Paul Frazee unveiled Attie, which is powered by Anthropic's Claude and built on top of Bluesky's underlying AT Protocol (atproto). Attie allows users to create custom feeds using natural language. For example, you could ask for "posts about folklore, mythology, and traditional music, especially Celtic traditions." To start, these custom feeds will be confined to a standalone Attie app. But the plan is to make them available in Bluesky and other atproto apps. … Read the full story at The Verge.

Article URL: https://www.gendiscover.com/blog/what-is-llm-advertising Comments URL: https://news.ycombinator.com/item?id=47567938 Points: 2 # Comments: 0

Article URL: https://github.com/A561988/bitterbot-desktop Comments URL: https://news.ycombinator.com/item?id=47568393 Points: 1 # Comments: 1

Article URL: https://pypi.org/project/safebrowse-client/ Comments URL: https://news.ycombinator.com/item?id=47568778 Points: 1 # Comments: 0
arXiv:2603.25764v1 Announce Type: cross Abstract: As LLM-based agents are deployed in production systems, understanding their behavioral consistency (whether they produce similar action sequences when given identical tasks) becomes critical for reliability. We study consistency in the context of SWE-bench, a challenging software engineering benchmark requiring complex, multi-step reasoning. Comparing Claude 4.5 Sonnet, GPT-5, and Llama-3.1-70B across 50 runs each (10 tasks × 5 runs), we find that across models, higher consistency aligns with higher accuracy: Claude achieves the lowest variance (CV: 15.2%) and highest accuracy (58%), GPT-5 is intermediate (CV: 32.2%, accuracy: 32%), and Llama shows the highest variance (CV: 47.0%) with lowest accuracy (4%). However, within a model, consistency can amplify both correct and incorrect interpretations. Our analysis reveals a critical nuance: consistency amplifies outcomes rather than guaranteeing correctness. 71% of
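The consistency metric in the abstract, CV, is the coefficient of variation: standard deviation divided by mean over repeated runs. A minimal sketch, with run scores invented for illustration (not the paper's data):

```python
# Sketch of the consistency metric the abstract reports: the coefficient
# of variation (CV = std / mean) over repeated runs of the same task set.
# The per-run scores below are hypothetical, not the paper's measurements.
import statistics

def coefficient_of_variation(scores):
    """CV as a percentage; lower means more run-to-run consistency."""
    return 100 * statistics.pstdev(scores) / statistics.mean(scores)

stable_runs = [58, 60, 57, 59, 56]  # hypothetical accuracy per run
noisy_runs = [10, 40, 25, 55, 20]

print(round(coefficient_of_variation(stable_runs), 1))  # → 2.4
print(round(coefficient_of_variation(noisy_runs), 1))   # → 52.7
```

Note that a low CV only says the agent behaves the same way each run; as the abstract stresses, it can be consistently wrong just as easily as consistently right.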

Is this just normal corporate strategy, or are we about to see a broader pullback on AI-generated video?

On Tuesday morning, everything was business as usual at OpenAI. By the end of the day, the company had announced that it would scrap its video-generation app, Sora, and reverse plans for video generation inside ChatGPT; it would wind down a $1 billion Disney deal; it would shuffle the role of a high-level executive; and it would raise an additional $10 billion from investors, adding up to more than $120 billion total for its latest funding round. OpenAI is now in a frenzy to turn a profit, or at least lose less money. Since its launch, Sora seems to have taken up a massive amount of compute without the financial return to justify it. Indus … Read the full story at The Verge.

In response to “2023 Or, Why I am Not a Doomer” by Dean W. Ball. Dean Ball is a pretty big voice in AI policy – over 19k subscribers on his newsletter, and a former Senior Policy Advisor for AI at the Trump White House – so why does he disagree that AI poses an existential danger to humanity? In short, he holds the common view that superintelligence (ASI) simply won’t be that powerful. I strongly disagree, and I think he makes a couple of invalid leaps to arrive there. Better Than Us Is Enough His main flawed argument is that he implies AI must be omnipotent and omniscient to wipe us out and then explains why that won’t be the case. He states: “one common assumption… among many people in ‘the AI safety community’ is that artificial superintelligence will be able to ‘do anything.’” He then argues that “intelligence is neither omniscience nor omnipotence,” and that even a misaligned AI with “no [..] safeguards to hinder it” would “still fail” because taking over the world “involves too m