Welcome back
Curated from 200+ sources across AI & machine learning

Article: https://github.com/josephgoksu/taskwing (HN discussion: https://news.ycombinator.com/item?id=47561996)



Article: https://lzon.ca/posts/other/thoughts-ai-era/ (HN discussion: https://news.ycombinator.com/item?id=47557185)

Article: https://websites2know.com/best-ai-hr-software/ (HN discussion: https://news.ycombinator.com/item?id=47561271)

Bluesky’s new app Attie uses AI to help people build custom feeds on atproto, the open social networking protocol.

While there’s been plenty of debate about AI sycophancy, a new study by Stanford computer scientists attempts to measure how harmful that tendency might be.

Suno just released one of its biggest updates yet with v5.5 of its AI music model. Where previous updates focused mostly on improving fidelity and creating more natural vocals, v5.5 is about giving users more control. It includes three new features: Voices, My Taste, and Custom Models. In the release notes, Suno says Voices is its most requested feature: it lets users train the vocal model on their own voice. They can upload clean a cappellas, finished tracks with backing music, or just sing directly into the mic on their phone or laptop. The cleaner and higher quality the recording, the less data is required. And to prevent someone fro … Read the full story at The Verge.

Things are moving fast, and competitors have offered something similar for a while.

Iran says it will ‘facilitate and expedite’ humanitarian aid through the Strait of Hormuz (AP News)

Many people have tried AI tools and walked away unimpressed. I get it: many demos promise magic, but in practice, the results can feel underwhelming. That’s why I want to write this not as a futurist prediction, but from lived experience. Over the past six months, I turned my engineering organization AI-first. I’ve written before about the system behind that transformation: how we built the workflows, the metrics, and the guardrails. Today, I want to zoom out from the mechanics and talk about what I’ve learned from that experience, about where our profession is heading when software development itself turns inside out.

Before I do, a couple of numbers to illustrate the scale of change. Subjectively, it feels that we are moving twice as fast. Objectively, here’s how the throughput evolved. Our total engineering team headcount fell from 36 at the beginning of the year to 30. So you get ~170% throughput on ~80% headcount, which matches the subjective ~2x. Zooming in, I picked a cou
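The arithmetic behind that “~2x” claim is easy to verify. A quick back-of-envelope check, using only the two figures the author gives (36 → 30 headcount, ~170% total throughput):

```python
# Back-of-envelope check of the numbers in the post above.
# Only the endpoints are given; everything else follows from them.
headcount_start = 36
headcount_end = 30
throughput_ratio = 1.70  # ~170% of the old total throughput

headcount_ratio = headcount_end / headcount_start        # ~0.83, i.e. "~80% headcount"
per_engineer_gain = throughput_ratio / headcount_ratio   # ~2.04, the subjective "~2x"

print(f"headcount ratio: {headcount_ratio:.2f}")
print(f"per-engineer throughput multiplier: {per_engineer_gain:.2f}")
```

So 170% of the output from 83% of the people works out to roughly 2.04x throughput per engineer, consistent with the “twice as fast” feeling.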

All but two of Musk's 11 xAI co-founders departed before this week.

Microsoft takes over a Texas AI data center expansion after OpenAI backs away (AP News)

A policy change announced by NeurIPS, the world’s leading AI research conference, drew widespread backlash from Chinese researchers this week and then was quickly reversed.


Samsung, like many companies using generative AI in their advertising, hasn’t placed an AI label on several videos shared through its TikTok accounts, and the fine print doesn’t always contain the answers. I've been struggling to tell whether the ads appearing in my TikTok feeds have been made with generative AI tools. As someone who spends a great deal of time scrutinizing images and videos for the usual "tells" that something was synthetically generated, some of the promotions I've seen have definitely sparked suspicion. For several weeks, I didn't see any examples with the AI disclosure required by TikTok's advertising policies, however, so I had no way of knowing for sure. What irks me is that someone knows for sure if the content is AI-generated. They're just not telling the rest of us. And if companies that claim to support AI-labelling … Read the full story at The Verge.

Massive new data centers are the physical foundation for tech companies’ hopes and dreams for AI. But the rush to expand warehouses full of energy-hungry servers has also kicked up fights across the world over their impact on power grids, utility bills, nearby communities, and the environment. From audacious plans to launch data centers into space to the latest legal battles over pollution, The Verge has the biggest news and reporting surrounding data centers.

- Senators are pushing to find out how much electricity data centers actually use
- How the spiraling Iran conflict could affect data centers and electricity costs
- Seven tech giants signed Trump’s pledge to keep electricity costs from spiking around data centers
- Trump claims tech companies will sign deals next week to pay for their own power supply
- Anthropic says it’ll try to keep its data centers from raising electricity costs
- How an ‘icepocalypse’ raises more questions about Meta’s biggest data ce

AI isn’t the problem, says leadership expert Leena Rinne; what matters instead is social connection and emotional intelligence.

A bombshell investigation into Meta's AI training pipeline found overseas contractors watching footage from the smart glasses.

“I really have disconnected from the technology quite a bit,” Wozniak said in a recent CNN interview.
Article: https://github.com/kreuzberg-dev/liter-llm (HN discussion: https://news.ycombinator.com/item?id=47561123)

Learn how STADLER uses ChatGPT to transform knowledge work, saving time and accelerating productivity across 650 employees.

Estimates for total Claude consumer users are all over the map (we've seen figures ranging from 18 million to 30 million). Anthropic hasn't disclosed this data, but a spokesperson did tell TechCrunch that Claude paid subscriptions have more than doubled this year.

On Tuesday morning, everything was business as usual at OpenAI. By the end of the day, the company had announced that it would scrap its video-generation app, Sora, and reverse plans for video generation inside ChatGPT; it would wind down a $1 billion Disney deal; it would shuffle the role of a high-level executive; and it would raise an additional $10 billion from investors, adding up to more than $120 billion total for its latest funding round. OpenAI is now in a frenzy to turn a profit, or at least lose less money. Since its launch, Sora seems to have taken up a massive amount of compute without the financial return to justify it. Indus … Read the full story at The Verge.

The major technical advances this week were in agentic coding, as covered yesterday. The major non-DoW political and alignment developments will be covered tomorrow.

The DoW vs. Anthropic trial continues. Judge Lin was very not happy with the government’s case, which makes sense since the government has no case and was arguing a variety of Obvious Nonsense. The question now is how much preliminary relief Anthropic is entitled to. Assuming we find that out this week, I plan to cover that on Monday.

Beyond that, we have new iterations of questions we’ve dealt with time and again. The debate on jobs gets another cycle. Anthropic asked over 80,000 people what they think about AI, and has published those findings, nothing shocking but interesting throughout. OpenAI is raising money again, although the terms raise some eyebrows. Elon Musk is announcing a grand chip project, but it was already kind of announced and it’s not like we should believe him when he says such things. I used this

If you're running Qwen-3B or Llama-8B locally, you know the problem: every memory system (Mem0, Letta, Graphiti) calls your LLM *again* for every memory operation. On hardware that's already maxed out running one model, that kills everything.

LCME gives 3B-8B models long-term memory at 12ms retrieval / 28ms ingest, without calling any LLM. How: 10 tiny neural networks (303K params total, CPU, <1ms) replace the LLM calls. They handle importance scoring, emotion tagging, retrieval ranking, and contradiction detection. They start rule-based and learn from usage over time.

Repo: https://github.com/gschaidergabriel/lcme
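The “start rule-based, learn from usage” pattern is worth unpacking. A minimal sketch of what an importance scorer in that style could look like, assuming hand-crafted features with online logistic-regression updates from retrieval feedback. This is not the repo’s actual code; all names below are hypothetical, and LCME’s real scorers are small trained networks:

```python
# Hypothetical sketch of a rule-based-then-learned importance scorer,
# in the spirit of LCME's "no LLM call" memory operations. Not the repo's code.
import math

class ImportanceScorer:
    """Scores a memory's importance without any LLM call.

    Starts from hand-written feature rules (the initial weights), then
    nudges those weights whenever usage feedback says a stored memory
    was, or was not, worth retrieving.
    """
    def __init__(self):
        # Rule-based priors: one weight per hand-crafted feature.
        self.weights = {"length": 0.2, "has_entity": 1.0, "is_question": 0.5}
        self.bias = -0.5
        self.lr = 0.1

    def features(self, text: str) -> dict:
        words = text.split()
        return {
            "length": min(len(text) / 200.0, 1.0),  # longer -> slightly more important
            "has_entity": 1.0 if any(w[0].isupper() for w in words[1:]) else 0.0,
            "is_question": 1.0 if text.rstrip().endswith("?") else 0.0,
        }

    def score(self, text: str) -> float:
        z = self.bias + sum(self.weights[k] * v for k, v in self.features(text).items())
        return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> importance in [0, 1]

    def update(self, text: str, was_useful: bool) -> None:
        """One online logistic-regression step from retrieval feedback."""
        err = (1.0 if was_useful else 0.0) - self.score(text)
        for k, v in self.features(text).items():
            self.weights[k] += self.lr * err * v
        self.bias += self.lr * err

scorer = ImportanceScorer()
print(round(scorer.score("My API key expires on Friday"), 3))
```

The appeal of this shape is exactly what the post claims: a forward pass here is a handful of multiplications, so it runs in well under a millisecond on CPU, versus a full LLM round-trip per memory operation.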

In response to “2023 Or, Why I am Not a Doomer” by Dean W. Ball. Dean Ball is a pretty big voice in AI policy – over 19k subscribers on his newsletter, and a former Senior Policy Advisor for AI at the Trump White House – so why does he disagree that AI poses an existential danger to humanity? In short, he holds the common view that superintelligence (ASI) simply won’t be that powerful. I strongly disagree, and I think he makes a couple of invalid leaps to arrive there.

Better Than Us Is Enough

His main flawed argument is that he implies AI must be omnipotent and omniscient to wipe us out and then explains why that won’t be the case. He states: “one common assumption… among many people in ‘the AI safety community’ is that artificial superintelligence will be able to ‘do anything.’” He then argues that “intelligence is neither omniscience nor omnipotence,” and that even a misaligned AI with “no [..] safeguards to hinder it” would “still fail” because taking over the world “involves too m