Welcome back
Curated from 200+ sources across AI & machine learning

The startup, co-founded by former U.S. Energy Secretary Rick Perry, has faced headwinds with its AI campus in Texas.



TSMC's earnings suggest that the company's leadership is not truly bought into the AI growth story.

Investing.com -- Alphabet’s Google is in talks with Marvell Technology to develop two new chips designed to run AI models more efficiently, The Information reported Sunday, sending the chipmaker’s shares higher in premarket trading today.
Talks signal push to reduce reliance on external chip suppliers
Only counting those categorized as cs.LG; I'm sure there are multiple other subcategories with even more ML papers uploaded, such as cs.AI and math.OC. How are you keeping up with the research in this field? submitted by /u/NeighborhoodFatCat

Article URL: https://gizmodo.com/salesforce-announces-huge-ai-initiative-and-calls-it-headless-360-2000748243 Comments URL: https://news.ycombinator.com/item?id=47829523 Points: 4 # Comments: 0

Microsoft Corp's Fairwater data center in Mount Pleasant, Wisconsin, is going live earlier than expected, CEO Satya Nadella announced on Thursday. Nadella announced the development on X, calling the facility "the world's most powerful AI datacenter," one that will connect "hundreds of thousands of GB200s into a single seamless cluster." "Congrats to all the teams who made this possible!" he added.

Jensen Huang, the CEO of $4.8 trillion tech giant Nvidia, says AI will help humans explore space, be better in their jobs, and live more cost-effectively.

Executive summary: Safe Pareto improvements (SPIs) are ways of changing agents' bargaining strategies that make all parties better off, regardless of their original strategies. SPIs are an unusually robust approach to preventing catastrophic conflict between AI systems, especially AIs capable of credible commitments. This is because SPIs can reduce the costs of conflict without shifting bargaining power or requiring agents to agree on what counts as "fair". Despite their appeal, SPIs aren't guaranteed to be adopted: AIs or humans in the loop might lock in SPI-incompatible commitments, or undermine other parties' incentives to agree to SPIs. This agenda describes the Center on Long-Term Risk's plan to address these risks. Evaluations and datasets (Part I): we'll develop evals to identify when current models endorse SPI-incompatible behavior, such as making irreversible commitments without considering more robust alternatives. We also aim to demonstrate more SPI-compatible behavior, via…
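As a toy illustration of the definition above (my own example, not from the CLR agenda): an SPI can be pictured as a transformed game whose payoff in every strategy profile weakly dominates the original for every player, so all parties end up at least as well off no matter which strategies they were going to play.

```python
# Toy illustration of a safe Pareto improvement: only the costly
# "conflict" outcome is replaced (e.g. by cheap arbitration), so no
# cell gets worse for anyone and bargaining power is not shifted.

# Payoffs (row_player, col_player) for strategies Demand ("D") / Yield ("Y")
original = {
    ("D", "D"): (-10, -10),   # open conflict: costly for both
    ("D", "Y"): (5, 1),
    ("Y", "D"): (1, 5),
    ("Y", "Y"): (3, 3),
}

# The transformed game: commit to arbitration if both demand;
# every other cell is unchanged.
improved = dict(original)
improved[("D", "D")] = (-1, -1)

def is_safe_pareto_improvement(orig, new):
    """True if every strategy profile is weakly better for every player."""
    return all(
        all(n >= o for n, o in zip(new[p], orig[p]))
        for p in orig
    )

print(is_safe_pareto_improvement(original, improved))  # True
```

The check makes the "regardless of their original strategies" clause concrete: the improvement holds cell by cell, not just in equilibrium.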

It seems that AI safety is at least partly bottlenecked by a lack of orgs. To help address that, we’ve added a page to AISafety.com aimed at lowering the friction for starting one: AISafety.com/founders. This page was built largely as the result of a suggestion from @Ryan Kidd, who found he was frequently sharing the same set of resources with potential founders and realised it would be useful to have something similar publicly available. It lists: Fiscal sponsors Incubators VCs Articles and tools As with all resources on AISafety.com, we put substantial bandwidth into making sure the information on this page is accurate and up to date. If you have any feedback, please let us know in the comments or via the Suggest buttons on the page. This is the 11th resource page on the site. Here's the full list: courses for self-study communities, both local and online upcoming events and training programs (plus a weekly newsletter) funders offering support for AI safety projects (and a newslet

Article URL: https://techcrunch.com/2026/04/20/deezer-says-44-of-songs-uploaded-to-its-platform-daily-are-ai-generated/ Comments URL: https://news.ycombinator.com/item?id=47835928 Points: 1 # Comments: 0

Mark Zuckerberg and Jack Dorsey have different visions for how to use AI for management purposes, but both imagine a system of heightened control.

On the latest episode of Equity, we discuss OpenAI's latest acquisitions and whether they address "two big existential problems" for the company.

A lot of AI startups exist partly because the foundation models haven't expanded into their category yet. As many jokingly acknowledge, that won't last forever.

Vercel, a major development platform that hosts and deploys web apps, was compromised, and the hackers are attempting to sell stolen data. A person claiming to be a member of ShinyHunters, which was behind the recent hack of Rockstar Games, posted some data online, including employee names, email addresses, and activity timestamps. Vercel confirmed in a post on X that a "security incident" had occurred and that it impacted a "limited subset" of its customers. Vercel said that a compromised third-party AI tool was the avenue for attack, though it did not specify which third party was involved. Read the full story at The Verge.
So I've been putting this off for months because every tutorial made it sound like you need a PhD and a startup budget to even begin. Turns out that's bullshit. Started yesterday at 2pm with literally just OpenAI's API and a Python script. No frameworks, no fancy vector databases, just me trying to make something that could answer questions about my company's support docs. First attempt was embarrassing. The thing would confidently tell customers we sold motorcycles (we don't, we make accounting software). But I kept going. By 9pm I had something that actually worked. Like, genuinely helpful responses that pulled the right info from our knowledge base. The secret wasn't some complex architecture, it was just understanding the basic flow. You feed the user question to a search function that finds relevant docs. Those docs get stuffed into a prompt with the original question. Send it all to GPT. Done. Obviously this is the kiddie pool version and I'm already hitting walls (the thin…
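The flow described above (search, stuff into a prompt, send to GPT) fits in a few lines. This is a runnable sketch with a toy word-overlap retriever standing in for real search; the docs, question, and model name are illustrative, and the actual API call is shown as a comment.

```python
# Minimal retrieve-then-prompt sketch. The retriever is a toy word-overlap
# ranker; a real setup would use embeddings and call the OpenAI chat API
# where noted below.

DOCS = [
    "Invoices: create an invoice from the Billing tab and set a due date.",
    "Refunds: refunds are processed within 5 business days.",
    "Exports: ledger data can be exported as CSV from Settings.",
]

def search_docs(question, docs, k=2):
    """Rank docs by naive word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def build_prompt(question, docs):
    """Stuff the retrieved docs and the original question into one prompt."""
    context = "\n".join(f"- {d}" for d in docs)
    return (
        "Answer using only the support docs below.\n"
        f"Docs:\n{context}\n\nQuestion: {question}"
    )

question = "How long do refunds take?"
relevant = search_docs(question, DOCS)
prompt = build_prompt(question, relevant)
# A real system would now send `prompt` to GPT, e.g.:
# client.chat.completions.create(model="gpt-4o-mini",
#     messages=[{"role": "user", "content": prompt}])
print(relevant[0])
```

Swapping the toy retriever for embedding search is the usual next step once keyword overlap starts missing paraphrased questions.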
Hi! I just finished building a workstation specifically for local inference and wanted to get your thoughts on my setup and model recommendations.
• GPU: AMD Radeon AI PRO R9700 (32GB GDDR6 VRAM)
• CPU: AMD Ryzen 7 9700X
• RAM: 64GB DDR5
• OS: Fedora Workstation
• Software: LM Studio (Vulkan backend); I also want to test Llama
• Performance: currently hitting a steady ~120 tok/s on simple prompts (qwen3.6-35b-a3b)
What is the largest model architecture you recommend running comfortably? Should I be focusing on Q4_K_M quantizations? submitted by /u/jsorres
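For the "largest model" question, a rough back-of-envelope helps: weight memory is roughly parameter count times bits per weight, plus some headroom for KV cache and activations. The ~4.5 bits/weight figure for Q4_K_M-style quants and the flat 2 GB overhead below are approximations, not exact GGUF sizes.

```python
def est_vram_gb(params_b, bits_per_weight, overhead_gb=2.0):
    """Rough VRAM estimate in GB: quantized weights plus a flat
    allowance for KV cache and activations.
    bits_per_weight ~4.5 approximates Q4_K_M-style quantization."""
    weights_gb = params_b * 1e9 * bits_per_weight / 8 / 1e9
    return weights_gb + overhead_gb

# Does a ~32B dense model fit in a 32 GB card at Q4_K_M-ish precision?
print(round(est_vram_gb(32, 4.5), 1))   # ~20.0 GB: fits, with room for context
print(round(est_vram_gb(70, 4.5), 1))   # ~41.4 GB: would not fit
```

By this estimate a 32 GB card comfortably runs dense models in the ~30B class at Q4, while 70B-class models need heavier quantization or CPU offload.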

AI coding tools like Claude Code, Cursor, and Codex read instructions from files on disk: .claude/skills/, .cursor/skills/, .agents/skills/. These files shape how each tool behaves. On a team of ten engineers working across several repositories, managing them by hand breaks down fast. I built SkillCatalog to solve this without a SaaS dependency. Skills live in Git repositories the team already controls. The desktop app provides authoring, organization, and delivery on top of Git. Access control is Git access. There is no server to run and no data leaves the machine unless you push it. The model is different from Microsoft's APM, which takes a package-manager approach (declare in apm.yml, everyone installs the same set). I think skills are closer to editor preferences than packages: there is a shared team baseline, but individuals layer their own selections on top. A project profile might install the team's backend bundle for everyone, while one engineer adds a personal stack for perfor…
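The layering model described above can be sketched as ordinary map resolution. This is an illustrative sketch, not SkillCatalog's actual format or API: the repo-pin strings and skill names are invented for the example.

```python
# Illustrative sketch of "team baseline + personal layer" resolution:
# personal entries override same-named baseline skills and add new ones.
# All names and Git pins below are hypothetical.

team_baseline = {
    "backend-review": "git@example.com:skills/backend.git#v3",
    "sql-style": "git@example.com:skills/sql.git#v1",
}
personal = {
    "sql-style": "git@example.com:skills/sql.git#experimental",  # override
    "perf-notes": "git@example.com:me/perf.git#main",            # addition
}

def resolve_profile(baseline, personal_layer):
    """Personal selections win over the shared team baseline."""
    profile = dict(baseline)
    profile.update(personal_layer)
    return profile

profile = resolve_profile(team_baseline, personal)
print(sorted(profile))
```

The point of the sketch is the precedence order: the baseline stays shared and versioned in Git, while the personal layer never mutates it.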

Article URL: https://github.com/phlx0/awesome-open-weight-models Comments URL: https://news.ycombinator.com/item?id=47834880 Points: 2 # Comments: 0

Article URL: https://github.com/Kenogami-AI/codebase-readiness Comments URL: https://news.ycombinator.com/item?id=47834867 Points: 1 # Comments: 0

Tech workers in China are being instructed by their bosses to train AI agents to replace them—and it’s prompting a wave of soul-searching among otherwise enthusiastic early adopters. Earlier this month a GitHub project called Colleague Skill, which claimed workers could use it to “distill” their colleagues’ skills and personality traits and replicate them with…

Google's A2UI 0.9 is a framework-agnostic standard that lets AI agents generate UI elements on the fly, tapping into an app's existing components across web, mobile, and other platforms. The article Google launches generative UI standard for AI agents appeared first on The Decoder.
For people just starting out in GPU kernel engineering or LLM inference (FlashAttention / FlashInfer / SGLang / vLLM style work), most job postings still list “C++17, CuTe, CUTLASS” as hard requirements. At the same time NVIDIA has been pushing CuTeDSL (the Python DSL in CUTLASS 4.x) hard since late 2025 as the new recommended path for new kernels — same performance, no template metaprogramming, JIT, much faster iteration, and direct TorchInductor integration. The shift feels real in FlashAttention-4, FlashInfer, and SGLang’s NVIDIA collab roadmap. Question for those already working in this space: For someone starting fresh in 2026, is it still worth going deep on legacy C++ CuTe/CUTLASS templates, or should they prioritize CuTeDSL → Triton → Mojo (and keep only light C++ for reading old code)? Is the “new stack” (CuTeDSL + Triton + Rust/Mojo for serving) actually production-viable right now, or are the job postings correct that you still need strong C++ CUTLASS skills to get hired?
arXiv:2604.15646v1 Announce Type: new Abstract: Clinicians exploring oncology trial repositories often need ad-hoc, multi-constraint queries over biomarkers, endpoints, interventions, and time, yet writing SQL requires schema expertise. We demo FD-NL2SQL, a feedback-driven clinical NL2SQL assistant for SQLite-based oncology databases. Given a natural-language question, a schema-aware LLM decomposes it into predicate-level sub-questions, retrieves semantically similar expert-verified NL2SQL exemplars via sentence embeddings, and synthesizes executable SQL conditioned on the decomposition, retrieved exemplars, and schema, with post-processing validity checks. To improve with use, FD-NL2SQL incorporates two update signals: (i) clinician edits of generated SQL are approved and added to the exemplar bank; and (ii) lightweight logic-based SQL augmentation applies a single atomic mutation (e.g., operator or column change), retaining variants only if they return non-empty results. A second LL…
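The exemplar-retrieval step in the abstract can be sketched as similarity search over expert-verified (question, SQL) pairs. The paper uses sentence embeddings; the bag-of-words cosine below is a stand-in so the sketch runs without a model, and the exemplar questions, table, and columns are invented for illustration.

```python
import math
from collections import Counter

# Toy exemplar bank of expert-verified (question, SQL) pairs.
# Schema and questions are hypothetical.
EXEMPLARS = [
    ("trials with EGFR biomarker",
     "SELECT id FROM trials WHERE biomarker = 'EGFR'"),
    ("trials ending after 2024",
     "SELECT id FROM trials WHERE end_year > 2024"),
]

def vec(text):
    """Bag-of-words vector (stand-in for a sentence embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, exemplars, k=1):
    """Return the k exemplars most similar to the new question."""
    q = vec(question)
    return sorted(exemplars, key=lambda e: -cosine(q, vec(e[0])))[:k]

best = retrieve("which trials test the EGFR biomarker", EXEMPLARS)
print(best[0][1])  # the EGFR exemplar's SQL, ready to condition generation on
```

In the full pipeline these retrieved pairs would be placed in the LLM prompt alongside the schema and the predicate-level decomposition, and clinician-approved edits would be appended to the exemplar bank.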

The University of Alabama selects D-Fend Solutions’ EnforceAir system to protect campus airspace, research infrastructure, and athletic events. The University of Alabama has selected D-Fend Solutions as its counter-drone technology supplier. The university deployed D-Fend’s EnforceAir system to protect campus operations, critical infrastructure, and major events, including football games at Bryant-Denny Stadium. The post Alabama Selects EnforceAir Counter-Drone Technology for Gameday Security appeared first on DRONELIFE.

Federal authorities identify more than half a dozen drone operators violating airspace restrictions near Coors Field during Colorado Rockies games. The FBI, FAA, Denver Police Department, and the Colorado Rockies issued a joint warning after identifying multiple drone operators violating federal regulations. The violations occurred during the Rockies’ first homestand of the 2026 season. All violators have been referred […] The post FBI, FAA Crack Down on Illegal Drone Use at Coors Field appeared first on DRONELIFE.

This new approach adapts to decide which robots should get the right of way at every moment, avoiding congestion and increasing throughput.