Welcome back
Curated from 200+ sources across AI & machine learning

The company turns footage from robots into structured, searchable datasets with a deep learning model.



Softr, the Berlin-based no-code platform used by more than one million builders and 7,000 organizations including Netflix, Google, and Stripe, today launched what it calls an AI-native platform — a bet that the explosive growth of AI-powered app creation tools has produced a market full of impressive demos but very little production-ready business software. The company's new AI Co-Builder lets non-technical users describe in plain language the software they need, and the platform generates a fully integrated system — database, user interface, permissions, and business logic included — connected and ready for real-world deployment immediately. The move marks a fundamental evolution for a company that spent five years building a no-code business before layering AI on top of what it describes as a proven infrastructure of constrained, pre-built building blocks. "Most AI app-builders stop at the shiny demo stage," Softr Co-Founder and CEO Mariam Hakobyan told VentureBeat…

The round reflects growing investor interest in AI‑native platforms to modernize legacy outsourced IT.

Emerald AI touts new fundraising success and partnerships with utilities and power generators.

Weather forecasting has gotten a big boost from machine learning. How that translates into what users see can vary.

Article URL: https://www.omgubuntu.co.uk/2026/03/firefox-smart-window-hands-on Comments URL: https://news.ycombinator.com/item?id=47585853 Points: 1 # Comments: 0

Article URL: https://www.maango.io Comments URL: https://news.ycombinator.com/item?id=47588192 Points: 1 # Comments: 0

Article URL: https://poll.qu.edu/poll-release?releaseid=3955 Comments URL: https://news.ycombinator.com/item?id=47586401 Points: 1 # Comments: 0

ScaleOps just raised $130M to tackle GPU shortages and soaring AI cloud costs by automating infrastructure in real time.

The startup, which is planning to go public later this year, designs chips specifically for AI inference, another challenger to Nvidia's dominance.

For decades, artificial intelligence has been evaluated through the question of whether machines outperform humans. From chess to advanced math, from coding to essay writing, the performance of AI models and applications is tested against that of individual humans completing tasks. This framing is seductive: An AI vs. human comparison on isolated problems with clear…

Feds probe whether NYC Council member, Hochul aide took bribes to help migrant shelter provider AP News
[D] Got my first offer after months of searching — below posted range, contract-to-hire, and worried it may pause my search. Do I take it?
I could really use some outside perspective. I'm a senior ML/CV engineer in Canada with about 5–6 years across research and industry. Master's in CS and a few publications. I left my previous remote startup role about five months ago. The role gradually changed, I burned out, and I decided to step away.

I took around two months to decompress and have been actively searching for the last three months. It's been tough. A few interview loops and a couple of final rounds, but no offers until now.

Last week I finished a four-round process with a small pre-seed AI startup in healthcare. The work is genuinely interesting and very aligned with my background. The team also seems strong.

Here's the complication. The role was posted with a salary range, but the verbal offer came in roughly 20% below the bottom of that range. On top of that, it's structured as a 3-month contract-to-hire instead of full-time. Since I'm in Canada and they're in the US, I would be working as a contractor…

A college instructor turns to typewriters to curb AI-written work and teach life lessons AP News

Air Canada CEO will retire this year after his English-only crash message was criticized apnews.com

Iran University of Science and Technology building reduced to rubble by Israeli airstrike AP News
ThinkLabs AI, a startup building artificial intelligence models that simulate the behavior of the electric grid, announced today that it has closed a $28 million Series A financing round led by Energy Impact Partners (EIP), one of the largest energy transition investment firms in the world. Nvidia's venture capital arm NVentures and Edison International, the parent company of Southern California Edison, also participated in the round. The funding marks a significant escalation in the race to apply AI not just to software and content generation, but to the physical infrastructure that powers modern life. While most AI investment headlines have centered on large language models and generative tools, ThinkLabs is pursuing a different and arguably more consequential application: using physics-informed AI to model the behavior of electrical grids in real time, compressing engineering studies that once took weeks or months into minutes. "We are dead focused on the grid," ThinkLabs CEO Josh W…

The company will now add the mayo and chicken stock brands to its lineup, which already includes various hot sauce brands.

For the modern enterprise, the digital workspace risks descending into "coordination theater," in which teams spend more time discussing work than executing it. While traditional tools like Slack or Teams excel at rapid communication, they have structurally failed to serve as a reliable foundation for AI agents — so much so that a Hacker News thread went viral in February 2026 calling on OpenAI to build its own version of Slack to empower AI agents, amassing 327 comments. That's because agents often lack the real-time context and secure data access required to be truly useful, resulting in "hallucinations" or repetitive re-explaining of codebase conventions. PromptQL, a spin-off from the GraphQL unicorn Hasura, is addressing this by pivoting from an AI data tool into a comprehensive, AI-native workspace designed to turn casual, regular team interactions into a persistent, secure memory for agentic workflows — ensuring these conversations are not simply left by the wayside…

Anthropic appears to have accidentally revealed the inner workings of one of its most popular and lucrative AI products, the agentic AI harness Claude Code, to the public. A 59.8 MB JavaScript source map file (.map), intended for internal debugging, was inadvertently included in version 2.1.88 of the @anthropic-ai/claude-code package on the public npm registry pushed live earlier this morning. By 4:23 am ET, Chaofan Shou (@Fried_rice), an intern at Solayer Labs, broadcast the discovery on X (formerly Twitter). The post, which included a direct download link to a hosted archive, acted as a digital flare. Within hours, the ~512,000-line TypeScript codebase was mirrored across GitHub and analyzed by thousands of developers. For Anthropic, a company currently riding a meteoric rise with a reported $19 billion annualized revenue run-rate as of March 2026, the leak is more than a security lapse; it is a strategic hemorrhage of intellectual property. The timing is particularly critical…
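The mechanics of such a leak are mundane: a standard JavaScript source map can embed every original source file verbatim in its `sourcesContent` array, so shipping the `.map` file is effectively shipping the codebase. A minimal sketch of how sources are recovered from a map (the map contents and file names below are toy assumptions, not the leaked data):

```python
import json

# Toy source map in the standard v3 format; "sourcesContent" is the field
# that carries original source files verbatim.
SAMPLE_MAP = json.dumps({
    "version": 3,
    "file": "cli.js",
    "sources": ["src/agent.ts", "src/tools.ts"],          # hypothetical names
    "sourcesContent": ["export const agent = 1;", "export const tools = 2;"],
    "mappings": "AAAA",
})

def extract_sources(source_map_json: str) -> dict:
    """Recover original files from a source map's sourcesContent array."""
    m = json.loads(source_map_json)
    return dict(zip(m.get("sources", []), m.get("sourcesContent") or []))

recovered = extract_sources(SAMPLE_MAP)
```

Anyone holding the map can write each recovered entry back out as a file tree, which is why a single published artifact was enough to mirror the codebase.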

In the early days of large language models (LLMs), we grew accustomed to massive 10x jumps in reasoning and coding capability with every new model iteration. Today, those jumps have flattened into incremental gains. The exception is domain-specialized intelligence, where true step-function improvements are still the norm. When a model is fused with an organization’s…

The curriculum at creative institutions is evolving to handle gen AI tools, and a lot of people aren’t happy about it. | Image by Cath Virginia / The Verge When my baby brother, a 3D modelling and animation student, talks to me about his projects and studies, the pride I usually feel is becoming increasingly tainted by a growing sense of dread. As a creative professional and former design student myself, I understand all too well how fierce the competition for postgraduate jobs will be, but his future is being threatened by something that barely even existed during my own time in higher education: generative AI. College students are feeling that fear as well. Earlier this year, in a small protest at CalArts, posters that requested the help of AI artists for a thesis were reportedly altered wit … Read the full story at The Verge.

For three decades, the web has existed in a state of architectural denial. It is a platform originally conceived to share static physics papers, yet it is now tasked with rendering the most complex, interactive, and generative interfaces humanity has ever conceived. At the heart of this tension lies a single, invisible, and prohibitively expensive operation known as "layout reflow." Whenever a developer needs to know the height of a paragraph or the position of a line to build a modern interface, they must ask the browser's Document Object Model (DOM), the standard by which developers create and modify webpages. In response, the browser often has to recalculate the geometry of the entire page — a process akin to a city being forced to redraw its entire map every time a resident opens their front door. Last Friday, March 27, 2026, Cheng Lou — a prominent software engineer whose work on React, ReScript, and Midjourney has defined much of the modern frontend landscape — announced on…
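The reflow cost described above can be illustrated with a toy layout engine (a simplification, not real browser internals): any write marks layout dirty, and any geometry read flushes pending layout work. Interleaving writes and reads therefore forces a recomputation per read, while batching writes before reads triggers only one:

```python
class ToyLayout:
    """Toy model of a browser layout engine with a dirty flag."""

    def __init__(self):
        self.paragraphs = {}
        self._heights = {}
        self.dirty = True
        self.reflow_count = 0

    def set_text(self, pid, text):
        self.paragraphs[pid] = text
        self.dirty = True              # any DOM write invalidates layout

    def height(self, pid):
        if self.dirty:                 # a geometry read flushes pending layout
            self._recompute()
        return self._heights[pid]

    def _recompute(self):
        self.reflow_count += 1         # the expensive "redraw the whole map" step
        self._heights = {p: 20 * (len(t) // 40 + 1)
                         for p, t in self.paragraphs.items()}
        self.dirty = False

# Interleaved write/read: one full reflow per paragraph.
doc = ToyLayout()
for i in range(5):
    doc.set_text(i, "x" * 100)
    _ = doc.height(i)

# Batched: do all writes, then all reads — a single reflow.
doc2 = ToyLayout()
for i in range(5):
    doc2.set_text(i, "x" * 100)
heights = [doc2.height(i) for i in range(5)]
```

In a real browser the same pattern appears when reading `offsetHeight` or `getBoundingClientRect()` between style mutations, which is why "batch reads and writes" is standard performance advice.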

I've been building Sandflare for the past few months — it launches Firecracker microVMs for AI agents in ~300ms cold start. The idea came from running LLM-generated code in production. Docker felt too risky (shared kernel), full VMs too slow (5–10s). Firecracker hits the middle: real VM isolation, fast boot. I also added managed Postgres because almost every agent I built needed persistent state. One call wires a database into a sandbox. There are great tools in this space already (E2B, Modal, Daytona) — I wanted something with batteries-included Postgres and simpler pricing. What I'm trying to figure out: how do I get cold start below 100ms? Currently the bottleneck is the Firecracker API + network setup. https://sandflare.io Comments URL: https://news.ycombinator.com/item?id=47583255 Points: 2 # Comments: 3
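For context on the "Firecracker API" bottleneck mentioned above: a microVM is configured over a Unix-socket REST API before boot. A sketch of the minimal request sequence (the endpoints are Firecracker's documented API; the kernel/rootfs paths are placeholder assumptions, and actually sending the requests over the socket is omitted here):

```python
def boot_sequence(kernel_path, rootfs_path):
    """Build the minimal (method, path, body) requests to configure
    and start a Firecracker microVM over its Unix-socket HTTP API."""
    return [
        # 1. Point the VMM at a kernel image and boot args.
        ("PUT", "/boot-source", {
            "kernel_image_path": kernel_path,
            "boot_args": "console=ttyS0 reboot=k panic=1",
        }),
        # 2. Attach a root filesystem drive.
        ("PUT", "/drives/rootfs", {
            "drive_id": "rootfs",
            "path_on_host": rootfs_path,
            "is_root_device": True,
            "is_read_only": False,
        }),
        # 3. Boot the VM.
        ("PUT", "/actions", {"action_type": "InstanceStart"}),
    ]

reqs = boot_sequence("vmlinux.bin", "rootfs.ext4")  # placeholder paths
```

Each round-trip adds to cold-start latency, which is why techniques like pre-configured VM pools and snapshot restore are the usual routes below a few hundred milliseconds.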

Article URL: https://www.aiagentsbay.com Comments URL: https://news.ycombinator.com/item?id=47586284 Points: 1 # Comments: 0

This just showed up a couple of days ago on GitHub. Note that ANE is the NPU in all Apple Silicon, not the new 'Neural Accelerator' GPU cores that are only in M5. (ggml-org/llama.cpp#10453)

Comment by arozanov: Built a working ggml ANE backend.
- Dispatches MUL_MAT to ANE via private API
- M4 Pro results: 4.0 TFLOPS peak at N=256, 16.8x faster than CPU
- MIL-side transpose, kernel cache, quantized weight support
- ANE for prefill (N>=64), Metal/CPU for decode

Code: https://github.com/arozanov/ggml-ane Based on maderix/ANE bridge.

submitted by /u/PracticlySpeaking
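A plausible intuition for the prefill/decode split above (my own back-of-envelope model, not from the post): matmul arithmetic intensity collapses when only one token is processed at a time, so decode is memory-bandwidth-bound and gains little from extra compute, while prefill with many tokens is compute-bound and can exploit the ANE. A toy calculation, with a placeholder hidden size and fp16 weights:

```python
def gemm_flops(m, n, k):
    # An (m x k) @ (k x n) matmul does m*n*k multiply-accumulates = 2*m*n*k FLOPs.
    return 2 * m * n * k

def arithmetic_intensity(m, n, k, bytes_per_elem=2):
    # FLOPs per byte moved, counting both operands and the output once (fp16).
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)
    return gemm_flops(m, n, k) / bytes_moved

d = 4096                                   # placeholder hidden size
prefill = arithmetic_intensity(d, 256, d)  # 256 prompt tokens at once
decode = arithmetic_intensity(d, 1, d)     # one token at a time
```

Under this model, decode does roughly one FLOP per byte of weights streamed (hopeless for a compute accelerator), while prefill at N=256 does a couple of hundred, which matches the backend's choice of an N>=64 cutoff for dispatching to the ANE.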

arXiv:2603.26771v1 Announce Type: new Abstract: Masked diffusion language models (MDLMs) generate text by iteratively unmasking tokens from a fully masked sequence, offering parallel generation and bidirectional context. However, their standard confidence-based unmasking strategy systematically defers high-entropy logical connective tokens, the critical branching points in reasoning chains, leading to severely degraded reasoning performance. We introduce LogicDiff, an inference-time method that replaces confidence-based unmasking with logic-role-guided unmasking. A lightweight classification head (4.2M parameters, 0.05% of the base model) predicts the logical role of each masked position (premise, connective, derived step, conclusion, or filler) from the base model's hidden states with 98.4% accuracy. A dependency-ordered scheduler then unmasks tokens in logical dependency order: premises first, then connectives, then derived steps, then conclusions. Without modifying a single parameter…
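The dependency-ordered scheduler can be sketched minimally (a toy reconstruction from the abstract, not the paper's code): given a predicted logical role per masked position, unmask positions grouped by role in dependency order, breaking ties by position.

```python
# Role order as described in the abstract: premises first, then connectives,
# then derived steps, then conclusions; filler last (an assumption).
ROLE_ORDER = ["premise", "connective", "derived", "conclusion", "filler"]

def unmask_schedule(roles):
    """Return token positions in the order they should be unmasked,
    given one predicted logical role per masked position."""
    rank = {r: i for i, r in enumerate(ROLE_ORDER)}
    return sorted(range(len(roles)), key=lambda i: (rank[roles[i]], i))

schedule = unmask_schedule(
    ["filler", "conclusion", "premise", "connective", "derived"]
)
```

In the real method each group would be unmasked over several diffusion steps with the base model re-scoring in between; the key point is simply that a conclusion token is never committed before the premises and connectives it depends on.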

The latest app from the team behind Bluesky is Attie, an AI assistant that lets you build your own algorithm. At the Atmosphere conference, Bluesky's former CEO, Jay Graber, and CTO Paul Frazee unveiled Attie, which is powered by Anthropic's Claude and built on top of Bluesky's underlying AT Protocol (atproto). Attie allows users to create custom feeds using natural language. For example, you could ask for "posts about folklore, mythology, and traditional music, especially Celtic traditions." To start, these custom feeds will be confined to a standalone Attie app. But the plan is to make them available in Bluesky and other atproto apps. … Read the full story at The Verge.