
Sabi wants you to control your computer with your thoughts

Sabi has come out of stealth with an AI-powered beanie fitted with up to 70,000 biosensors, designed to stream the brain's electrical activity into a large-scale foundation model trained on neural data. The wearer can then control connected devices through thought alone. The company says the first units will ship by the end of 2026.

The details:
- How it works: The biosensors sit against the scalp and continuously transmit brain activity to Sabi's foundation model, which interprets the patterns and maps them to device commands in real time.
- The form factor: A soft beanie rather than a headset or medical-looking cap. Sabi is betting that consumer neurotech lives or dies on how it looks on the shelf.
- The launch video: Viral across X and LinkedIn within hours of going live, with demos of thought-driven actions on laptops, phones and connected home devices.
- The bigger play: Sabi is building the neural interface equivalent of an operating system, betting that the input layer for AI will eventually move off the keyboard entirely.

Every generation of computing eventually shrinks its interface.
Keyboards gave way to mice, touchscreens pushed aside keys, voice chipped away at typing. Neural input is the logical endpoint once latency and fidelity are solved, and Sabi is the first consumer play serious enough to attract the coverage to match. Hardware delays are common for stealth-era consumer tech. The direction of travel is what matters here.
Consumer neural input is stepping out of the 'maybe one day' column.
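The pipeline described in the details above has two stages: a model that decodes raw sensor readings into an intent, and a layer that maps intents to device commands. A minimal sketch of that shape, where every name and the trivial threshold "decoder" are hypothetical illustrations (Sabi has published no API):

```python
from typing import List

def decode_intent(samples: List[float]) -> str:
    """Stand-in for the neural foundation model: in reality this would be
    a learned decoder; here a trivial threshold on mean amplitude picks
    one of two intents, purely to illustrate the pipeline shape."""
    mean = sum(samples) / len(samples)
    return "open_app" if mean > 0.5 else "idle"

# The OS-style layer then maps decoded intents to device commands.
COMMANDS = {"open_app": "launch browser", "idle": "do nothing"}

def run_step(samples: List[float]) -> str:
    """One tick of the loop: sensor window in, device command out."""
    return COMMANDS[decode_intent(samples)]

print(run_step([0.9, 0.8, 0.7]))  # → launch browser
```

The point of the split is that the decoder and the command map can evolve independently, which is what makes the "operating system" framing plausible.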
You have no idea what ChatGPT says about your brand

Most marketing teams track Google rankings religiously. Almost none of them know what happens when someone asks ChatGPT, Perplexity, or Gemini about their product. That blind spot is becoming expensive. Trendos lets you see exactly how AI models mention, rank, and describe your brand across every major LLM. Sign up free, get 100 custom prompts, and find out what your customers are actually reading about you in AI search.
No demo required. No paywall. Just the data.
Check your AI visibility for free

THE AI BRIEFING: Vibe Coding vs. Vibe Solutioning

The race to build with AI has led to a surge of output: faster launches, shorter cycles, constant iteration. Yet beneath the pace, a quieter question still lingers: are we building what truly matters? Speed, on its own, is a weak compass.
This session explores a shift gaining traction among leading builders: Vibe Solutioning, an approach shaped not by how fast something ships, but by how clearly a problem is understood and solved. Steve Nouri is joined by Vishal Virani (Co-Founder & CEO, Rocket) to unpack the realities of scaling AI products, and the discipline needed to turn capability into real value. A conversation for those less focused on momentum, and more focused on direction.

April 22 · 3 PM CEST | 6:30 PM IST

Join here

Anthropic ships Claude Opus 4.7

Anthropic has released Claude Opus 4.7, the new top of its publicly available model line.
The release pushes Anthropic ahead of GPT-5.4 and Gemini 3.1 Pro on agentic coding benchmarks and lands alongside new controls for Claude Code users. It still sits behind Anthropic's own gated Mythos Preview on the same evaluations.

The details:
- The benchmark jump: Opus 4.7 scores 64.3% on SWE-Bench Pro against 4.6's 53.4%. Mythos Preview, accessible only to selected partners, scores 77.8%.
- Price and usage: Per-token pricing is identical to 4.6 at $5 input and $25 output per million tokens. Token consumption per task is higher, so real-world cost per job rises.
- New Claude Code features: An 'xhigh' effort setting slots between high and max, and a new /ultrareview slash command flags bugs and design issues in a single pass.
- User reaction: Early feedback is split, with complaints about 4.6's performance still circulating alongside the rollout and mixed reviews of 4.7 against the benchmark claims.

Anthropic is now operating two model tracks in parallel: a public release cadence of every two months, and a gated frontier line reserved for a small group of commercial partners.
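One aside on the pricing point: identical per-token rates with higher token burn is simple arithmetic worth seeing once. A quick sketch, where the per-million rates come from the release and the token counts per task are purely hypothetical:

```python
# Per-token pricing from the release: $5 per million input tokens,
# $25 per million output tokens (unchanged from 4.6).
INPUT_RATE = 5.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 25.00 / 1_000_000  # dollars per output token

def cost_per_job(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one task at the published per-token rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical task sizes: if 4.7 burns ~30% more tokens on the same
# job, the per-job cost rises ~30% despite identical unit pricing.
job_46 = cost_per_job(200_000, 40_000)  # illustrative 4.6 task
job_47 = cost_per_job(260_000, 52_000)  # same task, 30% more tokens
print(round(job_46, 2), round(job_47, 2))
```

In other words, the per-token sticker price is the wrong number to watch; tokens consumed per completed task is the one that moves the bill.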
That structure lets the company stress-test its most powerful systems in narrow paid environments while shipping something competitive into the open market. It also means public access, for the first time in the modern Claude era, sits demonstrably behind the ceiling Anthropic is willing to build.

OpenAI reshapes Codex into the start of its superapp

OpenAI has pushed a major update to Codex that moves it well beyond its original role as a coding assistant. The rollout adds background computer use, parallel agents, an in-app browser, inline image generation and a memory layer, turning the product into something closer to a combined ChatGPT, Atlas and Codex environment.
Codex head Thibault Sottiaux described the company as "building the super app out in the open."

The details:
- Background computer use: Codex can now operate any Mac app on its own, including those without APIs, with multiple agents running simultaneously across different applications.
- Memory in preview: A new memory layer carries preferences and context across sessions, and new automations allow Codex to pick up long-running tasks days later.
- Atlas-powered browser: The built-in browser lets developers annotate pages to direct Codex, with inline image generation via gpt-image-1.5 for mockups without leaving the environment.
- Growth numbers: 3 million weekly active users, growing 70% month-on-month, with OpenAI now framing Codex as a platform rather than a single-purpose agent.
Anthropic's Claude Code and Cowork set the template for what a frontier lab's developer product looks like when the model is wrapped in computer use, memory and multi-agent control. This update is OpenAI's most direct answer yet, pitched as the opening move in a larger platform play. Two companies are now racing to build the first serious AI operating environment for knowledge workers, with the distance between a coding assistant and a full superapp closing faster than the market priced in.

OpenAI turns its frontier toolkit on drug discovery

OpenAI has released GPT-Rosalind, a language model built specifically for drug discovery and biological research.
It can read papers, query lab databases, design experiments and generate hypotheses, and is the second domain-tuned model the company has shipped this week after Tuesday's GPT-5.4-Cyber.

The details:
- Benchmark performance: Rosalind outperforms GPT-5.4 on biochemistry, experiment design and tool use benchmarks. On a blind RNA prediction test run by gene therapy lab Dyno Therapeutics, it beat 95% of human scientists.
- Access and pricing: Available to selected enterprise customers during the test phase only. Amgen, Moderna and the Allen Institute are already using it.
- Trained scepticism: The model is tuned to flag when a target is a weak candidate or a likely dead end, a deliberate move away from the helpfulness-over-accuracy reflex of a generalist chatbot.
- Release cadence: Second domain-specific model in three days, following Tuesday's GPT-5.4-Cyber release for network defence and threat analysis.

Two domain-specialised frontier models in three days.
That pattern is becoming hard to miss. Generalist flagships still have a place, but the real economic value increasingly sits in narrow industries where a bespoke model trained on the right data outperforms one that happens to score well on MMLU. Drug discovery, cybersecurity, defence, legal and finance are the obvious candidates. Expect this to be the competitive front for the next 12 months, well before AGI arguments return to the top of the agenda.

Tool of the Day: Windsurf 2.0

Windsurf has shipped a major version upgrade for its agentic IDE. The headline addition is the Agent Command Center, a new workspace view that lets a developer run and monitor a fleet of local and cloud-based agents in parallel rather than one at a time.
Cognition's Devin has been wired directly into the IDE, so Devin-style long-horizon tasks can be spawned and reviewed inside the same editor a developer already uses for hands-on work.

Try this yourself:
1. Download Windsurf 2.0 from windsurf.com and install on Mac, Windows or Linux.
2. Open an existing project or clone a public repo to test on. A medium-sized codebase is the best test case.
3. Open the Agent Command Center from the sidebar and spin up two or three agents in parallel. Start with one local agent fixing a small bug and one Devin cloud agent working on a larger refactor.
4. Compare completion times, code quality and the review experience against your current IDE and agent setup.

The parallel view is where the shift becomes obvious.
Light Bytes
- Perplexity's 24/7 Mac agent: Perplexity has launched Personal Computer, a Mac-only agent for Max subscribers that runs across 20+ frontier models to search, read and edit files, and that works natively with iMessage, Apple Mail and the Comet browser.
- Physical Intelligence's π0.7: The San Francisco robotics startup has published research showing its new model can be verbally coached through tasks it was never trained on, a meaningful step toward a general-purpose robot brain.
- Tencent open-sources HY-World 2.0: A new world model that generates editable 3D scenes with physics-aware movement from text or image prompts, now freely available under an open licence.
- US government reportedly granted Mythos access: Per new reporting, select federal agencies are set to get access to Anthropic's Mythos model despite the current blacklist and legal dispute between the company and the administration.
- Huawei goes viral with AI posture coaching: A new feature on Huawei's latest phone suggests poses in real time as you frame a shot, with the launch demo well past 6 million views.