
Apple plans to let third-party AI models plug directly into Siri starting with iOS 27, according to Bloomberg. The move ends ChatGPT's exclusive integration and lets users choose which AI handles their queries from within the assistant itself.

The details:
User Choice: A new extensions menu in Settings will let users route Siri questions to any compatible model, replacing the current ChatGPT-only setup.
Under the Hood: Apple is expected to unveil a broader Siri overhaul powered by Google's Gemini at WWDC in early June.

Apple's logic here is straightforward: rather than trying to build the best AI model itself, it is turning a billion iPhones into the distribution layer for everyone else's models. The hardware moat does the work, and the model war becomes someone else's problem.
Build AI agents that understand your business context

Arango Contextual Data Platform 4.0 provides a unified architecture for AI, combining graph, vector, document, key-value, and search into one contextual data layer so AI can retrieve, reason, and act on connected data. With 20+ services like AutoGraph and AutoRAG, it automates modeling, ingestion, retrieval, and workflows, reducing complexity and speeding deployment. Teams move from prototype to production faster with transparent, governed data flows and real-time, enterprise-scale AI.

Learn how

Your next presentation is already in your meeting

You don't need to start from scratch. Every insight, decision, and detail is already in your call. Most AI tools make you rebuild that context manually. Supernormal skips that step.
It captures your meetings automatically and turns them into deliverables right after the call:
Slide decks you can present
Follow-ups ready to send
Strategy docs you can share
Spreadsheets you can use

Turn your next meeting into a presentation. Download free and capture your next meeting

Meta's Brain AI Outperforms Real Brain Scans

Meta open-sourced TRIBE v2, a foundation model trained on brain scan data from over 700 people that simulates neural activity across vision, hearing and language. The headline result: its synthetic predictions matched population-level brain activity better than most real fMRI recordings.

The details:
Scale Jump: The original TRIBE used data from 4 volunteers and covered 1,000 brain regions.
V2 jumps to 700+ subjects and 70,000 regions, trained on over 1,000 hours of brain data.
Individual fMRI recordings are inherently noisy; TRIBE v2's synthetic predictions outperformed those noisy recordings at the population level.
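The population-level claim rests on a familiar statistical effect: individual scans mix a shared signal with heavy per-subject noise, so a model that tracks the shared signal can correlate with the group average better than any single recording does. A toy simulation of that effect (all names and numbers here are hypothetical and unrelated to TRIBE's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

T = 500          # timepoints in a hypothetical stimulus response
n_subjects = 20  # simulated scanned subjects
noise_sd = 1.0   # per-subject measurement noise

# Shared neural response plus independent noise per subject.
true_signal = np.sin(np.linspace(0, 8 * np.pi, T))
scans = true_signal + rng.normal(0, noise_sd, (n_subjects, T))

# Held-out population target: average of all subjects except subject 0.
population_avg = scans[1:].mean(axis=0)

# A "model" that tracks the shared signal with much less noise.
model_pred = true_signal + rng.normal(0, 0.2, T)

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

single_subject_r = corr(scans[0], population_avg)
model_r = corr(model_pred, population_avg)

print(f"single noisy scan vs population: r = {single_subject_r:.2f}")
print(f"model prediction  vs population: r = {model_r:.2f}")
```

Under these assumptions the low-noise model prediction correlates with the held-out population average more strongly than the single noisy scan does, which is the shape of the result the TRIBE v2 headline describes.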
Neuroscience has been bottlenecked for decades by expensive scanners and slow, one-study-at-a-time progress. TRIBE v2 could compress months of scanning into seconds of compute. The comparison to AlphaFold's impact on protein structure research is not a stretch.

Wikipedia Bans AI-Generated Articles

Wikipedia's English-language editors voted 40-2 to ban the use of AI for writing articles. The policy's author described it as a pushback against the forced adoption of AI across platforms.
The details:
Narrow Scope: The ban covers writing or rewriting articles with LLMs. Editors can still use AI for grammar fixes and translations as long as a human reviews the output.
Spanish Wikipedia went further and banned AI use entirely, including for editing.
AI-generated text reportedly surpassed human-written output for the first time in 2025. Wikipedia is betting that human editorial standards still matter; how long that line holds against the volume of AI content flooding the web is another question.

Google Ships Gemini 3.1 Flash Live For Real-Time Voice

Google released Gemini 3.1 Flash Live, a new voice model built for faster and more natural audio interactions. The model now powers Gemini Live across Search and the Gemini app.
The details:
Conversation Length: Sessions can run 2x longer than previous versions before timing out.
Google's voice AI push is accelerating. Flash Live sits alongside Mistral's new Voxtral TTS (which clones any voice from a 3-second clip across 9 languages) in what is becoming a very crowded week for voice model releases.

Tool of the Day: Omma

Omma lets you generate fully interactive 3D websites and apps from a text prompt. Type a description, get a working 3D landing page in minutes, and edit it with follow-up prompts.

Try this yourself:
Go to Omma and sign up.
Enter a prompt like "Create a bold 3D landing page for a food delivery service with menu previews."
Within minutes you will have a working 3D page you can preview, edit and publish. You can also remix creations from the community for inspiration.

Light Bytes

OpenAI shelves erotic chatbot mode: The planned feature has been put on hold indefinitely following pushback from staff and investors.
Novo Nordisk deploys AI agents in clinical trials: The pharma giant says the technology is trimming drug approval timelines and reducing contractor costs.
Cohere Transcribe tops HuggingFace: The free, open-source speech recognition model hit the number one accuracy spot across 14 languages.
Suno v5.5 launches: New AI music generator adds voice cloning, custom model tuning and personalised style learning for Pro subscribers.
Mistral ships Voxtral TTS: A lightweight voice AI that clones any speaker from a 3-second audio clip and generates speech across 9 languages.