
Spotify’s Best Developers Have Not Written a Single Line of Code Since December

Spotify put a weirdly specific claim on the record during its Q4 2025 earnings call. Executive Gustav Söderström said the company’s “best developers… have not written a single line of code since December,” because AI tools got good enough to build and ship work without traditional coding. At stake are user trust and product control: without human oversight, ineffective features could ship unchecked. Here are the call details:

Tooling: Söderström linked the shift to Claude Code and said the jump happened around “Christmas,” with engineers steering and reviewing instead of typing.
Spotify is selling a future where software becomes cheap and constant. That can feel great when it means fewer bugs and faster improvements. It can also mean more nonstop experiments, more synthetic content, and blurrier responsibility when something goes wrong. If engineers stop writing code, Spotify's next trust test is whether users can still sense humans behind the product, and who owns the outcome when something breaks.
The challenge lies in striking a balance between applying AI for efficiency while maintaining the human touch that users value.
When AI lives inside your Shopify store
Sponsored by Shopify

Most AI tools sit on the side of your workflows. Sidekick moves the AI into Shopify itself. By connecting data across your store, Sidekick can:

- Proactively generate recommendations for your store
- Automate multi-step tasks like launches, pricing updates, and metafields
- Turn plain language into pages, imagery, and even simple apps

All inside the platform merchants already use every day. See how Shopify Sidekick shifts from “AI assistant” to “store operator” and what it looks like in real merchant workflows.
Musk Says Founders Were Laid Off as xAI Pitches Moon Data Centers

xAI published an internal meeting on X to explain why top people are leaving. Elon Musk called the departure wave “layoffs” from a reorganization, then justified the shakeup by saying xAI will build faster by splitting into four teams and scaling its compute. Musk also claimed that xAI already runs on 100,000 top-tier AI chips and wants the power of roughly a million of them.
Speed can improve Grok quickly, but instability can make it unreliable for users who just want the assistant to work. Here's what xAI is really doing:

Structure: Four groups now run the company: Grok, Coding, Imagine, and Macrohard.
AI companies used to win by having the smartest model; now they win by compute. OpenAI has people relying on ChatGPT, Google has Search’s reach, while xAI has X, Tesla dashboards, and a habit of turning strategy into spectacle. That can be an advantage for users because it forces attention and accelerates progress. It can also backfire because users don’t want their assistant to feel like it’s being rebuilt mid-conversation.
If it stumbles, the industry gets another reminder that scale does not automatically create trust.

Anthropic Pays for AI Growth So That It Skips Your Power Bill

Anthropic has picked a clear battleground for AI’s growing backlash: the electricity bill. In a new U.S. pledge tied to upcoming data centers, the Claude maker says it will pay for 100% of the grid upgrades and cover any power price increases its sites cause for homes and businesses. The company warns that training a single frontier model will soon use gigawatts of power and wants to show that this hunger will not make local bills more expensive. Here is the plan Anthropic says it will follow:

Infrastructure: Anthropic says it will pay for all transmission and substation work.
Anthropic’s promise looks generous, but it also reads like preemptive damage control as politicians warn that tech must 'pay its own way.' Microsoft is testing similar protections, and rivals may copy the strategy if it buys time with regulators. The hard part is proving cause and effect: if people still see higher bills and the math behind repayments stays opaque, the promise could look like clever accounting.

GLM-5: Open-Weights Model for Long Docs and Code

GLM-5 is Z.ai’s open-weights language model you can run locally (via Ollama) or access by API. It’s most useful when your problem involves big specs, long error logs, or multi-file changes that won’t fit in a short prompt. You paste the relevant information once, then ask for concrete outputs like a refactor diff, a test plan, or a set of tickets.
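The paste-once workflow can be sketched as a small helper that wraps your pasted material and one concrete ask into a single prompt. The template wording and function name here are illustrative assumptions, not a format GLM-5 requires.

```python
def build_prompt(context: str, task: str) -> str:
    """Combine pasted material and a concrete ask into one prompt.

    The marker/template text is a hypothetical example; adjust to taste.
    """
    return (
        "You are reviewing the material between the markers.\n"
        "<context>\n"
        f"{context}\n"
        "</context>\n"
        f"Task: {task}\n"
        "Base every claim only on the context above."
    )

# Example: turn a pasted stack trace into a triage request
prompt = build_prompt(
    context="Traceback (most recent call last): ...",
    task="List likely root causes and a minimal repro.",
)
```

Keeping context and task in one block means you paste the document once and vary only the task line for each artifact you want back.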
Core functions (and how to use them):

- Repo refactor: Paste a folder tree plus 2–3 key files. Ask for a two-PR plan and a diff for PR1.
- Spec to tickets: Paste a PRD. Ask for tickets with acceptance criteria, edge cases, and “not doing” notes.
- Log triage: Paste the stack trace and 50–100 lines around it. Ask for root causes and a minimal repro.
- UI flow draft: Describe the features and constraints. Ask for screens, empty states, error copy, and required API calls.
- Test plan as JSON: Ask for a JSON list of test cases tied to specific requirements you pasted.

Try this yourself: Take one document you already have open (your PRD, a bug report, or a failing CI log).
Paste it into GLM-5 and ask for one usable artifact: either (1) a PR plan with a diff for the first change or (2) Jira-ready tickets with acceptance criteria. After you get the output, review it: tell the model to quote the exact line from your input that supports each ticket or code change, and delete anything it can’t support.
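That review step can be partly automated with a short check: for each ticket, verify that the line the model quoted actually appears verbatim in what you pasted, and flag anything unsupported. This is a minimal sketch; the ticket shape (a dict with a `quote` field) is a hypothetical structure for illustration.

```python
def unsupported_tickets(source: str, tickets: list[dict]) -> list[dict]:
    """Return tickets whose quoted evidence is not a verbatim line of the source.

    Each ticket is assumed to carry a 'quote' field holding the exact line the
    model claims supports it; that field name is an illustrative convention.
    """
    source_lines = {line.strip() for line in source.splitlines()}
    return [t for t in tickets if t.get("quote", "").strip() not in source_lines]

# Usage: drop tickets the model cannot back with a real line from your input
source = "Login fails on Safari.\nRetry button does nothing."
tickets = [
    {"summary": "Fix Safari login", "quote": "Login fails on Safari."},
    {"summary": "Add dark mode", "quote": "Users want dark mode."},  # not in source
]
bad = unsupported_tickets(source, tickets)  # flags the dark-mode ticket
```

Anything the check flags is exactly what the article tells you to delete: output the model cannot tie back to your own document.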