
🚀Meta Superintelligence Labs Ships Its First Model, Muse Spark
Meta's Superintelligence Labs just rolled out Muse Spark, a multimodal reasoning model that marks the highly anticipated debut release of Alexandr Wang's high-profile division assembled last summer.
- Muse Spark handles voice, text, and image inputs, with a contemplating mode that pits multiple agents against each other on hard problems
- The model's benchmarks are competitive with frontier rivals like Opus 4.6 and GPT 5.4 on reasoning, though it lags in coding and on tests like ARC-AGI 2
- Muse Spark is particularly strong in health reasoning, with the company prioritizing the area as part of its 'personal superintelligence' mission
- Unlike the Llama family, Muse Spark is proprietary, with Meta saying it hopes to open-source future versions but has not committed to a timeline
- Wang took over Meta Superintelligence Labs 9 months ago after Zuck acquired Scale AI for $14.3B, saying the team "rebuilt our AI stack from scratch"
Why it matters: Meta is back in the game. While it still sits below the top models, Muse Spark is a serious step up from where Meta stood with its Llama family. It may not break the internet, but with tons of resources, valuable data across its platforms, and billions of users, Meta's AI efforts just took a step in the right direction.
🎭HeyGen's Avatar V Solves AI's Identity Drift
HeyGen released Avatar V, a new model the company calls "the most realistic AI avatar model in the world," claiming it eliminates identity drift, the tendency for AI-generated faces to stop resembling the user over time.
- The system builds a full video avatar from a short 15-second phone recording, capturing the user's real facial details, gestures, and movement patterns
- The model also separates identity from appearance for the first time, allowing users to record once, then swap outfits and backgrounds without filming again
- HeyGen says Avatar V outperformed Google's Veo 3.1 on accuracy and lip sync in internal tests, while also beating out Kling and Seedance in blind tests
Why it matters: Just like image and video models, AI avatars have come a ridiculously long way over the last few years, going from simple mouth movements to mimicking a user's micro-movements for outputs that are hard to distinguish from real footage. While some may scoff at the idea of an 'AI twin', the content creation landscape is changing with or without them.
📺Build an Automated Ad Generator with ElevenLabs Flows
In this guide, you will learn how to turn a product photo into a finished video ad using ElevenLabs Flows, a new workflow builder that bundles image, video, voice, and music generation in one place.
- Open ElevenLabs, click ElevenCreative > Flows, then hit + New Flow. Name it [product line] ad template so you can reuse it
- Add an Image Generation node and upload 1-3 product shots. Prompt with a scene, like: "The product on a white pedestal, studio product shoot, soft morning light, photorealistic product shot"
- Add a Video Generation node, drag a line from the image node's output into the video node's start frame, and prompt: "Slow cinematic push-in on the product, soft morning light drifting across the scene, shallow depth of field"
- Click Run on the video node > Run till here to generate an image and video in one go. Then, swap out image/video prompts, and quickly iterate on creatives
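Conceptually, the steps above build a small dependency chain of nodes, where "Run till here" resolves everything upstream before the node you clicked. A minimal Python sketch of that idea follows; the `Node` class and traversal are assumptions made for illustration, not ElevenLabs' actual API:

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    """Hypothetical stand-in for a Flows canvas node."""
    name: str
    prompt: str
    inputs: list = field(default_factory=list)  # upstream nodes


# Wire the graph from the steps above: image output feeds the video's start frame
image = Node("Image Generation", "The product on a white pedestal, soft morning light")
video = Node("Video Generation", "Slow cinematic push-in on the product", inputs=[image])


def run_till_here(node):
    """Resolve upstream nodes first, then this one — mirrors 'Run till here'."""
    order = []
    for parent in node.inputs:
        order.extend(run_till_here(parent))
    order.append(node.name)
    return order


print(run_till_here(video))  # image generates before the video that depends on it
```

Iterating on creatives then amounts to editing a node's `prompt` and re-running from that node down, since everything upstream is unchanged.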
Pro tip: For audio, add a Text-to-Speech or Music node and connect it to a Mix Audio node alongside the video. You can also try this on other products by duplicating the canvas and swapping in new images.

⚒️ Anthropic Simplifies Agent-Building with Claude Managed Agents
Anthropic opened a public beta for Claude Managed Agents, a new platform that lets developers go from an agent idea to a live product in days — handling all the backend plumbing that used to take engineering teams months to set up.
- Users pick the task, tools, and guardrails, with Managed Agents handling running, securing, and controlling what the agentic system can access
- Agents can work solo for hours without dropping state, with a coordination mode also in preview, letting one agent farm out subtasks to others
- Notion, Rakuten, Asana, and Sentry are early adopters, with Rakuten reportedly setting up agents across five departments in about a week each
- Each agent session costs $0.08 per hour on top of the usual AI usage fee, with users paying based on consumption instead of upfront platform fees
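As a back-of-envelope check on that pricing, the total is just the hourly session fee plus whatever the model usage comes to. A quick sketch, where the usage figure is a made-up example rather than a real quote:

```python
SESSION_RATE = 0.08  # $ per hour of agent session time, from the announcement


def session_cost(hours, model_usage_usd):
    """Platform fee plus model usage; the usage amount is a hypothetical input."""
    return round(hours * SESSION_RATE + model_usage_usd, 2)


# A 6-hour autonomous session with $1.50 in model usage costs under $2 total
print(session_cost(6, 1.50))
```

The point of the consumption model is visible in the numbers: the platform fee stays small relative to model usage, so cost scales with what the agent actually does.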
Why it matters: Anthropic continues to roll out features that strip away the complexity standing between users and the full value of its models and tools. Managed Agents does the same for agents, simplifying the building process and making it possible for anyone to deploy and control them without the typical backend headaches.