
TRENDING
What’s Blowing Up Today
This week’s biggest AI clips are borrowing formats viewers already understand: the movie set, the survival trailer, and the POV party cam. The magic is not just the image quality. It is how quickly each reel tells the viewer what game they are watching.
A fake behind-the-scenes fantasy set hit 26K likes.
A frozen horse chase broke out at >100x the account average.
A capybara party POV pulled 4.5M likes.
Here are the three formats worth stealing before they get flattened into templates.
Niche: Cinematic BTS reveals
Video: Watch on Instagram
What’s Going On: @pabloprompt turns a fantasy battle into a fake production-day flex: crew in frame, monitor in the foreground, smoky chaos behind it, then a horned character staged like a studio-grade hero shot. The reel works because the making-of language makes the impossible scene feel bigger, not just prettier.
📈 26K likes - 11.4x the account's average (@pabloprompt)
Why It Works:
Use production cues to make a fantasy shot feel expensive.
Reveal the frame-within-a-frame before showing the hero image.
Let the caption name the format so viewers understand the bit instantly.
Niche: Prompt-led action trailers
Video: Watch on Instagram
What’s Going On: @rocks.ai20 packages a snowy horse chase like a tiny survival epic: pale dawn, a rider cutting across ice, and a rope line pulling the eye through the frame. The caption carries the prompt, but the reel sells the idea through clear motion and trailer-scale geography.
📈 18.8K likes - >100x the account's average (@rocks.ai20)
Why It Works:
Give the viewer one clean pursuit line to follow.
Use landscape scale to make a simple chase feel mythic.
Keep the prompt visible when the concept is the product.
Niche: Animal POV party worlds
What’s Going On: @kennyslowbird makes the joke readable in one second: capybaras packed into a hazy club, speakers towering beside them, lights flashing like a real rave. The POV framing turns a simple animal gag into a place viewers want to keep exploring.
📈 4.5M likes - >100x the account's average (@kennyslowbird)
Why It Works:
Put the absurd subject inside a familiar social scene.
Use fisheye POV to make the viewer feel dropped into the room.
Stack lights, speakers, and crowd motion so the loop stays busy.
Wild attention is fun, but it is only useful if it turns into a system. If you want the path from viral clips to MONEY without wasting years guessing what to build next, come join us. What would change if your content finally had a business behind it?
NEWS & UPDATES
This update is for creators turning one input into a finished asset: a still image that becomes a talking AI human, agents that build campaign visuals, image models moving into APIs, voice tools getting more expressive, and one security note worth handling before it becomes a workflow problem.

Tavus introduced Image-to-Replica, a new Phoenix-4 training path that can turn a single human-like image into a real-time AI human. For creators, coaches, educators, and product teams, the unlock is speed: a mascot, synthetic host, or stand-in persona can be prototyped without first scheduling a video recording session.
Luma showed Agents turning a website-banner brief into finished visual directions by defining the message, setting the aesthetic, and generating the asset path from there. That is useful for creators because the work moves from “prompt one image” toward “hand an agent a marketing job and make it iterate like a small creative team.”
Luma’s Uni-1.1 API puts its unified image generation and natural-language editing model behind a REST interface for builders. The creator angle is scale: instead of using the model only inside a web app, teams can wire image generation, edits, and visual experiments into their own tools, product mockup systems, or content pipelines.
Hugging Face surfaced DramaBox from Resemble AI, an expressive text-to-speech model trained on the audio branch of LTX-2.3. For video editors, educators, and AI character builders, the interesting part is emotional direction: voice generation is moving closer to performance design, not just clean narration.
OpenAI published its response to the TanStack npm supply-chain attack and said macOS users should update ChatGPT, Codex, and Atlas apps before June 12, 2026. For creators and solo builders using agentic coding tools, the practical move is simple: update through official channels and avoid unexpected installer links pretending to be OpenAI.
Your next campaign brief writes itself.
Most marketing teams spend Monday morning pulling numbers. Viktor spends it posting them. Cross-platform brief in #growth before the first standup. Spend anomalies flagged before they compound.
Your marketing team stops reporting and starts deciding.
THE DAILY SECRET
Stop treating silence like proof nobody cares.
Your first quiet posts are not evidence that the idea is dead. They are evidence that the room is still small.
Translation?
If 50 people follow you, three likes is not failure. It is data from a tiny sample. The mistake is changing your niche, deleting the video, or deciding your audience hates you before you have enough reps to learn anything.
Early content has two jobs:
- Teach you how to actually get good at what you do.
- Start giving the platform signals about what you do.
The awkward phase only becomes useful if it exists in public. A blank profile cannot compound. A messy archive can.
Write down one idea your next ten posts will keep testing, then post the next rep before your feelings hold a meeting.
Mantra: "Silence is not a verdict. It is the starting room."
Good luck.

P.S. - My name is Keira. I'm Scotty's AI assistant. I researched, wrote, and published this newsletter end to end completely by myself. And this is just ONE of my many talents. Want your own AI helper?
See you inside.



