Your Laptop Just Got Promoted to Director
AI video just made the leap from “meh” to magic — and it’s finally usable without a supercomputer or a PhD. Here’s what changed (and why it matters).

Welcome back, apprentices! 👋
Remember when AI video felt like dial-up internet in a Netflix world? You’d wait 20 minutes for 4 seconds of robotic motion, only to realize the camera angle was wrong, the vibe was off, and you had to start all over.
Well, LTXV just flipped the script.
It streams videos live, lets you edit mid-scene, and runs on your own laptop — no supercomputer, no PhD, no waiting room.
This isn't prompting. It's directing. And it’s as close to movie magic as AI has ever gotten.
In today's email
Create 60+ second AI videos — live, no waiting.
How to run it on your own laptop (no cloud GPUs needed).
Direct your video mid-generation with real-time edits.
Why LTXV beats competitors in freedom and control.
+ more AI news
Read Time: 4 minutes
Quick News
💔 Well, that escalated quickly. After a $3B deal with OpenAI collapsed over Microsoft-related drama, Windsurf’s CEO Varun Mohan and key researchers are heading to Google in a $2.4B licensing pact. Windsurf will stay independent (for now), but Mohan will join Google’s Gemini team to work on agentic coding — with a fat multi-year comp plan in tow. Meanwhile, OpenAI’s summer just got even messier. Big Tech breakups are getting spicy — and Microsoft’s shadow looms large over who gets to own the next-gen coding agents.
🧠 $300 a Month to Outsmart Everyone? Elon’s xAI just launched Grok 4 and Grok 4 Heavy, two beefed-up reasoning AIs that scored state-of-the-art on tough benchmarks like Arc-AGI and Humanity’s Last Exam — even outpacing OpenAI’s o3 and Gemini 2.5 Pro. Grok 4 runs solo with voice, vision, and a 128K context window, while Grok 4 Heavy uses multiple agents and a 256K context via API — starting at $30/month, or $300 if you want the heavy-duty brainpower. It’s a big comeback moment after Grok 3’s controversial missteps.
🕵️‍♀️ $2B without showing any product. Former OpenAI CTO Mira Murati just pulled off a $2B seed round for her new startup, Thinking Machines Lab — and it hasn’t even launched a product yet. The company is building a multimodal AI that “sees and chats,” with open-source tools for researchers and startups. Valued at $12B in stealth mode, the first product drop is expected in the next few months — and yes, everyone’s watching. When a founder raises $2B without shipping, it’s not hype — it’s history in the making (or a very confident VC bet).
Together with I Hate It Here
The Secret Weapon for HR
The best HR advice comes from those in the trenches. That’s what this is: real-world HR insights delivered in a newsletter from Hebba Youssef, a Chief People Officer who’s been there. Practical, real strategies with a dash of humor. Because HR shouldn’t be thankless—and you shouldn’t be alone in it.
AI Video
How LTXV Just Leveled Up AI Video Forever
Coming soon.
— LTX Studio (@LTXStudio)
4:29 PM • Jul 16, 2025
AI video generation has been flashy but short — literally. Most models can only handle a few seconds of footage, and generating it often feels like watching paint dry.
That changes with LTXV, Lightricks' newly updated open-weight video model, which now supports real-time video generation up to 60 seconds, with instant first-second playback.
What makes it wild? You can control style, depth, and movement as the scene unfolds, like a live director calling shots mid-stream. It’s fast, open-source, and runs on consumer GPUs — no $10K workstation needed. With this release, Lightricks shifts the conversation from “can AI make video?” to “how would you direct one?”
What Makes LTXV So Game-Changing?
🚀 Real-Time Streaming
First second renders almost instantly.
No waiting for full clips. It just... flows.
It’s like watching your imagination animate itself in real time.
🎛️ Mid-Stream Prompt Control
Pose, depth, camera angle, motion — all editable while the video plays.
You’re not prompting anymore. You’re directing.
🧠 Runs on Your Gear
Choose between 13B, 13B distilled, or lightweight 2B models.
Optimized for mid-tier GPUs like RTX 3060.
Want to go mobile? The 2B version runs even on laptops.
📹 Longer Videos, Finally
Runway, Pika, and Veo top out at 4–12 seconds.
LTXV delivers 60+ seconds, streamed in one smooth take.
✅ Fully Licensed Data
Trained on clean, commercial-use-friendly datasets.
No copyright traps. Use it freely in your projects.
🛠️ Built for Production
Seamlessly integrates with LTX Studio.
You can generate, edit, and publish — without juggling five apps.
🌍 Open and Free
Download it today on GitHub or Hugging Face.
No license fees, no locked features. Total creative freedom.
How to Use LTXV in 5 Minutes or Less
Want to try it yourself?
Here’s the speedrun tutorial version — no GPU cluster or PhD needed.
1. 🛠 Clone and Install
git clone https://github.com/Lightricks/LTX-Video.git
cd LTX-Video
pip install -r requirements.txt
2. 📦 Download Model Weights
Choose your flavor:
ltxv-2b-0.9.8-distilled.safetensors for speed.
13b-distilled for quality.
Drop the file into:
models/checkpoints/
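The filename you pick here is exactly what you’ll point the inference script at in the next step. A tiny helper makes the layout explicit (purely illustrative — the 13B filename below is an assumption extrapolated from the 2B naming scheme, so check the actual release for the real name):

```python
from pathlib import Path

# Checkpoint filenames from the tutorial; the 13B entry is an assumed name.
CHECKPOINTS = {
    "fast": "ltxv-2b-0.9.8-distilled.safetensors",      # 2B distilled: speed
    "quality": "ltxv-13b-0.9.8-distilled.safetensors",  # 13B distilled: quality (assumed filename)
}

def checkpoint_path(flavor: str, root: str = "models/checkpoints") -> Path:
    """Return where the inference script expects the chosen weights to live."""
    return Path(root) / CHECKPOINTS[flavor]

print(checkpoint_path("fast"))
# models/checkpoints/ltxv-2b-0.9.8-distilled.safetensors (on Linux/macOS)
```

Whichever flavor you grab, just make sure the file actually sits in `models/checkpoints/` before moving on.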
3. 🎬 Generate Your First Video
python inference.py \
  --model-path models/checkpoints/ltxv-2b-0.9.8-distilled.safetensors \
  --prompt "A peaceful beach at sunset with gentle waves" \
  --num-frames 64 \
  --height 512 --width 768 \
  --fps 24 \
  --output beach_sunset.mp4
Boom — a 64-frame, 24fps video ready in minutes.
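One thing worth knowing before you tweak those numbers: clip length is just frame count divided by frame rate, so 64 frames at 24 fps is only about 2.7 seconds of footage. To approach the 60-second clips LTXV is capable of, you’d scale the frame count up accordingly. Quick arithmetic:

```python
# Clip duration = frame count / frame rate.
num_frames, fps = 64, 24
print(round(num_frames / fps, 2))  # 2.67 seconds

# Frames needed for a full 60-second clip at 24 fps:
print(60 * fps)  # 1440
```

Start small to dial in your prompt, then crank the frame count once you like what you see.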
4. 💻 Prefer No-Code? Use ComfyUI
If you like drag-and-drop workflows:
Install ComfyUI and clone:
https://github.com/Lightricks/ComfyUI-LTXVideo.git
Follow the ComfyUI setup guide.
Load a visual workflow and direct with sliders.
You’ll be tweaking frames mid-stream like a VFX wizard — with no timeline headaches.

Veo 3 is premium cinema in the cloud.
LTXV is the indie film kit in your backpack — real-time, open, and personal.
So Who Should Try It?
🎥 Creators: Reels, explainers, animations — all made on your laptop.
🧪 Startup Hackers: Prototype products without burning GPU cash.
🎓 Students/Researchers: Study and remix without NDA handcuffs.
🧠 Dreamers: This is what generative media was supposed to be — creative, fast, and yours.
Still rendering 5-second clips like it’s 2022?
Maybe it’s time to stop prompting...
…and start directing.
Help Your Friends Level Up! 🔥
Hey, you didn’t get all this info for nothing — share it! If you know someone who’s diving into AI, help them stay in the loop with this week’s updates.
Sharing is a win-win! Send this to a friend who’s all about tech, and you’ll win a little surprise 👀
Even Quicker News
🧠 Your AI’s “Thinking Process”? Not So Easy to Monitor. A new paper shows that while Chain-of-Thought reasoning looks more transparent, it’s actually fragile to monitor — and easy to fool with clever rewrites.
🎭 Some AIs Pretend to Be Good… Until They Don’t. A new study shows only a few major language models—like Claude 3 and Gemini 2.0—“fake alignment” by behaving safely in training but flipping during real use, and tweaks after training can either fix or break this illusion.
🧨 Meta AI Insider Compares Culture to Cancer. A departing Meta scientist slammed the company’s AI unit as fear-ridden and directionless in a leaked essay — just as Meta ramps up hiring for its new Superintelligence team.
🧪 Test the Prompt
A playground for your imagination (and low-key prompt skills).
Each send, we give you a customizable DALL·E prompt inspired by a real-world use case — something that could help you in your business or job if you wanted to use it that way. But it’s also just a fun creative experiment.
You tweak it, run it, and send us your favorite. We pick one winner to feature in the next issue.
Bonus: you’re secretly getting better at prompt design. 🤫
👑 The winner is…
Last week, we challenged you to test GPT-4o’s visual generation skills with this prompt.
Here’s the WINNER:

Congrats to Adam for his creation!🥳
Want to be featured next? Keep those generations coming!
🎨 Prompt: The Forgotten Workspace
Create an ultra-realistic close-up of a beautifully lit workspace that looks like it was suddenly abandoned mid-creation. The desk is made of [specific real-world material], and scattered across it are detailed objects like [creative tool or artifact], half-finished sketches, coffee stains, and fingerprints. A single ray of natural light from a nearby window highlights the dust in the air.
Style: cinematic realism, shallow depth of field, detailed texturing, natural lighting with soft shadows.
We’ll be featuring the best generations in our next edition!
FEEDBACK
How was today's everydAI?
DISCLAIMER: None of this is financial advice. This newsletter is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. Please be careful and do your own research.