Things I noticed creating an AI music video in my own image. 🔥

1. Consistent characters have come a long way for images, but modeling for video is still a bit of a challenge.
2. With the variability of AI, the iterative creative process is on hyperdrive. Lots of switching back and forth between generation and editing tools to craft a final piece of work.
3. It was helpful to start with a really strong base image, then build around it. Starting close up and zooming out gave me the best consistency when generating video.

—-
Models used
→ Nano Banana - image generation, iteration
→ Kling 2.1 - video (first/last frame), lip sync
→ Veo 3 - singing (the secret sauce 💥)
→ Runway Aleph - video inpainting
→ Trained a LoRA on my image
—-
Most of all, I learned how difficult an editor's job is. A lot of work went into this 1 minute and 30 seconds. 🥲 Enjoy!
—-
🎵 Song: THE DRIFT - performed by me, co-written with Damien Foster.
🎥 Watch on YouTube: https://lnkd.in/gBrhyJY4
Impressed! But when's the next IRL event? :)
Amazing Lucy Liu! You blended your music, images, videos and your voice so well!
Very cool :)
🔥🔥🔥 so good Lucy
niiiice!
Looks great! I’ve been holding off on doing this myself cause it seems to be a ton of work still.
Always love to hear people talk about their challenges and experiences using the new tech. You said videos are still a bit of a challenge - how so?
Very cool! Do you use any AI for the music production?