Project Breakdown: Creating a Dynamic Music Video
When Omri approached me with his concept, he envisioned a storyline where a young party girl navigates through various nightlife scenes, encountering diverse characters and experiencing different atmospheres.
Initial Concept and Visualization
I started with Midjourney as my primary tool, focusing on designing the main character and detailing each party scene. Critical questions shaped the visual development: What does the club look like? Who attends these parties? What are the unique elements of each club's entrance? This process involved creating distinct club environments, ensuring that the clothing and appearance of the club-goers matched the specific atmosphere and color theme of each location.
Developing Story Elements
As the project progressed, I defined the more provocative aspects of each party, including the depiction of drugs and alcohol. This step was crucial for adding depth to the narrative and enhancing the realism of the party scenes.
Storyboard and Preliminary Animation
Next I created a comprehensive storyboard, which allowed us to align the static images with the soundtrack and evaluate the video's pacing and visual flow. For the animation, I used Runway Gen2 and Stable Diffusion SVD 1.1, both of which turn static images into short animated clips.
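For readers who prefer to script this step rather than drive a GUI, here is a minimal sketch of the same image-to-video pass using the diffusers library's Stable Video Diffusion pipeline. This is not the exact setup I used; the model ID, file names, and motion settings below are illustrative placeholders.

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the SVD 1.1 image-to-video model (placeholder repo ID; access may be gated).
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# A single storyboard still (hypothetical file name), resized to SVD's native size.
image = load_image("storyboard_frame_01.png").resize((1024, 576))

generator = torch.manual_seed(42)
frames = pipe(
    image,
    decode_chunk_size=8,
    motion_bucket_id=127,     # higher values request more motion
    noise_aug_strength=0.02,  # small amount of noise helps generalize from the still
    generator=generator,
).frames[0]

export_to_video(frames, "party_scene_01.mp4", fps=7)
```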
Advanced Animation Techniques
For a few standout moments in the video I wanted more striking visuals, so I used AnimateDiff in ComfyUI (Stable Diffusion) with a prompt scheduler to create morphing animations. For the zoom-in sequence at the beginning of the video, I used Midjourney's zoom-out feature, then stitched the images together and time-reversed the sequence in Adobe After Effects.
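To give a sense of how a prompt scheduler drives a morph, here is a small conceptual sketch: prompts are keyed to frame numbers, and each intermediate frame blends the conditioning of the two surrounding keyframes. In ComfyUI this is handled by a prompt-scheduling node; the frame numbers and prompts below are made up for illustration, not taken from the actual project.

```python
from bisect import bisect_right

# Hypothetical schedule: frame index -> prompt (values are illustrative only).
prompt_schedule = {
    0:  "neon techno club, crowd dancing, purple haze",
    32: "rooftop party at sunset, warm orange light",
    64: "underground rave, strobe lights, smoke",
}

def blend_for_frame(frame, schedule):
    """Return (prompt_a, prompt_b, weight): the two surrounding keyframe
    prompts and how far the frame sits between them (0.0 -> a, 1.0 -> b)."""
    keys = sorted(schedule)
    idx = max(0, min(bisect_right(keys, frame) - 1, len(keys) - 1))
    if idx == len(keys) - 1:
        return schedule[keys[idx]], schedule[keys[idx]], 0.0
    a, b = keys[idx], keys[idx + 1]
    return schedule[a], schedule[b], (frame - a) / (b - a)

# The morph emerges because each frame's conditioning drifts toward the next prompt.
for f in (0, 16, 32, 48, 64):
    p_a, p_b, w = blend_for_frame(f, prompt_schedule)
    print(f"frame {f:>3}: {1 - w:.2f} x '{p_a}'  +  {w:.2f} x '{p_b}'")
```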
Character Integration Challenges
The most complex part was creating visuals of my two stars, Omri Guetta and Takiru, together. I used a few different techniques. The first was straightforward: adding them as character reference images (Cref) in Midjourney to generate images of the DJs partying in a club. However, Midjourney kept producing identical twins of one character in every image. I then trained a dedicated model plus a LoRA for each DJ using Dreamlook.ai and Astria.ai, and used these models in Stable Diffusion to generate more varied images. Once I had enough good images of Omri and Takiru, I composited pairs of them together in Photoshop, and then animated the results with Gen2 and SVD 1.1 again.
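As a rough sketch of the Stable Diffusion step, this is how a trained character LoRA can be loaded and applied with the diffusers library. The base model, LoRA file name, trigger word, and prompt are placeholder assumptions; the LoRAs in the project were trained through Dreamlook.ai and Astria.ai and used in a comparable way.

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder base model; the actual project may have used a different checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Load the character LoRA (hypothetical path and file name) and bake it in at 80% strength.
pipe.load_lora_weights("./loras", weight_name="omri_dj_lora.safetensors")
pipe.fuse_lora(lora_scale=0.8)

# "omri_dj" is an assumed trigger word from training, not the real one.
image = pipe(
    prompt="photo of omri_dj man, DJ performing in a neon-lit club, crowd, bokeh",
    negative_prompt="blurry, deformed, extra limbs",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("omri_dj_club.png")
```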
Highlight Scene and Final Touches
In the video's climax, where Omri and Takiru perform, I wanted complex animations with a camera that rotates around the Y-axis. I achieved this with Stable Diffusion Deforum, which gave the DJs' performance a more dynamic, dimensional feel. The final edit was completed in Adobe After Effects, where I made color corrections and added manual zoom effects.
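For context, Deforum drives this kind of camera motion through keyframed motion schedules in its settings. The snippet below is an illustrative settings dict in Deforum's "frame:(value)" style showing a constant Y-axis rotation; the actual values and schedules used in the video were different.

```python
import json

# Illustrative Deforum-style motion schedule. Key names follow Deforum's
# settings format; the values are assumptions, not the project's settings.
deforum_settings = {
    "animation_mode": "3D",
    "max_frames": 120,
    "rotation_3d_y": "0:(1.2)",   # constant rotation around the Y axis, degrees per frame
    "rotation_3d_x": "0:(0)",     # keep the other axes still
    "rotation_3d_z": "0:(0)",
    "translation_z": "0:(0.4)",   # gentle push into the scene
}

# Deforum settings are typically stored as a JSON text file (hypothetical file name).
with open("dj_rotation_settings.txt", "w") as f:
    json.dump(deforum_settings, f, indent=2)
```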
Enhancement and Upscaling
To ensure the highest quality, the final video was processed through Topaz Video, which enhanced clarity and upscaled the resolution to 4K, making it suitable for delivery across various platforms.