Runway Gen-2 Image to Video Prompt Examples

When you want to create a video using the Runway Gen-2 AI model, you have three different ways to do it. The first is to make a video from a text prompt. The second option is to upload an image and transform that into a video. Finally, you can use a combination of an image and text to create a video.

I recently wrote an article about how to write prompts for Runway Gen-2, and it includes multiple prompt examples. I suggest you read that article if you want to better understand the process of writing prompts for Gen-2.

Today, I'm going to dedicate a full article to showcasing how you can turn images into videos using the Runway AI platform. Of course, the first step is to find a good image.

If you're an AI enthusiast, I recommend that you make your own images using popular generative AI products like Stable Diffusion and Midjourney. I have a lot of Midjourney-related articles on this website, so make sure you check them out if you want to improve your prompting skills.

How Does the Runway Gen-2 Image to Video Model Work?

The process of turning images into videos on the Runway AI platform is incredibly easy, and it's intuitive from the moment you sign in to your account. In case you don't have a lot of experience with this platform, I'll explain the process step by step.

If you haven't already made an account on the official Runway website, I suggest you do, because you're going to need one. You don't have to pay for a subscription right away since you can try out the platform for free.

Once you're signed in and on your dashboard page, you'll see a big button on the upper right side labeled "Image to Video", as pictured in the image below.

[Image: the "Image to Video" button on the Runway dashboard]

Pressing the button will immediately take you to a page where you can upload an image and generate a video from it. There's also a "Text" tab, which you can use if you want to include a text prompt along with the image.

Adding a text prompt isn't a necessary part of the process, but it can provide additional context to the AI model and affect how the video will look. I won't be doing that today since I believe those types of prompts deserve a separate article.
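Everything in this article is done through the browser, but Runway also offers a developer API for people who prefer scripting their workflows. Treat the snippet below as a rough sketch and an assumption on my part rather than official instructions: the model names and parameters exposed by the API may not match the Gen-2 model in the web editor, and the image URL and prompt text are made up. With the official runwayml Python SDK, an image-to-video request could look roughly like this:

# pip install runwayml  (the client reads the RUNWAYML_API_SECRET environment variable)
from runwayml import RunwayML

client = RunwayML()

# Hypothetical image URL and prompt text, purely for illustration.
task = client.image_to_video.create(
    model="gen3a_turbo",
    prompt_image="https://example.com/midjourney-warrior.png",
    prompt_text="an ancient warrior hiking along a forgotten path",
)
print(task.id)  # poll this task until the generated video is ready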

Runway Gen-2 Image to Video Prompt Examples

I decided to use only images that I've personally created in Midjourney for today's experiment. I will choose several images that I've made in the past and transform them into videos.

I could have made new ones, but I've already shared hundreds of great Midjourney images on this website, and it would be a shame not to use them for other projects. I'll give some background for each image I select, as well as the prompt I used to generate it.

/imagine an ancient warrior is hiking along a forgotten path to complete the biggest quest of his life --ar 16:9 --stylize 400 --niji 5

[Image: ancient warrior hiking along a forgotten path, made with the Midjourney stylize parameter]

This is a prompt I wrote in an article about how to use the stylize parameter in Midjourney. The moment I saw the image, I knew I wanted to use it in Runway, but I was worried about how the movement of the main character would be generated. In fact, I thought the character might not move at all. Let's see how the video turned out.

If I said the movement here isn't good, I'd be nitpicking. I'm not saying it can't be better, but it looks awesome the way it is. It looks like the intro scene of an animated TV show.

/imagine glitch art style, futuristic warriors that look like templar knights are meeting around a table to talk about how they will conquer the world, dystopia --ar 16:9

[Image: glitch art style futuristic warriors meeting around a table, made with Midjourney]

Bringing images to life isn't easy, and having multiple characters makes it even more difficult. It's incredibly hard to animate the head movements and facial expressions of every character. But since the characters in this image are wearing masks, I thought it would be a good candidate to animate.

I could extend this video and add some voices to it, and it would turn out great. Of course, that's not the point of this particular article. Today, I'm just showing you the basics of how to use the Runway image-to-video model.

/imagine ghostcore aesthetic, anime with supernatural beings, ghosts, spirits, undead, connections with the underworld, haunting, ethereal, mysterious, dark --ar 16:9 --niji 5

[Image: ghostcore anime aesthetic, made with Midjourney]

This is one of my favorite images that I made using Midjourney. I did this while exploring all of the aesthetics tokens that the AI model was trained on. I decided to turn the image into a short clip, and here's the result.

There isn't too much going on in this video, but it was brought to life in a great way. This type of video can easily be looped and used as a visual for a song that you're planning to upload to YouTube or any other similar platform.
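If you actually wanted to do that, one option is to loop the short clip under a full-length track with a small script instead of a video editor. Here's a minimal sketch, assuming the moviepy 1.x Python library and hypothetical file names:

from moviepy.editor import VideoFileClip, AudioFileClip, vfx

# Hypothetical file names: a short Runway clip and a full-length song.
clip = VideoFileClip("runway_ghostcore_clip.mp4")
song = AudioFileClip("song.mp3")

# Repeat the short clip until it covers the whole song, then attach the audio.
looped = clip.fx(vfx.loop, duration=song.duration).set_audio(song)
looped.write_videofile("ghostcore_visualizer.mp4", fps=24)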

/imagine unexplainable phenomenon in deep space, abstract, wondrous, strange, mysterious --ar 16:9 --s 500

[Image: unexplainable phenomenon in deep space, made with the Midjourney stylize parameter]

Just like the first AI-generated image in this article, this one was also the result of me experimenting with the stylize parameter. I love astronomy, and I enjoy making images like this one.

The video here is not exactly what I expected because things are happening too fast, but that doesn't mean I don't like it. I still think that it's a great clip that can be used in various forms of content.

/imagine photorealistic image shot with Nikon D850, 24mm lens, forest lake reflecting the surrounding foliage and a symphony of autumn colors, with a peaceful canoe gliding across the glass-like surface, wondrous, adventurous --ar 16:9

[Image: forest lake with a canoe in autumn colors, made with Midjourney]

Ever since the release of Midjourney V5, many people have started dedicating a lot more time and effort toward creating photorealistic images with the AI model. The results are always amazing when you write the right prompt.

Everything about this video is exactly how it's supposed to be. It just shows you that using the right image plays a huge role in how the result will be generated. The slow movement of the canoe is perfect.

/imagine photorealistic image captured with Sony A9, 24mm lens, cascading series of waterfalls forming a picturesque scene of water flowing gracefully through rugged terrain --ar 16:9

[Image: cascading waterfalls flowing through rugged terrain, made with Midjourney]

This is another photorealistic image that I made using Midjourney. It would be hard to tell whether it's AI-generated if you didn't have any information about the image beforehand.

The video looks good, but the one thing I would point out is that I wish the flow of the water were animated a little better. Maybe I would've gotten better results if I had used a different image.

/imagine fractalism, life in a world of deja vu, everything that’s already happened will happen again, there is no original idea, life just keeps on repeating, all formed memories already exist --ar 16:9

[Image: fractal art, made with Midjourney]

I've always liked looking at fractals. Knowing that the beauty of mathematics is ingrained in nature all around us is mind-blowing to me. Fractal patterns can be found practically everywhere.

I think the video could've been better if the camera started zooming in, but I'm not complaining. This still looks pretty amazing.

/imagine abstract art, the truth we have been seeking our entire existence is revealed, the revelations are shocking --weird 250 --ar 16:9

[Image: abstract art made with the Midjourney weird parameter]

The weird parameter is a useful tool in Midjourney when you're making abstract art. If you want to learn more about this parameter, I suggest you read my article on this topic.

I feel like this is another animation that can easily be looped and used as visual content to go alongside a song. I'm surprised more music artists haven't started doing this yet, but I think they're going to be doing it really soon.

The next two images don't have a prompt associated with them because I generated them from existing photos with only a few added words. If you would like to see how I did that, I recommend reading my article on Midjourney prompts for existing images.

[Image: edited Budapest skyline photo, made with Midjourney]

The original photo was taken in Budapest and didn't feature any people; it was just a skyline shot of the city. Midjourney added the woman to the image, and I animated her movement using the Runway AI platform. This just goes to show how much artificial intelligence can help you create good content quickly.

The animation here is awesome. The first thing I thought when I saw the result was that this clip should be featured in a music video. It makes sense why some artists have already started using Runway Gen-2 to create music videos.

I made the next image from a photo my brother took in Greece. The original image featured only one goat, but I decided to make it eerie because the original photo was already strange to begin with.

[Image: eerie goat photo from Greece, edited with Midjourney]

The image I made in Midjourney is really creepy. It's something you'd expect to see in a horror film. I didn't expect much when I uploaded it to the Runway image to video model, but I was pleasantly surprised with the result.

This is another example of a video made using Runway Gen-2 that would benefit from some voice acting. This is the type of clip that would make sense to include in a short film.

Final Thoughts

I definitely love how the Runway Gen-2 model works. Getting a good video from a text prompt alone can sometimes be frustrating because the results are unpredictable. I find that some of the best videos made with Runway Gen-2 have come from the image-to-video model.

As long as the image looks really good and combines beauty with relative simplicity, the generated video usually turns out great.

This model is also perfect when you want to create a more coherent piece of content by combining multiple Runway-generated videos. You can make a set of images that share a similar theme using Midjourney or any other text-to-image model, and then use those images to create a short film, a music video, or any other type of content that is normally at least a few minutes long.
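For anyone who wants to stitch a handful of Gen-2 clips together without opening a video editor, here's a minimal sketch using the moviepy 1.x Python library; the file names are hypothetical:

from moviepy.editor import VideoFileClip, concatenate_videoclips

# Hypothetical file names for a few Runway clips that share a theme.
clips = [VideoFileClip(name) for name in ["scene_01.mp4", "scene_02.mp4", "scene_03.mp4"]]

# Stitch the clips into one longer video; method="compose" tolerates small size differences.
film = concatenate_videoclips(clips, method="compose")
film.write_videofile("short_film.mp4", fps=24)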

I would say that if you want to use this model, you should also build up your skills in Stable Diffusion, Midjourney, or another similar AI platform. Pairing one of those platforms with Runway AI is an amazing combination that'll help you make better videos.
