Runway, the AI startup that helped develop the AI image generator Stable Diffusion, launched its first mobile app on Tuesday, giving users access to Gen-1, its video-to-video generative AI model. The app is currently only available on iOS devices.
With the new app, users can record a video on their phones and generate an AI video in just minutes. They can also transform any existing video in their library using text prompts, images, or style presets.
Users can choose from Runway's selection of presets, such as "Cloudscape," or transform the video so it appears as claymation, a watercolor painting, paper origami, and more. Users can also upload an image or type an idea into the text field.
The app then generates four previews to pick from. Once you've selected the one you like best, it takes a few minutes to render the final version. When we tried the app ourselves, generation took around 60 seconds or more, and sometimes as long as two minutes.
Naturally, as with any AI generator, the outputs aren't perfect and are often distorted or downright strange. The whole concept of AI video generators may seem absurd, even silly, but it's easy to see how impressive the technology could become as it develops over time.
Meta and Google have both released text-to-video generators, called Make-A-Video and Imagen Video, respectively.
Still, we found Runway's mobile app simple to use and fun to play around with.
Below is an example we created using a scene of Michael Scott from "The Office." The text prompt we entered was "realistic puppet."
There are a few other limitations, though, beyond glitches and distorted faces.
Users who don't upgrade to a paid plan are limited to 525 credits, and they can only upload videos that are five seconds long. Each second of video uses 14 credits.
In the near future, Runway plans to add support for longer videos, co-founder and CEO Cristóbal Valenzuela told TechCrunch. The app will continue to evolve and add new features over time, he explained.
"We're working to increase the efficiency, quality, and control. In the coming weeks and months, you'll see a variety of improvements, from longer outputs to higher-quality video," Valenzuela said.
Also, note that the app doesn't generate nudity or works protected by copyright, so you won't be able to create videos that mimic the style of well-known IP.
Runway's new mobile app has two premium plans: Standard ($143.99/year) and Pro ($344.99/year). The Standard plan offers 625 credits per month, along with other premium features such as 1080p video and unlimited projects. The Pro plan gives you 2,250 credits per month plus access to all of Runway's 30-plus AI tools.
A month after launching Gen-1, which debuted in February, Runway released its Gen-2 model. Arguably a step up from text-to-image models like Stable Diffusion and DALL-E, Gen-2 is a text-to-video generator, meaning users can generate videos from scratch.
Valenzuela told us that the company is slowly rolling out access to its closed beta for Gen-2.
The app currently only supports the Gen-1 model, but Gen-2 will be added in the near future, along with Runway's other AI tools, including its image generator.
Runway has developed a variety of AI-powered video-editing software since its launch in 2018. Its web-based video editor offers a range of tools, such as frame interpolation, background removal, blur effects, a feature that cleans up or removes audio, and motion tracking, to name a few.
These tools have helped content creators and even film and TV studios cut down on the time they spend editing and producing videos.
For instance, the visual effects crew behind "Everything Everywhere All at Once" used Runway's technology to help create the scene in the film where Evelyn (Michelle Yeoh) and Joy (Stephanie Hsu) are trapped in the multiverse, which has turned them into moving rocks.
In addition, the graphics team behind CBS' "The Late Show with Stephen Colbert" used Runway to cut its editing time down to just five minutes, according to art director Andro Buneta.
Runway also runs Runway Studios, its entertainment and production division.
"AI startup Runway launches app to bring users the power of video-to-video generative AI" was written by Lauren Forristal and originally published on TechCrunch.