Yesterday, the AI startup Runway, which also helped develop the AI image generator Stable Diffusion, released its first mobile app. The app is powered by Gen-1, Runway's video-to-video generative AI model, and is currently available only on iOS.
With the app, users can quickly generate an AI video from their phone, or transform an existing video using text prompts, images, or a set of preset templates.
Users can pick from Runway's list of preset styles, such as "Cloudscape," or transform their video to look like a claymation movie, a charcoal sketch, a watercolor painting, paper origami and more. Alternatively, they can upload a reference photo or type a prompt into the text box.
The app then generates four previews of the final result and lets the user pick their favorite. Once they've chosen, the video takes a little while to finish rendering. When we tried the app, generation usually took about a minute, and occasionally up to two.
Naturally, AI-generated videos aren't always perfect and can look distorted. Some may dismiss the technology as gimmicky. But as it improves over time, it could become genuinely useful, especially for content creators looking to make their social media posts more interesting.
Even so, we found the Runway mobile app easy to navigate and a lot of fun to play around with.
Here's an example of what we got when we ran a clip of Michael Scott from "The Office" through the text prompt "realistic puppet."
Fair warning: the result is unsettling.
We also experimented with the prompt "3D animation," and the result was decent.
Of course, there are a few other limitations beyond technical hiccups and distorted faces.
The free version is capped at 525 credits, and you can only upload clips up to five seconds long. Each second of video consumes one credit.
Runway plans to add support for longer videos in the future, co-founder and CEO Cristóbal Valenzuela told TechCrunch. He added that the app will continue to improve and roll out new features.
Valenzuela said the priority is improving efficiency, quality, and control. In the coming weeks and months, users will see a range of changes, from longer generations to higher-quality videos.
Note that the app won't generate sexual content or content based on copyrighted material, so you won't be able to make clips that mimic well-known intellectual property.
The app offers two subscription tiers: Standard ($143.99/year) and Pro ($344.99/year). The Standard plan includes 625 credits per month, along with 1080p video, unlimited projects, and other premium perks. The Pro plan offers 2,250 credits per month and access to all 30+ of Runway's AI tools.
A month after releasing its initial Gen-1 model in February, Runway followed up with the more advanced Gen-2. This newer model is a text-to-video generator, going a step beyond text-to-image models like Stable Diffusion and DALL-E: users can now create videos entirely from scratch.
Runway has been gradually rolling out access to a trial of the newer model, Valenzuela told us.
The app currently supports only the Gen-1 model, but support for Gen-2 and a selection of other AI tools, such as its image-to-image generator, is coming soon.
Meta and Google have both launched text-to-video generators of their own, called Make-A-Video and Imagen, respectively.
Since launching in 2018, Runway has built a variety of AI-powered video editing software. Its web-based video editor offers numerous tools, including frame interpolation, background removal, blur effects, audio cleanup and erasing, and motion tracking, to name a few.
These tools have cut the time it takes influencers and film and television studios to edit and generate videos.
For example, the visual effects team behind "Everything Everywhere All at Once" used Runway's technology to create the scene in which Evelyn (Michelle Yeoh) and Joy (Stephanie Hsu) inhabit a universe where they have become animated rocks.
The graphics team behind CBS's "The Late Show with Stephen Colbert" cut hours of editing down to just five minutes using Runway, according to art director Andro Buneta.
Runway also operates a division dedicated to entertainment and production, Runway Studios.