GETTING STARTED
Video Nodes
The Video node generates short video clips from text prompts, still images, or both. Connect an Image node output to use your generated image as the starting frame, or describe a scene from scratch with text alone.
What you can do and how to use it
Text to Video — generate a video from a text description.
Image to Video — use an image as the starting frame.
First frame + End frame — some models accept a start and end image, letting you define the beginning and end state of the motion.
Reference images — certain models support references to guide visual style or subject consistency.
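The four input modes above can be thought of as one request with optional parts. A minimal sketch of that idea, assuming hypothetical names throughout (this is illustrative data modeling, not the app's actual API):

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class VideoRequest:
    """Hypothetical model of a Video node's inputs (all names illustrative)."""
    prompt: Optional[str] = None        # Text to Video
    first_frame: Optional[str] = None   # Image to Video: the starting image
    end_frame: Optional[str] = None     # only meaningful alongside a first frame
    references: List[str] = field(default_factory=list)  # style/subject guidance

    def is_valid(self) -> bool:
        # You need at least a prompt or a starting image, and an end frame
        # only defines "motion toward" something if a first frame exists.
        has_input = bool(self.prompt) or bool(self.first_frame)
        end_ok = self.end_frame is None or self.first_frame is not None
        return has_input and end_ok
```

For example, `VideoRequest(first_frame="a.png", end_frame="b.png")` is a valid first-frame + end-frame request, while an end frame on its own is not.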
How to use it:
1. Add a Video node to your canvas.
2. Write a prompt describing the motion and scene.
3. Optionally connect an Image node output to the first-frame input.
4. Select a model, then set aspect ratio, duration, and resolution.
5. Toggle Generate Audio on if you want the model to produce an audio track.
6. Click Generate.
The generated clip plays inline inside the node. Connect it to Video Trim or Video Stitch for further editing.
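The steps above amount to bundling a prompt, an optional start image, and settings into one generation request. A sketch of that flow with a stub function (the app is canvas-driven; `generate` and every parameter name here are hypothetical, not a real API):

```python
def generate(prompt, first_frame=None, model="Veo 3.1",
             aspect_ratio="16:9", duration_s=8, resolution="1080p",
             generate_audio=False):
    """Stub standing in for the node's Generate button (illustrative only)."""
    # A real backend would return a rendered clip; this just echoes the
    # request so the shape of the inputs is visible.
    return {
        "prompt": prompt, "first_frame": first_frame, "model": model,
        "aspect_ratio": aspect_ratio, "duration_s": duration_s,
        "resolution": resolution, "generate_audio": generate_audio,
    }

# Mirrors the steps: prompt + optional first-frame image + settings.
clip = generate("slow dolly-in on a lighthouse at dusk",
                first_frame="lighthouse.png", generate_audio=True)
```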
Available models, settings, and utility nodes
Available models (check the model picker for the current full list):
Veo 3 / Veo 3 Fast / Veo 3.1 / Veo 3.1 Fast (Google) — High-quality cinematic video with audio generation support
Kling 2.5 / 2.6 / 01 / 3.0 (Image to Video, Motion Control, Omni Video) — Natural motion with camera control options
Seedance 1 / 1.5 Pro / 2.0 / 2.0 Fast (ByteDance) — Longer durations, natural movement
Hailuo 02 (Minimax) — Sharp detail, strong motion quality
Key settings: Aspect ratio (16:9, 9:16, 1:1 and more depending on model) • Duration (4–15s, varies by model) • Resolution (720p / 1080p) • Generate Audio toggle.
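Because each setting's allowed values vary by model, a combination is valid only if the selected model supports it. A rough sketch of that kind of constraint check; the per-model limits below are illustrative placeholders, not the real ones (check the model picker for actual values):

```python
# Placeholder limits for illustration only -- not the models' real constraints.
MODEL_LIMITS = {
    "Model A": {"ratios": {"16:9", "9:16"},        "max_s": 8,  "res": {"720p", "1080p"}},
    "Model B": {"ratios": {"16:9", "9:16", "1:1"}, "max_s": 15, "res": {"720p", "1080p"}},
}

def settings_ok(model, ratio, duration_s, resolution):
    """True if every chosen setting falls inside the model's limits."""
    lim = MODEL_LIMITS[model]
    return (ratio in lim["ratios"]
            and duration_s <= lim["max_s"]
            and resolution in lim["res"])
```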
Video utility nodes: Video Trim (cut to start/end time) • Video Stitch (combine clips) • Video Frame Grab (extract a frame as image).
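Conceptually, the three utility nodes are simple operations on a clip's timeline. A minimal sketch under that assumption (the clip representation and function names are hypothetical, not how the app stores video):

```python
from dataclasses import dataclass

@dataclass
class Clip:
    source: str
    start_s: float
    end_s: float

    @property
    def duration_s(self) -> float:
        return self.end_s - self.start_s

def trim(clip: Clip, start_s: float, end_s: float) -> Clip:
    """Video Trim: cut to a new start/end time within the clip."""
    return Clip(clip.source, clip.start_s + start_s, clip.start_s + end_s)

def stitch(clips: list) -> float:
    """Video Stitch: total running time of the combined clips."""
    return sum(c.duration_s for c in clips)

def frame_grab(clip: Clip, at_s: float) -> float:
    """Video Frame Grab: absolute source timestamp of the extracted frame."""
    return clip.start_s + at_s
```

Trimming a 10-second clip to its 2s–6s window, for instance, yields a 4-second clip, and grabbing a frame 1 second into that trimmed clip reads from the 3-second mark of the source.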