Overview
Syllabus
0:00 Comparing AI video models in ComfyUI
0:10 Integrated video nodes explained
0:31 Downloading required model files
0:55 Node modifications for consistency
1:15 Prebuilt workflows in ComfyUI
1:40 Checking your ComfyUI version
2:00 Using the workflow browser
2:22 Video API vs local models
2:43 Free models for local rendering
3:00 Exploring video model templates
3:28 Test setup and render time settings
4:11 Missing models and how to fix them
5:00 YAML setup for model paths
5:41 Red nodes and missing node issues
6:00 Installing ComfyUI the right way
6:30 ComfyUI course info & promo
7:02 Custom nodes: time tracking and output format
7:45 Switching to MP4 and wide resolution
8:40 Standard prompts and testing setup
9:50 Testing Wan 2.1 T2V Text-to-Video
11:00 Results and quality discussion
11:40 Image-to-Video test with Wan
13:00 Performance and render time review
14:16 Testing the FLF2V 720p model
15:30 High render time due to resolution
16:00 LTXV Text-to-Video test
17:00 Fast renders but poor coherence
18:10 LTXV Image-to-Video results
18:55 Poor result and user feedback request
19:40 Mochi Text-to-Video test
20:20 Great visual results and sharpness tips
20:55 Using Topaz for upscaling
21:16 Hunyuan model testing
21:50 Oversaturation issues in default setup
22:40 Comparing default workflow vs custom
23:10 SVD Text-to-Video test
23:30 Fast but low quality
24:00 SVD Image-to-Video performs better
24:40 Overall results and recommendations
25:10 Best models summary: Wan, Mochi, Hunyuan
25:40 Final thoughts and viewer feedback request
Taught by
Vladimir Chopine [GeekatPlay]