Overview
Syllabus
I'm not using the vanilla Stable Diffusion 1.4 checkpoint for my textual inversion training or img2img. The model I'm using is a 0.5/0.5 weighted-sum merge of Stable Diffusion 1.4 and Waifu Diffusion 1.3 (not the final 1.3 model). I go over this in the video at timestamp .
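The 0.5/0.5 weighted-sum merge mentioned above can be sketched as below. A real merge iterates over the torch state dicts loaded from the two .ckpt files; the tiny float "state dicts" and the key names here are stand-ins so the sketch runs on its own.

```python
# Sketch of a 0.5/0.5 weighted-sum checkpoint merge (assumption: both
# models share the same architecture, so their state dicts share keys).

def weighted_sum_merge(state_a, state_b, alpha=0.5):
    """Blend two model state dicts key by key: (1 - alpha) * A + alpha * B."""
    return {key: (1.0 - alpha) * state_a[key] + alpha * state_b[key]
            for key in state_a}

# Stand-ins for the Stable Diffusion 1.4 and Waifu Diffusion 1.3 weights.
sd_14 = {"unet.block.weight": 0.0, "unet.block.bias": 1.0}
wd_13 = {"unet.block.weight": 1.0, "unet.block.bias": 3.0}

merged = weighted_sum_merge(sd_14, wd_13, alpha=0.5)
print(merged)  # {'unet.block.weight': 0.5, 'unet.block.bias': 2.0}
```

At alpha = 0.5 each model contributes equally, which is what the video's merged model uses; other alpha values bias the merge toward one parent.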
- Intro/Preview of character artwork created with the help of stable diffusion
- Some context about textual inversion
- Short rundown of the image dataset I used as input to textual inversion training
- Continuation of textual inversion process
- Explanation of checkpoint merging
- Checking training progress
- Img2img demo on character sketch
- Explanation of prompts
- How to use loopback and why I use it
- Sample output of loopback generation
- Short narrated real-time demo of painting over loopback images
- Demonstration of spot healing brush to correct irregularities
- Painting the ear
- Start of the image finalization
- Sharpening the image: Filter > Other > High Pass with a lowish radius (1.0 - 1.5), then set the layer to the Hard Light blending mode
- Adding bloom: a Filter > Blur > Gaussian Blur pass, then set the layer to the Screen blending mode
- Camera Raw Filter
- Using the third-party plugin AKVIS ArtWork to add a slight painterly effect. Use sparingly.
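The loopback step in the syllabus above can be sketched as a simple feedback loop: each img2img output becomes the next input, so the image drifts further toward the prompt with every pass. `toy_img2img` is a stand-in for a real img2img call (it nudges a single brightness value toward 1.0 in proportion to denoising strength), and the `decay` parameter is an assumed per-loop denoising-strength change factor.

```python
# Minimal sketch of loopback img2img: feed each output back in as input.

def loopback(init_image, img2img, loops=4, denoise=0.5, decay=1.0):
    """Run img2img `loops` times, chaining outputs; returns every stage."""
    image, history = init_image, [init_image]
    for _ in range(loops):
        image = img2img(image, denoising_strength=denoise)
        history.append(image)
        denoise *= decay  # optionally weaken each successive pass

    return history

def toy_img2img(image, denoising_strength):
    # Stand-in: a real call would run the diffusion model on the image.
    return image + denoising_strength * (1.0 - image)

print(loopback(0.0, toy_img2img))  # [0.0, 0.5, 0.75, 0.875, 0.9375]
```

Keeping every intermediate stage is the point of the workflow: you pick the loop iteration that landed closest to what you want and paint over it.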
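The High Pass sharpening step above can also be expressed numerically: High Pass keeps the original minus its blurred copy (centered at mid grey), and compositing that result in Hard Light mode boosts local contrast. This sketch uses a crude box blur in place of Photoshop's Gaussian blur, and the blend formulas are the standard ones, not anything Photoshop-specific from the video.

```python
import numpy as np

def box_blur(img, radius=1):
    # Crude stand-in for a Gaussian blur: average over a square window.
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    size = 2 * radius + 1
    for dy in range(size):
        for dx in range(size):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / size**2

def high_pass(img, radius=1):
    # High Pass: original minus low frequencies, centered at 0.5 grey.
    return np.clip(img - box_blur(img, radius) + 0.5, 0.0, 1.0)

def hard_light(base, blend):
    # Standard Hard Light blend: multiply in the dark half, screen in the light half.
    return np.where(blend < 0.5,
                    2.0 * base * blend,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - blend))

def sharpen(img, radius=1):
    """Composite the high-pass layer over the image in Hard Light mode."""
    return np.clip(hard_light(img, high_pass(img, radius)), 0.0, 1.0)
```

On a flat region the high-pass layer is exactly mid grey, so Hard Light leaves it untouched; only edges, where the high-pass layer departs from 0.5, get pushed apart. That is why a low radius (1.0 - 1.5) sharpens fine detail without haloing.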
Taught by
kasukanra