An image diffusion model, in its simplest form, generates an image from a prompt. The prompt can be text or an image, as long as a suitable encoder is available to convert it into a tensor that the model can use as a condition to guide the generation process. Text prompts are probably […]
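To make the idea concrete, here is a minimal text-to-image sketch using the Hugging Face diffusers library. The checkpoint name and prompt are illustrative assumptions, not something prescribed by this excerpt.

```python
# A minimal text-to-image sketch with diffusers.
# The checkpoint below is an assumed example; any Stable Diffusion 1.x model works.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The text prompt is encoded by the pipeline's text encoder into a conditioning
# tensor that guides the denoising process.
image = pipe("a watercolor painting of a lighthouse at sunset").images[0]
image.save("lighthouse.png")
```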
Using OpenPose with Stable Diffusion
We have just learned about ControlNet. Now, let’s explore one of the most effective ways to control a character’s pose. OpenPose is a great tool that can detect body keypoint locations in images and video. By integrating OpenPose with Stable Diffusion, we can guide the AI to generate images that match specific poses. In […]
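A brief sketch of pose-guided generation follows, assuming the controlnet_aux and diffusers packages; the model IDs and the reference file name are illustrative assumptions, not taken from the post.

```python
# Pose-guided generation: extract keypoints with OpenPose, then condition
# Stable Diffusion on the pose map through an OpenPose ControlNet.
import torch
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Detect body keypoints in a reference photo (hypothetical local file).
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
reference = load_image("reference_pose.jpg")
pose_map = openpose(reference)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The generated character follows the pose encoded in the keypoint map.
image = pipe("a knight in silver armor, studio lighting", image=pose_map).images[0]
image.save("knight_in_pose.png")
```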
Using ControlNet with Stable Diffusion
ControlNet is a neural network that improves image generation in Stable Diffusion by adding extra conditions. This gives users more control over the generated images. Instead of repeatedly trying different prompts, ControlNet models let users produce consistent images from a single prompt. In this post, you will learn how to […]
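As one possible illustration of an "extra condition", the sketch below uses a Canny edge map to constrain the layout of the output. It assumes diffusers and opencv-python are installed; the model IDs and file names are placeholders.

```python
# ControlNet with a Canny edge map as the extra condition.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Turn a reference image (hypothetical local file) into a 3-channel edge map.
source = np.array(load_image("reference.jpg"))
edges = cv2.Canny(source, 100, 200)
edges = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The same prompt now reliably follows the structure encoded in the edge map.
image = pipe("a cozy cabin in a snowy forest", image=edges).images[0]
image.save("cabin_canny.png")
```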
Inpainting and Outpainting with Stable Diffusion
Inpainting and outpainting have long been popular and well-studied image processing problems. Traditional approaches often relied on complex algorithms and deep learning techniques yet still produced inconsistent results. Recent advances in the form of Stable Diffusion have reshaped these domains. Stable Diffusion now offers enhanced efficacy in inpainting and outpainting while […]
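A short inpainting sketch with diffusers is shown below; the checkpoint and file names are assumptions, and the mask is expected to be white where new content should be painted.

```python
# Inpainting: repaint only the masked region of an existing image.
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("photo.png")   # hypothetical source image
mask_image = load_image("mask.png")    # white = region to repaint

image = pipe(
    prompt="a wooden park bench",
    image=init_image,
    mask_image=mask_image,
).images[0]
image.save("inpainted.png")
```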
Generate Realistic Faces in Stable Diffusion
Stable Diffusion’s latest models are very good at generating hyper-realistic images, but they can struggle with accurately generating human faces. We can experiment with prompts, but to get seamless, photorealistic results for faces, we may need to try new methodologies and models. In this post, we will explore various techniques and models for generating highly […]
Using LoRA in Stable Diffusion
The deep learning model behind Stable Diffusion is huge: its weight file is several gigabytes in size. Retraining the model means updating a large number of weights, which is a lot of work. Sometimes we must modify the Stable Diffusion model, for example, to define a new interpretation of prompts or to make the model […]
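The sketch below shows how a LoRA can be applied on top of a base checkpoint with diffusers; the LoRA repository ID is a hypothetical placeholder, not one named by the post.

```python
# Applying a LoRA: only the small adapter weights are loaded, while the
# multi-gigabyte base model stays untouched.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical LoRA repository ID, used here only for illustration.
pipe.load_lora_weights("some-user/some-style-lora")

image = pipe("a portrait of an astronaut, in the adapted style").images[0]
image.save("lora_portrait.png")
```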
Prompting Techniques for Stable Diffusion
In all cases, generating pictures with Stable Diffusion involves submitting a prompt to the pipeline. The prompt is only one of the parameters, but it is the most important one. An incomplete or poorly constructed prompt can make the resulting image very different from what you expect. In this post, you will learn some key techniques to construct […]
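As a small sketch of prompt construction, the example below combines subject, style, and quality keywords with a negative prompt and a guidance scale; the checkpoint and wording are illustrative assumptions only.

```python
# Prompt construction: descriptive positive prompt, negative prompt, and
# guidance scale passed to the pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = (
    "portrait photo of an elderly fisherman, weathered skin, "
    "soft window light, 85mm lens, highly detailed"
)
negative_prompt = "blurry, low quality, deformed hands, extra fingers"

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    guidance_scale=7.5,        # how strongly the prompt steers generation
    num_inference_steps=30,
).images[0]
image.save("fisherman.png")
```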
How to Create Images Using Stable Diffusion Web UI
Launching the Stable Diffusion Web UI takes only one command. After that, you can control the image generation pipeline from a browser. The pipeline has many moving parts, and all of them matter in one way or another. To effectively direct Stable Diffusion to generate images, you should recognize the widgets from […]
A Technical Introduction to Stable Diffusion
The introduction of GPT-3, particularly in its chatbot form as ChatGPT, proved to be a monumental moment in the AI landscape, marking the onset of the generative AI (GenAI) revolution. Although earlier models existed in the image generation space, it is the GenAI wave that caught everyone’s attention. Stable Diffusion is a member of the […]
Brief Introduction to Diffusion Models for Image Generation
The advance of generative machine learning models has made computers capable of creative work. For drawing pictures, there are a few notable models that can convert a textual description into an array of pixels. The most powerful models today belong to the family of diffusion models. In this post, you […]