Generating Images with Stable Diffusion | Generative AI Series

Introduction

Stable Diffusion is a text-to-image diffusion model developed by Stability AI. It is capable of generating high-quality, photorealistic images from text descriptions, and it produces coherent results even when the input description is complex or open-ended.

In this guide, you’ll set up the Stable Diffusion environment and query the model using a web user interface. Then, you’ll create a REST API to generate responses from the model and access the API through a Jupyter Notebook.

Prerequisites

Before you begin:

  • Deploy a new Ubuntu 22.04 A100 Vultr Cloud GPU Server with at least:
    • 80 GB GPU RAM
    • 12 vCPUs
    • 120 GB Memory
  • Establish an SSH connection to the server.
  • Create a non-root user with sudo rights and switch to the account.
  • Create a Hugging Face account.
  • Create a Hugging Face user access token.

Install Dependency Packages

Stable Diffusion requires several dependency packages to work. Install the packages using the following commands:

```console
$ sudo apt update
$ sudo apt install -y wget git python3 python3-venv libgl1 libglib2.0-0
```

Run Stable Diffusion in a Web Interface

You can run the Stable Diffusion model in a web interface. Follow the steps below to download an automated script that installs all the necessary packages, then load the model:

  1. Create a new `sd` directory and navigate to it.

     ```console
     $ mkdir sd
     $ cd sd
     ```

  2. Download the Stable Diffusion `webui.sh` file.

     ```console
     $ wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
     ```

  3. Add execute permissions to the `webui.sh` file.

     ```console
     $ sudo chmod +x ./webui.sh
     ```

  4. Allow port 7860 through the firewall.

     ```console
     $ sudo ufw allow 7860
     $ sudo ufw reload
     ```

  5. Run the `webui.sh` script to download the model and start the web interface.

     ```console
     $ ./webui.sh --listen
     ```

     Output:

     ```
     ... Model loaded in 18.6s (calculate hash: 10.6s, load weights from disk: 0.2s, create model: 1.9s, apply weights to model: 5.4s, apply half(): 0.1s, calculate empty prompt: 0.2s).
     ```

  6. In a web browser, visit the URL below. Replace PUBLIC_IP_ADDRESS with the public IP address of your GPU instance.

     http://PUBLIC_IP_ADDRESS:7860

  7. Type the following prompts and review the output:

     • A cute white cat sitting next to a computer keyboard

       Output: Sample cat photo

     • Taj Mahal during sunset, photo realistic, high quality

       Output: Sample Taj Mahal photo
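You can also drive the same interface from a script: when launched with the optional `--api` flag, the AUTOMATIC1111 web UI additionally serves a JSON API at `/sdapi/v1/txt2img`. The sketch below is a minimal, hypothetical client helper (the `build_txt2img_payload` name is illustrative, not part of the web UI) that assembles a request body for that endpoint:

```python
import json

# Hypothetical helper: build the JSON body for the AUTOMATIC1111
# /sdapi/v1/txt2img endpoint (requires ./webui.sh --listen --api).
def build_txt2img_payload(prompt, width=512, height=512, steps=20):
    return {
        "prompt": prompt,
        "width": width,
        "height": height,
        "steps": steps,
    }

payload = build_txt2img_payload(
    "A cute white cat sitting next to a computer keyboard"
)
print(json.dumps(payload, indent=2))

# To actually send the request (with the web UI running with --api):
#   import requests
#   r = requests.post("http://PUBLIC_IP_ADDRESS:7860/sdapi/v1/txt2img", json=payload)
# The response JSON contains base64-encoded images under the "images" key.
```

This is only a sketch of the request shape; the web interface itself, as used in the steps above, does not require the `--api` flag.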

Create a REST API for the Stable Diffusion Model

The bentoml library provides support for deploying and serving the Stable Diffusion model through an API. Follow the steps below to create and run an API:

  1. Use pip to install the required libraries.

     ```console
     $ pip install bentoml diffusers transformers accelerate pydantic
     ```

  2. Navigate to the `sd` directory you created earlier.

     ```console
     $ cd ~/sd
     ```

  3. Create a new `fetch_sd.py` file.

     ```console
     $ nano fetch_sd.py
     ```

  4. Enter the following information into the `fetch_sd.py` file.

     ```python
     import bentoml

     # Download the Stable Diffusion 2.1 weights and register them in
     # the local BentoML model store under the tag "sd2.1"
     bentoml.diffusers.import_model(
         "sd2.1",
         "stabilityai/stable-diffusion-2-1",
     )
     ```

  5. Create a new `service.py` file.

     ```console
     $ nano service.py
     ```

  6. Enter the following information into the `service.py` file. The script defines a BentoML service that uses the Stable Diffusion model to convert text to an image.

     ```python
     import bentoml
     from bentoml.io import Image, JSON

     from sdargs import SDArgs

     bento_model = bentoml.diffusers.get("sd2.1:latest")
     sd21_runner = bento_model.to_runner(name="sd21-runner")

     svc = bentoml.Service("stable-diffusion-21", runners=[sd21_runner])


     @svc.api(input=JSON(pydantic_model=SDArgs), output=Image())
     async def txt2img(input_data):
         kwargs = input_data.dict()
         res = await sd21_runner.async_run(**kwargs)
         images = res[0]
         return images[0]
     ```

  7. Save and close the file.

  8. Create a new `sdargs.py` file.

     ```console
     $ nano sdargs.py
     ```

  9. Enter the following information into the `sdargs.py` file. The script defines an SDArgs Pydantic model that validates incoming request data while allowing extra fields, so callers can pass additional pipeline parameters.

     ```python
     import typing as t

     from pydantic import BaseModel


     class SDArgs(BaseModel):
         prompt: str
         negative_prompt: t.Optional[str] = None
         height: t.Optional[int] = 512
         width: t.Optional[int] = 512

         class Config:
             extra = "allow"
     ```

  10. Create a new `service.yaml` file.

      ```console
      $ nano service.yaml
      ```

  11. Enter the following information into the file.

      ```yaml
      service: "service.py:svc"
      include:
        - "service.py"
      python:
        packages:
          - torch
          - transformers
          - accelerate
          - diffusers
          - triton
          - xformers
          - pydantic
      docker:
        distro: debian
        cuda_version: "11.6"
      ```

  12. Save and close the file.

  13. Run the `fetch_sd.py` file to download the Stable Diffusion model from Hugging Face and make it available locally. The command takes around 10 minutes to complete.

      ```console
      $ python3 fetch_sd.py
      ```

      Output:

      ```
      Downloading (…)rocessor_config.json: ...
      ...
      ```

  14. List the models to verify the download.

      ```console
      $ bentoml models list
      ```

  15. Allow port 3000 through the firewall.

      ```console
      $ sudo ufw allow 3000
      $ sudo ufw reload
      ```

  16. Run the BentoML service.

      ```console
      $ bentoml serve service:svc
      ```
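The SDArgs model above is what the service uses to validate each request body. The short sketch below reproduces the model so it runs on its own and shows the behavior the service relies on: required and defaulted fields are validated, while unknown fields such as `num_inference_steps` pass through to the runner untouched because of `extra = "allow"`.

```python
import typing as t

from pydantic import BaseModel


# Same model as sdargs.py, reproduced here so the sketch is self-contained.
class SDArgs(BaseModel):
    prompt: str
    negative_prompt: t.Optional[str] = None
    height: t.Optional[int] = 512
    width: t.Optional[int] = 512

    class Config:
        extra = "allow"  # keep unknown fields instead of rejecting them


# A request body like the one txt2img receives as JSON.
args = SDArgs(prompt="a black cat", height=768, num_inference_steps=30)

kwargs = args.dict()
print(kwargs["prompt"])              # "a black cat"
print(kwargs["height"])              # 768
print(kwargs["width"])               # default: 512
print(kwargs["num_inference_steps"]) # extra field preserved: 30
```

Because the extra fields survive validation, the service can forward any keyword argument the underlying diffusers pipeline accepts without listing each one in the schema.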

Access Stable Diffusion API from a Jupyter Notebook

After setting up a Stable Diffusion API in the previous section, you can now run a Python script to access the API using a Jupyter Notebook. Follow the steps below:

  1. Start a Jupyter Lab service and retrieve your access token.

     ```console
     $ jupyter lab --ip 0.0.0.0 --port 8890
     ```

  2. Allow port 8890 through the firewall.

     ```console
     $ sudo ufw allow 8890
     $ sudo ufw reload
     ```

  3. Access Jupyter Lab in a web browser. Replace YOUR_SERVER_IP with the public IP address of the GPU instance and YOUR_TOKEN with your access token.

     http://YOUR_SERVER_IP:8890/lab?token=YOUR_TOKEN

  4. Click Python 3 ipykernel under Notebook and paste the following Python code. The script calls the REST API to run inference on the Stable Diffusion model, providing a text prompt along with extra values such as height and width.

     ```python
     import requests
     from IPython.display import Image, display

     url = "http://127.0.0.1:3000/txt2img"
     headers = {"Content-Type": "application/json"}
     data = {
         "prompt": "a black cat",
         "height": 768,
         "width": 768,
     }

     response = requests.post(url, headers=headers, json=data)
     display(Image(response.content))
     ```

     Output: Sample black cat photo
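The notebook snippet above only displays the image in the cell. Because the API response body is the raw image bytes, you can also write it straight to disk. Below is a minimal sketch; the `save_generated_image` helper name is illustrative, and stand-in bytes replace `response.content` so the sketch runs without the API:

```python
from pathlib import Path


# Hypothetical helper: persist raw image bytes returned by the API.
def save_generated_image(content: bytes, path: str) -> int:
    """Write the response body to disk and return the byte count."""
    out = Path(path)
    out.write_bytes(content)
    return out.stat().st_size


# In the notebook you would pass response.content; here stand-in bytes
# keep the sketch self-contained.
fake_png = b"\x89PNG\r\n\x1a\nexample"
written = save_generated_image(fake_png, "black_cat.png")
print(written)  # number of bytes written
```

In the notebook, replace `fake_png` with `response.content` after the `requests.post` call to save the generated image alongside displaying it.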

Conclusion

In this guide, you used the Stable Diffusion model to generate images from text prompts. You ran the model through a web interface, created a REST API with BentoML, and accessed the API from a Jupyter Notebook.
