
Generating Photos of Yourself Using Stable Diffusion and Dreambooth

Updated: Jan 18

Hello! A month ago, I had fun training a custom model on images of myself so I could generate new photos of myself.


First, I'll quickly introduce the technology, then the results, and finally, I'll provide links and instructions so you can do the same.


The Tech

For the tech part, I used "Stable Diffusion" and "Dreambooth." Stable Diffusion is an AI image-generation model created by stability.ai. Here is a link to its release announcement:


Dreambooth is an open-source fine-tuning technique created by a team at Google Research. It teaches a model to reproduce a specific subject, so the model can then generate new images of that subject.


The Results



And now, the HOW-TO, in brief:


First, create an account on "HuggingFace 🤗" and generate an "access token" that you will use later.


To train the model on photos of yourself, start by preparing them:

  • Select about 20 different photos of yourself in various situations with different framings.

  • Crop them to a square and resize them all to 512x512 px. You can use the online tool birme.net, or script it as sketched just after this list.
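
If you'd rather script this step than use birme.net, here is a minimal sketch using Pillow (the folder names are hypothetical; adjust them to wherever your photos live):

# Minimal sketch: center-crop each photo to a square and resize to 512x512.
# "raw_photos" and "training_photos" are hypothetical folder names.
from pathlib import Path
from PIL import Image

src, dst = Path("raw_photos"), Path("training_photos")
dst.mkdir(exist_ok=True)

for path in src.iterdir():
    if path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
        continue
    img = Image.open(path).convert("RGB")
    side = min(img.size)  # largest centered square
    left, top = (img.width - side) // 2, (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side)).resize((512, 512), Image.LANCZOS)
    img.save(dst / f"{path.stem}.jpg", quality=95)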


Now that your image set is ready, you can go to this Google Colab to train a custom model:


Run the code step by step:

As you can see in the screenshot above, we reach the "HuggingFace 🤗" step: accept the licenses/agreements, enter your access token, and then click the "play" button to run this block of code.
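
In the Colab versions I've seen, that block boils down to the standard Hugging Face login helper; something like this (the exact cell varies between Colab versions):

# Sketch of the Hugging Face login cell: when prompted, paste the
# access token you generated earlier on huggingface.co.
from huggingface_hub import notebook_login

notebook_login()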


Continue the step-by-step process. In my case, I decided to save the model to my Google Drive.
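
Saving to Google Drive just means mounting it in the Colab first; this is the standard snippet for that:

# Mount Google Drive so the trained weights survive after the Colab
# runtime shuts down.
from google.colab import drive

drive.mount("/content/drive")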


Then, at the "Start Training" step, I slightly modified the code to understand the prompt "Alexis." I don't remember exactly what I put, but something like:


# You can also add multiple concepts here. Try tweaking `--max_train_steps` accordingly.

concepts_list = [
    {
        "instance_prompt":      "Alexis",               # the token the model will learn
        "class_prompt":         "photo of a person",    # generic class prompt for regularization
        "instance_data_dir":    "/content/data/ukj",    # upload your ~20 training photos here
        "class_data_dir":       "/content/data/person"  # regularization images go here
    },
    # Additional concepts can be added here
]

import json
import os

# Create the instance-image folder(s) so the training photos can be uploaded into them.
for c in concepts_list:
    os.makedirs(c["instance_data_dir"], exist_ok=True)

# The training script reads the concepts from this JSON file.
with open("concepts_list.json", "w") as f:
    json.dump(concepts_list, f, indent=4)

Then, in the following block, I edited a few of the training values.

I don't remember exactly what I put for max_train_steps, but I think it was between 2000 and 8000. With 800, the results were not high enough quality.
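
For reference, here is a rough sketch of the part of the training cell I touched. Only --max_train_steps is confirmed by the Colab's own comment earlier; the other flags are from memory and may differ in your version of the notebook:

# Sketch of the training cell (a shell command inside the notebook).
# The flag values are illustrative, not my exact settings.
!python3 train_dreambooth.py \
  --concepts_list="concepts_list.json" \
  --resolution=512 \
  --max_train_steps=2000  # 800 was too low for me; 2000-8000 worked better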


Anyway, run the blocks in order up to the step "Convert weights to ckpt to use in web UIs like AUTOMATIC1111."
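
That conversion cell calls a helper script that ships with the Colab (originally from the diffusers repository); roughly like this, with hypothetical paths:

# Sketch of the "convert to ckpt" cell: packs the diffusers-format weights
# into a single model.ckpt file that web UIs can load. Paths are hypothetical.
!python convert_diffusers_to_original_stable_diffusion.py \
  --model_path /content/drive/MyDrive/stable_diffusion_weights/2000 \
  --checkpoint_path /content/drive/MyDrive/model.ckpt \
  --half  # store the weights in fp16 to halve the file size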


This step is super useful, and I recommend doing it because AUTOMATIC1111 is a really great web interface!


You can test your generation here:
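
The test cell is essentially a diffusers pipeline pointed at your freshly trained weights; a minimal sketch (the weights path and prompt are placeholders, adjust them to yours):

# Minimal sketch: generate a test image from the fine-tuned model.
# The weights path is hypothetical; point it at the folder you saved from the Colab.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "/content/drive/MyDrive/stable_diffusion_weights/2000",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("photo of Alexis, portrait, natural light",
             num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("test.png")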


If you are satisfied, you can then clone the AUTOMATIC1111 web UI repository here:


In its README, AUTOMATIC1111 explains how to install the UI; you can follow the step-by-step instructions, which I'll copy here:


Automatic Installation on Windows

  • Install Python 3.10.6, checking "Add Python to PATH"

  • Install git.

  • Download the stable-diffusion-webui repository, for example by running git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git

  • Place model.ckpt in the models directory (see dependencies for where to get it).

  • (Optional) Place GFPGANv1.4.pth in the base directory, alongside webui.py (see dependencies for where to get it).

  • Run webui-user.bat from Windows Explorer as normal, non-administrator, user.


In our case, the model.ckpt is the one we generated earlier via Google Colab; it's our model trained on our face. I also performed the optional step with GFPGANv1.4.pth.
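
Following the README steps above, the folder should end up looking roughly like this (depending on the web UI version, checkpoints may instead go in models/Stable-diffusion/):

stable-diffusion-webui/
├── webui-user.bat        <- run this to launch the UI
├── webui.py
├── GFPGANv1.4.pth        <- optional face-restoration model
└── models/
    └── model.ckpt        <- the checkpoint trained on your face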


Once all this is done, run the "webui-user.bat" file, and your project runs locally. You can now generate images of yourself using this web interface. CONGRATULATIONS 🤩


Enjoy! 😎


Here are all the links/sources I used:


Stable Diffusion:


Dreambooth:


Stable Diffusion Web UI:


A Google Colab for easy model training:
