Writing better Stable Diffusion prompts: AI Tips and Tricks

This article is over a year old.




A Quick Review 

In this tutorial we’ll be using our Telegram app, but the same principles apply to every platform. For the uninitiated: you simply describe an image, and the bot creates multiple interpretations based on your input.

Using PirateDiffusion Telegram
/render a cat wearing a hat in the style of Manet

Does the order of the words in the prompt matter?

Yes, it does. The earlier a word appears in the sentence, the stronger its effect. Compare these two prompts:

1. A painting of a cute dog wearing a suit, in the sky

2. A painting of a cute dog in the sky, wearing a suit

The second prompt will likely put more focus on the sky, so the golden rule is to move the things that matter most to the front of the sentence. However, AI image models are kind of like databases that contain a finite number of topics, so a concept may not necessarily be available, especially in the case of specific people or recent events. For advanced prompting with parameters like /guidance, put those parameters right after the /render command, before any other information.
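For instance, a render with a guidance parameter placed correctly might look like this (the subject and value are just placeholders to illustrate the ordering):

/render /guidance:15 a painting of a cute dog wearing a suit, in the sky

The parameter comes immediately after /render, and the descriptive prompt follows.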

Introduction to /remixing

Remixing means creating a new or similar photo from one that was already uploaded or generated; that photo serves as the basis for the next render.

🤖💇 To revise an existing AI art image, right-click on an image (Android) or tap on it (iPhone) and you’ll see the Telegram reply function appear. Here, you can type /more to see a few more variations of the same concept.

By default, we output low-resolution AI photos to quickly arrive at an idea, but sometimes you’ll want more detail. A quick way to get it is to reply to an image with the /facelift command. This will smooth facial details and upscale the image to 1024×1024, doubling the size.

Remixing and More Photos explained

Once an AI photo has been generated by a render command, it’s remixable. You can also upload any photo from your device into the chat and remix it, just by “replying” to the image as you did above with the /more command.

More can also give you subtle variations. Experiment with both.

/Remix is perhaps the most fun command of them all. It takes the basic premise of an AI image and tries to accommodate brand new instructions for it. This command is very powerful and can produce some very interesting results. Let’s try it here – like turning a photo of this dog into a statue:

/remix dog as a wood statue

(Credit: We are Legion)


Introduction to Guidance

We work hard to make Pirate Diffusion as easy to use as possible while creating a great set of diverse results. To that end, our bot automatically picks what it thinks are the best ranges of settings, and then randomly chooses the exact setting from there. But sometimes, that’s not enough and you need to go manual.

Let’s use “lush vines and cotton candy” as an example, applied to a random shape:

To use the Strength and Guidance systems, add the following options to your commands:

/strength:0.5 (on /remix and /more) controls how much the new images can vary. You can enter a number from 0.1 to 1; this lets you get more creative with the rework. (When not specified, the system uses a random value from 0.3 – 0.7 for /remix and 0.2 – 0.4 for /more.)

/guidance:15 (on /render, /remix and /more) controls how strongly the prompt is applied. Results can get out of control quickly, but higher values can give you weirder, more literal results. (When not specified, the system uses a random guidance value from 8 – 20. You can use any value from -50 to 50.)
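Putting both together, a remix with explicit settings might look like this (the values here are arbitrary picks within the documented ranges, and the subject is just an example):

/remix /strength:0.8 /guidance:20 dog as a wood statue

A high strength like 0.8 gives the remix lots of room to deviate from the source photo, while a guidance of 20 pushes the result to follow the new prompt closely.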


Other people can join the same Telegram room to /remix art with you, even if the AI photo displayed was not generated or uploaded by you. New AI remixed photos appear below without changing other people’s inputs, so feel free to go off on your own tangent. Or spin off your own private server, which we’re happy to facilitate when you need more privacy and control.

How to export your work to other devices:

/gallery – creates a web link to pick up and share your art on any device

/username – renames your gallery

Adding certain keywords to your prompt can lead to dramatically different stylized results without changing your original render idea. For example, if you said /render statue of a dog and added vivid words like “cinematic” or “movie concept art”, the results will be dramatically different. There really is no limit to how many words you can add, so mix and match until you love the results.
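To make that concrete, here is the plain prompt next to a stylized variant (using the same example subject from above):

/render statue of a dog

/render statue of a dog, cinematic, movie concept art

The subject stays identical; only the trailing style keywords change, which is what shifts the look of the output.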

The community is learning the most effective words for quickly arriving at dramatic art, such as “trending on artstation”, which refers to the acclaimed painting styles featured on ArtStation, a popular artist community. In fact, feel free to try these cool ideas we found in Metaverse Post’s top 100 prompts.

/render temple in ruins, forest, stairs, columns, cinematic, detailed, atmospheric, epic, concept art, matte painting, background, mist, photo-realistic, concept art, volumetric light, cinematic epic + rule of thirds, octane render, 8k, corona render, movie concept art, octane render, cinematic, trending on artstation, movie concept art, cinematic composition, ultra-detailed, realistic, hyper-realistic, volumetric lighting, 8k --ar 2:3 --test --uplight

Controlling your aspect ratio

Output Control: `/portrait` and `/landscape` 🏞

Use /render /portrait a dog in a hat or /render /landscape a dog in a field to control the shape (aspect ratio) of your generated image.

Introduction to Styles

After you’ve mastered the basics, and you’ve found some styles you like, you can make the system remember the styles so you don’t have to type them every time.

Type /styles to see which ones the community has already created, and add your own. For example, a Star Wars fan had already created a “sith” style, so you just provide the subject and you’re done.

De-emphasize elements with “negative” keywords

Within the same render prompt, you can also add words in [square brackets] to de-emphasize them in the output. This is best done from the start, like this:

/render Sea Monster [ocean]

This will still draw the monster, but not in the water. Add more brackets for a stronger de-emphasis effect. You can also mix this with other commands, such as /steps:wayless, which quickly draws nine ideas in low detail. We explain the /steps command in more detail below.

You can use any number of square brackets to push an element down further. There is no hard limit, though about four brackets on each side is as effective as it gets.
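For example, stacking brackets pushes the ocean down harder (the bracket counts here are illustrative):

/render Sea Monster [ocean]

/render Sea Monster [[[ocean]]]

The second prompt suppresses the ocean more aggressively than the first, while still drawing the monster.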


By default, when you use /render, /remix or /more, we generate 5 images, each refined over 50 “steps” of the AI’s diffusion model. (This is kind of like when you’re drunk and have to put your key in the lock 50 times before it opens.)

More steps generally add more detail, but sometimes they can actually ruin an image. 50 steps is actually a pretty good number in most cases. (That’s about how many steps I get in my pedometer per day, too.)

Now you can decide how many images you want and how many steps we should spend trying to perfect each one, using these options:

Use /render /steps:wayless a dog in space to generate 9 images, using 15 diffusion steps
Use /render /steps:less a dog in space to generate 6 images, using 30 diffusion steps
Default: Use /render a dog in space to generate 5 images, using 50 diffusion steps
Use /render /steps:more a dog in space to generate 3 images, using 100 diffusion steps
Use /render /steps:waymore a dog in space to generate 2 images, using 200 diffusion steps

Note: /steps shouldn’t be used with /facelift as they produce similar results.

Additional Reading

We highly recommend OpenArt’s excellent Stable Diffusion Prompt Book, a free, regularly updated PDF e-book by Mohamad Diab, Julian Herrera, Musical Sleep, Bob Chernow, and Coco Mao. Download it here.

Can we help you with something else?

Contact us