PirateDiffusion Guide


Overview

Pirate Diffusion by Graydient AI is the most powerful bot on Telegram. It is a multi-modal bot, meaning that it handles large language models and thousands of image models from many model families like FLUX, Pony, and SDXL.

Why use a bot?

It’s extremely fast and lightweight on mobile. You can link the most complicated desktop workflow to a hashtag and use it when you’re out and about, or when you just want to render casually on your PC without opening a whole graphics UI.

Social Experience

You can also use PirateDiffusion solo or with a group of friends, and there are many interest groups within the community. People can also create bots and workflows and share them.

Unlike other generative AI services, there are no “tokens” or “credits” required. PirateDiffusion is designed for unlimited use, royalty free.

 

What can it do?

 

It’s professional image generation and custom bot conversations, all over chat.

You can make 4K images in a few keystrokes without a GUI, and use every major Stable Diffusion model family via chat. Some features require a visual interface; for those, it pops up a web page (stable2go integration).
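For instance, a minimal render is a single chat message. The /render command and the <sdxl> base-model tag are used throughout this guide; the prompt text here is only illustrative:

/render a takoyaki chef at a night market <sdxl>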

Create by yourself with your private bot, or join groups and see what other people are making. You can also create themed bots like /animebot, which connect our PollyGPT LLMs to Stable Diffusion models, and chat with them to help you create! Build a workflow that’s totally unique to you using loadouts, recipes (macros), widgets (a visual GUI builder), and custom bots. It does a lot!

Origin Story

The name Pirate Diffusion comes from the leaked Stable Diffusion 1.5 model in October 2022, which was open sourced. We built the first Stable Diffusion bot on Telegram, thousands of people showed up, and that’s how we started. But to be perfectly clear: there’s no piracy going on here; we just loved the name. Still, enough people (and our bank) said the name was a little much, so we renamed our company to “Graydient”, but we still love Pirate Diffusion. It attracts fun, interesting people.

Try a new recipe! Type /render #quick and your prompt.
Tip: #quick is a macro that achieves this quality without typing negatives.

Guidance (CFG)

The Classifier-Free Guidance scale is a parameter that controls how closely the AI follows the prompt; higher values mean more adherence to the prompt. 

When this value is set higher, the image can appear sharper but the AI will have less “creativity” to fill in the spaces, so pixelation and glitches may occur. 

A safe default is 7 for the most common base models. However, there are special high efficiency models that use a different guidance scale, which are explained below.

SYNTAX

/render <sdxl> [[<fastnegative-xl:-2>]]
/guidance:7
/size:1024x1024
Takoyaki on a plate

How high or low the guidance should be set depends on the sampler you are using. Samplers are explained below. The number of steps allowed to “solve” an image can also play an important role, as sketched below.
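For example, guidance and steps can be set together in a single render. This sketch follows the SYNTAX block above; the /steps parameter is shown by analogy with /guidance and is an assumption, so check the parameter list for your plan:

/render <sdxl> [[<fastnegative-xl:-2>]]
/guidance:7
/steps:35
/size:1024x1024
Takoyaki on a plate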

 

Exceptions to the rule 

Typical models follow this guidance and step pattern, but newer high-efficiency models require far less guidance, between 1.5 and 2.5, to function in the same way. This is explained below:

High Efficiency Models

Low Steps, Low Guidance

Most concepts require a guidance of 7 and 35+ steps to generate a great image. This is changing now that higher-efficiency models have arrived.

These models can create images in a quarter of the time, requiring only 4-12 steps with lower guidance. You can find them tagged as Turbo, Hyper, LCM, and Lightning in the concepts system, and they’re compatible with classic models. You can use them alongside LoRAs and Inversions of the same model family. The SDXL family has the biggest selection (use the pulldown menu, far right). Juggernaut 9 Lightning is a popular choice.

Some of our other favorite Lightning models are <boltning-xl> and <realvis4light-xl>, which look great with a guidance of 2, steps between 4 and 12, and Refiner (no fix) turned off. Polish it off with a good negative like [[<fastnegative-xl:-2>]], then follow it up with an upscale, and the effects are stunning! A sketch of those settings follows.
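Putting those recommendations together, a Lightning render might look like this. The model tag and negative come from above; the /steps parameter is again an assumption, and the prompt is only illustrative:

/render <boltning-xl> [[<fastnegative-xl:-2>]]
/guidance:2
/steps:8
a bowl of ramen on a wooden counter, studio lighting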

Look into the notes of these special model types for more details on how to use them. For example, Aetherverse-XL (pictured below) uses a guidance of 2.5 and 8 steps.
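As a sketch with those settings (the <aetherverse-xl> tag is an assumed concept name here; check the concepts system for the exact tag):

/render <aetherverse-xl>
/guidance:2.5
/steps:8
a cozy cabin in the woods at dusk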

VASS (SDXL only)

Vass is an HDR mode for SDXL, which may also improve composition and reduce color saturation. Some prefer it, others may not. If the image looks too colorful, try it without Refiner (NoFix)

The name comes from Timothy Alexis Vass, an independent researcher who has been exploring the SDXL latent space and has made some interesting observations. His aims are color correction and improving the content of images. We have adapted his published code to run in PirateDiffusion.

/render a cool cat <sdxl> /vass

Why and when to use it: Try it on SDXL images that are too yellow or off-center, or when the color range feels limited. You should see better vibrance and cleaned-up backgrounds.

Limitations: This only works in SDXL. 

 
 

More tool (reply command)

The More tool creates variations of the same image. To see the same subject in slightly different variations, reply to the image with the command, as sketched below.
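A minimal sketch, assuming the tool is invoked as /more when replying to an existing render (the exact command name is an assumption based on the heading above):

/more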

DESCRIBE A PHOTO

Updated! There are now two modes of Describe: CLIP and FLORENCE2.

Generate a prompt from any image with computer vision using Describe! It is a “reply” command, so right-click on the image as if you were going to reply to it, and write:

/describe /florence

The additional Florence parameter gives you a much more detailed prompt, using the new Florence2 computer vision model. /describe by itself uses the CLIP model.
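For comparison, both modes as reply commands; per the text above, the bare command falls back to CLIP:

/describe
/describe /florence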


Launch widgets within PirateDiffusion