LoRaMaker instruction guide

LoRa maker

Overview

LoRa maker is Graydient’s original software for training models in the browser. It’s ready in minutes, and no GPU or programming skills are required. The models are almost immediately ready to use in Stable2go and Telegram to create an unlimited number of images.

Access

Log in to my.graydient.ai and click the LoRa maker icon, or create an account.

What’s new

LoRa maker just launched — we’d love your feedback!

Please try the service and suggest a feature and upvote others’ ideas. See what’s next in our roadmap and what just arrived in our changelog.

Welcome to LoRa Maker

LoRa maker is for AI enthusiasts who want to create a likeness of something and recall it over and over with a lot of flexibility. The basic model training process does not require technical ability, but it does require time to carefully describe the contents of each photo and to “balance” the best photo set. If this is your first time, we recommend setting aside at least 1 hour to read the full tutorial and try the 10 best practices.

In a hurry? We also have an easier 1-step Face Swap

Use Cases

There are so many amazing things you can do with LoRa maker:

  • Digital Likeness: Upload photos of yourself or your clients, and create a digital twin. Then use our imaging products to render yourself in any art style or situation.
  • Headshots: Create professional headshots for LinkedIn and social media, with incredible fidelity and quality.
  • Fantasy Photography: Didn’t get the shot you wanted? Now you can render it!  Pros can create new revenue streams, offering digital and fantasy services.
  • Digital studio: Effortlessly create product photos – create a LoRa model for each of your products and it’s like a studio on demand without the costs or wait time.
  • Speed up creativity: Train your own artwork and create art in your style, privately for your own use. (Graydient will never use your images to train our own models)
  • Monetization: Graydient can help you distribute and sell your models as a business. You can offer a credit card portal, free trials, and more. Learn more
  • Resellers welcome:  LoRa maker is also available via the Graydient API

Benefits

Speed and cloud storage

Your model is ready in minutes! We also preload tons of “base” models, a handy foundation that saves you time and disk space.

Flexibility

Model training typically requires specialized hardware to run quickly, but Graydient makes it easy to do anywhere, fast and private on all of your devices. You can train a model from your smartphone while you’re out having a coffee, it’s that handy. 

Lower overall costs 

No expensive GPUs to maintain. Also, unlike other services, there are no “tokens” or “credits” required. LoRa maker is designed for unlimited use, royalty free.

Privacy checklist

No license grant

Your photos are yours alone. Graydient does NOT use any photos for training our own models, and we do not assume any permission to use or distribute your images. As stated in our Terms of Use, please only upload photos that you have permission to use, and within the laws of your country.

Internet-connected computer

Please refrain from uploading very sensitive images here, and anywhere on the Internet. Please understand the risks. While we make every effort to keep your images safe and regularly audit our systems, there are rare cases of exploits in the Unix operating system and other moving parts. Please use our software responsibly.

Community vs. Private models

It’s possible to create models that are shared with the community, or kept private. If you’re not 100% sure, please build a quick test model first. How privacy works & support chat

First time here?  We recommend at least 1 hour to test our 10 best practices (below).

Model Training Tutorial & FAQ

You are training the AI with a pattern. The training method is easy and non-technical: you will be uploading images and describing them. The quality of the images and the quality of the image captions play a huge role in the quality of the LoRa. This cannot be overstated. You can import great images, write bad captions, and get a bad result. That is a common amateur mistake.

Our mission is to not only to teach you how to use our software, but also to teach you how to become a good trainer, and how to balance and optimize your data sets, which we call a Catalog.

You also don’t have to learn this alone: Get help in the LoRaMaker Beta Tester support group 

Preparing your data set

The data set is the collection of images that you must gather for your models. We recommend images with a resolution of at least 768×768 for SD15 and 1024×1024 for SDXL. Larger images will get resized and cropped, as even video cards with over 40GB of VRAM will tap out at higher sizes during training. It’s best to resize them locally so they upload faster.

Less is more

Avoid this amateur move: uploading 20,000 badly cropped, badly tagged images. Logically, it makes sense, right? How could the AI produce a worse result if you provided a motherlode of pictures? The problem with that approach is that the human eye is better at forgiving inconsistencies than computers are. Have you ever tried to use face unlock and had to turn on another light or move to get the computer to recognize you? The computer is seeing a different person, so it’s going to render a different person if you feed it that kind of photo. If you learn one thing from this tutorial, it is to be extremely picky about which photos you allow into your training data set!

Recommendations for characters

Aim for 25-50 well tagged, well lit, clear images (the bare minimum is five images). Your LoRA model requires specific kinds of pictures, not just clear and pretty ones. For example, aim for six (6) full-body shots, ten (10) chest-up shots, and ten (10) closeups. The “steps” for each picture should be set at around 1500. More than this typically has diminishing returns and just slows down the creation. You’ll want to re-train and optimize your model a few times to get your balance right.

Recommendations for art styles / concepts

We’ve had success training an artist’s style faithfully with 200 images at 50 steps. You’ll want fewer steps because you want a looser interpretation of the content so that it performs more flexibly, but do feel free to experiment.

Go for quality and consistency, not for volume. 

Ensure consistency by avoiding optical discrepancies, dark or blurry images, and any that alter the subject’s typical appearance, like extreme facial expressions or accessories. Images should be clear, at least 512×512 pixels, and ideally square (1:1 aspect ratio). Each photo should be unique, offer variety in lighting and time, and include informative captions. Avoid images with multiple subjects or repetitive backgrounds, as these can confuse the AI.

Take our free mastery course below.

If you have the time, the long version may turn you into a LoRa making master!  If you study this guide and learn what makes the best catalog of images, you will be much happier with your results. We cannot stress this enough. The guide below walks you through each step of the way.

What to expect

The model training itself is very fast on our servers, ready in a few minutes. Depending on your goals, the whole process takes anywhere from 3 minutes to 6 hours. When the system is done, it will give you a unique tag to add to your prompts. Check the “How to Prompt” section below for more details.

Absolute Beginners: Create an artistic likeness first, before trying to make a real person, because it usually takes Pros a few tries, too.

How long it takes depends on the quality of the photos and how accurately the captions are written. These two things make a huge difference. We asked a professional model trainer: it comes down to balancing the data set and prompts. The last 20% can mean the difference between a very high resemblance and something that’s a little uncanny.

When you get that a-ha moment, it’s totally worth it:

If you are creating a general LoRa to create a similar face in many art styles, such as fantasy and illustration, it is MUCH easier than trying to create a realistic carbon copy of someone. This is because the captions for realistic photos and careful photo selection require practice, observation, retesting, and time. Unless the person has features similar to the people in the base model, and you know how to prompt for those features, you will have some testing to do.

This is why we’ve partnered with Graydient, so you don’t have to worry about running up training bills, and all your test renders are free and unlimited, too.

Finding your way around

 

The Catalog tab is your Data Set. Every model needs at least FIVE images in its data set or it will not complete its training. This is the minimum. Please see Figure A for minimum requirements and best practices before uploading.

The Finetunes tab is where your completed and in-process models will appear. It will send a Telegram notification, and you can also refresh the Finetunes page to check if it’s done. The Concept column shows the trigger word that goes into your prompt.

So if I want to use the model above, I would write <jon-carnage> in my prompt and also add <realvis51> (not pictured), which is the base art style that corresponds with it. The word Realvis51 comes from our models page; you can pair a base model with your LoRA model to bend it into any art style. We’ll touch on this in detail below in Prompt Advice, so you don’t have to worry about it right now.

Adding Images from the Web

You can add photos three ways. You can directly upload a photo (top right) from the WebUI. JPG photos work best. If you are using a VPN, it may slow your speeds; our data center is located on the East Coast of the US.

Adding Images from Telegram

Reply to a photo in Telegram with the /catalog command. You can also upload to Telegram and write the /catalog command as the caption. The photo must be sent as a “photo” and not an attachment.
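For example, a caption-style upload might look like this (the token yxheehgge is the example token from this guide, and the description is a placeholder for your own):

```
/catalog a photo of yxheehgge person, brunette, smiling, outdoors in daylight
```

The text after /catalog becomes the image’s caption, which you can still edit later in the Catalog system.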

Turn compression ON when sending images.  This won’t hurt the likeness, the effect is very mild.

Tip for Samsung phone users: don’t use the shortcut method to add photos, go the long way into the file manager, then upload it.

The bot will display a confirmation when the image is cataloged, with its item number. The item numbers are not important, they are for debug purposes.

The configuration screen


Jump down to Tip #10, where we explain the above fields in detail.

Important: The prompt field is what you will be writing to activate your lora.  Don’t forget that phrase. Write it down right now, that’s the activator.

 

Mastery: The 10 Best Practices

We’ve asked top model trainers how they make their secret sauce. This is what they said.

Tip #1 – Quality over Quantity

Start with five great photos. Look at your five photos critically and make sure that they are unmistakably the same person. You will need a minimum of five (5) images. In our testing, we have found that the golden number is twenty-six (26) photos. Do not repeat photos, and especially do not repeat photos with different captions! Bad!

More photos can produce worse results, because it introduces the risk of more inconsistencies and hurried captioning. Don’t overwork yourself or cut corners.

What makes a good data set?

Our software includes a Catalog – a place to store all of your photo ideas and select the ones that go into the final model, the Data Set.  A data set must contain at least five images.

Use photos taken at different times of the day.

Different days, different lighting work best.

To use an imperfect analogy: If you do an exercise the wrong way 195 times out of 200, you don’t get bigger muscles — you get an injury. Make each rep count.

Here are some examples of good and bad photos, but even the two “good ones” have discrepancies. Does she have a chin dimple, and is her eye color still green in #2?

FIGURE A

Credit: Olivio Sarikas.  An expert model balancer will be able to make these work, but for the average person, you want to avoid these. “Garbage in, Garbage Out” -George Fuechsel, IBM Instructor, 1957

If you are absolutely sure you have 200 good photos and the quality is still bad, you’re either wrong or your captions are the problem. We do a deep dive into captions down below.

Tip #2 – No props, no costumes

Makeup vs. Natural – If a person’s best look is a certain kind of make-up, only upload images with that makeup. Otherwise their features will be confused and subject to the biases of the base model for what makeup is supposed to look like, which can add unwanted matte or gloss, or different eyebrows and lips.

Generally speaking, Stable Diffusion makes women’s faces rounder, with fuller lips, so the person may look different. If the accessories are very bold and may cause a training distraction, consider removing them or inpainting them out of the photo before uploading.

You learned in rule #1 that it’s a bad image when the face is covered by hands. This also applies to sunglasses, facial glitter, and other foreign objects. No costumes. These things will often cause mutations. The AI already knows what costumes and glasses are, so you can prompt for those sorts of things later.

Be picky. Because training bad data is hard to pluck out. It’s not a less is more thing, it’s a quality thing. Imagine you are the robot and you are confused if these photos are actually of the same person. Obsess over this.

 

Tip #3 – Check for the “fat” in lights

Shadows and light can create the illusion of slimness or heaviness and sharper facial features, just as much as makeup does. Does the person appear to be roughly the same weight in all of the photos? A certain shadow or dress can add 10 pounds or make a face look rounder or thinner! If you have those borderline images, toss them.

Do any of the angles make facial features look sharper, rounder, bigger, smaller than the other photos?  Toss those out.

Avoid photos that are dark, blurry, or make the person look different from the other photos. Expert model makers are taking each photo into Lightroom and color correcting them, every image!  You don’t have to do this, but the best of the best do.

Tip #4 – Crop your own images

Manually trim the fat and set the focus. Again, this is not a requirement, because our software will actually crop the images for you if the face is too far away. For those highly concerned with quality, doing your own 1:1 crops is a good idea.

The images should be very clear and at least 512×512. Square, 1:1 photos work best.

Tip #5 – No special guests

One person per image, always: don’t upload photos of multiple people in one image, even if you are describing them as two different people in the caption. This will hurt the training. You can create multiple models, so a focused model about one thing is best.

To add multiple people into a photo, use Inpaint instead.

Let’s look at this example:

While helping a customer improve the likeness of their model, we noticed this caption read “Person’s Name headshot”

If we are a robot and take these captions literally, we have just learned that every photo of Person must have a (fused?) dog next to their face. (Cursed!)

Are all of your portraits in the same room, at the same hour, with the same light?  That will make a very rigid model. We call this “overtraining” – See the note in Figure A about removing backgrounds and different lighting conditions to solve this.

Don’t take anything for granted. Be detailed and throw out photos that may confuse or trick Stable Diffusion.

Tip #6 – Make backgrounds flexible

Keep the backgrounds varied, or remove them completely.

If all of your photos have the same background, it will be very difficult to prompt a different outcome. If different photos are not possible, we recommend using the background remove command on your photos first.

Backgrounds or Not?

For example, if all of your photos are in a bedroom, it will be harder to prompt for things like “cyberpunk spaceship background” because the AI will have “trained” on the bedroom for multiple steps on every image.

Tip #7: Caption your images methodically

Captions are NOT prompts, don’t add commands inside captions

We asked the team that worked on a very popular NSFW model and various LoRAs, and to our surprise they told us that they take an average of SIX HOURS to caption, rebalance, and rebuild their LoRAs before they are satisfied. Wow! You don’t have to do that, but it just goes to show how much work goes into some of these popular models.

Commas and periods are OK, but avoid other punctuation marks in captions. Don’t add commands or ((weights)) or [[negatives]]; use natural language.

It makes sense: Your favorite Stable Diffusion models were painstakingly captioned, each image tagged with great precision. The best model makers study the render results and blame the bad photos in their data sets for the bad results, and train again. If you have this mindset, you will achieve your goals. Otherwise, every model would be great, and there is clearly a difference in quality between certain models.

Don’t use more than 77 tokens. Tokens are words or parts of words that Stable Diffusion uses to understand your prompts. This topic deserves its own guide; for the purposes of this tutorial, as long as your caption isn’t two paragraphs long, you’re probably fine. Aim for 2-3 descriptive and clear sentences.
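If you want a quick sanity check on caption length, a rough word-based estimate is usually enough. This is a heuristic sketch (the function name and the 1.3 multiplier are our own assumptions, not the actual CLIP tokenizer, which can split words differently):

```python
import re

CLIP_TOKEN_LIMIT = 77  # Stable Diffusion's text encoder truncates past this

def estimate_tokens(caption: str) -> int:
    # Rough upper bound: byte-pair encoding often splits longer words,
    # so ~1.3 tokens per word is a conservative heuristic.
    pieces = re.findall(r"\w+|[^\w\s]", caption)
    return int(len(pieces) * 1.3) + 2  # +2 for the start/end tokens

caption = "a photo of yxheehgge person, brunette, smiling, outdoors in daylight"
print(estimate_tokens(caption))  # comfortably under the 77-token limit
```

If the estimate comes out near or above 77, trim the caption down to the 2-3 sentences recommended above.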

Make your captions map to how you will prompt in the future.  If you will be prompting for a brunette, then the word brunette must go into your caption. Otherwise, it will strongly pull biases from the base model for that token.

Each image must be cataloged one by one. You can provide a caption for the photo by adding it after /catalog as you upload, and edit it again later in the Catalog system. You will have a chance to check all captions one more time before building the model.

TAG MANAGERS – FOR THE PROFESSIONAL CAPTIONER

We recommend first writing down the description of what the person looks like on a spreadsheet. Do they have a unique nose, do they have a heart-shaped face or is it more of a triangle shape, are their features more masculine or are they soft and round?  These things are important to write down. The hair color, the eye color.  Think about the unique characteristics and also the situations the photos are in.  This free tag management software pictured below is recommended as it will keep you organized and give you a clear battle plan on what to balance out if the quality of your LoRA is not what you had hoped.

The top model makers are people who suffer for their incredible work. While it’s possible to toss up a few instagram pictures and get a LoRa back in 3 minutes, we want to set your expectations correctly. Making a good model requires good curation, an attention to detail, and discipline. You should expect the first model to be a little off.

Tip #8 – Balance your Data Set

Picking good images was the easy part. Tagging them was more difficult. Now comes the hardest part: Balancing what you did.

Look at Figure A again. Think about the additive effect of telling the AI that this person makes crazy faces 8 out of 10 times. So, in a regular situation they must be crazy-looking 80% of the time? If we submitted these 10 images, that is what we would be reinforcing with training. The average sum here is unbalanced. Be very picky!

FAQ: What percent should be mug shots and body poses?

If body poses and body likeness are important to you, use 6 photos of the full body or entire object + 10 medium-shot photos from the chest up + 10 close-ups. Otherwise, focus only on headshots.

After you try the model, think about how each image, their caption, and the weight of the other images played a role in your results. When you know your prompt is on point but the image is a little off, try removing images from the catalog and building the model again until the balance is just right.  Kind of like cooking, isn’t it?

  • Choose Stable Diffusion 1.5 for the most flexible results; it’s the best for stylish avatars. This is not the one you want for creating a “deepfake” of a real person, as it will bend a lot based on the base model that you pair it with.
  • You can choose a fine-tuned model like Realistic Vision 5.1 for real people, but first run one of your captions on that model to understand its biases. Does “redhead woman” look like what you expect? That woman will look totally different in Deliberate2, so know the biases and pick the right model. The tradeoff is that it will be less flexible for other art styles.

Tip #9 – Know the base model biases

Each base model has a bias of what a man and woman are supposed to look like.

Boob size, nose shape, hair style, skin color. The fullness of a woman’s lips. How often widow’s peaks appear. Dimples. Every base model has a completely different likelihood of when these things appear.

We strongly recommend trying your target prompt WITHOUT your LoRa first to see what you are working with. When you add your model without a weight, it will influence the image by 0.7 (the maximum is 2). Adjust the weight and also your prompt to iron the biases out of the final result.

This means that if you write “a portrait of a brunette” and Realvis51 has more European images trained than Stable Diffusion 1.5, the resulting image will be less Asian. There is no objective perfect result for what a brunette or an Asian person looks like. Asian could mean more South Asian, for example. (Darker skin)

Your training has to be aware of those biases, and then caption and prompt to beat those biases.  Some base models are harder to override on their biases.

Some models are harder to prompt for than others.  So at run time, your model might be fine but your weights and prompts might also need tuning based on the model and its biases, as we have already mentioned.

Tip #10 – Optimize the token and prompt

The final step when building the Lora, the point of no return

Once the LoRA is built, there is no way to change it. You can train a new LoRA with the same data set though. We don’t throw out your saved Catalog images.

With at least five images selected, click Train Lora in the header or Create Lora in the footer. They both do the same thing. You’ll be presented with a few menu options before the training begins.

Checklist of things to complete before clicking Create Concept:

  1. A friendly name — this is just for your reference; it won’t affect the image
  2. The concept name — you can’t determine this yet. It will be your friendly name plus a few numbers, so keep your friendly name short
  3. Token — this is the unique word for your subject, not the trigger word for your LoRa. This should be a cryptic, weird word that cannot be confused with something else that Stable Diffusion knows. A good token is yxheehgge. A bad token is Susan, Lola, or Rita; those already have a meaning.
  4. Prompt — this is the activation phrase to use your LoRA, the thing you must repeat. The most important thing is that it must contain the Token word you just made up.
  5. Don’t forget the base model. The base model will have the biggest impact on how flexible the model is. Check the bias info in Tip 9 about this. You may want to create multiple versions of a model until you make one that suits your goals. There’s no penalty for experimentation. If you’re not familiar with the names in the pulldown menu, search for them in our models page to see a preview.
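Putting it together, a filled-in configuration might look like this (the values below are example names taken from this guide, not requirements):

```
Friendly name: jon
Token:         yxheehgge
Prompt:        a photo of yxheehgge person
Base model:    realvis51
```

Notice the prompt contains the token; that pairing is what you will repeat later to activate the LoRA.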

Important: The prompt field is what you will be writing to activate your lora.  Don’t forget that phrase. Write it down right now, that’s the activator.

And you’re almost done!

What happens next: the system will build the model and give you a Trigger or Concept name back. At this point, we still do not have the name for the model, so we cannot use it. Wait for it to appear in the Finetunes menu.

This is explained in the Getting Around section above (see Fine-Tunes diagram)

How to use your Lora

  • Repeat the prompt section word-for-word exactly as you trained it (see Concept Configure screen above)
  • Repeat the base concept in your prompt. It is not embedded in the LoRA.  (That would take hours)
  • Try adding /guidance:7 — how strictly the AI should follow your idea. The max is 20 for very strict, and at 0 it’s creepy.
  • Correct:   a photo of zhzhzy person <happyguy134> <realvis51>
  • Wrong:  a photo of Susan, a brunette <happyguy>  (Did you train the word brunette or Susan? Is it in the token or the config prompt? If you don’t specify the base model, it will look like an illustration.)

Please be kind to your humble LoRA trainer servers

At the risk of sounding like one of those hotels that puts up a Save The Earth sticker to lower their laundry bills while the whole staff drives diesel Ford F150s: there is a compute cost (to Graydient, not you) every time a LoRA is trained, so please use this resource responsibly so we can offer this product at a low price to others. Don’t smash build unless you mean it.

Troubleshooting

My LoRA says “waiting” for over an hour

Unfortunately, this could mean that the process has failed. Try again with fewer images and make sure the captions don’t have any special characters.

Realistic Model: My LoRA doesn’t look like me or the subject

PREP

If this is your first time, welcome to the club! You went from being a first-time LoRa maker to being a first-time LoRa optimizer. You need to scrutinize your data set, captions, prompt, and configuration and try again. Olivio Sarikas has a great video on this, and much of his advice is already on this page. Most of the features in the video are available on our platform, including the 4X ESRGAN upscaler (it’s our native one).

WEIGHTS

You can control how much or how little to enforce the training with weights. The default weight of a LoRa is 0.7, so <mustacheguy:0.8> is slightly more influential than the default, and <mustacheguy:0.6> is slightly less. If you get blue squares, tuning this helps.
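For example, in a full prompt (using the placeholder model names from the “How to use your Lora” section above), the weight rides along inside the model tag:

```
a photo of zhzhzy person <happyguy134:0.9> <realvis51>   (stronger likeness)
a photo of zhzhzy person <happyguy134:0.5> <realvis51>   (looser, more style freedom)
```

Nudge the weight in small steps of about 0.1 and re-render to compare.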

OPTIONS

If the base model <realvis51> isn’t a good fit, try other realistic base models like <natural-sin> from our models page.  Click models at the top of your webui and filter by realistic for hundreds of possibilities.

An error like “unhandled” has occurred when rendering

If you are using other LoRas in combination with your custom model, like a pose or effect or background, this can happen. It may mean that model is new or not correctly installed.  Please contact us in VIP support chat and we’ll fix this for you asap.

Frequently Asked Questions

LoRaMaker.ai is a web application to train a custom Stable Diffusion model in your browser, without the need to rent or use a powerful computer. No technical knowledge necessary. It is very easy to use, and the models are created in minutes, not days. The model is stored to your Graydient Cloud, which can be imported into Graydient AI art creation apps. Unlike other services, we do not charge for 150 photos and run away with your money. You can create an unlimited number of images with your private models with a Graydient Pass membership, and continue to refine and retrain your model.

The word LoRA stands for “Low-Rank Adaptation”, a technology used to train small specific data sets, such as a likeness of a specific person, to use it at runtime alongside a larger model, such as Stable Diffusion fine-tuned models.

Serious power under the hood. Unlike other services (which we won’t name), we don’t limit how many images you can create with your model after it’s ready — and it is ready in minutes, not hours or days. We also don’t make any preset assumptions of what you want your model to be, so you can optimize for realism, stylized, or illustrated, whatever your goal is. With our service, you can train your model to perfection and also use over 3500+ preinstalled models to interact with your model, including enthusiast features like Vass color correction, SDXL fine-tuned models, ControlNet, Inpaint, and more. In short, our service does a lot more and provides incredible value — it also comes bundled with five other professional AI creation apps with unlimited credits.

Depends!

It’s possible to upload five images and get a resemblance of a person within 3 minutes, and then pair it with other art styles and prompts to make images immediately. This is very easy to do.

Training a real face to make more real photos of that person will depend on how closely the person already matches Stable Diffusion’s biases. Training a face requires using a “base” model which has many assumptions about what men, women, lips, hair, and realism look like, down to color contrast and color saturation. An amateur model maker will soon land in uncanny valley and get frustrated. Don’t be that person.

For example, here are 10 training images where only 2 of them are usable, do you know why?  We teach you.

The most common feedback we get is that the person looks like another nationality. This is where captions come in. You must learn how to write good captions if you want to create a great Lora.  

Included: We provide a free extensive training guide to help you become a great model maker. You’ll need a large chunk of time to experiment and find the right formula. At least an hour.

PLUS plan members can create an unlimited number of models during the early adopter beta.  

We have not announced retail pricing, however we plan to offer some free model generation tier for all PRO and PLUS members.

We are studying the computing costs during this private beta. Our pricing will be competitive.

  • Train up to (200) images per model, each image trained at 100 steps.
  • At this time, the number of steps/epochs is not customizable. We will offer this in the future.
  • The minimum number of images in a data set is (5) five.
  • You can upload your own images, and also train a model from rendered AI images from our apps.
  • You can use a combination of both real photos and AI renders.
  • You can load data sets directly from PirateDiffusion without opening the browser
  • You can define a custom trigger and token that is saved in your models library, so you don’t have to type it every time
  • Your Graydient drive provides persistent storage for your data set so you can train and improve models anytime, anywhere right from your smartphone. Train a model immediately from pictures you take on the street, on the train, or in your hot tub.

You can delete your model and retrain it at any time, as well as make multiple models of the same thing to compare which is best.

This service gives you a ton of control, more than the “as is” sites. You can tune every image to perfection.

Unlike other services, you are in full control of the start and end process. Your good inputs and good prompts will determine your good outcomes.

This is the exact opposite of a so-called “avatar” or “headshot” service, where you have no-refund clauses and then you’re stuck with 150 images that may or may not look like you. You can take this matter into your own hands and write the kinds of prompts that you care about the most, in the situations and styles you want.

It’s a great place for beginners to learn how making models works, how photo tagging works, and how to prompt for a model. Check out our detailed tutorial to start making great models.

 This model selection is rolling out right now:

  • Natural Sin – <natural-sin>
  • Realistic Vision 5.1 – <realvis51>
  • Juggernaut Final – <juggernaut>
  • Dreamshaper 8 – <dreamshaper8>
  • I can’t believe it’s not photos – <icb-seco>
  • RPG 5 – <rpg5>
  • Anything V3 – <av3>
  • Deliberate 2 – <deliberate2>
  • Deliberate 3 – <deliberate3>
  • NextPhoto 3 – <nextphoto3>
  • Epic Realism 3 – <photogasm>
  • Protogen Sci-fi – <proto58-scifi>
  • Protogen Real – <proto53-real>
  • Analog Diffusion  – <analog>
  • OpenJourney 4 – <openjourney>

We’re adding many more. Your Graydient membership actually comes with 4,500 models preinstalled, so these are just the most compatible ones right now for the best results.

Not at this time, but we’re interested in adding many innovative training features, such as being able to start from a base model and one or more LoRAs. (This part is hard and will take months; we didn’t want to delay our launch any further.)

Choose the plan that best fits your use case on Graydient. LoRaMaker now comes bundled with image creation software as a complete software suite. The Plus plan offers unlimited images and rendering!

By using this software, you are bound by our Terms of Use

Please only upload photos that you have consent or ownership of. Uploading and creating adult images is allowed, simulated and illustrated sex is fine, hentai is fine, and all forms of kinky-but-lawful fantasy stuff is fine, whatever you’re into. Please adhere to our Content Safety Guidelines. What is not allowed: No cruelty, no racism, no gore, and absolutely no inappropriate underage images. We will report this.

We have a no questions asked 7-day money back unlimited guarantee, there’s no fine print. You can also cancel anytime by clicking on your profile avatar and clicking on subscription.

Your private photos are your property. We do not share your photos or your data, including your models, with anyone, and we don’t pry. That said, for compliance reasons with our merchant banks, automated systems and random spot checks by moderators are performed daily to uphold community safety. We take your security very seriously and have added 2-factor authentication on every account.

Graydient was co-founded by Yanier Gonzalez and Thomas Lackner, who previously founded websites like Destructoid.com and Classic.com. We’re independent web developers who like to build cool stuff and will never share your data with third parties, nor train our own models with your personal images or distribute them in any way. For the longer legal version, please check our Privacy Policy.