How to use Faceswap and Facepush in Telegram – Face Swapping AI in Stable Diffusion

Replace the face in any photo by swapping, or “push” a different face into a brand new render

You can create an unlimited number of AI images of your face for social media, or AI avatars for your friends, with the AI FaceSwap and AI Facepush features of PirateDiffusion. They’re great for keeping a consistent face across many pictures without complicated prompting or building a model; anyone can do it. Here’s how:

Examples of AI face swap (your face + image)

/faceswap mybro
/faceswap niero

Example of a Facepush (your face + prompt)
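
A minimal sketch, reusing the “mybro” preset from the faceswap examples above (the prompt and model shown here are just illustrative; any saved face preset and realistic model will do):

/render /facepush:mybro a realistic photo of a man hiking in the mountains <realvis6>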

FaceSwap and Facepush: What’s the difference?

As you can guess from the name, you can use our Telegram Bot to swap the face from one photo into countless others. In our implementation, we store the face as a “preset”: you assign it an easy-to-remember name and then target other photos with it. There are two ways:

  • FaceSwap requires a finished photo, and the face is “swapped” into it. It’s its own command, similar to /remix. Reply to a photo with /faceswap.
  • FacePush is a parameter of /render. This means you can prompt any realistic situation and “push” the face into that situation. It works similarly to a one-shot LoRA. Before using Facepush, you must have a ControlNet preset of a saved face, or a debug ID from a completed photo with a face.

Prepping

For best results, /facelift the image first. That creates an “AI version” that sharpens up small images. If you prefer not to have the face retouched, add the /photo parameter, like this:

/facelift /photo /size:768x768

You’re ready to faceswap and facepush by calling the Debug ID, but ID numbers are of course hard to remember. Let’s give the face a name.

Give it an easy name to remember

You can use the long debug ID string above as the name, or save it under an easier-to-remember name, like this:

/control /new:browneyedgirl

Now we’re ready.

FACESWAP

Step 1: Create a photograph, or paste a second photo to target

/render /portrait  close-up editorial photo of 20 yo woman, ginger hair, slim American sweetheart, (freckles, lips parted), realistic green eyes, POV, realistic [<easy-negative>] <lyriel16>

Step 2: Reply to the target photo with /faceswap

We saved “browneyedgirl” in ControlNet, so we can recall the face anytime from now on:

/faceswap browneyedgirl

To control the effect, use the /strength parameter, from 0.0 to 1.0:

/faceswap /strength:1 browneyedgirl

There are more examples below with troubleshooting tips.

How to facepush

Facepush is a parameter of the render command. It’s amazing for creating fictional scenes when you don’t have a target photo: you can put the character in just about any situation.

You can use the same ControlNet preset name as for faceswaps, like this:

/render /facepush:browneyedgirl a closeup portrait of a woman standing on a pier <last-unicorn>
Last Unicorn (model) is a great choice when you want powder-soft skin

/render /facepush:browneyedgirl Shrek’s wife in the forest <realvis6>

The shape of the target face will impact how realistic it looks. Similar target, better results. You can see some weird clipping in the Shrek example above.

 

Troubleshooting

Why is my target photo sharp, but my faceswap came out blurry?

This picture is too tiny; use a clear 768×768 face with visible pores for best results. It will still work, though.

Hmmm… let’s take a critical look at your input photo. Can you see the pores on her face? Not really, no. That will limit the quality of what Facepush and Faceswap can do. If you’re really shooting for realism, you’ll want to be able to see the skin pores very clearly.

But it’s good enough for a quick demonstration, so let’s continue.

Consider this example:

/render /size:512x512 /lpw /seed:310239 /sampler:dpm2m /guidance:7.0 masterpiece, masterwork, high quality, [[low resolution, worst quality, blurry, plain background, white background, simple background, normal quality, bad quality]] A woman in a red coat skiing down a snowy mountain surrounded by pine trees. She has a big smile on her face and looks excited. Behind her is a lodge with people coming out onto the deck. /images:9 /steps:20
Scroll back up and look at the input image. The overly smooth input face causes the swap to look too soft. Start from a sharper photo to avoid blurry swaps. Garbage in, garbage out!

Best Practices

  • Use high quality photos with clear lighting
  • Use photos with a minimum resolution of 768×768 to a max of 1400×1400.
  • It’s better to use a bright, daytime input photo even when you are targeting nighttime or indoor renders. The AI doesn’t have trouble with style transfer, but it will struggle if it can’t clearly read the input photo.
  • You can save an unlimited number of faces
  • It works best when the angle and size of the face are similar
  • When the input photo and target resolution are the same, the results are more crisp
  • For higher quality results, create a LoRA instead

Limitations

  • Facial hair matters. If you are going from no facial hair to a face that has facial hair, it may erase part of the beard. This can be added back in with Inpaint, but just an FYI.
  • Facepush does not work with SDXL /remix or /more (yet)
  • Facepush requires a realistic render prompt
  • Add a model for better results. In the example above, we added <last-unicorn> to support realism
  • The faces that you create in @piratediffusion_bot cannot be seen by anyone else
  • You cannot use those faces in public groups, but you can make (separate) shared ControlNet presets in groups
  • If the input photo is poorly lit or low quality, you will get fuzzy edges
  • It doesn’t work as well on anime or illustrations
  • When the faces are already similar, the effect is more subtle, but it can be intensified or reduced with the /strength parameter
  • It will swap EVERY face in the picture
  • Facepush example: /render /facepush:myfavoriteguy2 a realistic photo of ____ <realvis6>

Error Messages

  • If you’re getting a “face not found” message, try running /facelift on the image first. This repairs the faces it finds and increases the size of the photo, two things that will help the next time you try /faceswap
  • Try cropping the edges of the image so the face is more zoomed in, leaving fewer irrelevant pixels to work with
  • Try adjusting the brightness and sharpness of the image
  • Try a combination of these things, with another /facelift after fixing the light and clarity
  • Worst case, try a photo with a different angle

 

More Examples

Here’s PirateDiffusion’s lead developer with a pearl earring:

You can invite your friends into a Telegram group chat, program all of your faces, and roast each other:

One limitation of /faceswap is that it will target ALL of the faces in the image, but you can use /inpaint to correct this.


 

Will Smith Chungus Blooper:

/render /facepush:myfavoriteguy2 a man is hugging a giant chungus <realvis6>
