
How to Use Creative Upscaling Techniques to Repair and Reimagine Ultra Pixelated Images

Updated: Sep 28

Here is your guide to a free Magnific AI alternative. If you're not familiar with Magnific, it has a creative upscale feature that reimagines parts of an image, recreating it while using the pixelated original you gave it as a reference for structure and color.


I will be going over a workflow originally made by Olivio Sarikas.


All the download links mentioned in this tutorial can be found at the very end. I recommend holding CTRL and clicking the links one by one so they open in multiple browser tabs, then downloading all of them and placing them in their respective folders. More instructions can be found at the bottom of this page. Click HERE to go there now.


Note: We will be using ComfyUI and V1.5 models for this tutorial


Here are some examples of what creative upscales and this workflow can do. You can go from:


This:

To This:


Here are some other examples:

Thoronir, the biggest douche in Elder Scrolls Oblivion, turned into a realistic neckbeard

Legacy of Kain

Lineage 2 Dark Elf in Plate Armor

Here is an example with multiple passes. The first image of Sniper Wolf from Metal Gear Solid is the original picture, with the first pass to the right of it.



Here is the second pass. You can right-click on the final output, send it to the current workflow, and run it again.



So how do you do this?


Download the workflow.



Here is a version that I saved with colors.




Go to your downloads and drag the workflow into ComfyUI. (Of course, you have to have ComfyUI open.)



The first order of business is to go into the Manager, install the missing nodes, and update ComfyUI. When you first drag in the workflow (shown above), you will see a lot of red nodes; this is because you're missing those nodes. If you don't see them at first, you will after you run the workflow once.



Click on Manager, then click on "Install Missing Custom Nodes" and restart, then "Update All" and restart again. The first part installs the missing nodes, and the second ensures they work.

Each time you update, you will be asked to restart. Confirm and click OK, and you will get a reconnecting notice. For me, ComfyUI opens in a new tab; if it does this for you, close the old one so you don't get confused.


We will also have to go into the custom nodes manager and install two things (if they aren't already installed): the ComfyUI ControlNet Auxiliary Preprocessors and the ControlNet Auxiliary, shown below. Click Install for both, then click Restart, and it will refresh your ComfyUI page.



Here is an overview of the workflow

The Manager has now installed all of the custom nodes (there should be about 11 of them), so having it installed is a huge quality-of-life improvement. If you don't have it installed, check out my video or blog on how to do this:


The workflow looks fully functional, but it won't work for you yet. Go ahead and upload the worst image you can find into the upload portion.




You can also type in a positive prompt to describe the character; we will use that later, after we fix the workflow and add the requirements.


Click on the Queue Prompt button. You will notice the workflow has some problems. We will have to find the requirements and add them.



Look for anything that is surrounded by a red oval or square.

This shows you where something went wrong and where you need to add something.


The majority of the issues come from missing models, so this troubleshooting technique will also work for workflows other than this one.

Let's fix this by looking for anything that says "Load," like the image below. Right-click on it, go to Color, and change it to any color you want so you can find it later.


In the loader you will also see a model name. You will have to find that model on Hugging Face or in the ComfyUI Manager's model manager.

Most of these models are a bit older and will most likely not be in the Manager (not all models are on there), so Google the name of the model followed by "hugging face." It's recommended that you don't grab models from anywhere but Hugging Face or CivitAI.


So for the one above, you would type in "Control_Scribble-fp16 hugging face."


On Hugging Face, you will have to click on "Files and versions" to find the downloads.



In this particular case, I also had to click on the models folder to get here. These are the old ControlNet models by lllyasviel, who is the most trusted source for these models and was the first to release them. The author is actually not Russian; his name is Lvmin Zhang. All the links you need to download will be at the bottom of this page.


Click on the down arrow to download the model.
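
If you'd rather script your downloads than click through the site, the huggingface_hub package can pull a file straight into your ControlNet folder. Here is a minimal sketch; the repo id and filename below are taken from lllyasviel's ControlNet v1.1 collection as an example, so swap in whatever model your loader actually names:

```python
from huggingface_hub import hf_hub_download

# Example repo and file from lllyasviel's ControlNet v1.1 collection;
# substitute the exact model name shown in your loader.
path = hf_hub_download(
    repo_id="lllyasviel/ControlNet-v1-1",
    filename="control_v11p_sd15_scribble.pth",
    local_dir=r"E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\controlnet",
)
print("Saved to", path)
```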


By the way, ControlNet has a few different models, and there is one called Union ProMax that combines all the ControlNets into a single package, but this workflow isn't compatible with it because of the way it's set up. I haven't figured out what's breaking.


Now, very important: find the rest of the required things to load. Here is another example.

This is the main model. The green box around it shows that this particular node, in this case the checkpoint loader, is currently running with no issues. So again, search for realisticvisionv70b1_v5, but in this case the model will most likely be on CivitAI. Do the same thing for the remaining nine or so models.

The model I'm using here is a V1.5 base model, meaning it was trained on Stable Diffusion 1.5.


Now you should have your models, .pth files, upscalers, etc. Basically, all the things the loaders said you need to load.


Open up your downloads folder and your ComfyUI folder, and drag all the models into their appropriate folders.


99% of the time these will go into your ComfyUI models folder. You can tell you put one in the wrong place if you refresh your workflow, look for it, and can't find it.


This is where my folder is located. E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models

Find this folder on your PC.


It should look like this:



All checkpoint models, like Stable Diffusion 3 or the RealisticVision model I referenced earlier, will go into your "checkpoints" folder.

All Flux models will go into your "unet" folder.

ControlNet files will go into the "controlnet" folder.

Upscale models will go into the "upscale_models" folder.

The rest should be self-explanatory: VAEs go into "vae," LoRAs go into "loras," etc. The exception is the SUPIR upscaler, which would go into "checkpoints," but this workflow doesn't use it.


For this workflow, we will have checkpoint models, ControlNet models, and upscaler models.
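
If you'd rather not drag files around by hand, here is a minimal Python sketch of the same sorting logic. The folder paths and filename keywords are rough assumptions for illustration, so double-check the printout before trusting it:

```python
import shutil
from pathlib import Path

# Hypothetical locations -- point these at your own downloads folder and install.
DOWNLOADS = Path.home() / "Downloads"
MODELS = Path(r"E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models")

# Crude filename heuristics mapping a keyword to a models/ subfolder.
RULES = [
    ("control", "controlnet"),
    ("4x", "upscale_models"),   # upscalers are often named 4x..., x1_..., etc.
    ("x1_", "upscale_models"),
    ("lora", "loras"),
    ("vae", "vae"),
]

for f in list(DOWNLOADS.glob("*.safetensors")) + list(DOWNLOADS.glob("*.pth")):
    subfolder = "checkpoints"   # default guess: it's a checkpoint
    for keyword, folder in RULES:
        if keyword in f.name.lower():
            subfolder = folder
            break
    dest = MODELS / subfolder / f.name
    print(f"{f.name} -> {dest}")
    shutil.move(str(f), str(dest))
```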


After you put everything in the appropriate folders, refresh your workflow. If you don't refresh, you won't see your models. If you refresh and still don't see them, you may have put them in the wrong folder, or you may need to do a full restart, closing the command prompt that ComfyUI runs when you start it up.


You can refresh by pressing F5 or Ctrl+R, or by clicking the refresh button at the top left of your browser (in Edge).




Now we have to select and load the models. Even if you installed the exact same model, it won't be pre-selected, so click on the name in each loader and select the model you wish to use. You don't have to use the exact same model, but you need to choose one.



Once you think everything is good to go, run the workflow by clicking Queue Prompt and follow its progress by watching for the green boxes.
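
As a side note, ComfyUI also exposes a small HTTP API, so once the workflow runs cleanly you can queue it without clicking. Here is a minimal sketch, assuming a default local install on port 8188 and a workflow exported via "Save (API Format)":

```python
import json
import urllib.request

# Load a workflow exported from ComfyUI via "Save (API Format)".
with open("reimagine_workflow_api.json") as f:  # placeholder filename
    workflow = json.load(f)

# Queue it on a default local ComfyUI server.
payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # echoes the queued prompt id
```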



If you find a red node, check whether the model was loaded, or try a different model.


Keep in mind that ControlNet will have a version to select.



This one has SD15 selected, but you can select SDXL. If you don't have the correct version for your model, you will get a resolution error, since SD 1.5 and SDXL render at different resolutions.


Once you get everything running with no red nodes, let's take a closer look at the workflow and how to use it.


The first group is your INPUT


This has your image upload, the positive prompt (what you want in the picture and how you describe it), and the initial denoise (how much you want the image to be reimagined).




Upload what you want reimagined, describe the picture in the custom prompt, and leave the denoise alone. The default seems to be the perfect amount of denoise to let the AI reimagine the image. You can play with the value later to introduce more or less noise and allow more or less change.
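
For reference, denoise runs from 0.0 (return the input untouched) to 1.0 (ignore the input and generate from pure noise). Here is a rough sketch of how a first-pass KSampler's settings might look in an exported workflow; the specific numbers are illustrative assumptions, not this workflow's actual defaults:

```python
# Illustrative first-pass KSampler settings (assumed values).
ksampler_inputs = {
    "steps": 25,
    "cfg": 7.0,
    "sampler_name": "dpmpp_2m",
    "scheduler": "karras",
    "denoise": 0.6,  # lower = stay closer to the input, higher = reimagine more
}
```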


The next group is the REFINER stage.


This stage has two upscalers, but they are mild ones: the first adds detail and resolution, and the second deblurs. You can put whatever you want in here for the initial upscale.

Notice there are two upscale loaders here that you will have to get models for. I colored them black so they are easier to identify.


Next up is the Model and ControlNet Group


This is where the magic happens and most of the changes are introduced. It is also the most important section to get right.

One important note: the two major KSamplers (aside from the upscale KSampler) are in here.

These will influence how your picture turns out, and there are two different passes to iterate on and improve the image. Take note of the optimal steps and CFG for the checkpoint model you load, especially if your images are coming out really bad. Some faster models require 2-6 steps and a CFG of 1-3 to work right. I always run it once to see how it looks before I mess with anything.



Sometimes you might have to mask the image to remove unwanted words or other things you don't want. You can also use a mask to add elements that weren't in the picture to begin with, like clothes.


First connect the mask node to the start of the workflow.


Then right-click on the Load Image node and select "Open in Mask Editor."

Add the Mask


Make sure you describe the masked area; in this case, I put "a dress made out of leaves."



Don't forget you can do multiple passes by sending the result back to the current workflow and clicking Queue again. Change some of the settings in the KSampler, the model checkpoint, or the ControlNet models, or lower or raise the denoise to tweak the results.

Denoise was raised to 0.8 at the first node.

The model was changed from RealisticVision to RPG_V5, which is a fantasy-focused model trained on high-fantasy images.



You can also send it to Olivio's upscaler workflow. All the models you downloaded for the reimagine workflow (the one this tutorial is about) will work in this one.

Here is the link to that: HERE


The portrait upscaler is there to deblur and add detail without changing the image too much if it is already photorealistic. This example was a cartoon, but it still kept pretty close to the original.




Now let's talk about VAEs and whether you need them or not.

You will need to ensure you have a checkpoint that requires a VAE and that the VAE is appropriate for the version, i.e., don't use an SDXL VAE with a version 1.5 model. Unfortunately, not all models are named well, but typically "V5" means v1.5, and "VAE" will be at the end of the model name if one is included.


If you do have a model like realisticvisionv60B1_51HyperVAE that includes a VAE, you can try using a generic VAE like I have above, but it might not get you the best results.


So the solution is to delete the Load VAE node and connect the three nodes that require a VAE directly to the Load Checkpoint's VAE output, i.e., drag a link from the red dot in the picture above to the other nodes with red dots.
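
If it helps to see what this rewiring means under the hood, each input in the exported API-format JSON stores the source node's id and output index. Here is a hedged sketch with made-up node ids:

```python
# API-format JSON fragment; node ids "4", "10", "11", and "12" are hypothetical.
# CheckpointLoaderSimple outputs are MODEL (0), CLIP (1), VAE (2).
workflow = {
    "4": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "realisticVisionV60B1.safetensors"},
    },
    "12": {
        "class_type": "VAEDecode",
        "inputs": {
            "samples": ["11", 0],
            # Before: ["10", 0] pointed at the separate Load VAE node.
            # After deleting it, point at the checkpoint's VAE output instead:
            "vae": ["4", 2],
        },
    },
}
```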


So if you delete the VAE loader below:


You will lose the connection to all the nodes it was connected to, which is all the red lines. You can move the node around to see which lines move with it.


These are the three you will need to reconnect to the Load Checkpoint if you delete Load VAE.


Here are the first two, whose nodes I colored yellow:

Technically, there is a third one in the top middle, but it was already connected directly to the Load Checkpoint, so no action was needed for that one.


Simply pull from the red VAE dot on the Load Checkpoint, which will give you a line, and drag it to the red VAE dot on the VAE Encode/Decode nodes. When you are directly over the node connector, a yellow circle will appear, and when you let go, the nodes will be connected by a line.


The last two nodes you will need to connect directly to the Load Checkpoint's VAE output are the two upscalers' VAE connectors.

You can bring the VAE loader back by dragging out from a VAE dot until you get a line, dropping it in a blank space, and looking for Load VAE. Then reconnect the nodes: Load Checkpoint to Load VAE to the VAE inputs again.



Here is a closeup. There will be two upscalers. In the picture above, you will notice the optional upscale is covered in purple. This is because I bypassed the node: it takes twice as long as the entire workflow up to that point, for very little gain. You can do this by right-clicking on a node and selecting Bypass.

Bypassing is possible here because nothing depends on the node; it's the last part of the workflow, so skipping it won't break anything. I just won't get the 2x crisp upscale.


The last group is your upscale and the optional upscale, which I have bypassed and do not use.

Your final image will end up here.


I won't go over the upscaler other than to say you will have to load your favorite upscale model in the bottom node; it will be used in both the upscale and the optional upscale if you enable the latter.


Here is a reminder of the original picture for the creative upscale above. If you're not familiar, this is the character Ashley Riot, or Ashe, from Vagrant Story, released on the PS1 in 2000.


This thing really does work miracles.



My last tip, and this is the end of the tutorial: don't forget to change the prompt when you change images, or you will get an odd mixture of elements.

BONUS CONTENT!



Olivio also has two upscalers that improve the resolution and quality of an image. You should already have all the models and ControlNets to load since you followed the tutorial above, so all you have to do is drag in the workflow and go through the motions.


Step 1: Upload the picture and describe it in the prompt.


Step 2 / Group 2: Ensure both Load Upscale Model nodes have an upscaler. The models used were 4xNomos8kHAT-L_otf.pth and x1_ITF_SkinDiffDetail_Lite. This purple group adds detail with the two upscalers. You can change one of them to a deblur model if the image is blurry.

Step 3 / Group 3: Load the model (I'm using RealisticVision60) and change the two ControlNet models if needed. I would keep them as is (Depth and Canny) unless you have a specific reason to change them.


Step 4 / Group 4: Ensure the final upscale models are present. These are the same models as in Step 2, but loaded in reverse order.


You're done. Go grab your picture or comparison picture.


Here is a workflow I altered from Olivio Sarikas for a landscape upscaler.



Here are all the download links mentioned in this tutorial.

I will show you the folders I put my models in. Your root folder may vary, but everything after ComfyUI Portable will be the same, and these will all be subfolders of the models folder.


If you can't find a model listed below, just look at the name of the model in the ComfyUI workflow loader and search that name followed by "hugging face," and you will find a link. Stick to Hugging Face, CivitAI, and other reputable sites. If you have the option between a ckpt and a safetensors version of the same file, choose the safetensors.
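
The reason to prefer safetensors: a .ckpt file is a Python pickle and can execute arbitrary code when loaded, while a .safetensors file is plain tensor data. A quick sketch, assuming the safetensors package is installed and using a placeholder filename:

```python
from safetensors.torch import load_file

# Safe to open: .safetensors files hold only tensor data, never code.
state_dict = load_file("realisticVisionV60B1.safetensors")  # placeholder filename
print(f"Loaded {len(state_dict)} tensors")
```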



DOWNLOADS


Olivio's Workflow for Reimagine or creative upscales


Here is my cleaned-up version of the workflow with colored nodes and the final upscale bypassed. Right-click on the purple bypassed optional node on the far right and select Bypass again to re-enable it:





Two other amazing upscalers created by Olivio that were not shown in this video. These will improve quality instead of changing the details.


Model I used for V1.5

This is the folder I used: E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\checkpoints



Control Net

(These will all have the same name when you download them, so make sure to rename them as you go. Technically, Union is all you need, but I couldn't get 1.5 to work with Union in this workflow.) This is the folder I put them in: E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\controlnet


SDXL Control Net Union - (get the non-promax version) - xinsir/controlnet-union-sdxl-1.0



Upscaler

This is my upscaler folder: E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\upscale_models

Take note that each node shows how long the flow took at that portion, and the first time you load everything will take 5-10 times longer.



Two useful tools to help with this:


CLIP Interrogator. You can upload an image, and it will describe what a prompt to create that picture would look like, which is exactly the data we need to provide in the positive prompt of the workflow to describe our image.
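
If you prefer to run it locally rather than through a web demo, there is a pip package of the same name. Here is a minimal sketch, assuming the clip-interrogator package is installed and using a placeholder image path:

```python
from PIL import Image
from clip_interrogator import Config, Interrogator

# Load the pixelated source image (path is a placeholder).
image = Image.open("sniper_wolf.png").convert("RGB")

# ViT-L-14/openai is the CLIP model commonly paired with SD 1.5 checkpoints.
ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))

# Prints a prompt-style description to paste into the workflow's positive prompt.
print(ci.interrogate(image))
```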


Next is PicLumen, an amazing free and unlimited alternative to Midjourney. I will leave a video on PicLumen at the end, but one tool it has that will help us here is the prompt enhancer. You can take your basic prompt, or a prompt created by CLIP Interrogator, and enhance it with this tool.





Our simple prompt has now been converted to:


A woman clad in a sleek, tactical outfit, inspired by the Metal Gear Solid universe, stands confidently in front of a stark white background, her gaze intense as she holds a high-tech pistol at her side, her full body rendered in detailed, concept art style, capturing the essence of a character reminiscent of Solid Snake's enigmatic ally, Silent Wolf, with a stoic, battle-ready expression, her face a picture of determination, as if ready to face any challenge that comes her way, her physique honed from years of covert operations, her hair styled in a practical, yet stylish manner, her outfit a perfect blend of functionality and stealth, evoking the spirit of a skilled operative, like Silent Wolf, who has mastered the art of staying one step ahead of her enemies.







bottom of page