ComfyUI upscale examples (Reddit)

Sample a 3072 x 1280 image, sample again for more detail, then upscale 4x, and the result is a 12288 x 5120 px image. Images reduced from 12288 to 3840 px width.

There are also other ways to upscale latents with less distortion; the standard ones are going to be bicubic, bilinear, and bislerp. "Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space. Every Sampler node (the step that actually generates the image) in ComfyUI requires a latent image as an input.

The final node is where ComfyUI takes those images and turns them into a video.

Hello, for more consistent faces I sample an image using the IPAdapter node (so that the sampled image has a similar face), then I latent upscale the image and use the ReActor node to map the same face used in the IPAdapter onto the latent-upscaled image. This breaks the composition a little bit, because the mapped face is most of the time too clean or has slightly different lighting, etc. I generally do the ReActor swap at a lower resolution, then upscale the whole image in very small steps with very, very small denoise amounts. If I feel I need to add detail, I'll do some image blend stuff and advanced samplers to inject the old face into the process.

The "Upscale and Add Details" part splits the generated image, upscales each part individually, adds details using a new sampling step, and after that stitches the parts together.

For ComfyUI there should be license information for each node, in my opinion: "Commercial use: yes, no, needs license", and a workflow using a non-commercial node should show some warning in red. This could lead users to increase pressure on developers. Hands are still bad though.

That's it for upscaling. It does not work with SDXL for me at the moment. Thanks.

In the CR Upscale Image node, select the upscale_model and set the rescale_factor. For example, I can load an image, select a model (4xUltrasharp, for example), and select the final resolution (from 1024 to 1500, for example). Like 1024, 1280, 2048, 1536. We are just using Ultimate SD upscales with a few ControlNets and tile sizes of ~1024 px. An all-in-one workflow would be awesome. If you are looking for upscale models to use, you can find some on OpenModelDB.

PS: If someone has access to Magnific AI, please can you upscale and post the result for 256x384 (jpg quality 5) and 256x384 (jpg quality 0)?

If you don't want the distortion, decode the latent, upscale the image, then encode it again for whatever you want to do next; the image upscale is pretty much the only distortion-"free" way to do it. My workflow runs about like this: [KSampler] [VAE Decode] [Resize] [VAE Encode] [KSampler #2 thru #n]. I typically use the same or a closely related prompt for the additional KSamplers, same seed and most other settings, with the only differences among my (for example) four KSamplers in the #2 to #n positions.
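As a rough illustration of that decode, resize, encode hop outside ComfyUI, here is a minimal sketch using Hugging Face diffusers; the VAE checkpoint name and the 1.5x factor are my own assumptions, not something from the posts above.

```python
# Sketch of the [VAE Decode] -> [Resize] -> [VAE Encode] hop between two
# samplers, done by hand with diffusers. Assumes an SD1.5-family latent.
import torch
from diffusers import AutoencoderKL
from torchvision.transforms.functional import resize

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to("cuda")

@torch.no_grad()
def pixel_upscale_latent(latent: torch.Tensor, scale: float = 1.5) -> torch.Tensor:
    """Decode a latent, upscale in pixel space, re-encode for the next sampler."""
    image = vae.decode(latent / vae.config.scaling_factor).sample  # [B,3,H,W], ~[-1,1]
    _, _, h, w = image.shape
    image = resize(image, [int(h * scale), int(w * scale)], antialias=True)
    # A latent-space "upscale by" would instead interpolate the latent directly,
    # e.g. torch.nn.functional.interpolate(latent, scale_factor=scale, mode="bicubic"),
    # which is the bicubic/bilinear route mentioned above and adds noise.
    return vae.encode(image).latent_dist.sample() * vae.config.scaling_factor
```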
Thanks for your help. This repo contains examples of what is achievable with ComfyUI. ComfyUI is a powerful and modular GUI for diffusion models with a graph interface. Explore its features, templates, and examples on GitHub.

There's "Latent Upscale By", but I don't want to upscale the latent image. So my question is: is there a way to upscale an already existing image in Comfy, or do I need to do that in A1111?

I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI and wanted to figure it all out. That's because of the model upscale.

Where can one get such things? It would be nice to use ready-made, elaborate workflows! For example, ones that might do Tile Upscale like we're used to in AUTOMATIC1111, to produce huge images.

If you use Iterative Upscale, it might be better to approach it by adding noise using techniques like noise injection or an unsampler hook.

The video demonstrates how to integrate a large language model (LLM) for creative image results without adapters or ControlNets. If the workflow is not loaded, drag and drop the image you downloaded earlier.

I tried all the possible upscalers in ComfyUI (LDSR, Latent Upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to ...

You can run AnimateDiff at pretty reasonable resolutions with 8 GB or less; with less VRAM, some ComfyUI optimizations kick in that decrease the VRAM required. On my 4090 with no optimizations kicking in, a 512x512 16-frame animation takes around 8 GB of VRAM.

Flux examples: Flux is a family of diffusion models by Black Forest Labs, coming in Flux.1 Pro, Flux.1 Dev, and Flux.1 Schnell versions. Overview: cutting-edge performance in image generation with top-notch prompt following, visual quality, image detail, and output diversity. For the easy-to-use single-file versions that you can use in ComfyUI, see the FP8 checkpoint version.

If I understand correctly how Ultimate SD Upscale + controlnet_tile works, they make an upscale, divide the upscaled image into tiles, and then img2img through all the tiles. After that, they generate seams and combine everything together. Depending on the noise and strength, it ends up treating each square as an individual image; so instead of one girl in an image, you get 10 tiny girls stitched into one giant upscaled image. Currently the extension still needs some improvement; for example, you can only do resolutions divisible by 256.

This is done after the refined image is upscaled and encoded into a latent. By applying both a prompt to improve detail and to increase resolution (indicated as a percentage, for example 200% or 300%).

Start ComfyUI. Yes, I searched Google before asking.

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. It's nothing spectacular but gives good consistent results without ...

An example might be using a latent upscale; it works fine, but it adds a ton of noise that can lead your image to change after going through the refining step. If your image changes drastically on the second sample after upscaling, it's because you are denoising too much.
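To see that "denoising too much" knob outside ComfyUI, here is a minimal sketch with the diffusers img2img pipeline; the model name, file names, prompt, and the 0.25 strength are illustrative assumptions, with strength playing the role of denoise.

```python
# Second-pass refinement after an upscale: strength is the "denoise" knob.
# ~0.2-0.35 adds detail while keeping composition; 0.6+ starts to repaint.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

upscaled = Image.open("upscaled.png").convert("RGB")  # hypothetical input
refined = pipe(
    prompt="highly detailed anime village, ghibli style",
    image=upscaled,
    strength=0.25,              # low denoise: keep composition, add detail
    num_inference_steps=30,
).images[0]
refined.save("refined.png")
```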
I want to upscale my image with a model and then select its final size. For some context, I am trying to upscale images of an anime village, something like Ghibli style. Does anyone have any suggestions: would it be better to do an iterative upscale, and how about my choice of upscale model? I have almost 20 different upscale models, and I really have no idea which might be best.

ComfyUI Examples: I just uploaded a simpler example workflow that does a 2x latent upscale in two ways. Here is an example of how to use upscale models like ESRGAN. From the ComfyUI_examples, there are two different 2-pass ("hires fix") methods: one is latent scaling, one is non-latent scaling. Now there's also a `PatchModelAddDownscale` node.

Is there a version of Ultimate SD Upscale that has been ported to ComfyUI? I am hoping to find a way to implement image2image in a pipeline that includes multi-ControlNet and has a way to make it so that all generations automatically get passed through something like SD upscale without me having to run the upscaling as a separate step. You end up with images anyway after ksampling, so you can use those upscale nodes.

I'm still learning, so any input on how I could improve these workflows is appreciated, though keep in mind my goal is to balance complexity with ease of use for end users.

Pixel upscale to a low-denoise 2nd sampler is not as clean as ... Upscaler roundup and comparison.

Thanks! I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner, model merging, LoRAs, etc.?

This is the image I created using ComfyUI, utilizing DreamShaperXL 1.0 Alpha + SDXL Refiner 1.0. It's so wonderful what the ComfyUI Kohya Deep Shrink node can do on a video card with just 8 GB.

Latent upscale is different from pixel upscale. The upscale not being latent creating minor distortion effects and/or artifacts makes so much sense! And latent upscaling takes longer for sure; no wonder my workflow was so fast. It's why you need at least 0.5 denoise. Usually I use two of my workflows: "Latent upscale" and then denoising 0.5, or "Upscaling with model" and then denoising 0.2 and resampling faces. Thanks for the tips on Comfy! I'm enjoying it a lot so far. SDXL most definitely doesn't work with the old ControlNet.

Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

TLDR: In this tutorial, Seth introduces ComfyUI's Flux workflow, a powerful tool for AI image generation that simplifies the process of upscaling images up to 5.4x using consumer-grade hardware.

I have been generally pleased with the results I get from simply using additional samplers.

The Ultimate SD Upscale is one of the nicest things in Auto1111: it first upscales your image using a GAN or any other old-school upscaler, then cuts it into tiles small enough to be digestible by SD, typically 512x512; the pieces overlap each other and can be bigger.
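In code terms, that tile-and-stitch loop looks roughly like the sketch below. `sd_img2img` is a hypothetical identity placeholder standing in for the low-denoise sampling pass, and there is no seam feathering here, which the real node does handle.

```python
# Rough sketch of the Ultimate SD Upscale idea: after a GAN upscale, run a
# low-denoise img2img over overlapping ~512px tiles and paste them back.
from PIL import Image

def sd_img2img(tile: Image.Image) -> Image.Image:
    return tile  # placeholder: swap in a real low-denoise img2img call

def tiled_refine(image: Image.Image, tile: int = 512, overlap: int = 64) -> Image.Image:
    out = image.copy()
    step = tile - overlap
    for top in range(0, image.height, step):
        for left in range(0, image.width, step):
            box = (left, top, min(left + tile, image.width), min(top + tile, image.height))
            out.paste(sd_img2img(image.crop(box)), (left, top))
    return out
```

Too high a denoise per tile is exactly how you get the "10 tiny girls" failure mentioned earlier: each tile gets treated as its own image.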
ComfyUI Fooocus Inpaint with Segmentation Workflow.

Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed by the number of ways to do upscaling. Now I have made a workflow that has an upscaler in it and it works fine; the only thing is that it upscales everything, and that is not worth the wait for most outputs.

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI. These comparisons are done using ComfyUI with default node settings and fixed seeds. The workflow used is the Default Turbo Postprocessing from this Gdrive folder. I haven't been able to replicate this in Comfy.

The equivalent of Ultimate SD Upscale for A1111 is Ultimate SD Upscale for ComfyUI.

In the Load Video node, click on "choose video to upload" and select the video you want. I used 4x-AnimeSharp as the upscale_model and rescaled the video to 2x.

Step 2: Download this sample image. Step 3: Update ComfyUI. Step 4: Launch ComfyUI and enable Auto Queue (under Extra Options). Step 5: Drag and drop the sample image into ComfyUI. Step 6: The FUN begins! If the queue didn't start automatically, press Queue Prompt.

Making a bit of progress this week in ComfyUI. This will get to the low-resolution stage and stop.

Hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out. I also combined ELLA in the workflow to make it easier to get what I want. You can find the workflows and more image examples below: ComfyUI SUPIR Upscale Workflow. Upscale to 2x and 4x in multi-steps, both with and without a sampler (all images are saved). Multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but it can easily take more). Details and bad-hands LoRAs loaded. I use it with DreamShaperXL mostly and it works like a charm. Adding LoRAs in my next iteration.

The cape is an img2img upscale after the first 2x upscale: I cropped out that portion as a square, hires-fixed just that portion, and comped it back in. You should be able to see where the comp ends and the quality of the cape drops down to the original upscale. The armor is upscaled from the original image without modification. The downside is that it takes a very long time.

You guys have been very supportive, so I'm posting here first. AP Workflow 9.0 for ComfyUI: I originally wanted to release 9.0 with support for the new Stable Diffusion 3, but it was way too optimistic. While waiting for it, as always, the amount of new features and changes snowballed to the point that I must release it as is.

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned.

The example pictures do load a workflow, but they don't have a label or text that indicates if it's version 3.1 or not. Thanks! I was confused by the fact that I saw in several YouTube videos by Sebastian Kamph and Olivio Sarikas that they simply drop PNGs into an empty ComfyUI.

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and pass into the node whatever image I like. The good thing is no upscale is needed.

Latent quality is better, but the final image deviates significantly from the initial generation. There are also "face detailer" workflows for faces specifically. But I probably wouldn't upscale by 4x at all if fidelity is important.

Two options here: upscale x1.5 to x2 with no need for a model (it can be a cheap latent upscale), then sample again at denoise=0.5; you don't need that many steps. From there you can use a 4x upscale model and run sampling again at low denoise if you want higher resolution.

Is there a workflow to upscale an entire folder of images, as is easily done in A1111 in the img2img module? Basically I want to choose a folder and process all the images inside it.
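Outside ComfyUI, a throwaway script covers that batch job; the folder names are hypothetical and the Lanczos resize is a stand-in for whatever upscale model you would actually call.

```python
# Batch-upscale every PNG in a folder; plain 2x Lanczos as a placeholder.
from pathlib import Path
from PIL import Image

src, dst = Path("input_images"), Path("upscaled")   # hypothetical folders
dst.mkdir(exist_ok=True)
for path in sorted(src.glob("*.png")):
    img = Image.open(path).convert("RGB")
    img = img.resize((img.width * 2, img.height * 2), Image.LANCZOS)  # or a model
    img.save(dst / path.name)
```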
You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well.

When I search with quotes it didn't give any results (now it's only giving this Reddit post), and without quotes it gave me a bunch of stuff mainly related to SDXL but not Cascade, and the first result is this: Examples of ComfyUI workflows.

So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (the SD 4X Upscale Model). I decided to pit the two head to head; here are the results, workflow pasted below (did not bind to image metadata because I am using a very custom, weird ...).

You're funny.

It upscales the second image up to 4096x4096 (4xUltraSharp) by default for simplicity, but it can be changed to whatever.

For now I got this prompt: "A gorgeous woman with long light-blonde hair wearing a low-cut tanktop, standing in the rain on top of a mountain, highly detailed, artstation, concept art, sharp focus, illustration, art by Artgerm and Alphonse Mucha, trending on Behance, very detailed, by the best painters".

I wonder if there are any workflows for ComfyUI that combine Ultimate SD Upscale + controlnet_tile + IP-Adapter. You can do the ControlNet/Ultimate SD Upscale combo. I just find I'm going to inpaint on my images anyway, so that whole process is just an extra step and time.

Edit: Also, I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax). Maybe it doesn't seem intuitive, but it's better to go with a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale.

Using the Iterative Mixing KSampler to noise up the 2x latent before passing it to a few steps of refinement in a regular KSampler.

Larger images also look better after refining, but on 4 GB we aren't going to get away with anything bigger than maybe 1536 x 1536. I believe it should work with 8 GB VRAM provided your SDXL model and upscale model are not super huge; e.g. use a X2 upscaler model. The 16 GB usage you saw was for your second, latent upscale pass.

Then plug the output from this into a "Latent Upscale By" node set to whatever you want your end image to be (lower values like 1.5 are usually a better idea than going 2+ here, because latent upscale introduces noise, which requires an offset denoise value be added in the following KSampler), then a second KSampler at 20+ steps with denoise set to probably over 0.5. Run your prompt. Now change the first sampler's state to "hold" (from "sample") and unmute the second sampler, then queue the prompt again; this will now run the upscaler and second pass.

You either upscale in pixel space first and then do a low-denoise 2nd pass, or you upscale in latent space and do a high-denoise 2nd pass. Here is an example: you can load this image in ComfyUI to get the workflow.

Put upscale models in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them.
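If you want the same model-based upscale in a standalone script, spandrel (the loader library recent ComfyUI builds use for these models) can run an ESRGAN-style checkpoint directly; the file path below is hypothetical.

```python
# Load and run an ESRGAN-style upscale model with spandrel, roughly what the
# UpscaleModelLoader / ImageUpscaleWithModel nodes do internally.
import torch
from spandrel import ImageModelDescriptor, ModelLoader

model = ModelLoader().load_from_file("models/upscale_models/4x_example.pth")
assert isinstance(model, ImageModelDescriptor)
model.cuda().eval()

@torch.no_grad()
def upscale(image: torch.Tensor) -> torch.Tensor:
    """image: [1, 3, H, W] float tensor in [0, 1]; returns the scaled tensor."""
    return model(image.cuda()).clamp(0, 1).cpu()
```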
Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard. But I hardly ever use ControlNet for upscaling.

All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image.

I go back and forth between OG SD Upscale and Ultimate.

The workflow has a different upscale flow that can upscale up to 4x, and in my recent version I added a more complex flow that is meant to add details to a generated image. There isn't a "mode" for img2img. This is just a simple node built off what's given and some of the newer nodes that have come out.

I try to use ComfyUI to upscale (using SDXL 1.0 + Refiner). And when purely upscaling, the best upscaler is called LDSR.

If you want more details, latent upscale is better, and of course noise injection will let more details in (you need noise in order to diffuse into details). Try immediately VAEDecode after a latent upscale to see what I mean.

Hope someone can advise. The workflow is kept very simple for this test: Load Image, Upscale, Save Image. No attempts to fix jpg artifacts, etc. Both of these are of similar speed.

Repeat until you have an image you like that you want to upscale. I upscaled it to a resolution of 10240x6144 px for us to examine the results.

I usually take my first sample result to pixel space, upscale by 4x, downscale by 2x, and sample from step 42 to step 48, then pass it to my third sampler for steps 52 to 58, before going to post with it.
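That last "upscale 4x, then downscale 2x" trick is easy to sketch; `upscale_4x` below is a placeholder for any 4x model (for example the spandrel snippet above), and the net result is a cleaner 2x.

```python
# Net-2x sharpening trick: model-upscale 4x, then Lanczos-downsample by half.
from PIL import Image

def upscale_4x(image: Image.Image) -> Image.Image:
    # placeholder: swap in a real 4x ESRGAN-style model
    return image.resize((image.width * 4, image.height * 4), Image.LANCZOS)

def net_2x(image: Image.Image) -> Image.Image:
    big = upscale_4x(image)
    return big.resize((big.width // 2, big.height // 2), Image.LANCZOS)
```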
