ComfyUI: loading workflows from images (Reddit roundup)


Sync your collection everywhere via Git, add your workflows to the collection so that you can switch and manage them more easily, and browse and manage your images/videos/workflows in the output folder. [DOING] Clone public workflows by Git and load them more easily.

This workflow allows you to load images of an AI Avatar's face, shirt, pants and shoes, plus a pose, and generates a fashion image based on your prompt. It chains together multiple IPAdapters, which allows you to change one piece of the AI Avatar's clothing individually.

5 LoRAs with SDXL, plus upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more. Upcoming tutorial: SDXL LoRA + using 1.5.

A quick question for people with more experience with ComfyUI than me; a search of the subreddit didn't turn up any answers to my question.

If you are still interested - basically I added 2 nodes to the workflow of the image (an image load and a save image). Enjoy. You can load these images in ComfyUI to get the full workflow. That image would have the complete workflow, even with the 2 extra nodes. This is just a simple node built off what's given and some of the newer nodes that have come out.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard. I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a two-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned.

With a graph like this one, for instance, you can tell it to: load this model, put these bits of text into the CLIP encoder, make an empty latent image, use the loaded model with the encoded text and the noisy latent to sample the image, and now save the resulting image. The graph that contains all of this information is referred to as a workflow in Comfy.

I hope you like it. No need to put in an image size, and it has a 3-stack LoRA with a refiner. This is what it looks like, second pic.

I have a video and I want to run SD on each frame of that video.

You can then load or drag the following image in ComfyUI to get the workflow. This workflow generates an image with SD1.5, then uses Grounding DINO to mask portions of the image to animate with AnimateLCM. It animates 16 frames and uses the looping context options to make a video that loops.

However, it is not for the faint-hearted and can be somewhat intimidating if you are new to ComfyUI. I was confused by the fact that in several YouTube videos by Sebastian Kamph and Olivio Sarikas they simply drop PNGs into an empty ComfyUI. And you need to drag them into an empty spot, not onto a Load Image node or something.

So, I just made this workflow in ComfyUI. Pro-tip: insert a WD-14 or a BLIP interrogation node after it to automate the prompting for each image. It upscales the second image up to 4096x4096 (4x-UltraSharp) by default for simplicity, but that can be changed to whatever. Maybe a useful tool to some people.

And images that are generated using ComfyBox will also embed the whole workflow, so it should be possible to just load it from an image.

How to solve the problem of looping? I had an idea to write an analog of a two-in-one Save Image / Load Image node that would save the last result to a file and then output it on the next render in the queue. In either case, you must load the target image in the I2I section of the workflow.
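Several of the comments above hinge on the same mechanism: ComfyUI serializes the entire node graph into text chunks of the PNG it saves, which is why dropping a generated PNG onto an empty canvas restores everything, and why nothing happens with images that lack the metadata. As a quick way to check whether a given image carries a workflow at all, here is a minimal Python sketch; it assumes Pillow is installed, and the "workflow" key is where ComfyUI's editor-format graph lives.

import json
import sys

from PIL import Image

def read_embedded_workflow(path):
    """Return the editor-format graph embedded in a ComfyUI PNG, or None."""
    with Image.open(path) as im:
        raw = im.info.get("workflow")  # PNG text chunk written by ComfyUI
        return json.loads(raw) if raw else None

if __name__ == "__main__":
    wf = read_embedded_workflow(sys.argv[1])
    if wf is None:
        print("no workflow metadata found - not a ComfyUI-saved PNG?")
    else:
        print(f"embedded workflow has {len(wf.get('nodes', []))} nodes")

This is also why re-encoded images fail to load: JPG and WebP exports, or sites that strip PNG metadata, discard the text chunks the loader relies on.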
=== How to prompt this workflow ===

Main Prompt
-----
The subject of the image, in natural language.
Example: a cat with a hat in a grass field

Secondary Prompt
-----
A list of keywords derived from the main prompt, with references to artists at the end.
Example: cat, hat, grass field, style of [artist name] and [artist name]

Style and References
-----

Activate the Face Swapper via the auxiliary switch in the Functions section of the workflow. Notice that the Face Swapper can work in conjunction with the Upscaler. AP Workflow v5.0 also includes a number of experimental functions.

It will load images in two ways: 1) direct load from HDD; 2) load from a folder (picks the next image when generated). Prediffusion - this creates a very basic image from a simple prompt and sends it as a source.

Those images have to contain a workflow, so one you've generated yourself, for example. Basically, if you have a really good photo but no longer have the workflow used to create it, you can just load the image and it'll load the workflow. You can also easily upload & share your own ComfyUI workflows, so that others can build on top of them!

If this is what you are seeing when you go to choose an image in the image loader, then all you need to do is go to that folder and delete the ones you no longer need.

The ComfyUI/web folder is where you want to save/load .json files. I have like 20 different ones made in my "web" folder, haha. They are completely separate from the main workflow.

I had to load the image into the mask node after saving it to my hard drive.

About a week or so ago, I began to notice a weird bug: if I load my workflow by dragging the image into the site, it'll put in the wrong positive prompt.

This workflow can use LoRAs, ControlNets, enabling negative prompting with KSampler, dynamic thresholding, inpainting, and more.

I liked the ability in MJ to choose an image from the batch and upscale just that image. I think I have a reasonable workflow that allows you to test your prompts and settings, then "flip a switch", put in the image numbers you want to upscale, and rerun the workflow. Then I fix the seed to that specific image and use its latent in the next step of the process.

I tend to agree with NexusStar: as opposed to having some uber-workflow thingie, it's easy enough to load specialised workflows just by dropping a workflow-embedded .png into ComfyUI.

Made this while investigating the BLIP nodes: it can grab the theme off an existing image, and then, using concatenate nodes, we can add and remove features. This allows us to load old generated images as part of our prompt without using the image itself as img2img. That's how I made and shared this.

I tried the load methods from Was-nodesuite-comfyUI and ComfyUI-N-Nodes, but they seem to load all of my images into RAM at once. This causes my steps to take up a lot of RAM, leading to killed RAM. Is there a way to load each image in a video (or a batch) to save memory?

I am trying to understand how it works and created an animation morphing between 2 image inputs. My goal is that I start the ComfyUI workflow and the workflow loads the latest image in a given directory and works with it. It is necessary to give it the last generated image, as it does load the image locally. Now the problem I am facing is that it starts already morphed between the 2, I guess because it happens so quickly.
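For the folder-feeding and save-then-reload loops discussed above, a small custom node is one way to sidestep the load-everything-at-once problem. The sketch below is illustrative only - it is not the WAS or Inspire implementation - and follows the usual ComfyUI custom-node conventions (INPUT_TYPES, RETURN_TYPES, an IMAGE tensor shaped [1, H, W, C] in 0..1); returning NaN from IS_CHANGED is a common trick to force the node to re-run on every queue. The class name is made up for this example.

import os

import numpy as np
import torch
from PIL import Image

class LoadNewestImage:
    """Return the most recently modified image in a directory."""

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"directory": ("STRING", {"default": ""})}}

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "load"
    CATEGORY = "image"

    @classmethod
    def IS_CHANGED(cls, directory):
        # NaN never compares equal, so ComfyUI re-evaluates this node each queue
        return float("nan")

    def load(self, directory):
        exts = (".png", ".jpg", ".jpeg", ".webp")
        files = [os.path.join(directory, f) for f in os.listdir(directory)
                 if f.lower().endswith(exts)]
        if not files:
            raise FileNotFoundError(f"no images found in {directory!r}")
        newest = max(files, key=os.path.getmtime)  # most recently written file
        img = Image.open(newest).convert("RGB")
        arr = np.asarray(img).astype(np.float32) / 255.0  # H, W, C in 0..1
        return (torch.from_numpy(arr)[None, ...],)  # batch of 1: [1, H, W, C]

NODE_CLASS_MAPPINGS = {"LoadNewestImage": LoadNewestImage}

Because only one image is decoded per queue, memory use stays flat no matter how many files accumulate in the directory; swapping max(...) for an index-based pick would give the "001 then 002" sequential behaviour instead.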
I can't load workflows from the example images using a second computer. I can load workflows from the example images through localhost:8188; this seems to work fine. I can load ComfyUI through 192.168.1.1:8188, but when I try to load a flow through one of the example images, it just does nothing. Any ideas on this?

Workflows can only be loaded from images that contain the actual workflow metadata created by ComfyUI and stored in each image ComfyUI creates. Images created with anything else do not contain this data.

Tip, for speed: you can load an image using the (clipspace) method by right-clicking on images you generate. This is basically like copy-paste and doesn't save the files to disk. I'm sorry, I'm not at the computer at the moment or I'd get a screen cap.

Are you referring to the Input folder in the ComfyUI installation folder? ComfyUI runs as a server, and the input images are 'uploaded'/copied into that folder. The images above were all created with this method.

I want to load an image in ComfyUI and have the workflow appear, just as it does when I load a saved image from my own work. I'm using the ComfyUI notebook from their repo, using it remotely in Paperspace.

Hey all - I'm attempting to replicate my workflow from 1111 and SD1.5 by using XL in Comfy. Basically, I want a simple workflow (with as few custom nodes as possible) that uses an SDXL checkpoint to create an initial image and then passes that to a separate "upscale" section that uses an SD1.5 checkpoint in combination with a Tiled ControlNet to feed an Ultimate SD Upscale node for a more detailed upscale. It's nothing spectacular, but it gives good consistent results.

Also notice that you can download that image and drag'n'drop it into your ComfyUI to load that workflow, and you can also drag'n'drop images onto a Load Image node to load them quicker. And another general difference is that in A1111, setting 20 steps with 0.8 denoise won't actually run 20 steps; the effective count drops to 16 (20 × 0.8).
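Since ComfyUI runs as a server (as the input-folder answer above notes), the browser is not the only client: a script on any machine that can reach the host and port can queue a workflow by POSTing it to the stock /prompt endpoint. A minimal sketch using only the standard library, assuming you exported your graph with "Save (API Format)" - the file name and address below are placeholders for your own setup.

import json
import urllib.request

HOST = "127.0.0.1:8188"  # or your LAN address, whatever the server listens on

# Export via ComfyUI's "Save (API Format)"; this filename is a placeholder.
with open("workflow_api.json", "r", encoding="utf-8") as f:
    graph = json.load(f)

req = urllib.request.Request(
    f"http://{HOST}/prompt",
    data=json.dumps({"prompt": graph}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))  # the reply includes a prompt_id

Note that the API format is not the same JSON as the editor-format workflow embedded in images; the "Save (API Format)" export is the one the /prompt endpoint expects.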
Thanks a lot for sharing the workflow. Your efforts are much appreciated. Details on how to use the workflow are in the workflow link. Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above. Just load your image and prompt, and go. It's simple and straight to the point.

An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img. Flux Schnell is a distilled 4-step model. You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder. You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell.

Download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. The example pictures do load a workflow, but they don't have a label or text that indicates if it's version 3.1 or not.

If it's a .json file, hit the "load" button and locate the .json file. Drag and drop doesn't work for .json files; you need to browse to the .json file location and open it that way. You can save the workflow as a .json file and load it again from that file.

Update ComfyUI and all your custom nodes first, and if the issue remains, disable all custom nodes except for the ComfyUI Manager, then test a vanilla default workflow. If that works out, you can start re-enabling your custom nodes until you find the bad one, or hopefully find that the problem resolved itself.

My 2nd attempt: I thought to myself, I will go as basic and as easy as possible. I will limit the models I am using to only large, popular models, and I will stick to basic ComfyUI nodes as much as possible, meaning I have none except for Manager and Workflow Spaces, that's it.

As someone relatively new to AI imagery, I started off with Automatic1111 and was tempted by the flexibility of ComfyUI, but felt a bit overwhelmed. I have to second the comments here that this workflow is great. Pretty comfy, right? ComfyUI is a popular tool that allows you to create stunning images and animations with Stable Diffusion.

These are examples demonstrating how to do img2img. Img2Img works by loading an image (like this example image), converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0. There's a node called VAE Encode with two inputs: pixels and VAE. The image you're trying to replicate should be plugged into pixels, and the VAE for whatever model is going into the KSampler should also be plugged into the VAE Encode.

I've been using ComfyUI for nearly a year, during which I've accumulated a significant number of images in my input folder through the Load Image node. Unfortunately, the file names are often unhelpful for identifying the contents of the images.

I have made a workflow to enhance my images, but right now I have to load the image I want to enhance, then upload the next one, and so on. How can I make my workflow grab images from a folder so that for each queued gen it loads image 001 from the folder, and for the next gen grabs image 002 from the same folder? Thanks in advance! My ComfyUI workflow was created to solve that.

Load Image List From Dir (Inspire) - this is the node you are looking for. Ensure that you use this node and not Load Image Batch From Dir; that node will try to send all the images in at once, usually leading to 'out of memory' issues.

In 1111, using image-to-image, you can batch load all frames of a video, batch load ControlNet images, or even masks, and as long as they share the same name as the main video frames, they will be associated with the image when batch processing. You need to select the directory your frames are located in (i.e., where you extracted the frames zip file if you are following along with the tutorial). image_load_cap will load every frame if it is set to 0; otherwise it will load however many frames you choose, which will determine the length of the animation.
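To make the image_load_cap behaviour concrete, here is a hedged sketch of what such a frame loader does conceptually: list frames in name order, stop at the cap (0 means "no cap"), and stack them into one batch. It mirrors the behaviour described above rather than any node's actual source code, and it assumes every frame has the same dimensions.

import glob
import os

import numpy as np
import torch
from PIL import Image

def load_frames(directory: str, image_load_cap: int = 0) -> torch.Tensor:
    # Frames are taken in name order, so zero-padded names keep video order.
    paths = sorted(glob.glob(os.path.join(directory, "*.png")))
    if image_load_cap > 0:
        paths = paths[:image_load_cap]  # the cap bounds the animation length
    frames = [np.asarray(Image.open(p).convert("RGB"), dtype=np.float32) / 255.0
              for p in paths]
    # Assumes identical frame dimensions; np.stack fails otherwise.
    return torch.from_numpy(np.stack(frames))  # [N, H, W, C] in 0..1

This also makes the out-of-memory complaint above easy to see: every capped frame ends up in one tensor, so the cap (or a list-style node that emits images one at a time) is what keeps RAM bounded.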
To be fair, I ran into a similar issue trying to load a generated image as an input image for a mask, but I haven't exhaustively looked for a solution. Load your image to be inpainted into the mask node, then right-click on it and go to "edit mask"; this will open the live painting thing you are looking for. You need to load and save the edited image.

Thank you very much! I understand that I have to put the downloaded JSONs into the custom nodes folder and load them from there.

You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well.

The one I've been mucking around with includes poses (from OpenPose) now, and I'm going to off-screen all the nodes I don't actually change parameters on. Nobody needs all that, LOL. Have fun.

The TL;DR version is this: it makes an image from your prompt without a LoRA, runs it through ControlNet, and uses that to make a new image with the LoRA.

After borrowing many ideas and learning ComfyUI, I thought it was cool anyway, so here. I'm not really checking my notifications.

I'm trying to get dynamic prompts to work with ComfyUI, but the random prompt string won't link with the CLIP Text Encode node as indicated in the diagram I have here from the GitHub page. The diagram doesn't load into ComfyUI, so I can't test it out.

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. Is there a common place to download these? None of the Reddit images I find work, as they all seem to be JPG or WebP.

Hi all! Was wondering: is there any way to load an image into ComfyUI and read the generation data from it? I know dragging the image into ComfyUI loads the entire workflow, but I was hoping I could load an image and have a node read the generation data like prompts, steps, sampler, etc.
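On reading generation data without loading the whole graph: alongside the editor-format "workflow" chunk, ComfyUI PNGs carry an execution-format "prompt" chunk keyed by node id, and its KSampler and CLIPTextEncode entries hold exactly the fields asked about (steps, sampler, prompt text). A small sketch, assuming Pillow is installed; it only works on PNGs whose metadata survived, which is why re-encoded JPG/WebP uploads from Reddit come up empty.

import json
import sys

from PIL import Image

with Image.open(sys.argv[1]) as im:
    # Raises KeyError if the metadata was stripped (e.g. JPG/WebP re-uploads).
    data = json.loads(im.info["prompt"])

for node_id, node in data.items():
    ctype = node.get("class_type", "")
    inputs = node.get("inputs", {})
    if "KSampler" in ctype:  # KSampler / KSamplerAdvanced hold the settings
        keys = ("seed", "steps", "cfg", "sampler_name", "scheduler", "denoise")
        print(node_id, ctype, {k: inputs.get(k) for k in keys})
    elif ctype == "CLIPTextEncode":  # positive/negative prompt text
        print(node_id, ctype, str(inputs.get("text"))[:80])

One caveat: an input that is wired from another node shows up as a link (a [node_id, slot] pair) rather than a literal value, so a prompt routed through, say, a dynamic-prompts node won't appear as plain text here.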