Why is ComfyUI faster? Welcome to the unofficial ComfyUI subreddit.

Why are there such big speed differences when generating between ComfyUI, Automatic1111, and other solutions? And why is it so different for each GPU?

So, I always wanted to try out ComfyUI. But then I have to ask myself if it is really faster. Like 20-50% faster in terms of images generated per minute? ComfyUI is still way faster on my system than Auto1111, and it has been noticeably faster unless I want to use SDXL.

Hi! Does anyone here use ComfyUI professionally for work, and if so, how and why? Also, why do you prefer it over alternatives like Midjourney, A1111, etc.? I ignored it for a while when it first came out.

About knowing what nodes do: this is the hard part of ComfyUI, but there's a wiki created by the dev (comfyanonymous) that will help you understand many things.

With ComfyUI you have access to ready-made workflows, but this can be overwhelming, especially for beginners. Is this more or less accurate? ComfyUI obviously has a big learning curve. I'm mainly using ComfyUI on my home computer for generating images.

A1111 does a lot behind the scenes with prompts, while ComfyUI doesn't, making it more sensitive to prompt length. The sampler shouldn't affect that, but I always use Euler normal; try it out.

If a setup is unexpectedly slow, it's possibly some custom nodes, or a wrongly installed startup package like torch or xformers.

Even just six months ago, having TensorRT in Comfy would have been decently big news. All it takes is a little time to compile the specific model with the resolution settings you plan to use. Fast ~18-step images (2 s inference time on a 3080). Warning: it takes a minute to load.

Key improvements over DMD: eliminates the need for a regression loss and expensive dataset construction. Hope I didn't crush your dreams.

PSA: RealPLKSR is a new, FANTASTIC (and fast!) 4x upscaling architecture.

Watch your resolution, too: too high on the height and you get multiple heads.

I've been generating (mostly in ComfyUI) on a 3070 Ti laptop (8 GB VRAM), and I want to upgrade to a good GPU for my desktop PC.

With my 8 GB RX 6600, which could only run SDXL under SD.Next (out of memory after 1-2 runs, even at the default 1024x1024), I was able to run it in ComfyUI, BUT only at 512x512 or 768x512 / 512x768 (memory errors even with those from time to time). How can I fix that? Curiously, it is like 25% faster running an SD 1.5 checkpoint.

I also tried Draw Things (which has a lot of configuration settings). You have to draw a mask, save the image with the mask, then upload it to the UI again to inpaint.

It's still 30 seconds slower than ComfyUI at the same 1366x768 resolution and 105 steps. SDXL runs on ComfyUI at 1.13 s/it, and on WebUI I get like 173 s/it.
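A note on reading numbers like those: the consoles report speed as either s/it or it/s depending on whether a step takes more or less than a second, and the two are easy to mix up (which would explain a figure like 173 s/it). A small sketch for normalizing readouts; the helper function is hypothetical, and the 105-step count is taken from the comment above:

```python
def to_sec_per_step(value: float, unit: str) -> float:
    """Console speed readouts: 's/it' and 'it/s' are reciprocals of each other."""
    return value if unit == "s/it" else 1.0 / value

# normalize two readouts and compare total time for a 105-step generation
for value, unit in [(1.13, "s/it"), (2.5, "it/s")]:
    spi = to_sec_per_step(value, unit)
    print(f"{value} {unit} -> {spi:.2f} s/it -> {105 * spi:.0f} s for 105 steps")
```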
A friend of mine, for example, is doing this on a GTX 960 (what a madman), and he's seeing up to 3 times the speed doing inference in ComfyUI over Automatic's. It's just the nature of how the GPU works that makes it so much faster.

Definitely no nodes that quickly flick green before the KSampler?

I no longer use Automatic unless I want to play around with Temporal Kit.

After all, the more tools there are in the SD ecosystem, the better for SAI, even if ComfyUI and its core library are the official code base for SAI nowadays.

Before 1.6 I couldn't run SDXL in A1111, so I was using ComfyUI. It seems to have everything I need for image sampling.

Some of the cards with 16 GB VRAM are pretty cheap now.

ComfyUI makes things complicated, and people get bored.

At the end of the day, I'm faster with A1111: better UI shortcuts, a better inpaint tool, and better clipboard copy/paste when you want to use Photoshop.

I tested my failed LoRAs with A1111 and they were great; tbh I am more interested in why the LoRA results are so different. But it also produces some different colors, and it's more blurry. I'll try it in ComfyUI later, once I set up the refiner workflow, which I've yet to do.

A1111 is like ComfyUI with prebuilt workflows and a GUI for easier usage. But if you want to go into more detail and have complete control over your composition, then ComfyUI. Don't get scared by the noodle forests you see on some screenshots: you define the complexity of what you build.

I regularly get several hours before it breaks.

So after someone recently pointed out to me that Comfy, among other things, wouldn't be as much of a VRAM hog, I decided to try it. I started on A1111; a few weeks ago I did a "spring-cleaning" on my PC and completely wiped my Anaconda environments, packages, etc.

Creator mode: users (also creators) can convert a ComfyUI workflow into a web application, run it locally, or publish it to comfyflow.app to share with other users.

You're also super fast: latent consistency models will be added officially to optimum-intel soon. Forge's memory management is sublime, on the other hand.

On the one hand, ext is much faster for some operations; on the other, file corruption on NTFS has been basically nonexistent for decades.

When you build on top of software made by someone else, there are many ways to do it. That doesn't negate discussing the reasons why it is being implemented now, which was my point.

I tested with CFG 8, 6, and 4.

Maybe it's got something to do with the quantization method? The T5 FP8 + Flux Q3_K_S obviously don't fit together in 8 GB VRAM, and still the Flux Q3_K_S was loaded completely, so maybe I'm just not reading the console right. The Flux Q4_K_S just seems to be faster than the smaller Flux Q3_K_S, despite the latter being loaded completely.
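Rough arithmetic helps sanity-check that: weight memory is approximately parameter count times bits per weight, divided by 8. A sketch with illustrative numbers only; Flux-dev being ~12B parameters and the GGUF bits-per-weight figures are assumptions, and real files add overhead:

```python
def weight_gib(params: float, bits_per_weight: float) -> float:
    """Approximate weight-only memory footprint in GiB."""
    return params * bits_per_weight / 8 / 2**30

FLUX_PARAMS = 12e9  # assumption: ~12B parameters
for name, bits in [("fp16", 16), ("fp8", 8), ("Q4_K_S", 4.5), ("Q3_K_S", 3.5)]:
    print(f"{name:7s} ~ {weight_gib(FLUX_PARAMS, bits):.1f} GiB")
```

At roughly 5 GiB for the Q3 transformer alone, plus the T5 encoder, VAE, and activations, an 8 GB card only works because ComfyUI offloads parts between steps, so which quant "fits" is less clear-cut than the file sizes suggest.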
To me, Comfy feels better suited for post-processing than for image generation. There's no point using a node-based UI just to generate an image, but layering different models for upscaling or feature refinement is the main reason Comfy is actually good after the image-generation part. ATM, using LoRAs and TIs is a PITA.

Comfy is faster than A1111, though, and you have a lot of creative freedom to play around with latents, mix and match models, and do other crazy stuff.

If you are looking for a straightforward workflow that leads you quickly to a result, then Automatic1111. For the diffusion process, I first made 3 outputs at 10, 20, and 30 samples. It adds additional steps. ComfyUI takes 1:30; Auto1111 is taking over 2:05.

If I restart the app it will be faster again, but the second generation and onward will be slower again. Everything goes smooth and fast only on a 4090. Fooocus would be even faster.

ComfyUI has a standalone beta build which runs on Python 3.11.

From what I gather, only A1111 and its derivatives can correctly append metadata like prompts, CFG scale, and used checkpoints/LoRAs, while ComfyUI cannot, at least not in the same format.

To verify I'm not full of it: go generate something and check the console for your iterations per second.

Turbo SDXL LoRA: Stable Diffusion XL faster than light. A few seconds = 1 image. Tested on ComfyUI, workflow included.

Everything that has to do with diffusers is pretty much deprecated in Comfy right now.

But I'm getting better results, based on my abilities / lack thereof.

VFX artists are also typically very familiar with node-based tools.

I used ComfyUI for a while, but on Linux with my AMD card I was constantly getting OOM driver freezes and graphical glitches.

When I first saw ComfyUI, I was scared by how many options there are. ComfyUI is the least user-friendly thing I've ever seen in my life.

Colab does break in my normal operation.

In ComfyUI with Juggernaut XL, it usually takes 30 seconds to a minute to run a batch of 4 images.

The main problem is that moving large files to and from an SSD repeatedly will wear it out pretty fast.

Just a quick and simple workflow I whipped up this morning to mimic Automatic1111. As you get comfortable with ComfyUI, you can experiment and try editing a workflow. I was hoping ComfyUI would be even faster than the latest version; adjusting settings, using efficient workflows, and keeping system resources optimized will give faster rendering times and a smoother experience.

Seems relevant here: I wrote a module to streamline the creation of custom nodes in ComfyUI. Just write a regular Python function, annotate the signature fully, then slap a @ComfyFunc decorator on it. It'll parse the function and eliminate all the boilerplate and redundant information. It's also useful when you want to quickly try something out, since you don't need to set up a workflow.
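For anyone who hasn't written one: below is roughly the stock boilerplate that a decorator module like that wraps. A minimal sketch of ComfyUI's standard custom-node class; the node and field names here are made up, and IMAGE tensors are float tensors shaped [batch, height, width, channel]:

```python
class Brighten:
    """Minimal ComfyUI custom node: multiply an image by a factor."""
    CATEGORY = "example"
    FUNCTION = "run"
    RETURN_TYPES = ("IMAGE",)

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "image": ("IMAGE",),
            "factor": ("FLOAT", {"default": 1.2, "min": 0.0, "max": 4.0, "step": 0.05}),
        }}

    def run(self, image, factor):
        # clamp to keep values in the 0..1 range ComfyUI expects
        return ((image * factor).clamp(0.0, 1.0),)

# ComfyUI discovers nodes through this mapping in a custom_nodes package __init__.py
NODE_CLASS_MAPPINGS = {"Brighten": Brighten}
```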
Even if there's an issue with my installation or with the refiner implementation in SD.Next (still experimental), ComfyUI's performance is significantly faster than what you are reporting.

Question / Help: I am upscaling a long sequence of images (batch plus batch count), one by one. Shouldn't you be able to reach the same-ish result faster if you just upscale with a 2x upscaler? Is there some benefit to this upscale-then-downscale approach, or is it just related to the availability of 2x models?

I have a 4090 rig, and I can 4x the exact same images at least 30x faster than using ComfyUI workflows. No matter what, UPSCAYL is a speed demon in comparison.

#ComfyUI #Ultimate upscale: a faster upscale, same quality. And it's 2.5 to 3 times faster than Automatic1111.

ComfyUI is much better suited for studio use than other GUIs available now. Once you get the hang of it, you understand its power and how much more you can do in it; once you get comfy with Comfy, you don't want to go back. Workflows are much more easily reproducible and versionable, and you also just see everything clearly. The only problem I have is that it's difficult to undo things (Cmd/Ctrl+Z doesn't work?).

Healthy competition, even between direct rivals, is good for both parties. For example, SD and MJ are pushing each other ahead faster and further.

This is generated by ComfyUI, and this is generated by WebUI.

But can it be used with ComfyUI? In my site-packages directory I see "transformers" but not "xformers". Whether that applies to your case really depends on what you're trying to do. It should be at least as fast as the A1111 UI if you do that.

Don't know if inpainting works with SDXL, but ComfyUI inpainting works with SD 1.5.

Results using it are practically always worse than nearly every other sampler available.

Save up for an Nvidia card; it doesn't have to be the 4090. Take it easy! 👍

Watch the width, too: too much and you get side-by-side people.

I have an RTX 2070 and 16 GB RAM, and ComfyUI seemed to be working fine, but today, after a few generations, it slows down from about 15 seconds per image to a minute and a half. I think the noise is also generated differently: A1111 uses the GPU by default and ComfyUI uses the CPU by default.

For instance, (word:1.1) in ComfyUI is much stronger than (word:1.1) in A1111. It also seems like ComfyUI is way too intense with heavier weights like (words:1.2) and just gives weird results.
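The usual explanation for that difference: both UIs scale the text-encoder output by the weight, but A1111 then rescales the result back toward the unweighted conditioning's original mean, which softens the effect, while ComfyUI applies the weight more directly. A toy illustration on random vectors, not either codebase's actual implementation:

```python
import torch

torch.manual_seed(0)
# toy stand-in for CLIP conditioning; real embeddings have a nonzero mean,
# which is what makes the mean-restoration step meaningful
cond = torch.randn(77, 768) * 0.5 + 1.0
weights = torch.ones(77, 1)
weights[5:15] = 1.3  # ten tokens emphasized, as in "(word:1.3)"

direct = cond * weights                             # weight applied directly
rescaled = direct * (cond.mean() / direct.mean())   # rescaled back to the original mean

print(f"{cond.mean():.4f} {direct.mean():.4f} {rescaled.mean():.4f}")
# the direct version drifts away from the unweighted conditioning as a whole,
# so the same numeric weight lands harder
```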
I haven't spent enough time optimizing it either.

Nodes in ComfyUI represent specific Stable Diffusion functions. Comfy is basically a backend with a very light frontend, while A1111 is a very heavy frontend. ComfyUI is also trivial to extend with custom nodes.

I heard that ComfyUI generates faster. On my machine, Comfy is only marginally faster than 1111. CUI can do a batch of 4 and stay within the 12 GB, and CUI is also faster. I like the web UI more, but ComfyUI just gets things done quicker, and I can't figure out why; it's breaking my brain. But you can achieve this faster in A1111, considering the workflow of ComfyUI. With A1111 opened alongside, it's 10-12 it/s; no idea why.

Having used ComfyUI quite a bit, I got to try Forge yesterday, and it is great! Things just work.

This does take 20 to 30 minutes.

ComfyUI allows you to build an extremely specific workflow with a level of control that no other system in existence can match.

So yeah, like people say on here, your negatives are just too basic.

It's great to see that you were able to integrate this before that.

When I upload them, the prompts get automatically detected and displayed, but not the resources used.

For DPM++ SDE Karras, I selected the karras scheduler. Sampling method on ComfyUI: LCM; CFG scale: 1 to 2; sampling steps: 4.

Plus, Comfy is faster, and with the ready-made workflows a lot of things can be simplified; I'm learning what works and how from them.

Then I tested my previous LoRAs with ComfyUI, and they sucked too. Except I have all those CSV files in the root directory ComfyUI indicates they need to be in, so why aren't they found?

I can tell you that ComfyUI renders 1024x1024 in SDXL at faster speeds than A1111 does with hires fix 2x (for SD 1.5 models), just using a simple workflow.

If you still have performance issues, report them in this thread, and make sure to post your full ComfyUI log and your workflow.

Compare this to a single 10 MB file: now the first two steps are a very small fraction of the total time, so it seems much faster, and most of the copy time is sequential I/O rather than random.

There has been a loader for diffusers models, but it's no longer in development; that's why people are having trouble using LCM in Comfy now, and also the new 60% faster SDXL (both only support diffusers).

Both of the workflows in the ComfyUI article use a single image as the input/prompt for the video creation and nothing else.

I will say: don't dump Automatic1111.

The floating-point precision of fp16 is very, very poor for very small decimals. Bf16 is capable of much better representation for very small decimals.
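Concretely: fp16 spends its bits on mantissa but has a narrow exponent range, while bf16 keeps fp32's exponent range at the cost of precision. A quick torch sketch of what each format does to small and large values:

```python
import torch

tiny, big = 1e-8, 70000.0
print(torch.tensor(tiny, dtype=torch.float16))   # tensor(0.): underflows; fp16's smallest subnormal is ~6e-8
print(torch.tensor(tiny, dtype=torch.bfloat16))  # ~1.0012e-08: bf16 keeps fp32's exponent range
print(torch.tensor(big, dtype=torch.float16))    # tensor(inf): fp16 tops out at 65504
print(torch.tensor(big, dtype=torch.bfloat16))   # 70144: in range, but only ~2-3 significant digits
```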
Very nice, working well, and way faster than the previous method I was using; I'm testing with a bunch of checkpoints and settings to find a happy balance.

The speed difference is far more noticeable on lower-VRAM setups, as ComfyUI is way more efficient with RAM and VRAM. On my 12 GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using RAM for VRAM at some point near the end of generation, even with --medvram set. The big current advantage of ComfyUI over Automatic1111 is that it appears to handle VRAM much better. But those structures it has prebuilt for you aren't optimized for low-end hardware.

While ComfyUI is better than default A1111, TensorRT is supported on A1111, uses much less VRAM, and makes image generation 2-3x faster.

I spent many hours learning ComfyUI and I still don't really see the benefits. But it is fast, for whatever that counts for. There are many anecdotes on this subreddit that ComfyUI is much faster than A1111, without much info to back them up.

Also, "octane" might invoke "fast render" instead of "octane style".

I used the same checkpoint, sampling method, prompt, and steps, but I got completely different images from WebUI and ComfyUI; they have a different style and color, and I don't know why.

[Please Help] Why is a bigger image faster to generate? This is a workflow I made yesterday, and I've noticed the second KSampler is about 7x faster, even though it processes a larger image. I have tried many times.

The workflow posted here relies heavily on useless third-party nodes from unknown extensions.

If it isn't, let me know, because it's something I need.

What I can say is that I (RTX 2060 6 GB, 32 GB RAM, Windows 11) get vastly better performance on SD Forge with Flux Dev compared to Comfy, using the recommended settings.

I've been using A1111 for about half a year and I really, really liked ComfyUI; it's a real breath of fresh air, but I'm somewhat upset by the slower generation.

Only the LCM Sampler extension is needed, as shown in this video.

So far the images look pretty good, except I'm sure they could be a lot better. Thank you for your response.

I'll tell you why I ended up with ComfyUI: by being a modular program, it allows everyone to make workflows that meet their own needs, or to experiment on whatever they want.

Apparently, that is because of the errors logged at startup. Update it using update/update_comfyui.bat, then run ComfyUI and try your workflow.

When you drag an image into the ComfyUI window, you get the settings used to create THAT image, not the batch.
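Related to the metadata comments above: ComfyUI does embed its settings in the output file; it just stores the whole node graph as JSON in PNG text chunks (keys "prompt" and "workflow" in default PNG output) rather than A1111's human-readable "parameters" string, which is why many tools don't pick it up. A sketch for reading both; the filename is illustrative and keys can vary with custom save nodes:

```python
import json
from PIL import Image

info = Image.open("output.png").info        # PNG text chunks as a dict

if "parameters" in info:                    # A1111: plain-text settings string
    print(info["parameters"])
if "workflow" in info:                      # ComfyUI: full node graph as JSON
    graph = json.loads(info["workflow"])
    print(f"{len(graph['nodes'])} nodes in embedded workflow")
```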
ComfyUI is a bitch to learn at first, but once you get a grasp of it and build the workflows you want to use for what you're doing, you're on a plateau and it's really easy.

UPDATE: In Automatic1111, my 3060 (12 GB) can generate a 20 base-step, 10 refiner-step 1024x1024 Euler a image in just a few seconds over a minute.

ComfyUI weights prompts differently than A1111.

Finally, drop the picture you generated back into ComfyUI and press generate again while checking the iterations per second.

A few new rgthree-comfy nodes: fast-reroutes.

ComfyUI always says that its workflow describes how SD works, but it simply doesn't. The ComfyUI authors are trying to confuse and mislead people into trusting this.

Asked Reddit wtf is going on; everyone blindly copy-pasted the same thing over and over.

Learn ComfyUI faster: I recommend installing the ComfyUI Manager extension; with it you can grab other available custom nodes.

SD.Next is faster, but the results with the refiners look worse.

Lower the resolution, and if you gotta go widescreen, use outpainting or the amazing Photoshop beta.

"(Composition) will be different between ComfyUI and A1111 due to various reasons."

I'm on an 8 GB RTX 2070 Super card.

I need help (I just want to install normal SD, not SDXL). I'm getting issues with ComfyUI loading this custom SDXL Turbo model: do I have to use another workflow, or why are the images not rendered instantly, and why do I have these image issues? I've provided a link to the model on Civitai, the result image, and my ComfyUI workflow in a screenshot.

ComfyUI has absolutely no security baked in (neither from the local/execution standpoint, nor from the remote/network authentication standpoint).

Easier to install and run.

Assume you have a base checkpoint (SD 1.5), and someone trained and fine-tuned it to generate anime images. The resulting model can itself be used as a checkpoint, but instead of distributing that whole model, they can create a LoRA, which is the difference between the fine-tuned model and the original model, and generally small in size.
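That difference is stored as low-rank factors, which is where the size saving comes from. A toy sketch of the arithmetic; the sizes are illustrative, and real LoRAs patch many layers and carry an alpha scale per layer:

```python
import torch

d, r = 768, 16                        # layer width vs. LoRA rank
W = torch.randn(d, d)                 # weight matrix from the base checkpoint
A = torch.randn(r, d)                 # trained down-projection
B = torch.randn(d, r)                 # trained up-projection
strength = 0.8                        # the slider you set when loading a LoRA

W_patched = W + strength * (B @ A)    # applying a LoRA = adding the low-rank diff

print(W.numel(), A.numel() + B.numel())  # 589824 vs 24576 params for this layer (~24x smaller)
```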
Maybe I am doing it wrong, but ComfyUI inpainting is a bit awkward to use.

If it's 2x faster with hyperthreading enabled, I'll eat my keyboard. 😁

The actual copy is quite fast, but writing the metadata is slow.

I expect it will be faster. Feels like it is barely faster than my old setup.

Using ComfyUI was a better experience; the images took around 1:50 to 2:25 at 1024x1024 / 1024x768, all with the refiner.

Forge is built on top of the A1111 web UI, as you said. A "fork" of A1111 would mean taking a copy of it and modifying that copy with the intent of providing an alternative that can replace the original.

If it allowed more control, more people would be interested, but it just replaces dropdown menus and windows with nodes.

Studio mode: users need to download and install the ComfyUI web application from comfyflow.app, and finally run ComfyFlowApp locally.

At the moment there are three ways ComfyUI is distributed. Standalone: everything is contained in the zip, so you could use it on a brand-new system.

And I don't understand why: even when A1111 is not being used, the simple fact that it's open slows my ComfyUI (SDXL) generations down by 500 to 600%.

The workflow is huge, but with the toggles it can run pretty fast. Now I've been on ComfyUI for a few months and I won't turn A1111 on anymore.

"Fast Creator v1.4": free workflow for ComfyUI.

I've found A1111 is still useful for many things, like grids, which Comfy can do but not as well. But yeah, it goes fast in ComfyUI. Am I doing something wrong with A1111, or is ComfyUI just that much faster and better?

Everything in AI is changing left and right, so a flexible approach is the best, imho.

I meant using an image as input, not video.

I want a checkbox that says "upscale" or whatever that I can turn on and off, and a slider for how many images I want in a batch.

Comfy does launch faster than Auto1111, though.

I am running ComfyUI on a machine with 2x RTX 4090 and am trying to use the ComfyUI_NetDist custom node to run multiple copies of the ComfyUI server, each using a separate GPU, to speed up batch generation. I have tried it (a) with one copy of SDXL running on each GPU and (b) with two copies of SDXL running per GPU.
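For reference, the non-NetDist way to do that is simply to launch one ComfyUI process per GPU on different ports; recent ComfyUI builds expose a --cuda-device flag for pinning. A sketch, with the install path and port numbers as illustrative assumptions:

```python
import subprocess

COMFY_DIR = "/path/to/ComfyUI"  # illustrative path to the ComfyUI checkout

# one server per card; queue work against ports 8188 and 8189 separately
for gpu, port in [(0, 8188), (1, 8189)]:
    subprocess.Popen(
        ["python", "main.py", "--cuda-device", str(gpu), "--port", str(port)],
        cwd=COMFY_DIR,
    )
```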
I don't like ComfyUI, because imo user-friendly software is more important for regular use.

I was facing similar issues when I first started using ComfyUI; had a similar experience. Try these: 1. Adjust the CFG scale to 5, and if your prompts are big like in A1111, add a token merging node; I find that much faster. 2. Try using an fp16 model config in the CheckpointLoader node.

Generally, the ComfyUI images are worse if you use CFG > 4. While the kohya samples were very good, the ComfyUI tests were awful. My experience with ComfyUI is the opposite.

This is why I have and use both. Lol, in full agreement on not using it if you don't want to.

Sorry to say that it won't be much faster, even if you overclock the CPU. Then go disable hyperthreading in the UEFI.

I guess the GPU would be faster; I have no evidence, just a guess.

I use a 4090, which I rent with a Vast.ai account, plus a Jupyter notebook, for when I'm trying out new things, want or need to work fast, and for img2img batch iterative upscaling. It is not as fast, but it is more reliable; when Colab breaks, I merely stop and restart the Jupyter script.

I'm always on a budget, so I stored all my models on an HDD.

ComfyUI is really good for more "professional" use and lets you do much more if you know what you are doing, but it's harder to navigate each setting when you want to tweak: you have to move around the screen a lot, zoom in, zoom out, etc.

Comfy doesn't really do "batch" modes; it just adds individual entries to the queue very quickly, so adding a batch of 10 images is exactly the same as clicking the "Queue Prompt" button 10 times.
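That queue is also exposed over HTTP, which is the easy way to script "batches": export the workflow with "Save (API Format)" and POST it N times. A minimal sketch against a default local server; the filename is illustrative:

```python
import json
import urllib.request

with open("workflow_api.json") as f:      # exported via Save (API Format)
    workflow = json.load(f)

# queueing ten prompts is exactly like pressing "Queue Prompt" ten times
for _ in range(10):
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": workflow}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```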
I also recently tried Fooocus and found it lacked customization personally, but I appreciate the awesome inpainting it has and its Midjourney-inspired approach.

The CPP version overheats my computer MUCH faster than A1111 or ComfyUI. The big difference is that, looking at Task Manager (on different runs, so as not to influence results), my CPU usage is at 100% with CPP with low RAM usage, while in the others my CPU usage is very low with very high RAM usage. Still, this is the first time I've seen diffusion models on a desktop CPU fast enough to actually use in practice.

Is infizoom possible in ComfyUI? Any experience/knowledge on any of the above is greatly appreciated.

For example, you can do side-by-side comparisons of workflows: one with only the base model and one with base + LoRA, and see the difference.

I use a script that updates ComfyUI and checks all the custom nodes.

Unless cost is a constraint, or you lack the space to back up your files, move everything to an SSD.

In the GitHub Q&A, the ComfyUI author had this to say: "Why did you make this? I wanted to learn how Stable Diffusion worked in detail."

DMD2 aims to create fast, one-step image generators that can produce high-quality images with much less computational cost than traditional diffusion models, which typically require many steps to generate an image.

Everything feels fast; I haven't found any weird bugs.

Don't load RunPod's ComfyUI template; load the Fast Stable Diffusion one.

Thanks for implementing this so quickly! Messing around with it, I feel like the hype was a bit too much.

Here's the thing: ComfyUI is very intimidating at first, so I completely understand why people are put off by it.

UPDATE 2: I suggest if you meant s/it, you edit your comment.

Now why does 7zip help?

I can link to the paper discussing why the sampler was created and why it's so much faster, if you would like to read it.

The only cool thing is that you can repeat the same task from the queue.

So, while I don't know specifically what you've been watching, the short version is that ComfyUI enables things other UIs can't. It's better for generating a large quantity of images, but for editing it is not really efficient. They are different.

Hey everyone! I'm excited to share the latest update to my free workflow for ComfyUI, "Fast Creator v1.4". This update includes new features and improvements to make your image creation process faster and more efficient.

Here are my Pro and Contra so far for ComfyUI:
Pro: standalone/portable; almost no requirements or setup; starts very fast; SDXL support; shows the technical relationships of the individual modules.
Contra: complex UI that can be confusing; without advanced knowledge about AI/ML, it's hard to use or create workflows.

I am curious why Nvidia waited so long to assist with finally making this available.

I accidentally tested ComfyUI for the first time about 20 minutes ago and noticed I had clicked on the CPU bat file (my bad 🤦‍♂️).

ComfyUI also uses xformers by default, which is non-deterministic. I don't really care about getting the same image from both of them, but if you check closely, the Automatic1111 one is almost perfect (you don't have to know the model; it looks almost real), while the ComfyUI one looks as if I had reduced the LoRA weight or something.
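Those two things (CPU vs GPU noise generation, plus non-deterministic kernels) are enough to make the "same" seed diverge between UIs. The RNG part is easy to demonstrate in torch: CPU and CUDA use different generator algorithms, so the same seed produces different initial latents:

```python
import torch

shape = (1, 4, 64, 64)  # SD latent shape for a 512x512 image

cpu_noise = torch.randn(shape, generator=torch.Generator("cpu").manual_seed(42))

if torch.cuda.is_available():
    gpu_gen = torch.Generator("cuda").manual_seed(42)
    gpu_noise = torch.randn(shape, generator=gpu_gen, device="cuda")
    # same seed, different sequence: the two tensors don't match
    print(torch.allclose(cpu_noise, gpu_noise.cpu()))  # False
```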
Yeah, looks like it's just my Automatic1111 that has a problem; ComfyUI is working fast.

It is how ComfyUI works, not how SD works.