
Stable Diffusion on a GTX 1080 Ti. 1024x768 with Euler and no highres fix: ~2.5-3 it/s.

Oct 3, 2018 · TensorFlow benchmark results: GTX 1080 Ti vs RTX 2080 vs RTX 2080 Ti vs Titan V. The benchmark I have been using most recently for GPU ML/AI performance is a CNN (convolutional neural network) Python script contained in the NGC TensorFlow Docker image. See the PyTorch documentation on Memory Management and PYTORCH_CUDA_ALLOC_CONF. (Addendum: according to a Chinese site, the performance difference is about 2x.)

GPU ranking by image-generation speed. Oct 14, 2023 · Latest GPU AI image-generation performance ranking for October 2023 (with tier chart).

Here are my results for inference using different libraries: pure PyTorch: 4.5 it/s (the default), xformers: 7 it/s (I recommend this), TensorRT: 8 it/s, AITemplate: 10.5 it/s. Regarding the operating system, Windows 10 or newer is recommended. If you are looking for a more in-depth video tutorial, let me know. GTX 1080 Ti 11GB vs RTX 4070 12GB at 1440p. It seems that isn't quite ideal, but people are getting it to work. The speed increase outweighs the 1GB VRAM benefit in my view.

Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text encoder to its architecture. If you plan to do more training, the 3090 might have a slight edge; it will also be a lot more flexible, as it has more VRAM. I bought a 22GB-modded 2080 Ti on Taobao.

Next up, we need to create the conda environment that houses all of the packages we'll need to run Stable Diffusion. Anyway, there is a benchmark in "Stable Diffusion Benchmarked: Which GPU Runs AI Fastest (Updated)" on Tom's Hardware (tomshardware.com); a glance at the chart there makes the ranking obvious. If you don't care about speed, you can pretty much let this be whatever; you can generate as many optimized engines as desired. For Highres. fix, I tried tuning PYTORCH_CUDA_ALLOC_CONF, but I doubt it's the optimal config for 8GB of VRAM.
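The iteration rates quoted above can be turned into concrete per-image times. A minimal sketch in plain Python; the it/s figures are the ones quoted in the text, and the 20-step sampling count is an assumption, not something the original posts state:

```python
# Iteration rates (it/s) for each inference backend, as quoted in the text.
RATES_IT_PER_S = {
    "pure PyTorch": 4.5,
    "xformers": 7.0,
    "TensorRT": 8.0,
    "AITemplate": 10.5,
}

def seconds_per_image(rate_it_per_s: float, steps: int = 20) -> float:
    """One image takes `steps` denoising iterations at the given rate."""
    return steps / rate_it_per_s

# Fastest backend first.
for name, rate in sorted(RATES_IT_PER_S.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} {seconds_per_image(rate):5.2f} s/image")
```

At these rates the spread is roughly 2x between the slowest and fastest backend, which matches the thread's recommendation to move off the pure-PyTorch default.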
InvokeAI: InvokeAI is an application for generating images from text with Stable Diffusion. It runs on Windows, macOS, and Linux, on GPU cards with as little as 4GB of RAM, and offers a polished web interface. On average, it was about 15% faster than the GTX 1070, 9% faster than the GTX 1080 8GB, and even manages to be a small 1.4% faster than the Titan X 12GB. Extract the zip file at your desired location. In AI image generation, GPU performance plays a crucial role in how efficiently everything runs. The 3060 will be faster for generation; the P40 may be more useful for fine-tuning models.

Intel: Arc A770 16GB 9.2 it/s; Arc A750 8GB 8.0 it/s; Arc A380 6GB 2.5 it/s.

Dec 21, 2022 · The generation speed of the AI illustration tool Stable Diffusion varies greatly with PC performance; compare a current gaming PC carrying the strongest GPU of the moment, the GeForce RTX 4090, against older machines. I'm not an expert; my knowledge is basically "the higher the number, the better," lol.

Apr 15, 2023 · There is a table, and the gap is large: the 1080 Ti is too old to support half precision and gets crushed by a 2060. If you are considering a 1080 Ti, you might as well go straight to a P40, which has 24GB of VRAM. Is anyone using a 1080 Ti?
While looking for alternative options, the 1080 Ti seemed very suitable for its 11GB of VRAM and its price, but I can't sell my current GPU. A single 2080 Ti 11GB blower card, tested under both Windows 10 20H2 and Ubuntu 22.04 LTS.

When launching Stable Diffusion it kept printing the "Torch not compiled with CUDA enabled" warning. At first I ignored it on the theory that "if it works, it works," even though every image took ten-odd seconds; then, out of curiosity, I searched for solutions to the warning, and none of the results explained it.

I'm in the market for a 4090, both because I'm a gamer and because I have recently discovered my new hobby, Stable Diffusion :) I've been using a 1080 Ti (11GB of VRAM) so far and it seems to work well enough with SD. A 2080 Ti might be another story. You'll need a PC with a modern AMD or Intel processor, 16 gigabytes of RAM, an NVIDIA RTX GPU with 8 gigabytes of memory, and a minimum of 10 gigabytes of free storage space available. Stable Diffusion can run on mid-range GPUs with at least 8GB of VRAM.
The "Export Default Engines" selection adds support for resolutions between 512x512 and 768x768 for Stable Diffusion 1.5 and 2.1, with batch sizes 1 to 4. For a while now I've been thinking of upgrading my GPU from a 1080 Ti to something else for video editing, gaming, and art generation.

Mar 18, 2023 · I switched from 12… There were some fun anomalies, like the RTX 2080 Ti often outperforming the RTX 3080 Ti. I had been looking at a laptop with an RTX 4070 with 8GB of VRAM. You can head to Stability AI's GitHub page to find more information about SDXL and other diffusion models. Feb 23, 2023 · Speed up Stable Diffusion AI image generation tenfold on Nvidia GPUs. The second machine has the following characteristics, PC2: an Intel i7-6700K. "I'm currently using an MSI 1080 Ti 11G and started using Stable Diffusion last month; I've generated a few images and…"

Jul 31, 2023 · Is NVIDIA GeForce or AMD Radeon faster for Stable Diffusion? Although this is our first look at Stable Diffusion performance, what is most striking is the disparity between various implementations of Stable Diffusion: up to 11 times the iterations per second for some GPUs. In theory, 10x 1080 Ti should net me 35,840 CUDA cores and 110GB of VRAM, while one 4090 sits at 16,000+ CUDA cores and 24GB of VRAM. TensorRT uses optimized engines for specific resolutions and batch sizes. s = seconds. There are like 50% more 4090s than 1080 Tis already, according to the Steam hardware survey. I went from a 1080 Ti to a 4090 as well at launch (after trying to get a 3080 FE for two years), and it was a big jump.

Watch as I conduct real-time tests and analyze performance. Makes no sense. How is this even possible? I'm so used to waiting 60+ seconds per image on my outdated 1080 Ti, and then I try sd_xl_turbo_1.0_fp16 on a whim and I'm generating 9 images in 7 seconds. I'm currently running it on my 1060 6GB laptop.

Aug 28, 2023 · Stable Diffusion has developed very rapidly: in under a year it has gained more and more capabilities, the Chinese community has matured considerably, and the number of base models and LoRAs contributed by Chinese model authors keeps growing.
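The "ten 1080 Tis versus one 4090" comparison above is easy to sanity-check. A small sketch; the per-card core count is derived from the text's 35,840 total, the prices are the eBay/Best Buy figures quoted elsewhere in the thread, and 16,384 is the 4090 core count the text itself cites:

```python
# Per-card specs as stated or implied in the text.
CARDS = {
    "GTX 1080 Ti": {"cuda_cores": 3584, "vram_gb": 11, "price_usd": 180},
    "RTX 4090":    {"cuda_cores": 16384, "vram_gb": 24, "price_usd": 1600},
}

def aggregate(card: str, count: int) -> dict:
    """Total cores, VRAM, and cost for `count` copies of one card."""
    c = CARDS[card]
    return {
        "cuda_cores": c["cuda_cores"] * count,
        "vram_gb": c["vram_gb"] * count,
        "price_usd": c["price_usd"] * count,
    }

print(aggregate("GTX 1080 Ti", 10))
# {'cuda_cores': 35840, 'vram_gb': 110, 'price_usd': 1800}
print(aggregate("RTX 4090", 1))
# {'cuda_cores': 16384, 'vram_gb': 24, 'price_usd': 1600}
```

Note the caveat the thread itself raises: aggregate CUDA cores and VRAM don't pool for a single model, and the old cards' far lower memory bandwidth dominates in practice.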
Stable Diffusion is a latent text-to-image diffusion model capable of generating photorealistic images from any text input; it gives users the freedom to produce incredible imagery and lets anyone create stunning art within seconds. Generating natively at 1024x1024 probably isn't the most efficient approach; you could start at a lower resolution and upscale whichever image looks best. If you have a free slot on the motherboard and enough PSU capacity, just install it alongside the AMD GPU, and don't connect its video outputs to a display, to save VRAM. I think the only 30-series card with more than 12GB is the 3090. Oct 21, 2023 · AMD Radeon RX 7900 XT. "it" = iterations, basically the measure of SD's throughput. Your GPU (or CPU, for that matter) won't have any effect on the overall quality of the image produced; as long as you can load the model, you could generate on a potato and get the same quality as on a high-end desktop with a 4090, albeit much more slowly. You guys are clearly knowledgeable about SD, and most of the things you are saying I don't even understand.

Mar 10, 2023 · A Stable Diffusion performance test of the 2080 Ti. It uses Hugging Face's "diffusers" library, which supports sending any supported Stable Diffusion model to an Intel Arc GPU the same way you would send it to a CUDA GPU, for example with StableDiffusionPipeline.from_pretrained(...).to("xpu"). They also didn't check any of the "optimized" forks that allow you to run Stable Diffusion on as little as 4GB of VRAM. With it/s, higher is faster; with s/it, lower is faster. You can train on 11GB as well; I'm on 11GB and can do pretty much everything. Jul 10, 2023 · Key takeaway: a GPU with more memory will be able to generate larger images without requiring upscaling. The GPU-Z readings of the cards above:
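The it/s versus s/it point trips people up because the two units are reciprocals, and UIs commonly switch between them around 1 it/s. A tiny helper, assuming nothing beyond the definition given in the text:

```python
def to_seconds_per_iteration(it_per_s: float) -> float:
    """Convert an it/s reading (higher = faster) to s/it (lower = faster)."""
    return 1.0 / it_per_s

def faster(a_it_per_s: float, b_it_per_s: float) -> bool:
    """True if rate `a` (it/s) is faster than rate `b` (it/s)."""
    return a_it_per_s > b_it_per_s

# 2 it/s reads as 0.5 s/it; a slow run shown as 4 s/it is only 0.25 it/s.
print(to_seconds_per_iteration(2.0))   # 0.5
print(to_seconds_per_iteration(0.25))  # 4.0
```

So a display of "4 s/it" is not four times faster than "2 it/s"; it is eight times slower, which is exactly the misreading the text warns about.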
Tried to allocate 1.50 GiB (GPU 0; 10.00 GiB total capacity; 8.62 GiB already allocated; 0 bytes free; 8.74 GiB reserved in total by PyTorch). If reserved memory is >> allocated memory, try setting max_split_size_mb to avoid fragmentation.

Mar 10, 2023 · Stable Diffusion GPU ranking by generation speed; a GPU AI benchmark tier chart. Besides, unlike other similar text-to-image models, it is often run locally rather than through online web services. You can make your requests/comments regarding the template or the container. Diffusers DreamBooth runs fine with --gradient_checkpointing and adam8bit.

Aug 15, 2023 · Here is the official page dedicated to support for this advanced version of Stable Diffusion. You can choose between the following: 01 - Easy Diffusion. Want to know which GPU is good for Stable Diffusion AI painting?

On hot days I think I could use the back end of my PC as a hot-air fryer. Some hardware monitor app. There's no turbo option though; it's optimized a bit differently.

Nov 8, 2023 · The test subjects: 980 Ti, 1080 Ti, Titan V, 2080 Ti, 3090, and 4090, priced from about 1,000 to over 10,000 RMB, all flagship gaming cards of their generations. Flagships were chosen as a balance of price and memory bandwidth, since the gap between each generation's flagship and mid-range cards, in both VRAM size and bandwidth, is very large.

I use it mostly to look at temps. Whether you're a creative artist or an enthusiast, understanding the system requirements for Stable Diffusion is important for efficient and smooth operation. Yes, full precision should work like that in automatic's repo. The RX 7900 XT is AMD's answer to high-end demands. With fp16 it runs at more than 1 it/s, but I had problems.
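The out-of-memory report quoted above packs four numbers into one line, and the advice ("reserved >> allocated, so try max_split_size_mb") only makes sense once you pull them out. A sketch that parses the message as reassembled from the fragments in the text:

```python
import re

# CUDA OOM message, reassembled from the fragments quoted in the text.
OOM = ("Tried to allocate 1.50 GiB (GPU 0; 10.00 GiB total capacity; "
       "8.62 GiB already allocated; 0 bytes free; "
       "8.74 GiB reserved in total by PyTorch)")

def parse_oom(msg: str) -> dict:
    """Extract the GiB figures from a torch CUDA out-of-memory message."""
    fields = {
        "requested_gib": r"Tried to allocate ([\d.]+) GiB",
        "total_gib": r"([\d.]+) GiB total capacity",
        "allocated_gib": r"([\d.]+) GiB already allocated",
        "reserved_gib": r"([\d.]+) GiB reserved",
    }
    return {k: float(re.search(p, msg).group(1)) for k, p in fields.items()}

print(parse_oom(OOM))
# {'requested_gib': 1.5, 'total_gib': 10.0, 'allocated_gib': 8.62, 'reserved_gib': 8.74}
```

Here reserved (8.74 GiB) nearly equals allocated (8.62 GiB) with 0 bytes free, so on this 10 GiB card the 1.5 GiB request simply doesn't fit; reducing resolution or batch size matters more than allocator tuning in this particular case.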
It is basically a 1080 Ti with 24GB of RAM; it does not have tensor cores, which means it becomes obsolete when something requires tensor cores (the next Stable Diffusion). I use a P40 and a 3080; I have used the P40 for training and generation, while my 3080 can't train (low VRAM).

set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.8,max_split_size_mb:512

I've seen plenty of people recommend the RTX 3060 12GB, since more memory will be better for training. Dec 3, 2023 · In this video, I explore the capabilities and limitations of running this advanced AI model on a GTX 1080. Based on cost, 10x 1080 Ti ≈ 1,800 USD (about 180 USD each on eBay), while a 4090 is 1,600 USD from the local Best Buy. Should be fast right off the bat. Remove your venv and reinstall torch, torchvision, and torchaudio. With SD 1.5 and the default workflow from Comfy, my results on the 1080 are: astounded. I spent about a day testing out different workflows and came up with one that works well for someone running an old 1080 Ti GPU. I have been trying for a very long time to get Regional Prompter to work in Automatic1111 on my 11GB 1080 Ti, and it just flat out does not.

May 14, 2023 · So with GPUs like the 1080 Ti that have crippled FP16 performance, FP32 runs faster but consumes more memory. Sep 14, 2023 · When it comes to AI models like Stable Diffusion XL, having more than enough VRAM is important. There are no rays to trace when generating an image with Stable Diffusion. Dec 9, 2023 · Pick the right graphics card and keep your image generation flowing! This article compares the GPUs worth considering for serious Stable Diffusion use, and answers questions such as whether multi-GPU setups help. I use Task Manager too. However, the 1080 Tis have far lower memory bandwidth, while the 4090 is close to 1 TB/s.
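The `set PYTORCH_CUDA_ALLOC_CONF=...` line above is the Windows .bat form. The same setting from Python has to happen before torch initializes CUDA to take effect; a minimal sketch, using only the two keys the text mentions:

```python
import os

# Must be set before the first CUDA allocation, i.e. before importing
# and using torch, or the allocator ignores it.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = (
    "garbage_collection_threshold:0.8,max_split_size_mb:512"
)

def parse_alloc_conf(value: str) -> dict:
    """Split the comma-separated key:value pairs of PYTORCH_CUDA_ALLOC_CONF."""
    pairs = (item.split(":") for item in value.split(","))
    return {k: float(v) for k, v in pairs}

print(parse_alloc_conf(os.environ["PYTORCH_CUDA_ALLOC_CONF"]))
# {'garbage_collection_threshold': 0.8, 'max_split_size_mb': 512.0}
```

The threshold of 0.8 and split size of 512 are the values quoted in the text; as the poster says, they are a starting point for 8GB cards, not a proven optimum.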
So, finally upgrading to the latest model and source; the problem is, I was previously running the SD-optimized scripts on the 1.x branch. Jun 6, 2023 · For Stable Diffusion specifically, going with the 3060 12GB is both faster and gives you 1GB more memory. Running SD 2.1 768 with a GeForce GTX 1080 Ti. Main reason I got it was so I could drive 4K panels for programming on my Mac. Use the train_dreambooth_lora_sdxl.py script to train an SDXL model with LoRA.

Mar 19, 2023 · Installing Stable Diffusion locally is increasingly simple, whether via Automatic1111, InvokeAI, or Easy Diffusion. Its raw power makes it a formidable choice for those on the AMD side of the fence. Is it possible to use these two GPUs for different tasks simultaneously? Stable Diffusion XL. It would seem more time-efficient to me, given the capability of a larger sample size, and would also return higher-quality output, to use a modified fork meant to run on lower-VRAM hardware. A 16GB 4060 variant is about to be released; maybe it is worth considering if you can wait a few more weeks. We can also analyze more fully how different GPUs compare for AI image generation under different workloads. I would choose the 3080 in both cases; for me, that one extra GB doesn't justify staying with an older architecture, lower it/s, no bf16 support, and older CUDA support. The poor 1080 Ti was not up to the task. For inference, the 4070 is better. Everything has been working fine, and trust me, I use almost all of the 12GB on almost every batch.

Dec 4, 2022 · A summary of how to use InvokeAI. Environment: Windows 11, Python 3.9, InvokeAI 2.x. A 1080 Ti is close to a 3060, with more VRAM. Here "xpu" is the device name PyTorch uses for Intel's discrete GPUs. Put it in "Extras" and upscaled 4x to 7680x4320.
These also don't seem to cause a noticeable performance degradation, so try them. Sep 23, 2023 · i9-12900K, 32GB RAM, Gigabyte RTX 4060 Ti 16GB AERO; GPU temperature while running SD around 65°C; max resolution for the 16GB card is 1000x1000 with 2x upscale (2000x2000). Sep 15, 2023 · When it comes to AI models like Stable Diffusion XL, having more than enough VRAM is important. In this comprehensive guide, we'll go deep into the specifics of running Stable Diffusion effectively, even on low-end hardware. See the running-on-4GB section for optimization.

Aug 13, 2023 ·
import gradio as gr
import torch
from diffusers import DiffusionPipeline
# this code runs a gradio interface that allows the user to write a prompt,
# and generate an image based on the prompt
# the gradio interface launch uses about 1.5GB of RAM, and the model uses
# about 4GB of VRAM on a GPU; on the Nvidia RTX 2060 card it takes about 6…

53 seconds at 512x768.

Mar 14, 2024 · In this test, we see the RTX 4080 somewhat falter against the RTX 4070 Ti SUPER for some reason, with only a slight performance bump. My current GPU (RX 6600) doesn't have enough VRAM for Stable Diffusion. Actually it is more like an RTX 2070; it's the RTX 3060 Ti that's more like an RTX 2080 Super. I think the RTX 3060 is more worth it than the RTX 3070 because of its 12GB of VRAM; that's 50% more memory, even though it's slower. Pairing two RTX 3060s must be the best bang for the buck for ML right now, and you could choose between data parallelism and model parallelism. You need three conditions: the card must be Nvidia, its architecture must date from at least 2016 (Pascal or later), and it must have 8GB of VRAM. However, we always wonder whether our current graphics cards will allow it to run. The post above was assuming 512x512, since that's what the model was trained on, and below that you can get artifacting.

Download the sd.webui.zip from here; this package is from v1.0.0-pre, and we will update it to the latest webui version in step 3. Test method: after installing automatic1111's sd-webui, install the System Info plugin from the sd-webui extensions list, then run the benchmark at both the "normal" and "extensive" levels. But basically, any it/s reading is faster than any s/it reading. Make sure you install CUDA 11.7; if you use anything newer, there's no PyTorch build for it. That's with a less-than-$300 video card.
Aug 21, 2023 · In the end I deployed it on Windows after all; feature support is much more complete than in the macOS version, and the 1080 Ti is also reasonably fast. Running Stable Diffusion on the MacBook Pro is too slow and a bit of a waste anyway, given the laptop's small disk.

Same here: a 980 Ti I've had for years since new. It does great with Stable Diffusion, unless xformers is installed, which actually makes things take twice as long, lol. It's a little confusing because the measurement is easy to misread. Considering the 1080 Ti is about 40% cheaper than the Titan X, this is an excellent result! This should be able to be counteracted by running Stable Diffusion in FP16 memory-wise, but spoofed to run on FP32 cores as if it were FP32, thereby gaining the performance benefits of FP32 while keeping an FP16 memory footprint. Some examples from my 3070 Ti: 1024x768 with DPM++ SDE Karras and no highres fix: ~1.5-2 it/s.

Stable Diffusion is a popular AI-powered image generator. The most important hardware characteristic for SD is the GPU's video memory. Stable Diffusion, one of the most popular AI art-generation tools, offers impressive results but demands a robust system. Gotten into AI art generation for a while now, to create anime pictures for fun.

Dec 14, 2022 · Sounds like your venv is messed up; you need to install the right PyTorch build with CUDA support in order for it to use the GPU. Your RTX 3050 Ti only has 4GB, which is far below the recommended specs for running SD. My question is to owners of beefier GPUs, especially ones with 24GB of VRAM. The SDXL training script is discussed in more detail in the SDXL training guide. Double-click update.bat to update the web UI to the latest version, and wait till it finishes.

Mar 30, 2023 · Hi, I recently installed Stable Diffusion locally on my computer. I wanted to switch to a 3060 12GB, but it's a bit beyond my budget. The 1080 Ti is pretty slow compared to newer RTX GPUs.
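The FP16-versus-FP32 trade-off above is mostly about weight storage. A rough sketch of the memory side; the ~860M parameter count for the SD 1.x UNet is an assumed figure for illustration, not something the text states:

```python
# Rough weight-memory footprint of a model at different precisions.
UNET_PARAMS = 860_000_000  # assumed size of the SD 1.x UNet

def weights_gib(params: int, bytes_per_param: int) -> float:
    """Memory needed just for the weights, in GiB."""
    return params * bytes_per_param / 2**30

fp32 = weights_gib(UNET_PARAMS, 4)  # float32: 4 bytes per parameter
fp16 = weights_gib(UNET_PARAMS, 2)  # float16: 2 bytes per parameter
print(f"fp32: {fp32:.2f} GiB, fp16: {fp16:.2f} GiB")
# fp32: 3.20 GiB, fp16: 1.60 GiB
```

Halving bytes per parameter halves the weight footprint exactly, which is why cards like the 1080 Ti, whose FP16 compute is crippled, still benefit from FP16 storage if the math can be dispatched to FP32 units.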
However, both cards beat the last-gen champs from NVIDIA with ease. Using other sizes or samplers will give differing results. I recently upgraded to a 3090 Strix from a 1080 Ti and it was a night-and-day difference. Those numbers are for single images, i.e., batch size 1.

Jul 10, 2023 · I am researching a laptop to buy that I intend to use Stable Diffusion on, and it brought me to this forum. Yes, the 3090 is good. Performance gains will vary depending on the specific game and resolution. Do you find that there are use cases for 24GB of VRAM? The webpage provides data on the performance of various graphics cards running SD, including AMD cards with ROCm support. I can usually get a batch of 36 pictures at 30 cfg, euler_a, in less than 4 minutes. I also tried the prebuilt you listed, and also tried the VS build of it myself. The RTX 4070 Ti SUPER is a whopping 30% faster than an RTX 3080 10G, while the RTX 4080 SUPER is nearly 40% faster. PC1: an 11th-gen Intel i7 at 2.30 GHz with 32GB of RAM and a 3050 Ti laptop GPU.

A very basic guide to get Stable Diffusion web UI up and running on Windows 10/11 with an NVIDIA GPU. How is this even in the realm of possibility? automatic1111, Steps: 1, Sampler: Euler a, CFG scale: 1, Seed:, Size: 512x512, Model hash… Dec 15, 2023 · We've benchmarked Stable Diffusion, a popular AI image generator, on 45 of the latest Nvidia, AMD, and Intel GPUs to see how they stack up. The GPU's 20GB of VRAM is particularly appealing for software like Stable Diffusion, ensuring detailed creations come to life without a hitch. This was my first attempt at any sort of deepfake video creation, and there is certainly a lot of room for improvement. I have a 2070 Super with 8GB, and just did a 960x540 with "hires. fix" to 1920x1080.
Gaming (frame pushing) does not benefit as much from it, and that is where most of the complaints come from, but Stable Diffusion and similar CUDA applications benefit a LOT from it. Recently, Tom's Hardware tested Stable Diffusion and ranked how efficiently today's mainstream gaming GPUs run it. Mar 15, 2017 · With that said, the GTX 1080 Ti 11GB did great when rendering previews. I'd rather not use Automatic1111's web GUI; I prefer CLI interfaces, and CLI works better with my current workflow (I SSH into my server, run screen, give it my prompt, and let it go). Feb 9, 2023 · Stable Diffusion is a memory hog, and having more memory definitely helps. Aren't the 1080 Tis 11GB? That's plenty of VRAM for running SD. I mean, if the 1080 Ti is free, it's a good deal. It's just speed. I have a 3060 12GB. The goal of this Docker container is to provide an easy way to run different web UIs for Stable Diffusion. A 2080 Ti is pretty fast in fp16, can undervolt really well, and maintains good performance. Looking to generate images quickly.

DeepFloyd IF. The hardware the author prepared includes a 1200W Corsair power supply, a GTX 1080 Ti for display output, and a P40 with a self-made cooling fan. Stable Diffusion iterations per second. 🤣 I think the GPU is actually throttling a bit, but the box is 7 years old, and it's RAM that I need, not speed really. From the testing above, it's easy to see that the RTX 4060 Ti 16GB is the best-value graphics card for AI image generation you can buy right now. AMD and Intel cards seem to be leaving a lot of…

512x512 with DPM++ SDE Karras: ~3.5-4 it/s. 512x512 with Euler: ~11-13 it/s. 1024x768 with Euler and no highres fix: ~2.5-3 it/s. Just as a reminder: on an RTX 2060 with 6GB of VRAM, a txt2img render takes exactly 12 seconds with the default parameters, i.e., 50 steps, 7.5 cfg, 512x512, k_lms.
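Wall-clock reports like the RTX 2060 example above convert directly into the it/s figures used everywhere else in this thread:

```python
def iterations_per_second(steps: int, wall_seconds: float) -> float:
    """Throughput implied by a full render: steps divided by elapsed time."""
    return steps / wall_seconds

# RTX 2060 example from the text: 50 steps in 12 seconds.
rate = iterations_per_second(50, 12.0)
print(f"{rate:.2f} it/s")  # 4.17 it/s
```

That puts the 6GB RTX 2060 at roughly 4 it/s at 512x512 with the default sampler, in the same ballpark as the mid-range numbers quoted elsewhere in the thread (ignoring model-load and VAE-decode overhead, which a single-image timing also includes).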
I got a 3060 (non-Ti), which has 12GB of memory stock. The 3060s are around $375 new these days. And I have a totally ancient version of HWMonitor.

PC2: an Intel i7-6700K at 4.00 GHz with 16GB of RAM and a 1080 graphics card. The fact is that to generate an image at, for example, 50 steps, PC1 takes 8:30 (minutes:seconds) while PC2 takes only 28 seconds.

The RTX 4090 is based on Nvidia's Ada Lovelace architecture: 16,384 cores with base/boost clocks of 2.2/2.5 GHz, 24GB of memory, a 384-bit memory bus, 128 3rd-gen RT cores, 512 4th-gen Tensor cores, DLSS 3, and a 450W TDP. The 4060 Ti has a 128-bit memory bus, yes, but Nvidia greatly increased the L2 cache on the card (i.e., 4MB in the 3060 Ti/3070 vs 24MB on the 4060 vs 32MB on the 4060 Ti).

From a Chinese video roundup: even 6GB of VRAM can run SDXL, with ControlNet enabled; raising Stable Diffusion generation speed tenfold (one image in about 2 seconds) on AMD, Nvidia, or CPU; Stable Diffusion errors and how to handle them; the MultiDiffusion high-res upscaling extension, which produces 4K images even on small VRAM; and the best optimization settings for SD.

Aug 2, 2023 · MSI Gaming X GeForce GTX 1080 Ti vs Palit Storm X GeForce RTX 3060 12GB at 1440p in 2023. These allow me to actually use 4x-UltraSharp to do 4x upscaling with Highres. fix.

set COMMANDLINE_ARGS=--precision full --no-half

Assuming the same price: if you plan on training models or generating very high-resolution images, definitely go with the 3090 for the additional VRAM.
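The PC1/PC2 comparison above is easier to appreciate as a ratio. A quick check using only the two timings quoted in the text:

```python
def to_seconds(minutes: int, seconds: int) -> int:
    """Convert an m:ss timing into plain seconds."""
    return minutes * 60 + seconds

pc1 = to_seconds(8, 30)  # 3050 Ti laptop: 8:30 per 50-step image
pc2 = 28                 # GTX 1080 desktop: 28 s per 50-step image
print(f"PC2 is {pc1 / pc2:.1f}x faster")  # PC2 is 18.2x faster
```

An 18x gap between a 4GB laptop GPU and an 8GB desktop GTX 1080 is consistent with the thread's recurring point that VRAM headroom, not raw CPU or clock speed, dominates Stable Diffusion throughput.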