5 Questions about Dual GPU for Machine Learning (with Exxact dual 3090 workstation)

56,360 views

Jeff Heaton

1 day ago

In this video I cover how to use a dual-GPU system for machine learning and deep learning, looking at five questions you might have about such a setup.
1:01 Question 1: Do two GPUs combine into one big GPU?
1:42 Data Parallelization
2:40 Dual GPU Performance
5:02 Model Parallelization
7:16 Question 2: Can I buy one now, and one later?
9:37 Question 3: Can I mix multiple types of GPU?
10:35 Question 4: How do you cool multiple GPUs?
12:39 Question 5: Do I need NVLink?
** System Used **
* TRX40 Motherboard
* Threadripper 3960X
* 128GB Memory (16GBx8)
* 2x 4TB PCIe 4.0 NVMe
* 2x NVIDIA GeForce RTX 3090
* NVLink Bridge
For more information about the machine featured in this video, please visit:
www.exxactcorp.com/Deep-Learn...
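To make the data parallelization discussed in the video concrete, below is a minimal sketch of two-GPU data parallelism using TensorFlow's MirroredStrategy. This is illustrative only, not code from the video; the model and dataset are placeholders. Each GPU holds a full copy of the model, every batch is split across the cards, and gradients are averaged after each step.

```python
# Minimal data-parallel training sketch (illustrative; not from the video).
import tensorflow as tf

# MirroredStrategy replicates the model on every visible GPU and
# averages gradients across them after each training step.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)  # 2 on a dual-3090 box

with strategy.scope():  # variables created here are mirrored on both GPUs
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# The global batch is split across replicas: a batch size of 256
# puts 128 examples on each GPU.
(x, y), _ = tf.keras.datasets.mnist.load_data()
x = x.reshape(-1, 784).astype("float32") / 255.0
model.fit(x, y, batch_size=256, epochs=1)
```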

COMMENTS: 102
@deltax7159 1 month ago
Just found your channel! I'm a graduate student studying statistics, planning on building my own ML/DL PC upon graduation to use for gaming and my own personal research, and your channel is slowly becoming INVALUABLE! Thanks for all this great content, Jeff!
@adityay525125 2 years ago
Can we get a 3090 vs. A-series comparison, with mixed precision thrown in?
@GuillaumeVerdonA 2 years ago
This is exactly the video I needed right now, Jeff! Thank you
@lenkapenka6976 1 year ago
Jeff, fantastic video... it explained a lot of stuff I was slightly fuzzy on. Your explanations were first class.
@hoblikdlouhovlasy2431 2 years ago
Great video as always! Thank you for your effort!
@KhariSecario 1 year ago
Thank you! This answers many questions I had about building a parallel GPU setup.
@zhyere 2 months ago
Thanks for sharing some of your knowledge in all your videos.
@datalabwork 2 years ago
I have watched every single bit of your video... those IDS topics interest me. Would you kindly make a video reviewing DL-based IDS on GPU sometime in the future?
@simondemeule3934 2 years ago
Would love to see a 3090 vs A5000 vs A6000 comparison. These are all very closely related - they use the same processor die - what varies is the feature set that is enabled (notably performance on various data types and compute unit count), the memory type and size (GDDR6X vs ECC GDDR6, 24GB vs 48GB), clock speed, power consumption (350W vs 230W vs 300W), cooling form factor (consumer style vs datacenter style), and datacenter usage agreement. It costs a similar amount to get two 3090s, two A5000s or one A6000, and that can be a sweet spot for researchers, budget-wise. That yields the same total VRAM and a comparable amount of compute performance, but in practice these setups can behave drastically differently depending on how the workload parallelizes. Cooling also becomes a concern with more than two GPUs.
@silverback1861 1 year ago
Thanks for this comparison. Learnt a lot to make a serious decision.
@69MrUsername69 2 years ago
Hi Jeff, I would like to see more use cases and benchmarks with/without NVLink, as well as various precisions (FP16/32/64), to see whether Tensor Cores also combine with NVLink memory. Please illustrate some multi-GPU use cases and benefits.
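A rough version of such a benchmark is easy to sketch. The snippet below (my own illustration, not a benchmark from the video) times a large matrix multiply at each precision in PyTorch; the matrix size and repetition count are arbitrary.

```python
# Rough precision micro-benchmark sketch (illustrative; sizes are arbitrary).
import time
import torch

N, REPS = 4096, 50

for dtype in (torch.float16, torch.float32, torch.float64):
    a = torch.randn(N, N, device="cuda", dtype=dtype)
    b = torch.randn(N, N, device="cuda", dtype=dtype)
    torch.cuda.synchronize()           # finish setup before timing
    start = time.perf_counter()
    for _ in range(REPS):
        a @ b
    torch.cuda.synchronize()           # wait for async GPU work to complete
    print(f"{dtype}: {time.perf_counter() - start:.3f} s for {REPS} matmuls")
```

On Ampere cards, FP16 should engage the Tensor Cores, and FP64 runs at a small fraction of FP32 throughput, which is the gap this comment is asking about.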
@weylandsmith5924 2 years ago
@Jeff: I don't concur that Exxact has built their workstation so that cooling is maximized. Quite the contrary: I've not managed to work out which 3090 model they are using, but nobody will convince me that two air-cooled 3090s, stacked tightly (not even one slot of separation), won't throttle. And indeed that's demonstrated in your very video. Note that you shouldn't watch for die throttling, BUT for GDDR6X throttling. Unless you take some fairly drastic precautions, the memory will throttle, and this has been observed for all 3090s on the market (both open-air and blower types). By drastic measures I mean: generous heatsinks on the backplate *and* at least two slots of separation *and* very good case airflow *and* reducing the TDP by at least 15% ("and", not "or"). In any case, note that your upper 3090's die *IS* throttling as well: 86C engages thermal throttling for the die. It's not surprising that there is such a big difference with the lower one, since the upper card sucks in air heated by the lower card's very hot backplate. And you don't have any margin left: the fan is already at full speed. That's BAD. Stacking the GPUs so close just so that you can use the A-series NVLink bridge is a bad policy: you trade a bit more NVLink bandwidth for a card that will severely overheat. Use the 4-slot NVLink bridge for the 3090s, and put MORE distance between the cards. Disclaimer: I'm not in the business of building workstations. I'm just an AI engineer who struggled with his own build's cooling (dual NVLinked 3090s as well), learning something in the process.
@stanst2755 1 year ago
This copper mod might help: ukposts.info/have/v-deo/nmiXapB_eoaH0as.html
@peterklemenc6194 1 year ago
So did you go with the water-cooled option, or just multi-fan experiments?
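To watch for the throttling described in this thread, one option is a small monitor built on the pynvml bindings; the sketch below is a typical setup I am assuming, not tooling shown in the video. Note that plain NVML on GeForce cards exposes the core temperature but, to my knowledge, not the GDDR6X memory-junction temperature the thread is concerned with, so a tool that reads that sensor (such as HWiNFO) is needed to confirm memory throttling.

```python
# Poll core temperature, fan speed, and power draw for every GPU
# (sketch using the nvidia-ml-py / pynvml bindings).
import time
import pynvml

pynvml.nvmlInit()
handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
           for i in range(pynvml.nvmlDeviceGetCount())]

while True:
    for i, h in enumerate(handles):
        temp = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)
        fan = pynvml.nvmlDeviceGetFanSpeed(h)             # percent
        watts = pynvml.nvmlDeviceGetPowerUsage(h) / 1000  # mW -> W
        print(f"GPU {i}: core {temp} C, fan {fan}%, {watts:.0f} W")
    time.sleep(5)
```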
@eamoralesl 11 months ago
Great video, it helped me get a better picture of how dual GPUs are used. A question here: I got one of the newer 2060s with 12GB and wanted to pair it with another GPU, but I can't find the same make and model. Would it matter if it's a different make? Is it worth getting 2x 2060 in 2023 just for having 24GB of VRAM, or should I start saving for newer GPUs? Budget is a concern because latest-gen GPUs cost almost 3x their Amazon price in my country, so imagine those prices... Thanks, any opinion helps.
@wentworthmiller1890 2 years ago
Comparison wishlist: 3090 vs (3080 Ti, 3080, 3060). Also combinations: 3090 + 3080 Ti, 3090 + 3080, 3090 + 3060. That's a lot; thought I'd ask 😊 😁. Thank you so much for putting these vids together. It's nice to see and understand various facets of DL that generally aren't covered in academics. Very helpful for getting a holistic perspective as a noob like myself.
@harry1010 2 years ago
Thank you for this!!!!!!
@atefamriche9531 2 years ago
Not an expert here, but I think in terms of design, a triple- or quad-slot NVLink with more spacing between the two GPUs would help a LOT. The top GPU is choked. Also, have you checked the memory junction temp? Because if your GPU core is hitting 86 deg-C, then the memory junction temps are probably over 105 deg-C, and that is definitely in thermal throttling territory.
@harrythehandyman 2 years ago
It would be nice to see RTX 3060 12GB vs RTX 3080Ti 12GB vs RTX 3090 24GB vs A6000 in FP16, FP32, FP64.
@Maisonier 1 year ago
+1
@josephwatkins1249 2 years ago
Jeff, I have an 8-GPU 30-series rig that I'd like to use for machine learning. If I wanted to use these for data parallelization, how would I set this up?
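One common way to set this up (a sketch assuming a standard PyTorch stack, not an answer given in the video) is DistributedDataParallel with one process per GPU, launched as: torchrun --nproc_per_node=8 train.py. The model and data below are placeholders.

```python
# train.py - data-parallel sketch for an 8-GPU rig (placeholders throughout).
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")        # NCCL backend for NVIDIA GPUs
    rank = dist.get_rank()                 # 0..7, one process per GPU
    torch.cuda.set_device(rank)

    model = torch.nn.Linear(128, 10).cuda(rank)
    model = DDP(model, device_ids=[rank])  # gradient sync happens automatically
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(100):                   # each rank trains on its own data shard
        x = torch.randn(64, 128, device=rank)         # stand-in for a DataLoader
        y = torch.randint(0, 10, (64,), device=rank)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```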
@absoluteRa07 1 year ago
Thank you very much, very informative.
@plumberski8854 11 months ago
Interesting topics for a beginner with this new ML/DL hobby! Can I assume that the difference between the 3090 and 3060 GPUs here is just processing time (assuming the data is small enough for the 3060)?
@JamieTorontoAtkinson 1 year ago
Another gem, thank you!
@HeatonResearch 1 year ago
My pleasure! Thanks!
@British_hunter 8 months ago
Smashed my setup with custom water cooling on 2x RTX 3090 GPUs and a separate CPU loop. Temps on core, mem, and power don't reach over 45 Celsius at full load.
@Mi-xp6rp 2 years ago
I would love to see more use of the 12 GB RTX 3060.
@qjiao8204 1 year ago
I think you've been misguided by this guy. Don't buy a 3060; for this price range, the memory is not important anymore. Get a 3070 or 3080, much, much faster than a 3060.
@Enterprise-Architect 4 months ago
Thanks for this video. Could you please post a video on how to create a cluster using NVIDIA Tesla K80 24GB GDDR5?
@markhou 1 year ago
In general, would the 3060 Ti be a better pick than the non-Ti with 12GB VRAM?
@seanreynoldscs 2 years ago
I find that when I'm working with real-world problems, my tuning goes quicker with multiple GPUs by just training two models back to back to back as I tune.
@DailyProg 4 months ago
Jeff, do you have a comparison between the 3060, 3090, and 4090? I have a 3060 and am wondering if it's worth the 6x cost to upgrade to a 4090.
@sherifbadawy8188 1 year ago
Would you suggest dual 3090 Ti with NVLink, or two RTX 4090s without NVLink?
@rahuls190 2 years ago
Hello, can I use NVLink between a Quadro RTX 5000 and an RTX 3090? Kindly let me know.
@theccieguy 1 year ago
Thanks
@mikahoy 1 year ago
Does it need to be connected via NVLink, or is it just plug and play as is?
@FrancisJMotionDesigner 1 month ago
I'm trying to install a second GPU, a 3070, in my PC. I already have one 3080 Ti installed. I have enough power, but after installation there is lag when I move my mouse, and frequent crashes. I tried removing all drivers and doing a fresh install with DDU. My motherboard is an ASUS ROG Strix X570-E... Please let me know what I'm doing wrong. Could it be something with PCIe lane support?
@dhaneshr 8 months ago
It's "parallelization", not "paralyzation" 🙂
@siddharthagrawal8300 4 days ago
In your tests, do you use NVLink on the 3090s?
@0Zed0 2 years ago
I'd like to see the 3090 compared to the 3060, and also a comparison of their power consumption, although with a remote system I doubt you'll be able to do that. Obviously the 3060 would be much slower to train on the same data as a 3090, but would it use more, less, or the same power to do it?
@amanda.collaud 2 years ago
@@kilosierraalpha I have a 2080 Ti and a 3060 in my computer; they work well. The 3060 is not horribly slower than my 2080 Ti, so... please don't make it sound like the 3060 is not suitable for ML. You can overclock the bus width, btw; I did it as well and nothing bad has happened yet :D
@97pingo 2 years ago
I would like to ask your opinion regarding notebooks. My question is, which notebook might be worth buying in the scenario where I might have a server for heavy computing? The choice of notebook is linked to the need for mobility.
@thewizardsofthezoo5376 1 year ago
Wolfram?
@97pingo 1 year ago
@@thewizardsofthezoo5376 Could you add more information?
@Rednunzio 2 years ago
Windows or Linux for ML in a multi-GPU system?
@HuyTungST 10 months ago
Hello Jeff. Thank you for sharing. However, I see an NVLink bridge in your system that looks like a 3-slot bridge. With this bridge, your two GPUs obviously had to be placed close to each other, as in the video. I think that although they may still be compatible with each other, this is not a good combination. This way, the GPU below will heat up the GPU above, and there is no gap to provide fresh air to the GPU above. This poses a risk of damage, even fire or explosion, if the system runs at full load for a long time. Looking at your temperature measurements, I also agree with the earlier commenter that the actual highest temperature your GPU reaches is over 100 degrees C at the hottest point (VRAM). Also, there is no 3-slot NVLink bridge dedicated to the RTX 3090 on the market; only 4-slot bridges are available for this GPU. And I think the manufacturers have their reasons, related to the temperature issue. With a 4-slot bridge the spacing is wider, so there is more room for fresh air to circulate and cool the RTX 3090s better. I think your system should use another motherboard, one with a wider gap between the two PCIe x16 slots than the current one, enough to fit a 4-slot NVLink bridge. A motherboard like the ROG Strix TRX40-E Gaming meets this condition. And if anything I say is not accurate, please give feedback so I can update my knowledge. :D
@kailashj2145 2 years ago
Hoping to see your suggestions for this year's GTC, and hoping for some coupons for the conference.
@HeatonResearch 2 years ago
Working on that now, actually.
@hanhan-jc5mh 2 years ago
@@HeatonResearch Thank you for your work. I would like to know which plan is better for a GAN project: 4x 3080 Ti or 2x 3090? Thank you.
@wlyiu4057 8 months ago
The upper GPU looks like it is going to overheat; it can barely draw in air, and what it does get has already been heated by the lower card.
@hungle2514 9 months ago
Thank you for your video. I have a question: suppose I have two monster 3090 GPUs and use NVLink to connect them together. Will the system see only one card with 48GB, or two cards? Can I train a model that needs at least 32GB on the 3090s?
@germanjurado953 4 months ago
Did you figure out the answer?
@mamtasantoshvlog 2 years ago
Jeff, it seems you confused yourself both while editing the video and while shooting it. It's data parallelization, not paralyzation. I hope I'm correct; let me know if that's not the case. Also, I would love your advice on something.
@mohansathya 7 months ago
Jeff, did the dual 3090 (NVLink) actually give you double the VRAM seamlessly?
@AnushkaChathuranga-cw7tc 5 months ago
I have the same problem
@MichaelDude12345 1 year ago
This is literally the only place I could find information on this subject. I am trying to decide between starting with a 3080 and either a 4070 or 4070 Ti. Can anyone share their thoughts? Price aside, I like how much less power the 4070 uses, but I think it would be a performance drop. Either way, I know I need the 12GB of VRAM for what I want to do. The 4070 Ti seems like it would make up the performance the 4070 lacks, but I really like the price point of the 3080/4070 range. My options are to get one of those and maybe eventually save up to add another card, or go for a cheaper range and get two cards for the data parallelization benefits. I really wasn't sure how much data parallelization would help me, but it seems like it would just be a nice bonus, so I am now leaning towards just starting with one of the cards I listed. Anyone with more knowledge than me on the topic, could you weigh in please? I could really use some pointers.
@Mr.AmeliasDad 11 months ago
Hey man, I'm currently running a 3080. I know you said pricing aside, but the 3090 has come down to the same price as the 4070s, so I would strongly consider that. I have the 10GB model and would kill for the extra VRAM. Creating a convolutional neural network, I ran out of VRAM pretty fast when trying to expand my model, so I either had to split my model among different GPUs or go with a smaller model. That's why you want to try for more VRAM on a single GPU. That was also on a dataset with 510 classes for classification, which isn't the easiest. I recommend spending what you would on a 4070 or 4070 Ti and getting a used 3090 for the VRAM. Barring that, I would consider trying to get a used 3080 12GB and saving up for a second.
@QuirkyAvik 2 years ago
I bought one 3090 and was so amazed I got another one. Now I am considering building a proper workstation PC, since I have picked up a "hobby" of editing people's 4K (sometimes 8K) footage for them, along with learning 3D modelling, as I want to get into 3D printing as well. The dual 3090s were bought at more than twice MSRP, which has stopped me from building a workstation even though I finally have a case (no pun intended) for it.
@abh830 1 year ago
What's the recommended case for a dual RTX 3090 Ti build? Are dual-system cases better?
@HeatonResearch 1 year ago
That is a 3-slot GPU, so make sure there is enough space and that you can fit it and have at least decent airflow. This is an area where the gamer recommendations on dual 3090 would apply directly to machine learning, and I've seen YT videos on dual 3090.
@BrianAnother 2 years ago
Parallelization
@whoseai3397 1 year ago
It's fine to install an RTX 2080 + RTX 3080 together; it works!
@Edward-un2ej 1 year ago
I've had two 3090s for almost two years. When I train with both cards together, one of them slows down by about 30% due to cooling.
@manotmapato7594 1 year ago
Did you use NVLink?
@thewizardsofthezoo5376 1 year ago
Does dual use halve the PCIe bus bandwidth?
@AnushkaChathuranga-cw7tc 5 months ago
Did the dual 3090 (NVLink) actually give you double the VRAM seamlessly?
@danielklaffmo4506 2 years ago
Jeff, thank you for making these videos. I think you are the right kind of YouTuber: you look at the practical rather than the overly theoretical. But I wish I could talk more with you, because I have ideas I'd like to share (but after a contract, of course). I have kinda maybe done it, and yeah, I kinda need a lot of ML engineers and personalities to gather up to make an event and annual meeting... ehm, please let's talk further.
@yosefali7729 1 year ago
Does using two 3090s with NVLink improve single-precision processing?
@HeatonResearch 1 year ago
Yes, I had pretty good luck with NVLink; more here: ukposts.info/have/v-deo/oHJ8l4JvnYSLkmw.html
@sigma_z 1 year ago
I have 6x RTX 3090; would it be possible to join all of them together? More importantly, is there any real advantage for machine learning? Or is it better to just get an RTX 4090?
@andreas7278 1 year ago
You can't "join all 6" together like you suggest. If you just plug in all 6, you can use them in parallel for machine learning, but then they don't share any memory (aka there is no memory pooling). You can get nearly linear speedup as long as the model type you are training is parallelizable and no other PC component creates a bottleneck. You can typically expect 1.92x for two cards and 3.84x for four cards, so for 6 identical GPUs you will get near-linear scaling. However, the RTX 3090 does not support bridging more than two cards together. What you can (and should) do is get 3x NVLink bridges, which lets you bundle two of them together at a time. By doing that you can effectively use 48GB instead of 24GB of memory, allowing for bigger models and larger batch sizes. So you both get a nice speedup (large batch sizes are typically much faster for transformers etc.) and you can play around with larger models. Some software, like video editing, often does not support NVLink, but TensorFlow and PyTorch (which you are probably using) do.
@maxser7781 1 year ago
The word is "parallelization", derived from the word "parallel". The word "paralyzation" could be used as a synonym for "paralysis", which is irrelevant in this case.
@dmoneyballa 1 year ago
I'd love to see NVIDIA compared to AMD now that ROCm is working with all of the 6000 and 7000 series.
@mmehdig 1 year ago
Data Parallelization
@KW-jj9uy 6 months ago
Yes, the dual GPUs paralyze the data really well. Stuns them for over 10 seconds.
@Lorphos 1 year ago
In the video description you wrote "data Paralyzation" instead of "Data parallelization"
@AOTanoos22 1 year ago
Why can't you combine the memory of the 3090s into 48GB when using NVLink and have a larger batch size? I thought this is what NVLink was made for: combining both VRAMs into a unified memory pool, in this case 48GB. Correct me if I'm wrong.
@andreas7278 1 year ago
That's exactly what NVLink is for; this is correct.
@clee5653 1 year ago
@@andreas7278 I'm still confused. Does that mean NVLink provides 48GB of unified VRAM, but it's not a drop-in replacement, and we still need to write some acrobatic code to run models larger than the VRAM of a single card?
@andreas7278 1 year ago
It is indeed a drop-in replacement, if you want to call it that: 2x RTX 3090 (the same goes for 2x NVIDIA Titan RTX from the previous generation) connected via NVLink provide you with one unified 48GB VRAM memory pool, which allows you to train larger models and use larger batch sizes. As long as the library you are using supports unified memory, you don't need any additional trickery or coding; PyTorch or TensorFlow will handle this automatically in multi-GPU mode, so no further coding is needed. However, other math libraries such as NumPy won't make use of memory pooling. For modern deep learning this is sufficient, though, since most people only need the high VRAM amounts for deep learning. This is what made these dual-card setups so popular with machine learning researchers. A lot of scientific ML papers have used one of these two setups (with the exception of the big players out there with their gigantic server farms, like OpenAI, DeepMind, Google Research etc.). It was a very economical way to get nearly twice the performance of the corresponding 48GB Quadro card (two cards mostly end up at about 1.92x the performance of a single one in PyTorch, and taking into consideration that Quadro cards with their ECC memory are usually a little slower, you end up at roughly twice the throughput) at the same memory size for an extremely competitive price. Now we finally have the RTX 4090, which pushes linear algebra calculations further, at a larger generational jump than ever before. But one reason the jump is bigger is that they cut out the NVLink memory controller and used that die space for more CUDA units. This means the RTX 4090 has a larger generational jump over the RTX 3090 than the RTX 3090 had over the Titan RTX, at a very competitive price. It also means that the RTX 4090, in comparison to the RTX 4070 and RTX 4080, delivers exceptional value for money (just look at the total cost of proper water cooling, energy consumption, and ML throughput for an RTX 4090 compared to an RTX 4080; it's not just much faster, it's a better deal even though it's the high-end card). But if you work with any type of transformer model, which are very common right now, 24GB is kind of a low ceiling. Often you may only choose the small models, and then in combination with ridiculously small batch sizes (not just making training slower, but also changing the final network results, due to maximum likelihood estimation being applied to too few samples for each epoch). More reasonable SOTA models require 50-60GB and up, and 48GB of VRAM gives you much better options. There are crazy models out there, from the likes of OpenAI, which literally need hundreds of GB of VRAM, but well... you can't have everything, and you would only analyze or downstream-train them anyway. If the RTX 4090 allowed NVLink, we could get a reasonably priced 48GB setup, but as it stands, you need to buy the RTX 6000 Ada Lovelace, which costs a lot more, and you will also only be able to leverage single-card throughput. Furthermore, going to 96GB will be impossible with Quadro cards now, since these also no longer allow memory pooling via NVLink. So you will have to get Tesla cards, which are a whole price tier higher. Basically, this new generation is a disappointment for ML researchers looking at reasonable setups. Other than that, the new generation is pretty amazing.
@AOTanoos22 1 year ago
@@andreas7278 Thank you for this detailed explanation, very appreciated! I'm extremely disappointed that Ada Lovelace 40-series cards have no NVLink anymore, not even the top-end RTX 6000 (Ada). Surely anyone who needs more than 48GB will go with a last-gen RTX A6000 setup. Maybe that's another one of NVIDIA's ways to get rid of Ampere oversupply? What really surprises me is that NVLink is supposedly removed from Ada Lovelace cards at the silicon design level... yet the new NVIDIA L40 datacenter card, which has an Ada Lovelace chip, does have NVLink according to their website. I guess that makes it the "cheapest" card for ML with a >48GB requirement.
@clee5653 1 year ago
@@andreas7278 You're awesome, man. Just to be specific: to train large models on NVLinked 2x 3090, all I have to do is enable DDP in PyTorch, with no need for any model-parallelization code, right? It looks like NVIDIA is not going to make any relatively cheap card with more than 48GB of VRAM, so I'm definitely considering picking up another 3090. Having done two research projects on BERT-scale models, I'm fed up with not being able to lay my hands on SOTA mid-size models. My guess is they might bump the next-gen 5090 cards to 32GB, but that is not going to bridge the gap in demand anyway.
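For the case where one model genuinely exceeds a single card's VRAM, the explicit alternative to relying on pooling is a manual layer split across the two devices. Below is a hedged sketch (the layer sizes are invented); it works with or without NVLink, the bridge just speeds up the activation transfer between the cards.

```python
# Naive model-parallel sketch: half the network on each GPU (sizes invented).
import torch
import torch.nn as nn

class TwoGpuNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Each card only has to hold its own share of the weights.
        self.part1 = nn.Sequential(nn.Linear(1024, 8192), nn.ReLU()).to("cuda:0")
        self.part2 = nn.Sequential(nn.Linear(8192, 10)).to("cuda:1")

    def forward(self, x):
        x = self.part1(x.to("cuda:0"))
        return self.part2(x.to("cuda:1"))  # activations hop from GPU 0 to GPU 1

net = TwoGpuNet()
out = net(torch.randn(32, 1024))  # labels and the loss would live on cuda:1
```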
@zyxwvutsrqponmlkh 2 years ago
Run it on an RPI.
@TimGtmf 1 year ago
I have a question: can I run a 3090 Strix and a 3090 Zotac together? And what is the difference between running the same brand versus different brands of GPUs? Thank you!
@--JYM-Rescuing-SS-Minnow 2 years ago
So you would need a software controller, like some of the software from Intel? 🥩🦖 Good luck!! I hope the NVIDIA 4000 series will be out soon! And AMD says it will make its 7000 series beat NVIDIA in scientific computing!! Some day, I guess!
@HeatonResearch 2 years ago
AMD needs more cloud support; the day I can start to get AMD AWS instances, I will start to consider them. I like my local setup to mirror what I use in the cloud. I am excited about the 4000 series as well; all the rumor mills I follow suggest the 4000 series will be out this time next year.
@sergeysosnovski162 7 months ago
1:43 - parallelization ...
@infinitelylarge 1 year ago
I think you mean "parallelization", not "paralyzation". "Parallelization" is the process of making things to work in parallel. "Paralyzation" is the process of becoming paralyzed.
@synaestesia-bg3ew 11 months ago
Your channel is for rich kids only; you are the Mac/Apple channel.
@sigma_z 1 year ago
Can we do more than 2 GPUs? Like 4 RTX 3090s? 😎😍🙈
@danielwit5708 1 year ago
Yes.
@sigma_z 1 year ago
@@danielwit5708 How? NVLink appears to only connect 2x RTX 3090s, not 4. I have 6x RTX 3090s 😛
@danielwit5708 1 year ago
@@sigma_z Your question didn't specify that you were asking about the NVLink bridge, lol. I thought you were just asking about more than 2 cards 😅
@marvelousbless9128 10 months ago
RTX A4500 dual GPUs
@pramilapatil8957 10 months ago
Are you the gamer grandpa?
@jonabirdd 1 year ago
Data paralyzation? Really? FYI, it's parallelisation.
@ProjectPhysX 1 year ago
Sadly Nvidia killed the 2-slot consumer GPUs. You can't buy these anymore, only hilariously oversized 4-slot cards that don't fit next to each other. So that people have to buy the overpriced Quadros for dual-GPU workstations.
@ok6959 2 years ago
Why does this guy speak so slowly?
@InnocentiusLacrimosa 2 months ago
People speak at different speeds. Often highly analytical people speak at a slower pace.