Nvidia CUDA in 100 Seconds

1,031,004 views

Fireship

2 months ago

What is CUDA? And how does parallel computing on the GPU enable developers to unlock the full potential of AI? Learn the basics of Nvidia CUDA programming in this quick tutorial.
Sponsor Disclaimer: I was not paid to make this video, but Nvidia did hook me up with an RTX 4090.
#programming #gpu #100secondsofcode
💬 Chat with Me on Discord
/ discord
🔗 Resources
CUDA nvda.ws/3SF2OCU
GTC nvda.ws/3uDuKzj
CPU vs GPU • CPU vs GPU vs TPU vs D...
🔖 Topics Covered
- How does CUDA work?
- CUDA basics tutorial in C++
- Who invented CUDA?
- Difference between CPU and GPU
- CUDA quickstart
- How deep neural networks compute in parallel
- AI programming concepts
- How does a GPU work?
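For anyone who wants to see what the CUDA basics above look like in practice, here is a minimal vector-add sketch. It is not the exact code from the video, just an illustration assuming the CUDA toolkit and nvcc are installed:

```cpp
// vector_add.cu -- minimal sketch (not the code from the video).
// Build with: nvcc vector_add.cu -o vector_add
#include <cstdio>

// Kernel: runs on the GPU, one thread per element.
__global__ void vectorAdd(const int* a, const int* b, int* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 256;
    int *a, *b, *c;
    // Managed (unified) memory is accessible from both CPU and GPU.
    cudaMallocManaged(&a, n * sizeof(int));
    cudaMallocManaged(&b, n * sizeof(int));
    cudaMallocManaged(&c, n * sizeof(int));
    for (int i = 0; i < n; ++i) { a[i] = i; b[i] = 2 * i; }

    // Launch configuration: 1 block of 256 threads (the <<<blocks, threads>>> syntax).
    vectorAdd<<<1, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();  // wait for the GPU to finish

    printf("c[10] = %d\n", c[10]);  // expect 30
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```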

COMMENTS: 1,200
@Fireship
@Fireship 2 місяці тому
Shoutout to Nvidia for hooking me up with an RTX 4090 to run the code in this video. Get the CUDA toolkit here: nvda.ws/3SF2OCU
@universaltoons
@universaltoons 2 місяці тому
🥇
@light-gray
@light-gray 2 місяці тому
ZLUDA be like:
@TuxikCE
@TuxikCE 2 місяці тому
yes mom, I need a 4090 to run CUDA.
@r_a4134
@r_a4134 2 місяці тому
Damn you really put that rtx4090 through hell
@HolyRamanRajya
@HolyRamanRajya 2 місяці тому
So this is sponsored?
@tigerseye1202
@tigerseye1202 2 місяці тому
Little known fact: CUDA is actually so fast that it can bend spacetime and make 100 seconds last 3 minutes and 12 seconds, truly revolutionary.
@killerdroid99
@killerdroid99 2 місяці тому
Underrated comment
@JJGlyph
@JJGlyph 2 місяці тому
He ran the seconds in parallel with Cuda.
@sarimsalman2698
@sarimsalman2698 2 місяці тому
Serious question, why are these videos never 100 seconds?
@_Nonines
@_Nonines 2 місяці тому
Because it's just the name of the series. A catchy title, really. I don't think anyone cares if they're exactly 100s.
@Clarity-808
@Clarity-808 2 місяці тому
To be fair, he explained it in 90 seconds, the rest is building an app.
@mrgalaxy396
@mrgalaxy396 2 місяці тому
I've done a bit of CUDA in uni for a class on parallelism. Let me tell you, writing truly parallel code is a pain in the ass. Ain't no way all those scientists are writing CUDA code, probably some Python abstraction that uses C++ and CUDA underneath.
@acoupleofschoes
@acoupleofschoes 2 місяці тому
Like PyTorch and Tensorflow
@Imperial_Squid
@Imperial_Squid 2 місяці тому
"model.to("cuda:0") is the only cuda you need to know unless you're developing new algorithms or doing something truly wacky
@MaeLSTRoM1997
@MaeLSTRoM1997 2 місяці тому
some (x) mostly (o)
@oksowhat
@oksowhat 2 місяці тому
yeah that's why PyTorch and TensorFlow exist. I have parallelism and HPC both this sem, writing OpenMP and MPI code, truly a PITA
@CraftingCake
@CraftingCake 2 місяці тому
There are a few geniuses who write libraries and then there are thousands of devs who build products out of them....
@mjiii
@mjiii 2 місяці тому
The #1 computing platform for vendor lock-in
@PRIMARYATIAS
@PRIMARYATIAS 2 місяці тому
And so is Apple.
@AchwaqKhalid
@AchwaqKhalid 2 місяці тому
Dell in the server space too
@turolretar
@turolretar 2 місяці тому
Cisco as well
@anonymouscommentator
@anonymouscommentator 2 місяці тому
yall forgetting about aws? 😂
@ps3guy22
@ps3guy22 2 місяці тому
No, Nvidia is an open computing platform dedicated to the development of democratized development and open standa--- Pfff 🤣🤣🤣 hahdahha!!
@meh3lp
@meh3lp 2 місяці тому
0:36 this just taught me matrix multiplication, thanks
@ulz_glc
@ulz_glc 2 місяці тому
fr, this 3-second animation explained it better than most other explanations, and he didn't even really talk about it.
@alvinbontuyan8083
@alvinbontuyan8083 2 місяці тому
The best thing that ever happened to me was figuring out what matrices actually represent (a linear transformation); I've been able to do matrix multiplication without any memorizing simply because it's just intuitive now. Try this too, because schooling has failed us.
@_rshiva
@_rshiva 2 місяці тому
I think that is taken from @3blue1brown, @Fireship ??
@goddamnit
@goddamnit 2 місяці тому
​@@alvinbontuyan8083 can you give a quick example on what you mean with this? I'm not that smart, thanks!
@AiSponge2
@AiSponge2 2 місяці тому
lmao fr, those 3 seconds are extremely helpful
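For anyone following this thread, a small worked example of the row-times-column rule the animation at 0:36 illustrates (the numbers are my own, not from the video):

$$
\begin{pmatrix}1 & 2\\3 & 4\end{pmatrix}
\begin{pmatrix}5 & 6\\7 & 8\end{pmatrix}
=
\begin{pmatrix}1\cdot5+2\cdot7 & 1\cdot6+2\cdot8\\ 3\cdot5+4\cdot7 & 3\cdot6+4\cdot8\end{pmatrix}
=
\begin{pmatrix}19 & 22\\43 & 50\end{pmatrix}
$$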
@0seele
@0seele 2 місяці тому
Seeing "Hi Mom!" continue to be in your videos is such a beautiful thing. Hope you're holding up well
@FengHuang13
@FengHuang13 2 місяці тому
Yes, my eyes got wet when I saw that
@forhadrh
@forhadrh 2 місяці тому
Mom be like: I am proud of you, my son
@kamikaze9271
@kamikaze9271 2 місяці тому
Wait, where?
@forhadrh
@forhadrh 2 місяці тому
Where? What did you watch in this video then, lol. @@kamikaze9271 Here: 1:45, 2:53
@depralexcrimson
@depralexcrimson 2 місяці тому
​@@kamikaze9271 2:52
@smx75
@smx75 2 місяці тому
0:45 IEEE 754 moment
@cloudytheconqueror6180
@cloudytheconqueror6180 2 місяці тому
When you use TFLOPs, is it single precision or double precision? Because I see double precision here.
@adialwaysup8184
@adialwaysup8184 2 місяці тому
Gives me PTSD from my master's thesis. Had to modify 4 flags in clang to get acceptable results. Took me a while to figure out.
@Temari_Virus
@Temari_Virus 2 місяці тому
​@@cloudytheconqueror6180Single precision. Double precision is often much slower, though the rtx 4090 is just able to get into the teraflop range for f64
@WolfPhoenix0
@WolfPhoenix0 2 місяці тому
I did some CUDA programming assignments for my college Parallel Computing class. That course was the second hardest CS course I've ever taken (The hardest one is Compilers but that's in its own league). Human brains really weren't designed to think in parallel.
@DK-ox7ze
@DK-ox7ze 2 місяці тому
Which college and course?
@skyhappy
@skyhappy 2 місяці тому
The teacher probably sucked like most academic teachers. If you had fireship it would be a hundred times easier
@duckbuster1572
@duckbuster1572 2 місяці тому
I hope that was graduate level, cause otherwise that is horrific
@KoaIa200
@KoaIa200 2 місяці тому
I would argue that people were not really "designed" to think in any specific way... neuroplasticity for the win... same way that most programmers can think of code. Practise makes perfect.
@KoaIa200
@KoaIa200 2 місяці тому
@@duckbuster1572 It's common for it to be a course in your last year of undergrad... I dont see why it would be horrific.
@Julzaa
@Julzaa 2 місяці тому
1:09 still day zero of not mentioning AI
@2099EK
@2099EK 2 місяці тому
AI is definitely worth mentioning.
@upolpi3171
@upolpi3171 2 місяці тому
​@@2099EKPlease, can we just don't? Physics models (for example) are much more interesting (in my opinion) than curve fitting on steroids. (Just a matter of avoiding a cliche and showing a greater range of GPU computing applications)
@thecutepika
@thecutepika 2 місяці тому
​Why, fitting so much complex curves that reflect reality is indeed worth mentioning ​@@upolpi3171
@devrim-oguz
@devrim-oguz 2 місяці тому
It’s more like zero minutes 😂
@mechadeka
@mechadeka 2 місяці тому
@@anon8510 You're literally on a technology channel, you Twitter drone.
@r.y.z.
@r.y.z. 2 місяці тому
ngl, I'm really loving how often these videos are being uploaded. It's often, but not so often that I feel overwhelmed and just spaced out enough that I feel a little excited when a new one comes out!
@YOTUBE8848
@YOTUBE8848 2 місяці тому
wait until he drops some existential crisis type content lol
@johnfrusciantefan90
@johnfrusciantefan90 2 місяці тому
Wrote CUDA at university... getting the indices, blocks, etc. right... that was fun (also since thread count depends on the actual GPU model). For the final project, we were allowed to use libraries such as Thrust, which made my life a ton easier by abstracting away most of the fun stuff.
@KoaIa200
@KoaIa200 2 місяці тому
thread count is not dependent on GPU model (max 1024 threads per block); total block size and number of cores depend on the number of SMs and CUDA compute capability.
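To make the indices/blocks discussion above concrete, a small hedged sketch of how a kernel typically recovers a global index from its launch configuration (illustrative only, not code from the video or any course):

```cpp
// index_demo.cu -- illustrative sketch of grid/block indexing.
#include <cstdio>

__global__ void whoAmI() {
    // Each thread computes a unique global index from its block and thread coordinates.
    int globalIdx = blockIdx.x * blockDim.x + threadIdx.x;
    if (globalIdx == 0)
        printf("gridDim.x=%d blockDim.x=%d\n", gridDim.x, blockDim.x);
}

int main() {
    int n = 4096;
    int threadsPerBlock = 256;  // the hardware limit is 1024 threads per block
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;  // round up to cover all n elements
    whoAmI<<<blocks, threadsPerBlock>>>();
    cudaDeviceSynchronize();
    return 0;
}
```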
@Brahvim
@Brahvim 2 місяці тому
Sounds like the "fun" was actually "fun boilerplate but it's still just boilerplate". Correct? Or... are you being _purely_ sarcastic?
@johnfrusciantefan90
@johnfrusciantefan90 2 місяці тому
@@Brahvim Both, actually. It was fun in the beginning, but with more complex projects/tasks it became harder to understand how to use it correctly (especially the kernel launch configs with the dimensions, etc). Maybe, with more experience, it would be easier for me today than it was at that time. But don't get me wrong, they also showed how to do the same thing with OpenCL, and the amount of boilerplate code needed to get that running was way more than with CUDA. And when they allowed using Thrust for the final project, most of the boilerplate code was gone because Thrust abstracts that away. It was more fun to work with an API that offers host and device vectors and a standard library for common tasks. But Thrust also abstracts away the launch configurations for kernels etc., so you lose control (which was fine for me because I struggled with the more advanced concepts). But I guess you lose some speed/memory efficiency, like with all abstractions.
@johnfrusciantefan90
@johnfrusciantefan90 2 місяці тому
@@KoaIa200 you are right, I am sorry. The more advanced kernel launch configs with block size etc. were quite hard for me, and I haven't used CUDA in years now. But I remember struggling with the concepts after the initial easy tasks.
@johnfrusciantefan90
@johnfrusciantefan90 Місяць тому
@@Brahvim No, it actually was fun, but it is also hard. And if you compare it to OpenCL, it is actually much, much less boilerplate code. In the beginning the exercises were quite easy, but with more complex tasks it became much harder. For the final project we were allowed to just use Thrust, which is a library that makes things much easier. E.g. it provides host and device vectors and it also handles all the boilerplate stuff. However, you lose control because it is an abstraction, and probably some speed. But today, if I needed to do CUDA again, it would be with Thrust (at least in the beginning).
@imWaytooRad
@imWaytooRad 2 місяці тому
Thanks! I was having this discussion with my coworkers the other day about what separates a GPU from a CPU, and this was an excellent explanation!
@petrsehnal7990
@petrsehnal7990 2 місяці тому
Man, you are a genius. I wrote my master's thesis on CUDA and there's no way I would be able to explain this in 100 seconds. Respect! 🎉
@klekaelly
@klekaelly 2 місяці тому
Can I read your master's thesis?
@PappuGongA
@PappuGongA 2 місяці тому
same, LMK when you get it @@klekaelly
@maymayman0
@maymayman0 2 місяці тому
Could you do it in 192 seconds??
@noanyobiseniss7462
@noanyobiseniss7462 2 місяці тому
Really? I thought OpenCL would do this just fine. Funny thing is ALL GPUs are designed to be parallel computers, and AMD is actually more massively parallel than Ngreedia. He didn't describe anything that is CUDA-specific; did you really not get that when writing your thesis?
@petrsehnal7990
@petrsehnal7990 2 місяці тому
@klekaelly thank you, but it was on CUDA version 1.0, which is really outdated from both software and hardware perspectives. Furthermore, it is not in English. But I really appreciate your interest!
@ucantSQ
@ucantSQ 2 місяці тому
Whoa, my universes are operating in parallel. I just learned about CUDA this morning for the first time, and here's a new fireship video about it.
@munto7410
@munto7410 2 місяці тому
Bruh, are you my FBI agent? I just looked CUDA up a few hours ago.
@guinea_horn
@guinea_horn 2 місяці тому
Yeah man, he monitored your web traffic, saw that you wanted to learn about cuda, and then made this video as fast as he could since he knew you would watch it.
@MrMudbill
@MrMudbill 2 місяці тому
Now I'm scared about tomorrow's video
@bbom9197
@bbom9197 2 місяці тому
I was thinking to learn about CUDA. He is a mind reader
@gosnooky
@gosnooky 2 місяці тому
That's classified.
@soufianenajari8900
@soufianenajari8900 2 місяці тому
literally doing homework in CUDA rn
@Rohinthas
@Rohinthas 2 місяці тому
Not using or planning to use CUDA but man did this just help me make sense of some terms I see being thrown around! Awesome!
@bartlx
@bartlx 2 місяці тому
Nice to see a video touching C++'s ecosystem for a change. Now make one about SYCL, so even people who don't find free RTX 4090 cards in their mailbox can get into high performance parallel computing using modern ISO C++ instead of custom CUDA syntax.
@vladislavakm386
@vladislavakm386 2 місяці тому
yeah, Nvidia dominates in parallel computing because software engineers only know CUDA.
@TheRealFFS
@TheRealFFS Місяць тому
@@vladislavakm386 You got that backwards, but ok.
@wombletonian
@wombletonian 2 місяці тому
Best 100 seconds I've had in a bunch of seconds. Thanks!
@etrestre9403
@etrestre9403 2 місяці тому
Who asked you?
@slick3996
@slick3996 2 місяці тому
@@etrestre9403 me?
@Mkrabs
@Mkrabs 2 місяці тому
​@@etrestre9403 Not allowed to speak their mind?
@etrestre9403
@etrestre9403 2 місяці тому
@@Mkrabs yeah I was just wondering who asked them
@BlueDragonix
@BlueDragonix 2 місяці тому
@@etrestre9403 sorry for your mental illness
@scapegoat079
@scapegoat079 2 місяці тому
Yo I just wanted to say thank you for making this kind of stuff so interesting and digestible. You take these extremely complex, time-intensive languages, APIs, tools, etc., and make them incredibly approachable. Love your content. Cheers.
@TheHackysack
@TheHackysack 2 місяці тому
1:39 Complier :D
@YuriG03042
@YuriG03042 2 місяці тому
no, complier
@Sarfarazzamani
@Sarfarazzamani 2 місяці тому
Gotcha moment😀
@incognito3678
@incognito3678 Місяць тому
Marcomplier
@davidf6592c
@davidf6592c 2 місяці тому
I'll admit, I tear up a little every time I see the "Hi Mom" in your vids.
@MaxoticsTV
@MaxoticsTV 2 місяці тому
Funny, I had to install NVIDIA CUDA for a thing I'm doing and forgot what CUDA does, searched it, and found this video that was just posted an hour ago! WHAT TIMING!!!
@neuronscale
@neuronscale 2 місяці тому
Great presentation of the topic of CUDA architecture and Nvidia GPUs in such a compact and fast form. As always, brilliant video!
@Officialjadenwilliams
@Officialjadenwilliams 2 місяці тому
Surprised that it took this long to get a CUDA in 100 seconds. 😆
@scapegoat079
@scapegoat079 Місяць тому
I did not expect this... I'm calling Miguel.
@arinahomuleba4165
@arinahomuleba4165 2 місяці тому
You just explained parallel computing in 100s better than my lecturer did in more than 100 days🔥
@noanyobiseniss7462
@noanyobiseniss7462 2 місяці тому
Yet misses the fact this is NOT cuda specific.
@bakedbeings
@bakedbeings 2 місяці тому
Or your lecturer set you up well to follow this very basic, high speed summary. Like a reader of the LOtR series can see meaning in the film series' long, dreary shots.
@boredofeducation-sb6kr
@boredofeducation-sb6kr 2 місяці тому
I loved the animations and the explanation... I just finished a CUDA course for my master's, so it was mind-blowing to see a whole week's worth of lectures effortlessly compressed into... 100 seconds
@khSoraya01
@khSoraya01 Місяць тому
Can I see the course?
@BattlewarPenguin
@BattlewarPenguin 2 місяці тому
Awesome video! Thank you for the heads up on the conference!
@n.w.4940
@n.w.4940 2 місяці тому
Aside from this very informative video ... Heartwarming that you put in that "Hi mom"-message. Probably one of the most concise videos on this topic.
@wywarren
@wywarren 2 місяці тому
The SDK has already gotten a lot more convenient in the last 5-6 years. Memory used to require manually copying back and forth with the SDK. From what I remember the manual copying is still available, but in my DLI course, when I was trying it out, having it be auto-managed was slower than manually moving it all into memory first and running the operation. Using managed memory improves the developer experience significantly, but on each access, if the memory block hasn't been copied yet, I believe the managed system will still need to move it over on demand. To meet the passing criteria on my CUDA DLI exam, I opted to do one of the steps with a manual copy. One can only dream of the day we have unified memory architectures; then we won't have to deal with the copies.
@niamhleeson3522
@niamhleeson3522 2 місяці тому
Yeah, you can probably keep on dreaming about that. Memory management is the primary problem you must solve if you want your CUDA program to go fast. Either you get all of the data into the register file / shared memory, or you have Too Much Data, have to do horrible things, maybe even keep some of that data out of core, and it will go much slower than it could. There's no cache coherence protocol, so if you need it you have to move things around manually and do some synchronization. Fun stuff.
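A hedged sketch of the two memory styles discussed above: explicit cudaMemcpy versus managed (unified) memory. The calls are standard CUDA runtime APIs; the surrounding structure is my own illustration, not the DLI exercise:

```cpp
// Two ways to get data to the GPU (sketch; error handling omitted).
#include <cuda_runtime.h>

__global__ void doubleAll(float* x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

void explicitCopy(const float* host, float* result, int n) {
    // Manual style: allocate device memory and copy back and forth yourself.
    float* dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
    doubleAll<<<(n + 255) / 256, 256>>>(dev, n);
    cudaMemcpy(result, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
}

void managed(int n) {
    // Managed (unified) style: one pointer, pages migrate on demand,
    // which is convenient but can be slower than a single up-front copy.
    float* x;
    cudaMallocManaged(&x, n * sizeof(float));
    for (int i = 0; i < n; ++i) x[i] = 1.0f;    // touched on the CPU first
    doubleAll<<<(n + 255) / 256, 256>>>(x, n);  // pages migrate to the GPU here
    cudaDeviceSynchronize();
    cudaFree(x);
}
```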
@4RILDIGITAL
@4RILDIGITAL 2 місяці тому
Impressive explanation of how we can harness the power of our GPU using Nvidia's CUDA for more than just gaming. The practical demonstration really showed the potential of parallel computing.
@gagd7351
@gagd7351 27 днів тому
As a programmer I absolutely love your series on programming languages and tools! It couldn't be clearer, and it's full of knowledge. Thank you. It also refreshes common knowledge, like the C video!
@batoczki93
@batoczki93 2 місяці тому
But can CUDA center a div?
@abhishekpawar921
@abhishekpawar921 Місяць тому
💀💀💀
@drangertornado
@drangertornado Місяць тому
Yes, when you center a div in CSS, the browser uses your GPU to render the page
@mulletmate8
@mulletmate8 27 днів тому
center div, exit vim, I use arch btw... hmm yes, very original "I've been programming for two weeks" joke
@lucasgasparino6141
@lucasgasparino6141 2 місяці тому
Hey, that was nice! I use both CUDA and OpenACC EXTENSIVELY to build CFD applications, and the performance on GPUs is really fantastic... when done well xD I strongly recommend against managed memory for complex production codes, if only for the fact that it seems to disable device-to-device DMA comms when using MPI. For anyone thinking about porting to GPUs, I recommend not half-arsing it and just making all data available to devices. Host/device exchanges can be brutally costly and will likely eat up all your gains. Finally, it works with C and Fortran as well, for anyone curious about it :) Fireship, it'd be nice to see a beyond-100-seconds version of this, covering OpenACC and offloaded OpenMP as well 😊
@jaiveersingh5538
@jaiveersingh5538 2 місяці тому
Which CFD software has CUDA acceleration? Just Ansys Fluent right now right?
@lucasgasparino6141
@lucasgasparino6141 2 місяці тому
@adialwaysup8184 not really, we performed some testing on A100s and H100s and offloaded OMP was WAY slower. Sure it's portable, but ACC is still getting love. It's also syntactically easier and cleaner in my opinion.
@lucasgasparino6141
@lucasgasparino6141 2 місяці тому
@jaiveersingh5538 take a look at research code. Nek5000 uses CUDA, and so does NekRS if I remember correctly. Our own code started as CUDA Fortran but we eventually moved to OpenACC. Easier to use and explain to other users. Quite a few libraries behind research software also use CUDA, or even OpenCL. For matrix-free SEM methods, CUDA might be a bit hard to implement, but it's as fast as it gets.
@adialwaysup8184
@adialwaysup8184 2 місяці тому
@@lucasgasparino6141 For us, omp was performing 2% slower than acc and 6-8% slower than cuda. Though, the performance was much worse on clang than nvhpc
@adialwaysup8184
@adialwaysup8184 Місяць тому
@@lucasgasparino6141 In my experience, currently, there's a major discrepancy in how well a compiler optimizes code for accelerators. This is doubly important when it comes to Nvidia, since the nvptx backend is far from perfect. But if the same tests are done on Nvidia, say with nvhpc, I found an overall 2-3% gap between OpenMP and OpenACC. I do agree with your second point, OpenACC is much cleaner to write and integrates well, but at that point you're backing yourself into a corner with Nvidia's hardware. OpenACC might be an open standard, but no one except Nvidia gives it serious consideration. If you're going all in with Nvidia anyway, why bother with OpenACC instead of just moving to CUDA?
@h3lpkey
@h3lpkey 2 місяці тому
Many thanks for every video on your channel, you're doing very big and cool work
@TheFSB400
@TheFSB400 Місяць тому
Thanks for the video! Easy to understand and that helped me a lot to get a basic understanding of CUDA
@desoroxxx
@desoroxxx 2 місяці тому
Next please do OpenCL in 100 Seconds, seriously
@noanyobiseniss7462
@noanyobiseniss7462 2 місяці тому
He didn't get paid for that.
@whamer100
@whamer100 2 місяці тому
id love to see that
@Sarfarazzamani
@Sarfarazzamani 2 місяці тому
Savage comment 😁@@noanyobiseniss7462
@ProjectPhysX
@ProjectPhysX 2 місяці тому
OpenCL for the win! Same performance as CUDA, yet runs on literally every GPU from Nvidia, AMD and Intel.
@otakuotaku6774
@otakuotaku6774 2 місяці тому
Bro, Can you do more Hardware videos, just like this
@recursion.
@recursion. Місяць тому
Hardware videos 💀
@ace9463
@ace9463 Місяць тому
Having used the CUDA Toolkit for implementing LSTMs and CNNs for Computer Vision and Sentiment Analysis projects using Tensorflow GPU and ScikitLearn libraries of Python which utilized my laptop's NVIDIA GPU, the process of writing raw CUDA Kernels in C++ is somewhat new for me and seems fascinating.
@dfsafsadfsadf
@dfsafsadfsadf 2 місяці тому
That was a great summary! Thank you!!!
@sepro5135
@sepro5135 2 місяці тому
Im using cuda for fluid simulation, it’s a real game changer in terms of speed
@bnaZan6550
@bnaZan6550 2 місяці тому
You didn't explain what CUDA does, you explained what a GPU does... CUDA just has special optimizations over normal GPU parallelism. Your example would work fine on every GPU and doesn't require CUDA to be parallel. All GPUs calculate pixels using multithreading and multiple cores.
@Aoredon
@Aoredon 2 місяці тому
I mean, he explained how to get started with it and clarified how it's different from programming on the CPU. Also, I'm pretty sure the <<<...>>> launch syntax is specific to CUDA, so you wouldn't be able to just run this anywhere. And GPUs in graphics are usually just dealing with essentially a 2D array of pixels rather than 3D like here.
@HoloTheDrunk
@HoloTheDrunk 2 місяці тому
@@Aoredon AMD's ROCm also uses the <<<...>>> syntax, and I kinda agree with OP: this would've been good if it was titled "GPUs in 100 seconds", but as things stand there's hardly anything CUDA-specific.
@oghidden
@oghidden 2 місяці тому
This is a summary channel, not overly detailed.
@noanyobiseniss7462
@noanyobiseniss7462 2 місяці тому
Correct and well said!
@julesoscar8921
@julesoscar8921 2 місяці тому
The extension of the file was .cu tho
@sachethana
@sachethana 2 місяці тому
CUDA is awesome! I did one of my theses on parallel processing in 2016, using CUDA for super fast blood cell segmentation. Then used CUDA for mining crypto on the GPU.
@KorruFreez
@KorruFreez Місяць тому
Sometimes I regret my career choices
@StefanoBorini
@StefanoBorini 2 місяці тому
Interesting little factoid: if you are doing parallel CUDA programming and have to compute on a subset of a large block of memory, it's often faster to operate on the whole block and simply ignore the additional data, without checking for actual boundaries. If conditions kill performance in CUDA kernels, to the point that it often pays off to just compute garbage and discard it at the end, rather than prevent it from being computed.
@9SMTM6
@9SMTM6 2 місяці тому
If conditions are usually translated to compute-and-discard anyway. But they give false appearances, and if the condition itself is difficult to compute, that adds to the runtime cost.
@KoaIa200
@KoaIa200 2 місяці тому
warp divergence does not matter if the other threads are doing nothing in the first place... just don't have if/else and you are fine.
@janisir4529
@janisir4529 Місяць тому
Better add those bounds checks, don't want to crash with access violations...
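A sketch of the trade-off in this thread: a guarded kernel with a bounds check versus an unguarded one that assumes the buffer was padded to a whole number of blocks, with the extra results simply ignored. Illustrative only, not code from the video:

```cpp
// Padding vs. bounds-checking (sketch). n is the real element count.
__global__ void scaleChecked(float* x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;          // guard: threads past n do nothing
}

__global__ void scalePadded(float* x) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    x[i] *= 2.0f;                     // no guard: safe only if x was allocated
                                      // padded up to gridDim.x * blockDim.x elements
}

void launch(float* x_padded, int n, int block) {
    int grid = (n + block - 1) / block;  // round up
    // Both launches cover n elements; scalePadded touches the padding too,
    // and the caller simply ignores those garbage values afterwards.
    scaleChecked<<<grid, block>>>(x_padded, n);
    scalePadded<<<grid, block>>>(x_padded);
}
```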
@NEOchildish
@NEOchildish 2 місяці тому
Great video! A ROCm video would be awesome too. It could help me explain my suffering to friends: using CUDA-native apps in a crappy Docker container for less performance vs native Nvidia.
@bramvdnheuvel
@bramvdnheuvel Місяць тому
I would love to see Elm in 100 seconds soon! It definitely deserves more love.
@augustinmichez8874
@augustinmichez8874 2 місяці тому
0:46 truly a masterpiece from our beloved GPU
@augustinmichez8874
@augustinmichez8874 2 місяці тому
@@starsandnightvision not a native speaker but ty for pointing it out
@Ibbysz
@Ibbysz 2 місяці тому
Great video, Fireship. However, it's worth noting that writing performant and optimized raw CUDA code is very difficult and not practical. Usually, you aren't writing your own CUDA code but rather using NVIDIA's highly optimized CUDA libraries, such as cuBLAS, cuFFT, and cuDNN. These libraries implement common primitives such as matrix multiplication, neural net operations, etc
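As a concrete (hedged) illustration of leaning on NVIDIA's libraries instead of hand-written kernels, here is a minimal cuBLAS SAXPY sketch; the call names are real cuBLAS/runtime APIs, but the program itself is just my own example:

```cpp
// saxpy_cublas.cu -- sketch of using a library primitive (y = a*x + y)
// instead of writing the kernel yourself. Build with: nvcc saxpy_cublas.cu -lcublas
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int n = 1 << 20;
    const float alpha = 2.0f;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 3.0f; }

    cublasHandle_t handle;
    cublasCreate(&handle);
    cublasSaxpy(handle, n, &alpha, x, 1, y, 1);  // y = alpha * x + y, on the GPU
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 5.0
    cublasDestroy(handle);
    cudaFree(x); cudaFree(y);
    return 0;
}
```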
@yogsothoth00
@yogsothoth00 2 місяці тому
Yes, but where is the fun in that
@niamhleeson3522
@niamhleeson3522 2 місяці тому
@@yogsothoth00 If you think that is fun you would probably get hired by Nvidia to write more libraries for them
@el_teodoro
@el_teodoro 2 місяці тому
He did a 100 seconds video on PyTorch, so he'll probably expand on this too. This video is specifically about CUDA.
@masteraso
@masteraso 2 місяці тому
Yes, if you can install them and find the right version
@RudolfJvVuuren
@RudolfJvVuuren Місяць тому
So basically: "when writing code one uses libraries." Thank you Capt. Obvious.
@TheVilivan
@TheVilivan 2 місяці тому
Would love to see some more videos on parallel computing, with more explanation of this kind of code. Maybe a more in-depth video on Beyond Fireship?
@practicalsoftwaremarcus
@practicalsoftwaremarcus 2 місяці тому
Nice! I use Thrust to abstract a bit over raw CUDA and apply generic programming. Maybe do a video on OpenCL? 😊
@goreldeen
@goreldeen 2 місяці тому
The title: "Nvidia CUDA in 100 Seconds" The duration: 3:12
@el_teodoro
@el_teodoro 2 місяці тому
You must be new here
@demonfedor3748
@demonfedor3748 2 місяці тому
Just recently saw the news about Nvidia banning the use of translation layers like ZLUDA to run CUDA software on AMD. This video's right on time.
@noanyobiseniss7462
@noanyobiseniss7462 2 місяці тому
Which is what he should be making a video on, but you don't get free 4090s for that content.
@demonfedor3748
@demonfedor3748 2 місяці тому
@@noanyobiseniss7462 NVIDIA doesn't wanna let go that sweet sweet monopoly type proprietary stuff.
@noanyobiseniss7462
@noanyobiseniss7462 2 місяці тому
@@demonfedor3748 Pretty anti competitive company that bleeds users dry. I have no clue why its userbase is so filled with gaslit fanbois. I guess it comes down to the misery likes company mantra.
@demonfedor3748
@demonfedor3748 2 місяці тому
@@noanyobiseniss7462 Every big company wants to get as much profit as the next guy. NVIDIA does it through proprietary stuff, AMD does it through open standards to claim the moral high ground. Pros and cons to each approach, but the goal remains the same. NVIDIA has a lot of fans because they innovate a lot and are trailblazers in multiple areas: real-time hardware ray tracing, DLSS, G-SYNC, frame generation, GPGPU aka CUDA, OptiX, just to name a few. I know most of this stuff is proprietary and/or hardware-locked, but it's still innovation. I don't mean that AMD doesn't innovate. Mantle, which subsequently led to Vulkan, was a big deal; chiplet GPU and CPU design, 3D V-Cache on CPUs and GPUs, SAM. There's no clear winner, however NVIDIA is currently the performance king. Intel has wanted in the game for over 15 years but they have big shoes to fill. It was a big blow when Larrabee failed.
@OK-ri8eu
@OK-ri8eu 2 місяці тому
I worked on a project using the CUDA environment; this brought back some memories, like copying from host to device and vice versa. I'm sure I'll be working on it again in the future.
@klaotische5701
@klaotische5701 Місяць тому
Just what I needed. A simple and quick introduction.
@markosdelaportas3089
@markosdelaportas3089 2 місяці тому
Can't wait to install ZLUDA on my linux pc!
@noble.reclaimer
@noble.reclaimer 2 місяці тому
I can finally build my own LLM now!
@JLSXMK8
@JLSXMK8 2 місяці тому
Can I mention this video as part of my channel intro? I use NVIDIA CUDA to re-render and upscale all my video clips for YouTube nowadays!! You give a really good explanation of how it all works.
@somerandomdudemc6201
@somerandomdudemc6201 Місяць тому
Hello sir, Today is my High school IT exam. I thank you for giving so much knowledge in these years. Thank you sir
@stefantanuwijaya8598
@stefantanuwijaya8598 2 місяці тому
OpenCL next!
@noanyobiseniss7462
@noanyobiseniss7462 2 місяці тому
I doubt AMD will pay him a 7900XTX to do it.
@historyrevealed01
@historyrevealed01 2 місяці тому
A: How complex is CUDA? B: Even the Fireship video doesn't make sense
@lucasgasparino6141
@lucasgasparino6141 2 місяці тому
Honestly, it's a rather low-level API, so it CAN get excessively complicated. That being said, you'd mostly use the basics of CUDA, and complexity would come from making the algorithm you're trying to implement parallel itself. Of course, the real magic is that you can optimize the SHIT out of it, I.e. overengineer the kernel 😅 but yeah, trust me when I say he covers only the intro bits about CUDA, this thing is a rabbit hole.
@marcellsimon2129
@marcellsimon2129 2 місяці тому
Love how this video came out 20 minutes after I did intensive google search about CUDA :D
@dheovanixavierdacruz3043
@dheovanixavierdacruz3043 2 місяці тому
YES! I was waiting for this one
@3lqm89
@3lqm89 2 місяці тому
hey, that's more than 100 seconds
@aghilannathan8169
@aghilannathan8169 2 місяці тому
Data Scientists don’t use CUDA, they use Python abstractions like Tensorflow or Torch which parallelize their work using CUDA assuming an NVIDIA GPU is available.
@el_teodoro
@el_teodoro 2 місяці тому
"Data scientists don't use CUDA, they use CUDA" :D
@drpotato5381
@drpotato5381 2 місяці тому
​The guy above you doesnt knows what the word abstraction means lmao​@@el_teodoro
@HUEHUEUHEPony
@HUEHUEUHEPony 2 місяці тому
@@el_teodoro or ROCm? or Vulkan? or Metal?
@zard0y
@zard0y 2 місяці тому
This channel should go down in history as the greatest work done by humanity. Absolutely legendary introductions & quality level
@sn5806
@sn5806 2 місяці тому
Great timing! Just got a new green GPU to mess around with and this'll help.
@zainkhalid3670
@zainkhalid3670 2 місяці тому
Getting CUDA to run on your Windows machine is one of the greatest problems of modern computer science. Edit: "getting CUDA-related libraries in a Python environment to correctly run neural networks"
@eigentensor
@eigentensor 2 місяці тому
lol, holy wow this really is a noob channel
@user-qm4ev6jb7d
@user-qm4ev6jb7d 2 місяці тому
Getting it to run the "official" way, from Visual Studio, is not much of a problem. Now, getting CUDA-related libraries in a Python environment to correctly run neural networks - THAT's a challenge. Especially with how much of a bother Conda is.
@MrCmon113
@MrCmon113 2 місяці тому
Lots of ML stuff doesn't have good support on windows. Probably good idea just to run an Ubuntu VM if you plan to do much locally.
@bradenhelmer9795
@bradenhelmer9795 2 місяці тому
I literally just finished an exam on cuda wtf
@acestandard6315
@acestandard6315 2 місяці тому
What course do you offer
@SalomDunyoIT
@SalomDunyoIT 2 місяці тому
@@acestandard6315 where do u study?
@bradenhelmer9795
@bradenhelmer9795 2 місяці тому
@@SalomDunyoIT Nunya University
@AO-ek9qw
@AO-ek9qw 2 місяці тому
0:36 this matrix multiplication animation is really REALLY good!!!!!
@vladislavkaras491
@vladislavkaras491 Місяць тому
Thanks for the video!
@gourav7315
@gourav7315 2 місяці тому
0:25 what is the game name
@pramodgoyal743
@pramodgoyal743 Місяць тому
Leaving a dot here for a captain to show up.
@BinaryBlueBull
@BinaryBlueBull Місяць тому
I also would like to know this. Anyone?
@Joey-dj4cd
@Joey-dj4cd 2 місяці тому
Use me as the button "I understood NOTHING"
@gamemotronixg3965
@gamemotronixg3965 2 місяці тому
Finally 🎉🎉🎉 I challenge you to do CUDA matrix multiplication using C
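In the spirit of the challenge above, a naive (hedged, unoptimized) CUDA C matrix-multiplication kernel sketch, one thread per output element:

```cpp
// matmul.cu -- naive sketch: C = A * B for square n x n matrices.
__global__ void matmul(const float* A, const float* B, float* C, int n) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n) {
        float sum = 0.0f;
        for (int k = 0; k < n; ++k)
            sum += A[row * n + k] * B[k * n + col];  // dot product of row and column
        C[row * n + col] = sum;
    }
}

// Typical launch: 2D blocks of 16x16 threads covering the whole matrix.
// dim3 block(16, 16);
// dim3 grid((n + 15) / 16, (n + 15) / 16);
// matmul<<<grid, block>>>(A, B, C, n);
```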
@M7ilan
@M7ilan 2 місяці тому
Valuable video!
@MaybeBlackMesa
@MaybeBlackMesa 2 місяці тому
Nothing worse than buying an AMD card and being locked out of anything AI (and these days it's a LOT of things). Never again.
@noanyobiseniss7462
@noanyobiseniss7462 2 місяці тому
You're not too bright, are you?
@montytrollic
@montytrollic 2 місяці тому
Google ZLUDA my friend ...
@noanyobiseniss7462
@noanyobiseniss7462 2 місяці тому
CUDA is closed source and therefore a non-starter for anyone who believes in open standards.
@Volian0
@Volian0 2 місяці тому
I wouldn't recommend nvidia to anyone, their CEO is crazy!!
@MrCmon113
@MrCmon113 2 місяці тому
And the alternative is what? Hospitals, garbage collection, fire departments, etc. aren't open source either, but you're kinda forced to use them. Nvidia has got us all by the balls. Your balls are firmly placed in Nvidia's hands. Godspeed your efforts to come up with a free alternative.
@Volian0
@Volian0 2 місяці тому
@@MrCmon113 the alternatives exist! In the case of CUDA, OpenCL is the alternative that works on all GPUs. And in the case of gaming, AMD cards perform very well (and their drivers are open source)
@NoDebut
@NoDebut 2 місяці тому
This is great! Thank you 👏
@romanino
@romanino Місяць тому
I didn't understand MOST of it, but still loved it, thanks!
@livelife3051
@livelife3051 2 місяці тому
Bro, your way of teaching is much faster than my mind...
@joshDotJS
@joshDotJS Місяць тому
Thank you for the video!
@devrim-oguz
@devrim-oguz 2 місяці тому
You should do a video on SHMT (simultaneous and heterogeneous multithreading)
@xbozo.
@xbozo. 2 місяці тому
awesome animations on the video man
@BingleBangleBungle
@BingleBangleBungle 2 місяці тому
This is a very slick advert for Nvidia 😅 didn't realize it was an ad until the end.
@pherd-0884
@pherd-0884 2 місяці тому
I would really enjoy a follow-up to this, maybe on the other channel, to discuss ROCm.
@bonobo3748
@bonobo3748 2 місяці тому
The video editing must take hours for each upload. Well done brother
@CoughSyrup
@CoughSyrup 2 місяці тому
While you are correct for crediting both Buck and Nichols for the prior work leading up to CUDA, I felt like it was important to point out that they did not both contribute equally to the research in question, as most people will agree that one Buck is worth about 20 Nichols.
@julendominadas4040
@julendominadas4040 Місяць тому
The fun part of your program is that it would take about the same time to allocate that memory on the GPU as to compute the sum. Because of CPU pipelines, you would probably do about 4 integer additions per cycle. I don't know if this depends on the AVX registers. If someone can give a more extended explanation I would be so glad!
@RobsonLanaNarvy
@RobsonLanaNarvy 2 місяці тому
I've used a bit of CuPy for some array calculations. It's not a heavily loaded script, but at least it was nice to configure and start utilizing CUDA in Python
@ren3105
@ren3105 2 місяці тому
dam bro i have my linear algebra exam next week and you just taught me how to matrix multiply at 0:36 (teacher took 3 classes to explain)
@hyperpug2898
@hyperpug2898 2 місяці тому
Wow what great timing to mention ZLUDA
@superspies32
@superspies32 Місяць тому
I'm working on sequence alignment for NIPT results. BarraCUDA is the best thing I'd never heard of.
@judevector
@judevector 2 місяці тому
This is just mind-blowing 😮
@MatheusLB2009
@MatheusLB2009 2 місяці тому
I honestly recommend the GTC if you're into graphics or just interesting curiosities
@uDubRiceBoy
@uDubRiceBoy 2 місяці тому
Thanks @fireship, do AMD GPUs enable parallel math processing?
@k7ufo819
@k7ufo819 Місяць тому
Just subscribed for more "in 100 seconds" videos 👍🏻
@vectoralphaAI
@vectoralphaAI 2 місяці тому
Game Developers Conference (GDC) is also that week.
@vaclavsisl175
@vaclavsisl175 2 місяці тому
I would love to have a more detailed video comparing CUDA to OpenCL (or others) for practical workflows. Kind of trying to answer the question "for all the applications other than gaming, should I buy an Nvidia or AMD GPU?".
@SuvviSanthosh
@SuvviSanthosh 2 місяці тому
Very informative on CUDA and NVIDIA 👌👌👌 Do your own research, but don't miss out on AI & NVIDIA; it's touching all companies & all sectors.
@Jechob
@Jechob 2 місяці тому
Thanks, Jeff!
@radumihaidiaconu
@radumihaidiaconu 2 місяці тому
ROCm next