Building makemore Part 4: Becoming a Backprop Ninja

168,169 views

Andrej Karpathy

1 day ago

We take the 2-layer MLP (with BatchNorm) from the previous video and backpropagate through it manually, without using PyTorch autograd's loss.backward(): through the cross entropy loss, 2nd linear layer, tanh, batchnorm, 1st linear layer, and the embedding table. Along the way, we get a strong intuitive understanding of how gradients flow backwards through the compute graph, at the level of efficient tensors rather than individual scalars as in micrograd. This helps build competence and intuition around how neural nets are optimized, and sets you up to more confidently innovate on and debug modern neural networks.
!!!!!!!!!!!!
I recommend you work through the exercise yourself, but work on it in tandem with the video: whenever you are stuck, unpause the video and see me give away the answer. This video is not really intended to be simply watched. The exercise is here:
colab.research.google.com/dri...
!!!!!!!!!!!!
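(To give a flavor of what the exercise asks for, here is a minimal, self-contained sketch, with illustrative names rather than the notebook's exact code, of manually backpropagating through the tail of the network and checking the result against autograd:)

import torch
import torch.nn.functional as F

# toy shapes standing in for the notebook's setup (batch 32, hidden 64, vocab 27)
B, H, V = 32, 64, 27
hpre = torch.randn(B, H, requires_grad=True)  # pre-tanh activations
W2 = torch.randn(H, V, requires_grad=True)
b2 = torch.randn(V, requires_grad=True)
Yb = torch.randint(0, V, (B,))                # target indices

h = torch.tanh(hpre)
logits = h @ W2 + b2
loss = F.cross_entropy(logits, Yb)
loss.backward()                               # autograd, used only to check our work

with torch.no_grad():                         # manual backward pass, tensor by tensor
    dlogits = F.softmax(logits, 1)
    dlogits[range(B), Yb] -= 1
    dlogits /= B                              # cross entropy backward: (softmax - onehot) / B
    dh = dlogits @ W2.T                       # matmul backward w.r.t. the input
    dW2 = h.T @ dlogits                       # matmul backward w.r.t. the weights
    db2 = dlogits.sum(0)                      # bias was broadcast, so sum over the batch
    dhpre = (1.0 - h**2) * dh                 # tanh backward

print(torch.allclose(dW2, W2.grad), torch.allclose(db2, b2.grad),
      torch.allclose(dhpre, hpre.grad))       # expect: True True True (up to float32 noise)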
Links:
- makemore on github: github.com/karpathy/makemore
- jupyter notebook I built in this video: github.com/karpathy/nn-zero-t...
- colab notebook: colab.research.google.com/dri...
- my website: karpathy.ai
- my twitter: / karpathy
- our Discord channel: / discord
Supplementary links:
- Yes you should understand backprop: / yes-you-should-underst...
- BatchNorm paper: arxiv.org/abs/1502.03167
- Bessel’s Correction: math.oxford.emory.edu/site/mat...
- Bengio et al. 2003 MLP LM www.jmlr.org/papers/volume3/b...
Chapters:
00:00:00 intro: why you should care & fun history
00:07:26 starter code
00:13:01 exercise 1: backproping the atomic compute graph
01:05:17 brief digression: bessel’s correction in batchnorm
01:26:31 exercise 2: cross entropy loss backward pass
01:36:37 exercise 3: batch norm layer backward pass
01:50:02 exercise 4: putting it all together
01:54:24 outro
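
(As a teaser for exercise 3: the entire batchnorm backward pass collapses into a single line. A sketch from memory, assuming the notebook's variable names, where n is the batch size, bnraw is the normalized input, bnvar_inv = (bnvar + 1e-5)**-0.5, and the n/(n-1) factor is the Bessel's correction discussed at 1:05:17:)

# hedged sketch, not guaranteed to match the notebook line for line
dhprebn = bngain * bnvar_inv / n * (n * dhpreact - dhpreact.sum(0) - n / (n - 1) * bnraw * (dhpreact * bnraw).sum(0))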

COMMENTS: 261
@Davourflave • 1 year ago
I can say without a doubt that there are not many highly qualified, passionate experts who are also able to teach their subject. Sharing knowledge in this way is the greatest gift a researcher can give to the world! Everyone else and I thank you for that! :)
@vaguebrownfox • 9 months ago
I saw his previous micrograd lecture and it literally moved me to tears. I had endured the struggle of drowning in pytorch source code, trying to understand what it is that they are really doing! For someone who simply can't move past without cutting open abstractions, this is a pure blessing.
@uniquescience7047 • 4 months ago
@vaguebrownfox Exactly the same for me.
@BradCordovaAI • 1 year ago
Andrej, you are a gifted teacher. I love this teaching style: 1. Start from scratch with a simple, specific model to set the structure and ideology of the problem. 2. Add necessary and motivated complexity to get where we are today. 3. Seamlessly transfer to modern technology (e.g. PyTorch) to solve modern problems. 4. Make it all simple and compress it into the essentials without unnecessary lingo. It reinvigorates my passion for the field. Thank you very much for taking so much time to make this available for free to everyone.
@nohcho_9548 • 1 year ago
Ky .
@kishantripathi4521 • 12 days ago
No words to explain my feelings. Karpathy is just supercalifragilisticexpialidocious.
@cojocarucosmin202 • 1 year ago
Bro, just want to say that for the past 3 years I've been looking everywhere on the Internet for an explanation of backpropagation like this. Found all kinds of things (e.g. Jacobians, differentials) but none actually made sense until today. You are the best; you bring so much value and let others light their candles at your flame.
@kshitijbanerjee6927 • 11 months ago
These lectures are literally GOLD. I'd pay for these, but Andrej is kind enough to give everything for free. I hope others find these gold lectures. Thank you so much for doing this. Please don't lose steam and I hope you continue to create them.
@weystrom • 1 year ago
Man, what a time to be alive. Imagine how hard it would have been to get this kind of information just a couple of decades ago. And now it's free and easily accessible at any convenient time. Thank you, Andrey, truly.
@dohyun0047 • 1 year ago
I am still on part 2, but I had to write this comment: your part 4 thumbnail is awesome and funny. I am very grateful for these lectures. I could feel the artificial intelligence knowledge that was tangled up inside me fall into place because of you.
@kemalware4912 • 1 year ago
I will put your poster on my wall to look at you every day and remember what a great person you are. Your smile is contagious.
@andonisudupe3446 • 1 year ago
Yes, I always wanted to be a backprop ninja; now my dream will come true. Thanks Andrej!
@Sickkkkiddddd • 9 months ago
Bruh, I'd be paying a shit ton of money in tuition for this knowledge if it weren't for your free videos. Thank you so much, man. I cannot believe the ease with which you explain what seemed complex to me from a distance years ago. I cannot even believe I understand this stuff, man.
@parasmaliklive • 1 year ago
Thank you Andrej. I really appreciate your work.
@aaronwill1983 • 1 year ago
Binge worthy! Ran through all the lectures back-to-back after discovering them. On the edge of my seat for more. Thanks Andrej!
@martakosiv6483 • 1 month ago
Thanks for the great content! That's the best explanation I've ever seen! Also, regarding the last backpropagation in exercise 1, I found the following method in pytorch:
dC = torch.zeros_like(C)
dC.index_add_(0, Xb.view(-1), demb.view(-1, demb.shape[2]))
cmp('C', dC, C)
@RebeccaBrunner • 1 year ago
Thank you for providing a series that's so approachable but doesn't shy away from explaining the details. Also love the progression through all the impactful papers
@DiogoSanti • 6 months ago
What a wonderful effort Andrej. Thanks for this!
@efogleman • 1 year ago
This lecture series is excellent. Seriously, some of the best learning resources for Neural Networks available anywhere: up-to-date, and goes deep into the details. These lectures with detailed examples and notebooks are an amazing resource. Thanks so much for this, Andrej.
@Themojii • 10 months ago
Hello Andrej, I truly love that you included exercises in your video. Your suggestion to first attempt the exercises and then watch as you provide the solutions is the most effective way for me personally to grasp the concepts. Thank you for your outstanding work!
@danielkusuma6473 • 1 year ago
Just grateful to have the chance to learn from Andrej Karpathy. Thanks heaps, it means a lot!
@kimiochang • 1 year ago
Finally completed this one. I have to say this lecture is the most valuable one throughout all my studying of deep learning. As always, thank you Andrej for your generosity. Moving on to the next one!
@ThemeParkTeslaCamping360 • 1 year ago
Excellent Andrej!! Can't wait for your next lecture. I'm so excited and motivated 🥰
@tecknowledger • 1 year ago
Thanks for the videos! Please make a lot more! Please continue to share your knowledge with the world! Thanks
@user-oi3be8dm8x • 1 year ago
Thanks for the top-level video. Can't wait to see more. Thanks 🙏
@kaushaljani814 • 8 months ago
Pure gem...💎💎💎 Thanks Andrej for this amazing lecture.
@peterszilvasi752 • 1 year ago
I really appreciate the lectures that you share with us. It is not about definitions, rote memorization, or even exercises per se. Instead, first-principles thinking: take a big "mess" and break it down into small, manageable pieces. You not only demonstrate the problem-solving approaches brilliantly but also ignite curiosity to dig deeper (down to the level of atoms) into a specific topic. Thank you for the preparation, the passion, and the memes! :D
@mohammadhomsee8640 • 6 months ago
That's incredible!!! It's impossible to share such knowledge without a very deep understanding of neural nets. I really appreciate your work. I hope we can get more videos. This is definitely a golden video!!! Thank you so much!
@cangozpinar • 1 year ago
Thank you, thank you, thank you ... What you are doing with these videos is amazing !
@Nimrad780 • 1 year ago
Thank you for "making everything fully explicit"!
@michadaniluk9604 • 1 year ago
Thanks Andrej for your amazing videos. Here is my implementation of finding dC without for loops:
dC = F.one_hot(Xb).float().view(-1, C.shape[0]).T @ demb.view(-1, C.shape[1])
@nikita67493 • 1 year ago
Unfortunately it produces inexact results:
C | exact: False | approximate: True | maxdiff: 9.313225746154785e-10
The for-loop creates an exact match. Another way to do the same is to use Einstein notation (which also gives an inexact result):
dC = torch.einsum("ijk, ijm -> km", F.one_hot(Xb, num_classes=vocab_size).float(), demb)
@gembancud • 1 year ago
This is another implementation, though I don't know if it produces exact results:
dC = torch.zeros_like(C).scatter_add_(0, Xb.view(-1, 1).repeat(1, demb.shape[-1]), demb.view(-1, demb.shape[-1]))
@rohitsathya8099 • 1 month ago
@nikita67493 Why do you want an exact match?
@ColinKiegel • 28 days ago
On my system all these implementations of dC are equivalent and only match approximately (with the same maxdiff: 5.587935447692871e-09), including the for-loop. I also came up with the same "einsum" solution:
Xb_onehot = F.one_hot(Xb, num_classes=vocab_size).float()
dC = torch.einsum('ija, ijb->ab', Xb_onehot, demb)  # shapes: [32, 3, 27] @ [32, 3, 10] -> [27, 10]
@nova2577 • 9 months ago
I spent almost a whole day digesting this video. It's definitely worth it!
@srikika • 1 year ago
Love your channel and content, Andrej... please keep more videos coming!
@kaspiimal3340 • 1 year ago
Andrej, thank you for the work you put into this (and previous) lectures ❤. Thanks to you, I and a lot of other people can enjoy learning NNs 😍 from the best.
@badreddinefarah1127 • 1 year ago
Thanks a lot Andrej, can't wait to see more 🙏🙏
@AlecksSubtil • 4 months ago
Simply the best! Very good lessons, delivered with such mastery and passion. Thanks a lot for sharing.
@hermestrismegistus9142 • 1 year ago
This lecture really makes me appreciate autograd. I commend the ancient ML practitioners for surviving this brutality.
@lagousis • 1 year ago
Thanks for all the time you put into that lecture!
@DanteNoguez • 1 year ago
I was "taught" calculus in high school but didn't really understand anything at all. Now, after seven years with no formal math education at all, I was able to immediately understand this exercise thanks to your lecture on micrograd. You're a brilliant teacher and I'm really grateful for that!
@BlockDesignz • 1 year ago
I come to each of these videos to like them. I can't keep up with his pace of release but I will watch all of them in due time. Thanks Andrej.
@greatfate • 1 year ago
These videos are unironically pretty fun! You're not just a genius researcher but an amazing teacher, Andrej.
@kapitan104 • 1 year ago
Andrej, you are the best teacher. I am 100% sure these lectures will become CORE watching for any student who starts their ML journey. Hope we will have such lectures on CV and RL.
@borismeinardus • 1 year ago
Andrej is providing the world with so much value, be it through his professional work in the industry (e.g. Tesla AI) or through education. He is literally one of the greatest of all time but is so down to earth and such a sweetheart. Thank you very much for your hard work to make it easier for all the rest of us and for inspiring us! 💚
@fbf3628 • 1 year ago
Wow! This lecture is truly incredible and I have certainly learned a ton. Thank you very much, Andrej :)
@jayhyunjo141 • 1 year ago
As a bioinformatician and a part-time data scientist, I have to say this series is the best educational YouTube material on deep neural networks. Thank you for the video and for offering the opportunity to learn.
@muhannadobeidat • 1 year ago
Excellent series and delivery as usual. Thanks for all the hard work you put into this. Parts of it are challenging to get through, but it's a joy to decipher all the moving parts. I think a good understanding of the math behind backprop helps in understanding this. A good resource that covers this from a math perspective is Andrew Ng's original neural net course.
@vivekakaviv • 4 months ago
This was very insightful. Andrej you are the best!
@sauloviedo2677 • 1 year ago
Andrej is on fire! Thanks for this awesome material!
@owendorsey5866 • 1 year ago
This is the first time I truly understood. Thank you!
@ayogheswaran9270 • 1 year ago
Thanks a lot for making this, Andrej!!!
@vulkanosaure • 1 year ago
I just finished part 2 yesterday night, and I was feeling blue that there was only 1 video left! And this came to my notifications; I just had to share my excitement :)))
@rmajdodin • 1 year ago
Thank you Andrej for sharing your experience with us! John Carmack used exactly this learning method, as he told Lex Fridman in his interview. In his "larval stage", he implemented the whole NN machinery, including backpropagation, in C (so really low-level :)), to make sure that he understood how stuff works!
@TonyStark-cp3tj • 5 months ago
Hey Andrej, I don't know if you'll see this, but I just wanted to thank you wholeheartedly for your awesome neural network playlist. It's by far the best and most in-depth content on NNs I've ever come across. I really appreciate you sharing your knowledge with the community. You're the best! Excitedly awaiting more such treasures!
@santoshk.c.1896 • 1 year ago
Thanks a lot Andrej for all these awesome lectures. Please enable auto-generated subtitles for this lecture.
@mehulajax21 • 1 year ago
This is exactly how I work through my coding problems as well. I also have a similar thought process while developing algorithms.
@yagvtt • 7 months ago
That is so useful, thank you very much for this series.
@Raix03 • 2 months ago
I almost completed Exercise 1 all on my own, but I had to step back for a day to refresh the basics because my college algebra was a bit rusty from 10 years of not using it. Exercises 2 and 3 totally overwhelmed me. However, when I follow your explanations, I understand everything. This is huge, because I remember that professors at my college couldn't explain complex concepts so easily. Andrej, you are a gift to this world!
@FrozenArtStudio • 1 year ago
My favorite prof with a new lecture.
@DavidIvan1991 • 22 days ago
Very useful educational videos, thanks for making and sharing them! It's interesting that Andrej also considers the shapes when backpropagating through matrix multiply, just how I came to "memorize" it :)
@art4eigen93 • 9 months ago
It took me days to backprop through this lecture. Phew! Got it now.
@steampunkcircus • 1 year ago
A deluge of knowledge from you so often it's ridiculous. I'm absolutely certain you're a robot. Anyhow, Ninjas are awesome. Wax on Sensei!
@JTMoustache • 1 year ago
Love that he explains MATLAB as if it is not still used in 80% of labs in the world. Living in a world of tech giants will heal the MATLAB PTSD. This is a masterclass - I've never seen it explained so thoroughly and clearly, and I've been around. PEAK EXPERTISE
@TheOrowa • 1 year ago
I believe the loop implementing the final derivative at 1:24:21 can be vectorized if you just rewrite the selection operation as a matrix operation, then do a matmul derivative like done elsewhere in the video:
X_e = F.one_hot(Xb, num_classes=27).float()  # convert the selection into a selection matrix (emb = C[Xb] is equivalent to X_e @ C)
dC = (X_e.permute(0, 2, 1) @ demb).sum(0)  # differentiate like any other matrix operation (dC = X_e.T @ demb, with indices to track the batch dimensions)
@barni_7762 • 10 months ago
Imo it's cleaner if you do this instead:
Xe = F.one_hot(Xb.flatten(), num_classes=27).float().permute(1, 0)
dC = Xe @ demb.view((-1, demb.shape[2]))
I think this method is more understandable because it uses a 2D matmul...
@arashrouhani5388 • 9 months ago
@barni_7762 Thanks, it seems to have worked for me.
@user-gk8ri6ww7e • 9 months ago
Very good point on the fact that C[Xb] is equivalent to X_e @ C. It makes things much clearer. I came to the same solution, but from the bottom up, experimenting with single records and imagining what I want to get. The final solution is:
dC = (torch.nn.functional.one_hot(Xb, num_classes=C.shape[0]).float().swapaxes(-1, -2) @ demb).sum(0)
and one can investigate what is going on for a single batch element:
torch.nn.functional.one_hot(Xb[0], num_classes=C.shape[0]).T.float() @ demb[0]
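(A quick way to convince yourself of that equivalence, in the notebook's setting where C is [27, 10] and Xb is [32, 3]:)
X_e = F.one_hot(Xb, num_classes=C.shape[0]).float()  # [32, 3, 27]
assert torch.allclose(C[Xb], X_e @ C)                # row indexing == one-hot times the matrix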
@inar.timiryasov • 6 months ago
dC = torch.einsum('abc,abg->cg', F.one_hot(Xb, vocab_size).float(), demb)
@amogha7332 • 2 months ago
@barni_7762 Very clean solution, this is what I did too!
@arjunsinghyadav4273 • 1 year ago
Sprinkling Andrej magic throughout the video - had me cracking up at 43:40.
@joneskiller8 • 2 months ago
This dude is based! I can actually cognitively map and visualize his explanations, and I am so grateful to have found him. Keep the videos coming please, and thank you so much.
@muhammadbaqir3736 • 1 year ago
01:25:00 Here is a better implementation of the code:
dC = torch.zeros_like(C)
dC.index_add_(0, Xb.view(-1), demb.view(-1, 10))
Thanks to ChatGPT :)
@markr9640 • 1 year ago
Just Brilliant!
@jonathanr4242 • 1 year ago
Very nice. Thank you, Andrej.
@sevarbg83 • 10 months ago
Have mercy Andrej, my brain hurts! :D Feels like I'll need years to digest just these few lectures.
@stephennfernandes • 1 year ago
Excellent content Andrej
@kl_moon • 6 months ago
Thank you so much for this lecture!!!! It actually made my day.
@anrilombard1121 • 1 year ago
Patiently waiting for part 5 :)
@MrEmbrance • 1 year ago
Can't wait for the next video
@tecknowledger • 1 year ago
Thanks Andrej! I feel like a buff doge! Just understood and backpropped ~80% of the video and colab code from this video (downloaded and did the exercises)! Colab kept occasionally throwing errors; it worked fine on local Jupyter.
@yoonhero3701 • 1 year ago
That's awesome! Thank you for your passion. I'd like to be like you someday :)
@cthzierp5830 • 8 months ago
Thank you very much for an amazing series! The logit backprop derivation can be simplified a bit by realizing that log(f/g) is log f - log g. The second term is log Sum; its derivative is 1/Sum times dSum/dxi, which immediately yields the activation output. The first term is the log of an exponent; these cancel, and the result has a trivial derivative of 0 or -1 when the index isn't/is the correct answer. This neatly shows that the derivative is "softmax output minus correct answer".
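(Spelled out in symbols: for logits x and correct class y, the per-example loss is

\ell = -\log \frac{e^{x_y}}{\sum_j e^{x_j}} = -x_y + \log \sum_j e^{x_j},

so differentiating with respect to x_i gives

\frac{\partial \ell}{\partial x_i} = \frac{e^{x_i}}{\sum_j e^{x_j}} - \mathbf{1}[i = y] = \mathrm{softmax}(x)_i - \mathbf{1}[i = y],

i.e. exactly "softmax output minus correct answer".)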
@veeramahendranathreddygang1086 • 1 year ago
Awesome. Thank you.
@amortalbeing • 1 year ago
great job.
@sam.rodriguez • 8 months ago
You can love people you don't know. I love you Andrej.
@itsm0saan • 1 year ago
Thank you so much for the lecture ;)
@KadeemSometimes • 1 year ago
You are a hero!
@afsarequebal • 1 year ago
Really grateful, thanks a lot.
@frippRulez • 1 month ago
This one kicked my ass! The way of the ninja is not an easy path, but I really enjoyed it; it was amazing to start solving it myself as the lecture progressed. Maybe this is the future of education.
@user-vn3vd6wq7n • 10 months ago
this is a masterpiece
@juanolano2818 • 1 year ago
"...assuming that PyTorch is correct..." hahahaha. Not only a great lecture, but also one with very funny nuggets. Thank you!
@yunhuaji3038 • 1 year ago
Hi Andrej, congrats on your "new" journey at OpenAI. Thank you very much for this series. It's extremely helpful and arguably the best learning material to go through for deep learning. I've always been looking for something like this series. It solidly deepens my understanding of neural networks even though I have been playing with them for a while. Will you continue this series after you're back at OpenAI? I look forward to seeing your future work & contribution to this community, to the following generations, and to the world.
@arielfayol7198 • 11 months ago
Please don't stop the series 😢
@lwtwl • 11 months ago
Btw, the "low-budget" gray block mask at the end is very creative :D
@thasinatabashum6853 • 10 months ago
I'm a 3rd-year Ph.D. student. I started my Ph.D. right after my undergrad, and back then I had very little idea how all the calculations in neural networks actually happen. Over the last three years, to learn about neural nets, I have watched lots of videos, attended lectures, completed a summer camp and courses, and read books, papers, and blogs. But undoubtedly this is the best lecture on backprop! Thank you!
@CoolWorm13 • 10 days ago
What uni are you studying at?
@ronaldlegere • 10 months ago
This is one of the most valuable videos I have come across for building strong intuition about what is going on in backpropagation. BTW, my solution for dC:
dC = torch.einsum('bij,bik -> jk', F.one_hot(Xb, vocab_size).float(), demb)
Gotta love einsum :)
@seanconnollymv • 1 year ago
Huge fan of your videos, Andrej! I'll admit I've had to pause and watch them all twice or more, but they are so useful! Thank you! I was really excited when you started down the path of RNNs and LSTMs in your video, only to find you had other plans for us! Is there an ETA on RNN and LSTM videos? Possibly even a GAN tutorial? Again, thank you so much for these videos; they are so helpful, and your ability to teach is phenomenal.
@anrilombard1121 • 1 year ago
Can't wait to come watch this when school holiday starts!
@anrilombard1121 • 1 year ago
13 days later: here I am!
@user-co6pu8zv3v • 1 year ago
Thank you!
@nirajs • 1 year ago
Such a great video for really understanding the details under the hood! And lol at the momentary disappointment at 1:16:20, just before realizing the calculation wasn't complete yet 😂
@reubenthomas1033 • 1 year ago
awesome!
@mdrayedbinwahed7126 • 1 year ago
What a lecture! My god, was it awesome.
@b0nce • 1 year ago
Thank you so much :) It was a bit tough but a very interesting task.
P.S.: 1:25:47 dC can be done with dC.index_add_(0, Xb.view(-1), demb.view(-1, 10)) ;)
@AndrejKarpathy • 1 year ago
very cool, nice find, didn't know about index_add_, ty :)
@ArvidLunnemark • 1 year ago
I arrived at a very similar solution, but I didn't know about index_add_. Instead you can do:
Xb_onehot = F.one_hot(Xb.view(-1), num_classes=C.shape[0]).float()
dC = Xb_onehot.T @ demb.view(-1, C.shape[1])
ty for the video :)
@oferyehuda6131 • 1 year ago
It can also be done with torch.einsum without the reshaping (but it's a little more confusing).
@danieljaszczyszczykoeczews2616 • 1 year ago
I've done it with a basic approach:
dC = torch.zeros_like(C)  # [27, 10]
for i, iemb in zip(Xb.view(-1).tolist(), demb.view(-1, n_embd)):  # zip of [96] and [96, 10]
    dC[i] += iemb
@KibberShuriq • 1 year ago
@ArvidLunnemark Instead of Xb.view(-1), one could also use Xb.flatten(), which is a bit more straightforward to interpret (and I believe it is just a wrapper for view() internally anyway).
@beathoven70 • 1 year ago
I'm so glad even Andrej forgets how the logits = h @ W2 + b2 backprop works by heart. I've really struggled to remember that as well, and used the same "hack": just look at the sizes of the matrices and, knowing what dimensions you need to get out, simply transpose the matrices accordingly, hahaha.
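(For reference, the shape-matching trick written out for the second linear layer, with h: [B, H], W2: [H, V], b2: [V] and upstream gradient dlogits: [B, V]; the transposes below are the only ones that make the shapes work out:)
dh = dlogits @ W2.T    # [B, V] @ [V, H] -> [B, H], matches h
dW2 = h.T @ dlogits    # [H, B] @ [B, V] -> [H, V], matches W2
db2 = dlogits.sum(0)   # b2 was broadcast over the batch, so its gradient sums over it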
@SahilKhan-yu3oh • 1 year ago
Thank you so much, sir.
@parmarsuraj99 • 1 year ago
Love the thumbnail!
@AndyPynch • 1 year ago
LETS GO PART 4 BABY!!!
@atabakp • 1 year ago
Thanks for the great series! What is the best practice for avoiding zeros in denominators, in terms of backpropagation? 1. Add a tiny value to the denominator? 2. Replace the zeros with a tiny value, i.e. max(denom, eps)?
@nickgannon7466 • 1 year ago
Hi Andrej, thanks so much for putting out these lessons; they're absolutely phenomenal. Outside of the videos you're creating, what other resources would you recommend for someone who is interested in pursuing a career in deep learning?