10.14: Neural Networks: Backpropagation Part 1 - The Nature of Code

185,028 views

The Coding Train

6 years ago

In this video, I discuss the backpropagation algorithm as it relates to supervised learning and neural networks.
Next Video: • 10.15: Neural Networks...
This video is part of Chapter 10 of The Nature of Code (natureofcode.com/book/chapter-...)
This video is also part of session 4 of my Spring 2017 ITP "Intelligence and Learning" course (github.com/shiffman/NOC-S17-2...)
Support this channel on Patreon: / codingtrain
To buy Coding Train merchandise: www.designbyhumans.com/shop/c...
To donate to the Processing Foundation: processingfoundation.org/
Send me your questions and coding challenges!: github.com/CodingTrain/Rainbo...
Contact:
Twitter: / shiffman
The Coding Train website: thecodingtrain.com/
Links discussed in this video:
The Coding Train on Amazon: www.amazon.com/shop/thecoding...
Deeplearn.js: deeplearnjs.org/
Sigmoid function on Wikipedia: en.wikipedia.org/wiki/Sigmoid...
Videos mentioned in this video:
My Neural Networks series: • 10: Neural Networks - ...
3Blue1Brown Neural Networks playlist: • Neural networks
3Blue1Brown's Linear Algebra playlist: • Essence of linear algebra
Gradient Descent by 3Blue1Brown: • Gradient descent, how ...
My Video on Gradient Descent: • 3.5: Mathematics of Gr...
Source Code for all the Video Lessons: github.com/CodingTrain/Rainbo...
p5.js: p5js.org/
Processing: processing.org
The Nature of Code playlist: ukposts.info...
For More Coding Challenges: • Coding Challenges
For More Intelligence and Learning: • Intelligence and Learning
📄 Code of Conduct: github.com/CodingTrain/Code-o...

COMMENTS: 213
@programmingsoftwaredesign7887 3 years ago
You are very good at breaking things down. I've been through a few videos trying to understand how to code my backpropagation. You are the first one to really give a visual of what it's doing at each level for my little math mind.
@abhinavarg 4 years ago
Sir, I don't even know how to express my joy after hearing this from you. Nicely done!!!
@Meuporgman 6 years ago
Thanks Daniel for your involvement in this series and in all the others! You're probably the best programming teacher on YouTube; we can see that you put a lot of effort into making us understand all the concepts you go through! Much love
@onionike4198 4 years ago
That was an excellent stroll through the topic. I feel like I can implement it in code now; it was one of the few hang-ups for me. Thank you very much 😁
@Twitchi 6 years ago
The whole series has been amazing, but I particularly enjoy these theory breakdowns :D
@phishhammer9733 5 years ago
These videos are immensely helpful and informative. You have a very clear way of explaining concepts, and think through problems in an intuitive manner. Also, I like your shirt in this video.
@isaacmares5590 5 years ago
You are the master of explaining complicated concepts effectively... my dog sitting next to me now understands backpropagation of neural networks better than roll over.
@phumlanigumedze9762 2 years ago
Amazing
@TheRainHarvester 1 year ago
I just made a video without subscripts to explain multi-hidden-layer backpropagation. It's easy to understand without so many sub/superscripts.
@pharmacist66 4 years ago
Whenever I don't understand something I immediately come to your channel because I *know* you will make me understand it
@vaibhavsingh1049 5 years ago
I'm on day 3 of trying to understand backpropagation; you made me cry "Finally".
@mrshurukan 6 years ago
Incredible, as always! Thank you so much for this Neural Network series; they are very interesting and helpful
@narenderrawal 2 years ago
Thanks for all the effort you put together in helping us understand. Best I've come across so far.
@sanchitverma2892 4 years ago
Wow, I'm actually impressed you managed to make me understand all of that
@TopchetoEU 3 years ago
I'm quite honestly impressed by the simplicity of the explanation you gave. Right now I'm trying to get started with AI but could not find a good explanation of backpropagation. That is, until I found your tutorial. The only thing I didn't like is that this tutorial doesn't include any bias-related information. Regardless, this tutorial is simply great.
@MircoHeinzel 5 years ago
You're such a good teacher! It's fun to watch your high-quality videos!
@manavverma4836 6 years ago
Man, you are awesome. Someday I'll be able to understand this.
@drdeath2667 4 years ago
do u now? :D
@prashantdwivedi9073 4 years ago
@@drdeath2667 🤣🤣😂
@billykotsos4642 3 years ago
The day for me to understand is today.... 2 years later!!!!!
@jxl721 2 years ago
do you understand it now :)
@kaishang6406 2 years ago
has the day come yet?
@dayanrodriguez1392 1 year ago
I always love your honesty and sense of humor
@SigSelect 4 years ago
I read quite a few comprehensive tutorials on backprop with the full derivation of all the calculus, yet this was the first source I found which explicitly highlighted the method for finding the error term in layers preceding the output layer, which is a huge component of the overall algorithm! Good job for sniffing that out as something worth making clear!
@TheCodingTrain 4 years ago
Thanks for the nice feedback!
@gnorts_mr_alien 1 year ago
Exactly. I watched at least 20 videos on backprop, but this one finally made sense.
@sidanthdayal8620 6 years ago
When I start working I am going to support this channel on Patreon. Helped me so much.
@artyomchernyaev730 3 years ago
Did u start working?
@roshanpawara8717 6 years ago
I'm glad that you came up with this series of videos on Neural Networks. It has inspired me to choose this as a domain to work on as a mini project for this semester. Love you. Big fan. God bless!! :-)
@Ezio-Auditore94 6 years ago
I love YouTube University
@giocaste619 4 years ago
Nicolas Licastro io o ok oo Olivetti
@matig 5 years ago
Even though we speak different languages you are a thousand times clearer than my teacher. Thanks a lot for this, you are the best
@lucrezian2024 5 years ago
I swear this is the one video which made me understand the delta of weights!!! THANK YOU!!!!!
@artania06 6 years ago
Awesome video! Keep it up :) I love your way of teaching code with happiness
@SistOPe 5 years ago
Bro, I admire you so much! Someday I wanna teach algorithms the way you do! Thanks a lot, greetings from Ecuador :)
@drugziro2275 5 years ago
I am studying these things in Korea. Before I saw this lecture I couldn't keep up with my classes, but now I can show my professor a smile, not a frustrated face. So..... thanks for being my light.
@YashPatel-fo2ec 5 years ago
What a detailed and simple explanation. Thank you so much.
@kae4881 3 years ago
Dude. Best. Explanation. Ever. Straight Facts. EXCELLENT DAN. You, sir, are a legend.
@robv3872 1 year ago
I commend you for great videos and such an honest video! You are a great person! Thank you for the effort you put into this content; you are helping people and playing a big part in us solving important problems throughout the future. I just commend you for being a great person, which comes out in this video!
@nemis123 1 year ago
After watching all of YouTube I still had no idea what backpropagation was; thankfully I found yours.
@aakash10975 5 years ago
The best explanation of backpropagation I've ever seen
@AI-AF-70 11 months ago
Nice! Simplified and right to the heart of the ideas!! Thanks!!! Almost done with just this part 1, but I have no doubt the rest of the series will be great!
@TheCodingTrain 11 months ago
Keep me posted!
@mzsdfd 3 years ago
That was amazing!! You explained it very well.
@CrystalMusicProductions 4 years ago
I have used backpropagation in my NNs in the past but I never knew how the math works. Thank you so much ❤❤ I finally understand this weird stuff
@lornebarnaby7476 5 years ago
Brilliant series, have been following the whole thing, but I am writing it in Go
@kalebbruwer 4 years ago
Thanks, man! This makes it easier to debug code I wrote months ago that still doesn't work, because this is NOT what I did
@magnuswootton6181 2 years ago
Really awesome doing this lesson; everywhere else is cryptic as hell on this subject!!!
@znb5873 3 years ago
Man, watching your video after 3Blue1Brown's series on back-propagation is a breeze. Thanks for sharing!
@IgorSantarek 6 years ago
You're doing a great job! Keep it up!
@moganesanm973 8 months ago
The best teacher I've ever seen ☺
@santiagocalvo 2 years ago
You should stop selling yourself short. I've seen dozens of videos on this exact subject because I've struggled a lot trying to understand backprop, and I have to tell you this might be the best one I've seen so far. Great work!! Keep it up!!
@ulrichwake1656 5 years ago
Good video, man. It really helps a lot. You explain it clearly. Thank you very much
@kustomweb 6 years ago
Excellent series
@ksideth 1 year ago
Many thanks for simplifying.
@atharvapagare7188 6 years ago
Thank you, I am finally able to grasp this concept slowly
@sanchitverma2892 4 years ago
hello
@roger109z 5 years ago
Thank you so much. I watched the 3Blue1Brown videos and read a few books, but this never clicked for me; watching you made it click.
@kosmic000 5 years ago
Amazing vid as always, Dan. Very informative
@skywalkerdk01 5 years ago
Awesome video. Thank you for this! First time I understand backpropagation. +1
@chiragshahckshhh9696 6 years ago
Amazing explanation!!
@priyasingh9984 3 years ago
Awesome, you taught this so well and kept a tough topic interesting all the way through
@afonsorafael2728 6 years ago
Love from Portugal! Nice video!
@samsricatjaidee405 1 year ago
Thank you. This is very clear.
@codemaster1768 2 years ago
This concept has been taught far better than by my university professors.
@GurpreetSingh-th1di 6 years ago
The kind of video I want, thanks
@jiwonkim5315 5 years ago
You probably know already but you are amazing 💕
@lucaslopesf 6 years ago
I finally understand! It's so simple
@shimronalakkal523 2 years ago
Oh yeah. Thank you so much. This one helped a lot.
@lirongsun5848 4 years ago
Best teacher ever
@qkloh6804 4 years ago
3Blue1Brown + this video is all we need. Great content as always.
@unnikked 6 years ago
Let me tell you that you are an amazing teacher! ;)
@udayanbirajdar6530 6 years ago
Awesome!! Loved it!!
@luvtv7433 3 years ago
You know what would be nice? If you could teach algorithms on graphs using matrices. I feel that helped me a lot to practice and understand the importance of matrices in other topics, including neural networks. Some exercises are to find whether two graphs are isomorphic, find cycles through a vertex, check whether a graph is complete, planar, or bipartite, and find a tree and paths using matrices. I am not sure, but that might be called spectral graph theory.
@islamulhadi9816 6 years ago
Awesome video! Thanks for the tutorials, keep it up :)
@reddyvarinaresh7924 4 years ago
Awesome videos and teaching
@ganeshhananda 6 years ago
A really awesome explanation which can be understood by a mere human being like me ;)
@nagesh007 1 year ago
Awesome tutorial
@sz7063 5 years ago
It is amazing! When will you teach us about recurrent neural networks and LSTMs?? Looking forward to that!!
@jameshale9093 3 years ago
Very nice video; thank you!
@hamitdes7865 4 years ago
Sir, thank you for teaching backpropagation 😊😊
@annidamf 1 year ago
Thank you very much for your videos. Incredibly helpful!! :D
@c1231166 6 years ago
Would you mind making a video about how you learn things? Because it seems to me you can learn basically everything and be thorough about it. This is a skill I would like to have.
@dolevgo8535 5 years ago
When you try to study something, just practice it. Learning how to create a neural network? Sweet, try to create one yourself while doing so. There's actually no way that you'd do it perfectly, and you WILL come back to where you study from, or google things that popped into your head that you started wondering about, and that is how you become thorough about these things. It's just mostly about curiosity and practicing :)
@yolomein415 5 years ago
Find a book for beginners, look at the reviews, buy the book, read it, try it out, watch YouTube videos, google your questions (if not answered, ask on Stack Overflow)
@user-bf3lt6vi5m 4 years ago
Thank you for making this video. It is very useful.
@carlosromerogarcia1769 1 year ago
Daniel, I have a little doubt. When I see the weights here I think of the Markowitz portfolio model, and I wonder if the sum of the weights in neural networks should be one: w1 + w2 + w3 + ... + wn = 1. Do you know if in Keras it's possible to compute this type of constraint... just to experiment. Thank you, I love your videos
@SpiritsBB 4 years ago
Great Video
@wakeatmethree4023 6 years ago
Hey Dan! You might want to check out computational graphs as a way of explaining backpropagation (Colah's blog post on computational graphs and Andrew Ng's video on computational graphs/derivatives as well).
@TheCodingTrain 6 years ago
Thank you for this, I will take a look!
@phumlanigumedze9762 2 years ago
@@TheCodingTrain humility appreciated, thank God
@KeygaLP 6 years ago
This really helped me out so much! Love your work :D
@snackbob100 4 years ago
Are all errors across all layers calculated first, and then gradient descent is done? Or are they done in concert with each other?
@andreujuanc 4 years ago
BRILLIANT !!!!!!!!!!!!!!!!!!!!!!!!!
@marcosvolpato8135 6 years ago
Do we have to update all the weights before we calculate all the errors, or first calculate all the errors and then update all the weights?
@minipy3164 3 years ago
When you are about to give up on neural networks and you see this awesome video 😍😍😍😍😍
@SetTheCurve 5 years ago
I would love it if you told us how to include activation in these calculations, because in this example you're only including weights. A high activation and low weight can have the same impact on error as a low activation and high weight.
@12mkamran 5 years ago
How would you deal with the fact that in some cases the error of h1 and h2 may be 0? Do you not adjust it, or is there a bias associated with it as well? Thanks
@tahiriqbal8543 3 years ago
superb
@michaelelkin9542 3 years ago
I think you answered my question, but to be sure: is backward propagation only 1 layer at a time? As in, you calculate the errors in the weights to the final layer and then act as if the last layer went away, then use the new expected values you just computed to adjust the previous layer's weights, and so on. The key is that you do not simultaneously adjust all weights in all layers, just one layer at a time. Seems like a very simple question, but I have never found a clear answer. Thank you.
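For readers with the same question as the ordering questions above: in the textbook ordering (a sketch only, in plain JavaScript rather than the library built later in this series; the 2-2-1 network shape and all numbers are made up), every layer's error is computed first, working backwards from the output using the current weights, and only then are the weights of all layers adjusted:

const sigmoid = v => 1 / (1 + Math.exp(-v));

const x = [1, 0];                          // inputs
let wIH = [[0.5, -0.3], [0.8, 0.2]];       // input -> hidden weights, row = hidden node
let wHO = [0.2, 0.1];                      // hidden -> output weights
const target = 1;
const lr = 0.1;                            // learning rate

// forward pass
const h = wIH.map(row => sigmoid(row[0] * x[0] + row[1] * x[1]));
const y = sigmoid(wHO[0] * h[0] + wHO[1] * h[1]);

// backward pass, step 1: all errors, computed from the current (old) weights
const eOut = target - y;
const eHidden = wHO.map(w => w * eOut);    // hidden errors via the old wHO

// backward pass, step 2: only now nudge the weights of BOTH layers
wHO = wHO.map((w, i) => w + lr * eOut * y * (1 - y) * h[i]);
wIH = wIH.map((row, i) =>
  row.map((w, j) => w + lr * eHidden[i] * h[i] * (1 - h[i]) * x[j]));

console.log({ y, eOut, eHidden });

So the error does flow backwards one layer at a time, but the weight updates for a single training example are all based on errors computed before anything was changed.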
@zhimingkoh1029 3 years ago
Thank you so much (:
@BrettClimb 4 years ago
I feel like the derivative of the activation function is also part of the equation for calculating the error of the hidden nodes, but maybe it's unnecessary if you aren't using an activation function?
@jyothishmohan5613 4 years ago
Why do we need to do backpropagation to all the hidden layers, but only to the previous layer for the output?
@ignaciomosca2140 1 year ago
👏 very useful!
@zareenosamakhan9780 3 years ago
Hi, can you please explain backpropagation with the cross-entropy loss?
@justassaltyasthesea5533 6 years ago
Does the Coding Train have a coding challenge about missiles slowly turning towards their target, trying to intercept them? And maybe instead of flying to where the target is, the missile uses some advanced navigation? On Wikipedia there is Proportional Navigation, where they talk about a LOS rate. I think this would be a nice coding challenge, but where do I suggest it?
@ashfaqniaz3953 3 years ago
Love you man, you are superb
@mikaelrindmyr 4 years ago
What if one of the weights is negative? Is it the same formula when you calculate the magnitude of error-Hidden-1, or should I use Math.abs on all the denominators? Like Weight-1 = -5, Weight-2 = 5, error = 0.5; then it should look like this, right? Error-Hidden-1 = error * ( W1 / ( Math.abs(W1) + Math.abs(W2) ) ) Ty
@mikaelrindmyr 4 years ago
// mike from Sweden
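One way to see why the sign worry (and the division-by-zero worry raised further down) goes away in practice, sketched with made-up numbers: the weight-fraction formula on the whiteboard is an intuition-building device, while the gradient-derived rule (which, if I recall the later matrix videos correctly, is what the implementation ends up using) just multiplies the output error by each weight, so a negative weight simply flips the sign of the blame and nothing is divided or passed through Math.abs:

const eOut = 0.5, w1 = -5, w2 = 5;

// whiteboard fraction: w1 + w2 = 0 here, so the share blows up
const eH1_fraction = eOut * (w1 / (w1 + w2));   // -Infinity

// weighted form: no denominator, the weight's sign carries through
const eH1_weighted = eOut * w1;                 // -2.5
const eH2_weighted = eOut * w2;                 //  2.5
console.log(eH1_fraction, eH1_weighted, eH2_weighted);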
@masterstroggo 6 years ago
Daniel Shiffman, I'm following along with these tutorials and re-creating the neural network in Processing 3. I know you're a bit of a wiz on that topic so I thought I'd ask you about it. I got a working prototype of the neural network up and running in Processing, even though I had to do some workarounds and compromises. One issue I've run into, however, is that I cannot seem to figure out how to use static variables or methods in Processing. Is this not implemented? None of the standard Java ways of doing it work in the Processing environment. I've tried the same code snippets in other, more standard Java environments and they work there.
@masterstroggo 6 years ago
It's kind of solved. I found out that Processing wraps all code in a class, which means that all user-created classes are treated like inner classes, and from what I understand Java does not support static members in inner classes unless the enclosing class (the whole Processing sketch in this case) is static. I've found workarounds for that, but I thought I'd share my findings.
@TheCodingTrain 6 years ago
Yes, this is indeed the case! Let me know how I can help otherwise.
@sathyanirmanifernando 3 years ago
Great!!!!!!
@FilippoMomesso 6 years ago
In "Make Your Own Neural Network" by Tariq Rashid the weight notation is reversed. For example, in the book the weight of the connection between input node x1 and hidden node h2 is written as w1,2, but in your videos it is w2,1. Which one is more correct? Or is it only a convention?
@TheCodingTrain 6 years ago
I would also like to know the answer to this question. But I am specifically using w(2,1) since it shows up in row 2 and column 1 in the matrix. And I believe rows x columns is the convention for linear algebra stuff?
@volfegan 6 years ago
The notation is a convention. As long as you keep using the same notation system, it should not be a problem. Let mathematicians argue about the details. Engineers just have to make the stuff work.
@FilippoMomesso 6 years ago
The Coding Train ok, I just asked my math professor. He said the right convention is (row, column). I read the section of the book where it talks about matrices again. The author contradicts himself. On page 52 he says "it is convention to use rows first, then columns", but then, when he applies matrices to weight notation, he does the opposite. His notation is W(i,h) (i for the number of the input node and h for the number of the hidden node), but it is column, row. Your notation is W(h,i), with the right convention for matrices, row, column. So in the end, using one notation or the other is the exact same thing, because weight w(1,2) in the book is w(2,1) in your videos. Hope I've been clear enough :-P
@jonastjepkema 6 years ago
Tariq Rashid actually doesn't use the conventional matrix notation because he looks at it layer by layer, not as a matrix with rows and columns: he writes w11 meaning "weight from the first neuron of the first layer to the first neuron of the second layer". And he goes on so that the weights leaving one neuron share the same first number, which is his own way of representing this. Both work though; as someone said before me, it's just notation. He just doesn't look at it as a matrix (which unfortunately makes the matrix notation for calculating the outputs less readable). Hope I managed to make myself clear hahaha
@TheRainHarvester 1 year ago
The reason for the swapped position is so that when multiplying matrices, the notation is correct for multiplying: M2x3 × M3x2, where the 3s need to be in those inner positions next to the times symbol, ×.
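To make the (row, column) reading concrete, here is a tiny sketch with made-up numbers of why indexing a weight as w(destination, source), so that w(2,1) sits in row 2, column 1, lines up directly with the matrix-times-vector forward pass:

// W[i][j] = weight from input node j INTO hidden node i (row = destination)
const W = [
  [0.9, 0.3],   // w(1,1), w(1,2): weights into hidden node 1
  [0.2, 0.8],   // w(2,1), w(2,2): weights into hidden node 2
];
const input = [1.0, 0.5];

// hidden = W · input  (each row dotted with the input vector)
const hidden = W.map(row => row.reduce((sum, w, j) => sum + w * input[j], 0));
console.log(hidden); // [1.05, 0.6]

With the book's reversed (source, destination) labels the same product needs a transpose first, which is why both conventions work as long as they are used consistently.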
@pythondoesstuff2969 3 years ago
What if there are more neurons in the hidden layer? How do we calculate the error then?
@sirdondaniel 6 years ago
Hi, Daniel. At 8:44, I have to point something out: you said that h1 is 67% responsible for the error because it has W1 = 0.2, which is 2 times bigger than W2. Well, I think that is false. If in this particular case the value stored in h1 is 0, then nothing is coming from it, and h2 is responsible for the entire 0.7 output with its W2 = 0.1. Check 5:14 of "What is backpropagation really doing? | Chapter 3, deep learning" from 3Blue1Brown. I'm not 100% sure I'm right. Anyway, you are doing a really good job with this series. I've watched some videos about this topic on Pluralsight, but the way you explain it makes way more sense than over there. I really look forward to seeing you implement the digit recognition thing. If you need some assistance, please don't hesitate to message me.
@TheCodingTrain 6 years ago
Thank you for this feedback, I will rewatch the 3blue1brown video now!
@sirdondaniel 6 years ago
I've watched the entire playlist and I saw that you actually take care of the node's value (x) in the ΔW equation. These error equations that you present in this video are just a technique for spreading the error inside the NN. They are also present in "Make Your Own Neural Network" by Tariq Rashid, so they should be right :)
@quickdudley 6 years ago
I actually made the same mistake when I implemented a neural network the first time. Surprisingly, it actually worked, but needed a lot more hidden nodes than it would have if I'd done it right.
@sirdondaniel 6 years ago
Wait... which mistake do you mean, Jeremy?
@landsgevaer 5 years ago
Yeah, I noticed this too! Although understandable, the explanation is wrong. Also, what if w1 and w2 cancel, that is, w1 + w2 = 0? Then the suggested formulas lead to division by zero, so infinite adjustments. I find it more intuitive to consider every weight and bias as a parameter. Then you look at what happens to the NN's final output when you change any such parameter by a small (infinitesimal) amount, keeping all others constant. If you know delta_parameter and the corresponding delta_output, you know the derivative of the output with respect to the parameter, equal to delta_output/delta_parameter. Gradient descent then dictates that you nudge the parameter in proportion to that derivative (times error times learning rate). Finally, the derivative can be expanded using the chain rule to include the effects of all the intermediate weights and sigmoids separately. Backpropagation is "merely" a clever trick to keep track of these products of derivatives. Apart from that, kudos for this great video series!
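As a concrete instance of the chain-rule bookkeeping described in the comment above (assuming the usual squared-error cost E = ½(target − y)², and a sigmoid output y = σ(z) with z = Σ wᵢhᵢ as in the video):

∂E/∂wᵢ = ∂E/∂y · ∂y/∂z · ∂z/∂wᵢ = −(target − y) · y(1 − y) · hᵢ

so gradient descent nudges each weight by Δwᵢ = −lr · ∂E/∂wᵢ = lr · error · y(1 − y) · hᵢ, i.e. the familiar "error times sigmoid derivative times input" product, and backpropagation is the bookkeeping that reuses the shared factors of this product when the same expansion is carried on into earlier layers.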
@esu7116 4 years ago
Are the cost function and the error the same thing?
@OneShot_cest_mieux 6 years ago
Thank you so much ^^ I have a question: we divide by the sum of weights, but what if the sum of weights is equal to zero? And are your weights between 0 and 1 or between -1 and 1?
@oooBASTIooo 6 years ago
gabriel dhimoila A weight of 0 would mean that there is no edge between the vertices. So if the sum were 0, the output vertex wouldn't be connected to the graph at all and you couldn't measure anything there... What he does is assign every edge its portion of the area by using the arithmetic mean.
@OneShot_cest_mieux 6 years ago
I don't understand what you mean by "graph", "edge" and "area", but if weights are between -1 and 1, or if weights are initialized to 0, it's probable that the program has to make a division by 0. Sorry for my bad English, I am French
@alexanderoldenburg 6 years ago
Keep it up! :D
@merlinak1878 5 years ago
Question: If w2 is 0.1 and it gets tuned by 1/3 of 0.3, the new weight of w2 is 0.2. And now the error of that is new w2 - old w2. So the error of hidden2 is 0.1? Is that correct? And do I need a learning rate for that?
@amanmahendroo1784 5 years ago
That seems correct. And you do need a learning rate, because the formula dy/dx = ∆y/∆x is only accurate for small changes (i.e. small ∆x). Good luck!
@merlinak1878 5 years ago
Aman Mahendroo ok thank you
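Working through the numbers in this thread under the video's proportional rule (taking w1 = 0.2 so that w2 = 0.1 really does get one third of the blame, and an output error of 0.3, as implied by the question): the error assigned to hidden node 2 is (w2 / (w1 + w2)) · 0.3 = (0.1 / 0.3) · 0.3 = 0.1. That 0.1 is hidden node 2's error, which is then used to adjust the weights coming into hidden node 2; it is not a change applied directly to w2, so the new w2 is not simply 0.1 + 0.1 = 0.2. The actual nudge to any weight is its error term scaled by the learning rate (and, in the fuller formula of the next videos, by the sigmoid derivative and the node's input), so with a learning rate of 0.1 the step is far smaller than 0.1.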
10.15: Neural Networks: Backpropagation Part 2 - The Nature of Code
19:17
The Coding Train
100K views
The Absolutely Simplest Neural Network Backpropagation Example
9:22
Mikael Laine
144K views
Dendrites: Why Biological Neurons Are Deep Neural Networks
25:28
Artem Kirsanov
212K views
10.6: Neural Networks: Matrix Math Part 1 - The Nature of Code
18:13
The Coding Train
135K views
10.16: Neural Networks: Backpropagation Part 3 - The Nature of Code
20:21
The Coding Train
83K views
Understanding Backpropagation In Neural Networks with Basic Calculus
24:28
Dr. Data Science
18K views
Neural Networks Explained from Scratch using Python
17:38
Bot Academy
308K views
How are memories stored in neural networks? | The Hopfield Network #SoME2
15:14
Layerwise Lectures
643K views
What is Back Propagation
8:00
IBM Technology
41K views