Building a neural network FROM SCRATCH (no Tensorflow/Pytorch, just numpy & math)

1,843,605 views

Samson Zhang

3 years ago

Kaggle notebook with all the code: www.kaggle.com/wwsalmon/simpl...
Blog article with more/clearer math explanation: www.samsonzhang.com/2020/11/2...

COMMENTS: 1,200
@victorafonso4534 · 1 year ago
Making a neural network from scratch is easy, what I really want to see is how to make a neural network ON Scratch.
@d3vitron779 · 1 year ago
Make the Scratch cat sentient challenge (gone wrong) (humanity destroyed)
@theRPGmaster · 1 year ago
Just create a Python interpreter in Scratch, easy
@Despatra · 1 year ago
Ok
@v037_ · 1 year ago
Lmao, underestimated comment, but a perfect one
@BurNJoE · 1 year ago
lol
@you_just · 1 year ago
i like how numpy has become so ingrained in python that it's basically considered vanilla python at this point
@nathanwycoff4627 · 1 year ago
Interestingly, much of that functionality is built into other languages used by the ML community, such as R, MATLAB and Julia.
@mattrochford6783 · 1 year ago
@@nathanwycoff4627 Matrices and linear algebra are really useful for math and engineering, less so for general programming. Different languages focusing on different usability concerns is quite interesting.
@machineman8920 · 1 year ago
@@mattrochford6783 stop coping, Julia is just a better language
@HilbertXVI · 1 year ago
@@machineman8920 ???
@thebluriam · 1 year ago
I don't like it. I wish people would stop being overly lazy with Numpy and just write their own libraries so they'd understand what they're actually doing. No, scratch that: if they can't accomplish the same thing using only Assembly, they're a total noob, should put down their keyboard, and go get an MBA instead...
@khoa4k266 · 5 months ago
I watched this video when I was in grade 11. I had no clue what he was talking about, but I tried to understand as much as possible. Now I watch it again as a university student, and it is so satisfying to understand everything.
@viCuber · 3 months ago
Hope that will happen to me too
@CR33D404 · 3 months ago
@@viCuber same LOL
@viCuber · 3 months ago
@@CR33D404 lmao
@codevacaphe3763 · 2 months ago
It's happened to me several times. Sometimes you stumble on a piece of knowledge and can't understand a single thing about it, then suddenly 1 or 2 years later you completely understand it without even trying.
@nachoyawn · 1 month ago
same
@alperengul8654 · 3 years ago
If you make more deep learning videos with numpy and math (without any framework), just like this one, it would be great for beginners learning the basics!!! Do you plan to continue??
@cemsalta · 3 years ago
Hello Eren!
@kanui3618 · 3 years ago
upp!
@anishojha1020 · 3 years ago
Hey guys, a reply would be highly appreciated. I want to plot the cost vs the number of iterations, but I am not able to figure out which parameter to plot. I am a beginner and I would really appreciate the help. Thank you
@KHM95 · 2 years ago
Here's a course you'll need. Face Mask Detection Using Deep Learning and Neural Networks. It's paid but it's worth it. khadymschool.thinkific.com/courses/data-science-hands-on-covid-19-face-mask-detection-cnn-open-cv
@whannabi · 2 years ago
@@anishojha1020 You're probably not a beginner anymore, so I hope you found your answer! Unfortunately, the youtube comment section isn't a forum and a lot of people disable notifications (including me), so an actual forum, although people there are sometimes really rude and condescending, is your best bet for future questions.
@tecknowledger · 3 years ago
This video is one of the best descriptions of neural networks written in only Numpy and Python I've ever seen. Thanks
@anishojha1020 · 3 years ago
Hey guys, a reply would be highly appreciated. I want to plot the cost vs the number of iterations, but I am not able to figure out which parameter to plot. I am a beginner and I would really appreciate the help. Thank you
@tecknowledger · 3 years ago
@@anishojha1020 Hi, try posting your comment again in the regular comments section so more people see it. This is only a sub-comment.
@waterspray5743 · 2 years ago
@@KHM95 Hi, are you a bot?
@KHM95 · 2 years ago
@@waterspray5743 No man, I am not.
@ME0WMERE · 2 years ago
I advise looking at sentdex's 'Neural Network from scratch' series
@TimeRoot · 1 year ago
00:51 Problem statement
01:18 Math explanation
11:18 Coding it up
27:43 Results
@omgcyanide4642 · 1 year ago
Thank you
@Zetzumarshen · 1 year ago
Thank you
@Dejwv_ · 1 year ago
Thank you
@Salien1999 · 1 year ago
Thank you
@stringstudios2262 · 1 year ago
Thank you
@jumpierwolf · 1 year ago
Took a Machine Learning course in university and this is what we did the whole semester in Matlab. Tensorflow was introduced right at the end for the final project.
@gasun1274 · 1 year ago
sounds amazing
@marshmellominiapple · 1 year ago
oh hell yea matlab
@ElectrostatiCrow · 1 year ago
@@marshmellominiapple oh he'll yeah methlab
@dumbfate · 9 months ago
@@ElectrostatiCrow LET HIM COOK
@PluetoeInc. · 20 days ago
@@dumbfate no you let him cook
@hcmcnae · 11 months ago
I'm so glad you actually went in depth with the math explanation. So often people will just explain the surface layer and then say "alright, let's jump into the code".
@MegaJesusini · 1 year ago
My man really explained how a backpropagated neural network works from scratch in 10 minutes
@LydellAaron · 1 year ago
Just your intro alone, laying out your motivations, was so captivating. You laid out everything so clearly, including creating those row and column matrices in the early steps. Thank you.
@traviss7740 · 1 year ago
I've never heard any of this explained before. After watching this once, I understand the mathematics behind neural networks and why the functions are used. Great job with the explanation here. Many thanks.
@Hex... · 1 year ago
This was interesting; it certainly made neural networks far more approachable to me as someone who's never needed or been inclined to try making one, but encounters them frequently by being involved in STEM. Your explanations, coupled with my familiarity with numpy as opposed to dedicated libraries for neural networks, really helped - thanks!
@Mutual_Information · 1 year ago
Just discovered this channel. Very cool stuff. Much respect for doing something challenging like this.
@darrellrayford3817 · 1 year ago
This was a really good video. I've never built a neural network, but it was interesting seeing how the fundamentals add up to build something a little more complex.
@randyscorner9434 · 1 month ago
Excellent tutorial and example. It reveals the magic that most don't know about NNs, and I love how you go about it.
@Bobbleheads56 · 1 year ago
I need to come back to this after learning some more preliminaries, but you are a very natural teacher and good at presenting. Keep it up 👍
@albertolemosduran5685 · 1 year ago
Most videos are titled "how to create a blabla" when they're actually teaching how to use one... so I really appreciate your video! This really contributes to knowledge 🥰
@lbb2rfarangkiinok · 1 year ago
This was really neat. The math explanation was frustrating the first time around but really made sense after working through the code. Thanks for sharing.
@mauricioledon4498 · 1 year ago
You sir, are my hero. You are the first person to actually explain this properly to me. Thank you so much for that.
@robertknopf6207 · 3 years ago
Another thing that would be helpful for those of us who want to copy what you did and experiment with it would be to have all the code together, instead of separated as it is on Kaggle - that way you could put in some comments explaining the different features. Again, very good video.
@KSATica · 1 year ago
You should continue making videos like this, maybe a training course for machine learning and reinforcement learning AI. You have a real talent for explaining it in the best way possible, better than most videos I've watched. 👍
@omlachake2551 · 2 years ago
this type of learning is honestly the best. i implemented k-means clustering by myself in c (pretty easy stuff, but still), and i can never forget it now. makes me happy that i can do stuff too
@Emily-fm7pt · 1 year ago
When I was in high-school algebra I programmed an algebra calculator to do my homework for me, and for some reason I never actually needed it. Programming something really is a great way of learning it, even if it does take significantly longer than just some p-sets or flashcards.
@OT-tn7ci · 1 year ago
@@Emily-fm7pt dude are you serious ??? SAME SAME lmao
@auronusben4567 · 11 months ago
I remember when I tried to implement a decision tree on paper, with very small data dimensions (maybe 5x6? can't remember). I spent all night doing the math, but after 5-6 hours I realized I had made a mistake in an iteration 😂😂 That's when I realized we're lucky to have computers to do it for us, because a human mind can't build one end to end without making mistakes along the way (you can't stay focused that long)... I also remember implementing PCA from scratch in Excel (I still have the spreadsheet 😂)...😮
@momol.9892 · 1 day ago
Just learned the basics of neural networks and saw this video. So satisfying to see all the math formulas laid out clearly in numpy, with real-world coding and training of a neural network with backpropagation. It really helps beginners like me. Thank you so much!
@jeandy4495 · 1 year ago
Super cool! I'd also recommend the series from The Coding Train about creating a neural network from scratch, which goes a little more into the details of the math and what a perceptron is and so on.
@RonClabo · 3 years ago
What an awesome video! Thank you for sharing this insightful walkthrough, it was really helpful in getting a better understanding of how neural nets work. Thank you!
@FreakyStyleytobby · 1 year ago
Man, this video is a masterpiece. I learned a lot and I love your thorough, calm style. Please keep making similar content!! Best wishes
@minjunkevink · 2 years ago
Amazing. Needed to see the low-level end and finally found it. Thank you for the amazing video!
@anandptyagi5275 · 1 year ago
After Andrew Ng's course, this is the first time I'm watching the math functions. Thanks buddy, it was a nice refresher for me.
@juliocardoza6066 · 1 year ago
Samson, keep making this kind of video please!! Very intelligent and understandable.
@luisbq8045 · 9 months ago
This is pure gold. MSc in Data Science and Artificial Intelligence, and no professor ever gave me the answer to "what is the code inside the libraries we use" until I found you. Thank you
@rushisy · 9 months ago
thats sad
@stanislavlia · 7 months ago
I don't want to sound picky and annoying, but NNs in TensorFlow and PyTorch are not actually implemented like this. They don't store functions to compute gradients for every single operation; rather, they use autograd, which does all the backpropagation work. I would highly recommend watching Andrej Karpathy's tutorial on micrograd (a mini autograd engine which you implement yourself).
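For anyone curious, here is a minimal sketch of the reverse-mode autodiff idea, loosely in the spirit of Karpathy's micrograd. It is only an illustration of the concept, not how TensorFlow or PyTorch are actually written:

class Value:
    # A scalar that records how it was computed, so gradients can
    # flow backwards through the recorded graph via the chain rule.
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad          # d(a+b)/da = 1
            other.grad += out.grad         # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # topologically sort the graph, then apply each node's local rule
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for node in reversed(topo):
            node._backward()

# usage: d(x*y + x)/dx = y + 1 = 4 and d(x*y + x)/dy = x = 2 at x=2, y=3
x, y = Value(2.0), Value(3.0)
z = x * y + x
z.backward()
print(x.grad, y.grad)   # 4.0 2.0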
@michaelpieters1844 · 1 month ago
I have a master's in physics and statistics and I know how to code a lot of "machine learning" techniques from scratch. Yet HR looks at my degree and thinks I'm incapable, so they'd rather hire a master's in AI. I can also code CFD, SPH and FEA from scratch, but HR says I'm dumber than an engineer who just uses third-party software (Ansys).
@suscactus420 · 28 days ago
@@michaelpieters1844 welcome to recruitment in 2024... you need to feed the recruiters what they want to hear, so that you can then get to the guy you actually want to talk to about your stuff.
@ItsNaberius · 20 days ago
Really excellent breakdown of a neural network, especially the math explanation at the beginning. I also want to say how much I appreciate you leaving in your first attempt at coding it and the mistakes you made. Coding is hard, and spending an hour debugging your code because of one little number is so real. Great video
@joschkazimdars · 2 years ago
It feels like it took me months to understand how to program feedforward neural networks, but I finally understand it. Thanks for the video.
@faris.abuali · 1 year ago
Thank you so much Mr. Samson!! This was so informative and enlightening
@GrahamEckel · 1 year ago
A better lecture and example for understanding and building NNs than any in my math and stats MSc
@drgatsis · 1 year ago
Thanks for the lovely video, Samson. I'm a prof and love seeing this kind of content. I'll definitely share it with students
@peterweicker77 · 5 months ago
This is great. I built a backprop network in C thirty years ago to solve the same problem, just for a goof. It worked well before I even finished debugging. These things are awesome and now I want to take another look. Thanks for posting this.
@chessprogramming591 · 1 year ago
Thank you for your time and effort, Samson, this tutorial is a treasure.
@solleo9 · 1 year ago
What an impressive speedrun! Just nitpicking: at 15:45, `rand` draws from the uniform distribution U(0,1) and `randn` from the standard normal distribution N(0,1), which is unbounded, not U(-0.5, 0.5)
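A quick numpy check of the distinction (a minimal sketch):

import numpy as np

np.random.seed(0)
u = np.random.rand(100000)      # uniform draws from [0, 1)
n = np.random.randn(100000)     # standard normal draws, unbounded
print(u.min(), u.max())         # always inside [0, 1)
print(n.min(), n.max())         # routinely beyond +/-4
W = np.random.rand(3, 3) - 0.5  # the video's init: uniform on [-0.5, 0.5)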
@straightup7up · 5 months ago
Samson, we need more videos like this from you. Great content; more visuals would be nice, too 🙂
@gnorts_mr_alien · 1 year ago
This is the first ASMR NN video that I have ever seen. Well done.
@David-ip2sd · 1 year ago
Hi! I did a recreation of your code with more hidden layers and noticed what I think is a bug in the db calculation. Changing it to db = 1 / m * np.sum(dZ, axis=1).reshape(-1, 1) got me better results. I think the old db = 1 / m * np.sum(dZ) sums the entire dZ down to one float. Very good video though!
@Hyngvi · 1 year ago
Noticed the same thing. The way it was implemented here, db comes out as a float, and thus b will always be "similar" to the random initialization, only shifted up/down by a constant.
@mattlange00 · 6 months ago
Hey, I know you posted this a while ago, but I noticed the same thing and saw your comment. I am still not sure how this solves it: if dZ were a 1D array (1 by 10), what would axis=1 do? Wouldn't .sum() just turn the 1D array into a scalar regardless, and then you are back to updating all your biases the same way?
@mattlange00 · 6 months ago
Actually, never mind, dZ is 10 by m, so this does make sense
@gpeschke · 3 months ago
Numpy requires some strange handling when you have only one dimension. Verified that without this change the final bias weights aren't being updated; with it, training works better. Didn't verify the details of David's solution, just that it was needed and that it seemed to work.

def backward_prop(Z1, A1, Z2, A2, W1, W2, X, Y):
    one_hot_Y = one_hot(Y)
    dZ2 = A2 - one_hot_Y
    dW2 = 1 / m * dZ2.dot(A1.T)
    db2 = 1 / m * np.sum(dZ2, axis=1).reshape(-1, 1)
    dZ1 = W2.T.dot(dZ2) * ReLU_deriv(Z1)
    dW1 = 1 / m * dZ1.dot(X.T)
    db1 = 1 / m * np.sum(dZ1, axis=1).reshape(-1, 1)
    return dW1, db1, dW2, db2
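A quick way to see the bug this thread describes, as a minimal sketch (the shapes are hypothetical, chosen to mimic a 10-unit layer with m examples):

import numpy as np

dZ = np.random.randn(10, 5)               # 10 units, m = 5 examples
print(np.sum(dZ))                         # a single scalar: every bias would get the same update
print(np.sum(dZ, axis=1).reshape(-1, 1))  # shape (10, 1): one gradient entry per bias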
@jasonavina8135 · 1 year ago
In case any beginners to ML came here wondering why they are really confused: this video isn't really for beginners, and he doesn't really explain that. It's "from scratch" in the sense of not using any prebuilt models in the code. It's a good explanation for people who are already familiar with neural networks, prebuilt layers, loss functions, etc., not for people starting their understanding "from scratch."
@OT-tn7ci · 1 year ago
actually im new to ML (2-3 months in) and this helped me understand a lot. i am implementing it on my own now, without even using numpy, so i can code out stuff like transpose on my own and learn more. Random is tricky tho lol
@edisonbekaj863 · 1 year ago
Samson Zhang is the BEST cinematographer, editor, musician & tech geek in the WORLD
@Crayphor · 1 year ago
It's worth noting that softmax IS actually very similar to sigmoid. It essentially does a sigmoid over multiple classes.
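A quick numerical check of this, as a sketch (sigmoid and softmax defined inline):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - np.max(z))   # shift for numerical stability
    return e / e.sum()

z = 1.7
# softmax over the two logits [z, 0] reproduces sigmoid(z)
print(softmax(np.array([z, 0.0]))[0])   # ~0.8455
print(sigmoid(z))                       # ~0.8455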
@SnackPack913 · 1 year ago
I'm always too intimidated to try some of these things. But seeing your process makes it really seem feasible. Need to brush up on my linear algebra again though 😆
@arksodyssey · 2 months ago
This resolved a lot of doubts and boosted my confidence to deep-dive into AI/ML. Thanks for the explanation.
@ricardogomes9528 · 1 year ago
Awesome fundamentals class on the neural network equations. Bravo!
@notyou4122 · 1 year ago
Musician, filmmaker, data scientist, etc. Bro maxed out his skill trees. 😂
@AAAJ27 · 1 year ago
Samson, this was such a great walkthrough. Just wanted to say that if you ever made other videos recreating machine learning models from scratch, I'd 100% watch them. In any case, hope all is good and thanks for this great content :)
@kurtameyer · 2 months ago
Thank you. I'm doing this in class right now and your explanations were super helpful!
@aureliencobb199 · 1 year ago
Brilliant. Kind of the Hello World of neural nets. It shed a lot of light for me on how backpropagation works.
@sharmakartikeya · 2 years ago
Bro, that is exactly how I study! I found your channel and I am so glad I did. Instantly subscribed! I see you have learnt from Andrew Ng
@rishikeshkanabar4650 · 2 years ago
yeah, the notation reminded me of Andrew Ng
@kumaranp8764 · 2 years ago
@@rishikeshkanabar4650 the use of the word "intuition" reminds me of him saying... "to get a better intuition" in his lectures
@FireNLightnin · 1 year ago
Great video! I did the same thing in python about a year ago, but I didn't like relying on numpy so much. Your video gave me the motivation to write both a matrix manipulator and a neural network from scratch in Java
@TheJackTheLion · 9 months ago
I did it in assembly, easy
@llewsub · 7 months ago
With most ML tutorials I watch online, you can just tell that the instructor doesn't know what's happening. They've just memorized libraries and tensorflow syntax, and I don't want that to be me! This is exactly what I've been looking for! THANK YOU!!!
@waynesletcher7470 · 4 months ago
Love your sense of humor! Brought the video to life, thanks! You are appreciated!
@kiaruna · 1 year ago
Could you please do more tutorials? This is such a great video
@DiAMONDBACK85 · 7 months ago
Hi Samson! I'm a developer trying to learn the basics of ML. Much of the beginner material I see uses pre-trained models and frameworks, which might be convenient for getting things going. However, for me this is something completely new and I really want to understand what happens behind the scenes. Thank you for posting this! /Kevin from Sweden
@jmw1500 · 5 months ago
Take a statistics course then. Don't learn from programmers. They do not know either.
@paultvshow · 3 months ago
Exactly!
@carnap355 · 2 months ago
try part 2 of Jeremy Howard's 2022 course
@deananderson8186 · 1 year ago
I loved this video! Cool stuff. I implemented a tf-idf clustering algorithm myself; very satisfying to see it all working
@bf300 · 1 year ago
Really cool video Samson! Great stuff!
@mercedeszkistoth5367 · 1 year ago
There is one thing I do not understand. Because of the derivation and chain rule stuff, shouldn't the derivative of the softmax activation function also be included somewhere?
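For anyone with the same question: the softmax derivative is in there, it just cancels against the cross-entropy loss. A standard derivation, sketched in the video's notation where A2 = softmax(Z2) and Y is one-hot. With $L = -\sum_i y_i \log a_i$ and $a = \mathrm{softmax}(z)$, so that $\partial a_i / \partial z_j = a_i(\delta_{ij} - a_j)$, the chain rule gives

$$\frac{\partial L}{\partial z_j} = \sum_i \left(-\frac{y_i}{a_i}\right) a_i (\delta_{ij} - a_j) = a_j \sum_i y_i - y_j = a_j - y_j,$$

using $\sum_i y_i = 1$. In matrix form this is dZ2 = A2 - Y, exactly the formula used in the video.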
@work9466 · 1 year ago
I actually did this exact same thing for my German A-level project. Same database. :D Good times
@luizarnoldchavezburgos3638 · 1 year ago
Keep doing it man. I am from Perú, and the information you are giving is the most important I have heard about.
@jasonkim1642 · 1 year ago
Great video! It's really solid on the foundations! I will definitely recommend this to those who just like to use frameworks and libraries without understanding them
@dcrespin · 1 year ago
An excellent video with abundant mathematical insight. It may be worth noting that instead of partial derivatives one can work with derivatives as the linear transformations they really are, and also look at the networks in a more structured manner, thus making clear how the basic ideas of BPP apply to much more general cases. Several steps are involved.
1.- More general processing units. Any continuously differentiable function of inputs and weights will do; these inputs and weights can belong, beyond Euclidean spaces, to any Hilbert space. Derivatives are linear transformations, and the derivative of a neural processing unit is the direct sum of its partial derivatives with respect to the inputs and with respect to the weights; this is a linear transformation expressed as the sum of its restrictions to a pair of complementary subspaces.
2.- More general layers (any number of units). Single-unit layers can create a bottleneck that renders the whole network useless. Putting together several units in a single layer is equivalent to taking their product (as functions, in the sense of set theory). The layers are functions of the inputs and of the weights of the totality of the units. The derivative of a layer is then the product of the derivatives of the units; this is a product of linear transformations.
3.- Networks with any number of layers. A network is the composition (as functions, in the set-theoretical sense) of its layers. By the chain rule the derivative of the network is the composition of the derivatives of the layers; this is a composition of linear transformations.
4.- Quadratic error of a function. ...
Since this comment is becoming too long I will stop here. The point is that a very general viewpoint clarifies many aspects of BPP. If you are interested in the full story and have some familiarity with Hilbert spaces, please google for papers dealing with backpropagation in Hilbert spaces. A related article with matrix formulas for backpropagation on semilinear networks is also available. For a glimpse into a completely new deep learning algorithm which is orders of magnitude more efficient, controllable and faster than BPP, search this platform for a video about deep learning without backpropagation; in its description there are links to a demo software. The new algorithm is based on the following very general and powerful result (google it): polyhedrons and perceptrons are functionally equivalent. For the elementary conceptual basis of NNs see the article Neural Network Formalism.
Daniel Crespin
@f.osborn1579 · 1 year ago
Haven't finished the video yet, but this looks like the missing piece of my experience learning about neural networks at a high level... I probably lacked the linear algebra skills I have now, though. Whoa! This could be incredibly exciting! I can't wait!
@mrgenetics4063 · 1 year ago
Nobody cares what you have to say
@DungeonMasterpiece · 1 year ago
I've been looking for this video for 6 years.
@sanglar3623 · 1 year ago
The yt algorithm only recommends me this now, a year after I ran into the same discontent with neural network tutorials. Still very interesting to see how someone else does it. I did give myself a bit of help by using a library called Eigen for the matrix calculations. Very well done, nice video
@themoonlight1922 · 2 years ago
Hi, I found this video very helpful for beginners. Could you please explain how you came up with the equations for dZ, dW and db? That would be really helpful as well
@aryamankukal1056 · 1 year ago
watch Andrew Ng, he copied every single equation from his course
@Nanakwaku309 · 1 year ago
@@aryamankukal1056 I wouldn't say he copied every equation. These equations are taught in all ML/AI courses; it is just mathematics
@aryamankukal1056 · 1 year ago
@@Nanakwaku309 Andrew's notation is very specific, and if you watch carefully he uses all of the same conventions
@ricardo5875 · 2 years ago
This is a great way to teach ANNs - congrats. However, I would suggest not worrying too much about the time it takes to finish the implementation. Double-checking all steps will avoid coding errors.
@dbbyres · 4 months ago
Nicely done, Samson, thanks!
@isreallealbertsanchez1156 · 2 years ago
Timestamps if you forgot:
0:51 Problem Statement
1:18 Math Explanation
11:18 Coding It up
27:43 Results
@Achrononmaster · 1 year ago
@18:07 is the timestamp where the other error was made: a2 = softmax(a1), which should be a2 = softmax(z2)
@Achrononmaster · 1 year ago
@23:30 you also see two errors: there is no axis argument for np.sum(), so the lines should be db2 = 1 / m * np.sum(dZ2) ... and ... db1 = 1 / m * np.sum(dZ1)
@Achrononmaster · 1 year ago
And @23:00, ReLU_deriv(z) should really be return np.array(z > 0, dtype=float) if you are aiming for good typing practice.
@elivegba8186 · 1 year ago
I don't understand anything but wow
@tommyhuffman7499 · 1 year ago
It's a shame it isn't taught this way in courses. Excellent video!
@Felix14325 · 1 year ago
Amazing video for beginners to gain insight into how neural networks work. You just have to have programmed a simple neural net from scratch once to have a good basic understanding.
@letticonionepic · 9 months ago
I know the math and programming behind it, and watching this guy do all of that on his own gets pure respect from my side.
@Kaetemi · 6 months ago
Helpful, thanks. Made my own from scratch in bare C++. From image to 32 to 16 to 10 outputs, using leaky ReLU. 96% accuracy on the test set. 🥳
@GiacomoMiola · 1 year ago
It's an MLP, so you could easily compute the backpropagation step in closed form, but I wonder how those famous frameworks compute any network's partial-derivative tensors automatically
@elliott614 · 1 year ago
Usually the partial derivatives in backpropagation are of functions specifically chosen to be convex and have nothing to do with the problem you are working on, but are just ones that work nicely for ML algos
@jamescardenas837 · 1 month ago
I have no idea what you were really saying, but at the same time I do, because you explained how the math is used and implemented in the code. Thank you!
@nextcomputerparts · 10 months ago
A great introduction to neural networks is Parallel Distributed Processing by Rumelhart and McClelland from about 1986. They do something similar and give a lot of additional background.
@xuxusito · 3 years ago
Very good video and explanation! Thanks 😊. I just would have liked it if you had explained the backprop a little more in depth, like how the derivatives are calculated on each layer (chain rule etc.). But other than that, one of the best NN videos
@cbeezy4733 · 1 year ago
Perhaps I overcomplicated matters compared to your approach when I did this a couple of years ago, but like you, I wanted to program it "from scratch". My language of choice: Java. I actually simulated "neurons", which were a class that stored an activation value and the connections to the next layer, so that it "looked" like a K_m,n graph, and each connection was an array which stored the weights along each "synapse", so to speak. Then when the hidden layers activated, each neuron simply summed the outputs from each synapse connecting to it from the previous layer (the product of the incoming activation and the synapse weight), then sigmoided this to get its own activation value. Note that while each neuron's activation was only in (-1,1), I let the weights be free parameters.
When I programmed the backprop algo, I did the gradient descent the same as you, but effectively set that alpha parameter to one. It didn't occur to me to mess with that. Starting the network with random parameters, then training it on randomly chosen sets of 10,000 images five or six times seemed to work pretty well: I saw 93% accuracy on the test data. And just for fun, I put the network on a Discord bot so my friends could feed it images of the same size and see its guess.
Two interesting results came out. The network fails on inverted colors: i.e., drawing white on black in MS Paint or something wouldn't get reliable predictions. Secondly, giving it new data drawn in MS Paint did work, but at a much lower accuracy. Our best guess for why is the sharpness of the lines between the number and the background.
@youri655 · 1 year ago
Very impressive! Great commentary/explanation as well
@hayashii5837 · 1 month ago
thank you for the knowledge Mr. Samsung
@harisjaved1379 · 1 year ago
I agree with you. I also did this from scratch. It was a lot of fun! What's the point of a master's math degree if I am not going to use it, lol. Nice work!
@Pk-tw6li · 1 year ago
bro can you help, i also wanna learn. can you tell us the resources you used to learn this neural network stuff?
@juliopaniagua8723 · 1 year ago
@@Pk-tw6li study some basic linear algebra; just with that you'll understand at least 85% of what's going on with the algorithm
@Ari-Matti.Rintala · 1 year ago
Amazing stuff! Just wondering, what value does the coding timer add to the video? Instead of correcting your mistakes with overlapping text, you could have taken a little bit of time to review your code instead of rushing it through. But again, amazing content!
@ignaciomaureirajofre4353 · 11 months ago
Great explanation Sam!
@williamzhang6955 · 1 year ago
That is very neat and captures the fundamental ideas of neural nets! Great job
@danielniels22 · 3 years ago
Hello, it's such a great tutorial, thank you very much. I think people who are overexcited because of this AI hype should learn these basics, and see whether they really fit in this field 🤣🤣
@ashtonoates4798 · 1 year ago
Hey, the video was very informative for a beginner to machine learning. I did have a question about the code: when I implemented it as in the video, I was getting an error, "RuntimeWarning: invalid value encountered in true_divide", in the softmax function. I looked at your Kaggle notebook and saw that there you do X_train = X_train / 255, and when I did this it fixed the error. I'm not sure what's going on here; does anyone know why this is happening and why dividing by 255 fixed the issue?
@gillesvanlommel5932 · 1 year ago
Images consist of pixels, and each pixel is a value between 0 and 255, where 0 is dark and 255 is white. Models train more easily on normalised data, and some functions expect your data to be between 0 and 1, hence dividing every pixel value by 255. When you write it yourself, you can of course choose whether you want 0-255 or 0-1, but normally it's 0-1, since then it will still work if you use it on another project with different pixel value ranges. To put it simply, it generalises things. Hope it helps =D!
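A minimal sketch of the overflow this thread is describing (the logit values are hypothetical, but unscaled 0-255 inputs summed over 784 pixels easily produce numbers this large):

import numpy as np

z = np.array([800.0, 10.0])     # logits on this scale fall out of unnormalized inputs
e = np.exp(z)                   # np.exp(800) overflows float64 to inf
print(e / e.sum())              # [nan, 0.] -> the "invalid value encountered in true_divide"

z_small = np.array([3.1, 0.4])  # with inputs scaled to [0, 1], logits stay modest
e = np.exp(z_small)
print(e / e.sum())              # well-behaved probabilities, no NaNs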
@chriswysocki8816 · 1 year ago
Hey, I also found this needed correction (/255) when I was coding along. But despite that I cannot get the code to work. I get an error in the "clever" function one_hot(Y). Python is telling me: "ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()". Any ideas? I'll be looking for the cause myself in the meantime
@kadenmoss8904 · 1 year ago
@@chriswysocki8816 I was having a very similar issue. In the end, I looked at the notebook he posted to Kaggle and found a couple of slight differences, notably in the backward_prop function and its implementation. I don't know exactly what did it, but copying the notebook rather than the video got mine to work
@chriswysocki8816 · 1 year ago
@@kadenmoss8904 I forgot to post, but I did find the problems and got it to work. Only then did I compare it with Kaggle, which would have saved me time, but given less satisfaction. Anyway, still a great tutorial, as now I can create these simple neural networks. I even expanded this one to 3 levels of "weights" and it worked great: it gave me a better asymptotic recognition rate (i.e., when cranking up the number of learning iterations)
@Zeoytaccount · 1 year ago
This thread was a lifesaver lol. Does anyone understand what the line _,m_train = X_train.shape (under X_train = X_train / 255.) does? Not familiar with the API yet
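It just unpacks the numpy shape tuple: X_train is laid out features-by-examples, so the second entry is the number of training examples. A sketch with hypothetical dimensions:

import numpy as np

X_train = np.random.rand(784, 41000)   # 784 pixels per image, 41000 images
_, m_train = X_train.shape             # discard the row count, keep the column count
print(m_train)                         # 41000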
@katarzynaludwikakowalczyk-8232 · 1 month ago
Samson, thank you for this video, it is very helpful! Just like you, I benefit much more from detailed, equation-based explanations than from high-level, big-picture overviews.
@Theeoldmann · 1 year ago
Really appreciate the example & tutorial, thank you. Could you make another video, perhaps on adding more hidden layers?
@quanduong8917 · 3 years ago
You can actually use momentum for gradient descent. The result is slightly better (I tried it on your NN and it gets 91% accuracy). I'm a beginner at ML, so your video taught me a lot. Keep up the great work man, it's really cool.
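A minimal sketch of what that can look like applied to the video's update step; the function name and the 0.9 coefficient are assumptions, not the commenter's actual code:

import numpy as np

def update_with_momentum(param, grad, velocity, alpha=0.1, beta=0.9):
    # velocity keeps an exponential moving average of past gradients,
    # smoothing the descent direction across iterations
    velocity = beta * velocity + (1 - beta) * grad
    return param - alpha * velocity, velocity

# usage on one weight matrix (shapes match the video's W1)
W1 = np.random.rand(10, 784) - 0.5
v_W1 = np.zeros_like(W1)
dW1 = np.random.randn(10, 784)   # stand-in for a real gradient from backprop
W1, v_W1 = update_with_momentum(W1, dW1, v_W1)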
@akainu3668 · 3 years ago
can you please send a link to your code?
@123arskas · 1 year ago
Just 1 minute into the video and I can easily tell that you're gonna own a multi-billion-dollar company within a few years. You've got the IQ, the voice, the clarity, the confidence, and the right personality. Best of luck Mr. Zhang
@iggzistentialism8458 · 11 months ago
This was really useful to me, and incredibly well explained. Thank you.
@iwasdeleted708 · 8 months ago
the fact that he actually shows the overconfident "I can code this from memory" programmer stage is so real.
@Milorae · 1 year ago
Everyone praises this video for being so helpful and I'm just sitting here understanding NOTHING. :D I feel so dumb! Maybe I should've started with something even more basic, having learned, in a nutshell, only print("hello world") so far. I will definitely go back and watch it all again in the future after I learn more. Thank you for the video, Samson. Cheers!
@xianzai_ad1928 · 1 year ago
definitely pick up a book on algorithms and data structures first!
@ReBufff · 1 year ago
Now build one IN Scratch
@be7256 · 1 year ago
been done actually
@carloscortes2391 · 4 months ago
I am going to do the same over the next two weeks; at the end I'm coming back to see any differences between our code. Thanks for sharing :)
@my14081947 · 27 days ago
I heard your name as "Samsung" and got instantly hooked.
@nitinrohit1669 · 2 years ago
Hey, I found a flaw in your code and it would be great if you answered it... The update you are doing for the biases isn't doing what's intended, because all the biases are changed by the same factor, so b stays essentially random (you used a scalar to update the bias instead of a column vector). I found the fix, adding axis=1 in the sum function, but I'm getting an error.
@lucasphillips2177 · 1 year ago
ya, I encountered that too and fixed it like you said.
@fengzhang9052 · 3 years ago
Thank you for sharing. You did an awesome job explaining! A quick question: I got a bunch of NaNs in A2 after softmax. Also, can you please re-share the code link? The current link leads to a 404 page 😂
@bagavanmarakathalingasivam7117 · 3 years ago
The description link wasn't working for me either, but I found the actual link: www.kaggle.com/wwsalmon/simple-mnist-nn-from-scratch-numpy-no-tf-keras?scriptVersionId=47652503
@nq2c · 3 years ago
You need to normalise your values because they overflow, since the exponential makes numbers big. Try using logsumexp or something else that lowers your values.
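A minimal sketch of the standard trick being pointed at here: subtracting the column-wise max before exponentiating leaves softmax mathematically unchanged, since exp(z - c) / sum(exp(z - c)) = exp(z) / sum(exp(z)), but keeps every exponent at or below zero:

import numpy as np

def softmax_stable(Z):
    # subtracting each column's max makes every exponent <= 0,
    # so np.exp can never overflow to inf
    shifted = Z - np.max(Z, axis=0, keepdims=True)
    e = np.exp(shifted)
    return e / np.sum(e, axis=0, keepdims=True)

Z = np.array([[800.0], [10.0]])   # logits that break a naive softmax
print(softmax_stable(Z))          # [[1.], [0.]] and no NaNs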
@SamsonZhangTheSalmon · 3 years ago
Thanks for the catch! I've updated the link
@jonisamuels · 2 years ago
I'm having the same problem, getting NaNs in A2 after using softmax(). How did you solve this? It means I can't get actual answers out.