Neural Network Architectures & Deep Learning

763,766 views

Steve Brunton

1 day ago

This video describes the variety of neural network architectures available to solve various problems in science and engineering. Examples include convolutional neural networks (CNNs), recurrent neural networks (RNNs), and autoencoders.
Book website: databookuw.com/
Steve Brunton's website: eigensteve.com
Follow updates on Twitter @eigensteve
This video is part of a playlist "Intro to Data Science":
• Intro to Data Science
This video was produced at the University of Washington, and we acknowledge funding support from the Boeing Company

COMMENTS: 396
@mickmickymick6927 3 years ago
Does anyone else feel weird when he says Thank You at the end? He just gave me a free, high-quality, understandable lecture on neural networks. Man, thank *you*!
@Eigensteve 3 years ago
:) People watching and enjoying these videos makes it so much more fun to make them. So indeed, thanks for watching!
@antoniofirenze 2 years ago
@@Eigensteve ..being happy to see other people making progress. Man, you have a great heart..!
@carol-lo 2 years ago
Steve, we should be thanking "you"
@oncedidactic 2 years ago
Presenter with true class 👏
@Learner.. 2 years ago
😁😍
@teslamotorsx 4 years ago
UKposts's recommendation algorithm is becoming self-aware...
@florisr9 4 years ago
It was UKposts's turn in the introduction round
@GowthamRaghavanR 4 years ago
I hope it's just ReLU and sigmoid
@Xaminn 4 years ago
@@GowthamRaghavanR those are the safe ones
@resinsmp 4 years ago
Imagine for a second also what the algorithm never recommended to you, because it already knew you were aware.
@Xaminn 4 years ago
@@resinsmp Now that's an interesting thought haha. "Since user searched this type of topic, it must already be aware of some other certain type of topics." Simply marvelous!
@farabor7382 4 years ago
I don't know why youtube decided I needed that little course, but I'm glad that it did now.
@brockborrmann2931 4 years ago
This video has common variables with other videos you watch!
@TonyGiannetti 4 years ago
Sounds like you’ve been autoencoded
@fitokay 4 years ago
That's why the CF algorithm did
@Kucherenko90 4 years ago
same thing
@user-yp6ze3dh5j 4 years ago
UKposts also uses neural networks
@Savedbygrace952 10 months ago
I have been addicted to your series of lectures for the last three months. Your "welcome back" intro sounds like a chorus to me. Thank you!
@theunityofthejust-justifyi7951 4 years ago
You really simplify the stuff in a way that has me feel enthusiastic to learn it. Thank you.
@brian_c_park 4 years ago
Thank you, I've always seen the term neural networks generalized and always thought of it as probably a bunch of matrix operations. But now I know that there are diverse variations and use cases for them
@elverman 4 years ago
This is the best short intro to this topic I've seen. Thanks!
@dantescanline 4 years ago
This was massively helpful as an intro! When my question is just "yes but how does this ACTUALLY work", you either get pointlessly high level metaphors about it being like your brain, or jumping straight into gradient descent and all the math behind training. A+ video, thanks.
@Jorpl_ 4 years ago
Hey, I just wanted to say thank you for making this video. I found it really helpful! I particularly enjoyed your presentation format and the digestible length. About to watch a whole bunch more of your videos! :)
@PhoebeJCPSkunccMDsImagitorium 4 years ago
steve brunton, idk who u r before watching this, but this presentation style of a glass whiteboard w/ images superimposed is the best way I've ever seen someone teach, tbh. thank u at least for that. but more importantly, this actually helped me understand the beast of neural nets a little more, and hopefully be more prepared when our new AI overlords enslave us; at least we will know how they think
@KeenyNewton 4 years ago
These were most productive 9 minutes. Great explanation on the architectures.
@easylearn9350 4 years ago
Simple, perfect, enjoyable explaining of DNNs. Thanks for sharing!
@XecutionStyle 3 years ago
Sir your deep learning videos are the only ones on UKposts I take seriously.
@josephyoung6749 4 years ago
Amazing program... I love the thing he's drawing on that projects his diagrams.
@Illu07 4 years ago
Gosh, I needed this intro at the start of my seminar paper...
@culperat 4 years ago
Important note about the function operating on a node. If the functions of two adjacent layers are linear, then they can be equivalently represented as a single layer (compositions of linear transforms is itself a linear transformation and thus could just be its own layer). So, nonlinear transformations are -necessary- for deep networks (not just neural networks). That isn't to say you can't have a composition of linear transformations to compose an overall linear transformation, if there's nonlinear constraints for each operator.
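This comment's point is easy to verify numerically. A minimal sketch (my own illustration, not from the video), showing that two stacked linear layers collapse to a single matrix until a nonlinearity is inserted between them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two stacked linear layers with no activation: y = W2 @ (W1 @ x)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((2, 4))
x = rng.standard_normal(3)

# Their composition is exactly one linear layer with W = W2 @ W1
W = W2 @ W1
assert np.allclose(W2 @ (W1 @ x), W @ x)

# With a ReLU between the layers, the map is no longer linear, so no
# single matrix can reproduce it in general; this is why deep networks
# need nonlinear activations to gain expressive power
relu = lambda z: np.maximum(z, 0.0)
y_nonlinear = W2 @ relu(W1 @ x)
```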
@husane2161 4 years ago
Awesome concise high level explanation! Thank you
@tottiegod8021 2 years ago
Great content for existing developers. Wow. Incredible. To say the least I am speechless. You didn’t waste my time and I appreciate that!!
@johnwilson4909 4 years ago
Steve, you are the first person I have ever seen describe an overview of neural networks without paralyzing the consciousness of the average person. I look forward to more of your lectures, focused in depth on particular aspects of deep learning. It is not hard to get an AI toolkit for experimentation. It is hard to get a toolkit and know what to do with it. My personal interest is in NLR (natural language recognition) and NLP (natural language programming) as applied to formal language sources such as dictionaries and encyclopedias. I look forward to lectures covering extant NLP AI toolkits. Sincerely, John
@pb25193 4 years ago
John, I recommend Stanford's course on recurrent neural networks. Free on UKposts. It's a playlist with over 20 lectures
@pb25193 4 years ago
ukposts.info/slow/PLoROMvodv4rOhcuXMZkNm7j3fVwBBY42z
@robertschlesinger1342 4 years ago
Excellent overview on neural network architecture. Very interesting and worthwhile video.
@ArneBab 4 years ago
Thank you for your video! Seeing your example for principal values decomposition made neural networks much clearer to me than anything else I had seen till now. It allowed me to connect this to SVD-based linear modeling I used almost 10 years ago to create simplified models of visual features seen in fluid dynamics. I did not expect how much easier this suddenly seemed when it connected to what I already knew.
@lucasb.2410 4 years ago
Amazing video and explication , focusing on key points is very interesting for such sciences, thank you a lot and keep doing that !
@chris_jorge 4 years ago
forget neural networks, this guy figured out that it's better if you stand behind what you're presenting instead of in front of it. mind blown
@lightspeedlion 1 month ago
Amazing time spent to understand the Networks a little more.
@MikaelMurstam 4 years ago
Very nice. I like the autoencoders. That is basically just understanding. Intelligence is basically just a compression algorithm. The more you understand the less data you have to save. You can extract information from your understanding. That's basically what the autoencoder is about. For instance, if you want to save an image of a circle you can store all the pixels in the image, or store the radius, position and color of it. Which one takes up more space? Well, storing the pixels. We can use our understanding of the image containing a circle in order to compress it. Our understanding IS the compression. The compression IS the understanding. It's the same.
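The circle example from this comment can be made concrete. The sketch below (image size and circle parameters chosen arbitrarily for illustration) compares storing every pixel with storing the three numbers that "explain" the image:

```python
import numpy as np

# A 64x64 binary image containing a single circle
n, cx, cy, r = 64, 32.0, 32.0, 10.0
yy, xx = np.mgrid[0:n, 0:n]
image = ((xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2).astype(np.uint8)

raw_values = image.size   # 4096 numbers stored pixel by pixel
code = (cx, cy, r)        # 3 numbers if we "understand" it is a circle
print(raw_values, len(code))
```

An autoencoder's job is to learn such a low-dimensional code from data, rather than being told in advance that the content is a circle.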
@TheMagicmagic290 4 years ago
shut up
@dizzydtv 4 years ago
profound observation
@bdi_vd3677 4 years ago
Thank you for your comment, excellent observance!
@SirTravelMuffin 4 years ago
I dig that perspective. I do think that compression can have some downsides. I feel like my emotional reactions to things are a sort of "compression". I can't keep track of everything I've read about a potentially political topic, but I can remember how it made me feel.
@PerfectlyNormalBeast 4 years ago
I like to think of autoencoder as an architect outputting a blueprint, then a construction company building that building
@VikiGradwohl 4 years ago
A really really great video to point out essentials of Neural Network Architecture, thanks for that video
@YASHSHARMA-bf2mm 1 year ago
Thank you so much for the video! The way you teach makes learning so much fun :) If you had been born in ancient times, you alone would have pushed the literacy rate up by over 20%
@goodlack9093 1 year ago
Love your videos and your book! Can't wait to start working through it actually!
@carnivalwrestler 4 years ago
Clear and concise. Thanks for posting.
@nghetruyenradio 4 years ago
Best. I love your lecture. It explains problems in a simple way. Thank you so much.
@parvezshahamed370 4 years ago
I have been looking for this content a really long time. Thanks so much.
@bambam10years 4 years ago
Such a great explanation, thank you
@myway2mars 4 years ago
Great explanation. Thank you!
@userou-ig1ze 3 years ago
simply great, thanks for this intro video
@raoofnaushad4318 4 years ago
Thanks for sharing Steve
@mrknarf4438 4 years ago
Clear, simple, effective. Thank you!
@mrknarf4438 4 years ago
Also loved the graphic style. Were the images projected on a screen in front of you? Great result, I wish more people showed info this way
@FlowerPowered420 9 months ago
I really appreciate this talk, thank you.
@jonacacarr3839 4 years ago
This was most helpful, very clear, thank you
@RolandoLopezNieto 8 days ago
I just found your channel as a suggestion from a 3Blue1Brown video. I subscribed instantly, easily explained, thanks.
@Eigensteve 8 days ago
So cool! Which video?
@karemabuowda2695 2 years ago
Thank you very much for this extraordinary way of teaching.
@solargoldfish 4 years ago
Great explanation. Thank you.
@satoshinakamoto171 4 years ago
thank you. i somehow get inspiration from videos like these.
@yourikhan4425 1 year ago
I need to watch all the videos of this channel.
@neiltucker1355 10 months ago
a fantastic overview thanks!!♥
@amegatron07 4 years ago
I started to learn NNs in the good old early 2000s. No internet, no colleagues, not even friends to share my excitement about NNs with. But even then it was obvious that the future lies with them, though I had to concentrate on more essential skills for my living. And only now, after so many years have passed, I'm coming back to NNs, because I'm still very excited about them and it is much, much easier now at least to play with them (much more powerful computers, extensive online knowledge base, community, whatever), not to speak of career opportunities. I'm glad YT somehow guessed I'm interested in NNs, though I haven't yet searched for it AFAIR. It gives me another impetus to start learning them again. Thanks for the video! Liked and sub-ed.
@sitrakaforler8696 1 year ago
Really clear. Thanks for the video!
@aminnima6145 2 years ago
Thank you for this beautiful explanation.. I really enjoy it.
@toonheylen4707 4 years ago
Amazing video, thanks for the information
@kevintacheny1211 4 years ago
One of the best introductions to AI I have seen.
@bensmith9253 4 years ago
YES. ☝️this
@beepboopgpt1439 4 years ago
Thank you so much! I needed this.
@flaviudsi 1 year ago
Very well explained. Thank you
@izainonline 8 months ago
Great explanation Thank u Sir
@doctorshadow2482 1 year ago
Hey Steve, thank you a lot for all your brilliant videos! One request on the topic: could you please cover how all this works with shift/rotation/scale of the image? Nobody on youtube covers this tricky part of the neural networks used for image recognition. I keep my fingers crossed that you're the one who could clarify this.
@tw0ey3dm4n 4 years ago
Strangely enough. I needed this vid. Thank you YT ALGO
@AllTypeGaming6596 4 years ago
So YouTube knows that I am currently learning neural networks and this video appears in my recommendations. Great
@IamWillMatos 4 years ago
Great work on this video!
@SaidakbarP 4 years ago
Thank you for a good explanation. This is the quality of content we want to see! Tenfold better than Siraj Raval's channel, in my opinion.
@fzigunov 4 years ago
Well, that makes sense given he's a renowned professor =)
@jimparsons6803 11 months ago
Liked that the approach was direct and simplistic; and of course you can write your code in this manner too, so that you're not overwhelmed. Say four or five layers being coded, then you have outboard functions that handle the input and output arrays. This last might take up most of the landscape of a program. Isn't this fellow clever? Dang. He's gotta be a professor somewhere. Many thanks. The computer training that I had gotten was very rudimentary, first in the 60s and then another drop in the mid 90s. Luckily there's YT where you can catch up. And after a while the 'training' starts to remind you of subliminal sorts of stuff. Maybe?
@ts.nathan7786 4 months ago
Very good explanation. 🎉
@jaredbeckwith 4 years ago
Good overall neural net explanation!
@insomnia20422 4 years ago
this is 9 minutes of pure quality education
@abhaythakur8572 4 years ago
Thanks for this explanation
@SimulationSeries 4 years ago
Adore this free online schooling, thanks so much Steve!!
@Eigensteve 3 years ago
Glad you enjoy it! Thanks!
@JohannesSchmitz 4 years ago
Could you please do a follow up on this? I basically came here for the "many many more" you mentioned towards the end. LSTMs and other architectures that are useful for time series processing. It would be nice if you could do an overview video about that class of networks.
@alalalal5952 4 years ago
ty YT, your latest state of recommendations is all joy
@saysoy1 1 year ago
once you get hold of backpropagation and how to do the chain-rule derivatives, you understand that that was not the goal! you merely opened the door, and this video is the way to your goal!
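For readers who want the chain-rule mechanics this comment alludes to, here is a minimal scalar example (an illustration of the rule itself, not the video's notation), checked against a finite difference:

```python
# Differentiate f(g(x)) with f(u) = u**2 and g(x) = 3*x + 1
def g(x):
    return 3.0 * x + 1.0

def f(u):
    return u ** 2

x = 2.0
u = g(x)
grad = (2.0 * u) * 3.0   # chain rule: f'(g(x)) * g'(x) = 2*(3x+1)*3

# Numerical sanity check with a centered finite difference
eps = 1e-6
numeric = (f(g(x + eps)) - f(g(x - eps))) / (2 * eps)
assert abs(grad - numeric) < 1e-4
print(grad)   # 42.0
```

Backpropagation applies exactly this composition rule, layer by layer, to compute how the loss changes with every weight in the network.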
@youcanlearnallthethingstec1176 3 years ago
I like the way of explaining by projecting on glass board....very very nice...
@latestcoder 3 years ago
Ok, gotta bring my notebook, thank you for the content btw
@vijaykumar.jayaraj 4 years ago
Very nice explanation
@BenHutchison 2 years ago
Oh wow I've been educated by your channel for a while now but did not realise you have published a textbook until your remark. Only A$80 here in Aus. Done! purchased..
@m4rc0k1tt3L 1 year ago
Thanks, this was awesome.
@lucyoriginales 4 years ago
Awesome 😎... well ☺️ i didn’t understand much but i think I could use as inspiration to Spinal Cord my Dark Matter.
@PiercingSight 4 years ago
This is a perfectly compressed overview of neural networks. What autoencoder did you use to write this?
@bunderbah 4 years ago
Human brain
@MilaPronto 4 years ago
@@bunderbah Bruman hain
@3snoW_ 4 years ago
@@MilaPronto Humain bran
@mbonuchinedu2420 4 years ago
one hot encoder. lols
@mjafar 4 years ago
@@mbonuchinedu2420 That's like a robot trying to be funny
@mr1enrollment 4 years ago
Steve: nice talk,... many questions come up, I'll ask a few:
1) Do you distinguish planar vs non-planar networks?
2) Do RNN(s) become unstable? They look like control system time dependent processes.
3) Has anyone applied Monte Carlo toward selection of the topology of a NN, or toward the activation function selection,...?
Fascinating area to study.
@randythamrin5976 4 years ago
Amazingly good explanation and simple words for a non-native English speaker like me
@-SUM1- 4 years ago
UKposts is trying to teach us about itself.
@FriendlyPerson-zb4gv 4 years ago
Hahaha. Good.
@ImaginaryMdA 4 years ago
It's becoming sentient! Even worse, it's a teenager who just wants to be understood. XD
@MrFischvogel 2 years ago
Thanks, Sir !
@radhikasece2374 10 months ago
Thanks for your explanation in the video; I have learned a lot. I am doing research in speech emotion recognition. Can you please tell me the best deep learning algorithms that will work?
@nex4618 2 years ago
Thank you is all I can say but it doesn't feel like enough for this
@arnolddalby5552 4 years ago
Loved neural nets since 1998 when I read a book which showed how 3 layer nets can solve difficult problems. In the 21st century the neural nets are magnificent and a credit to the brains of the human race. I am using a 21st century neural net myself and it's great. Hahahaha. Great video
@mikegunner5539 4 years ago
That was beautiful.
@Didanihaaaa 3 years ago
beautiful! thanks.
@namhyeongtaek4653 3 years ago
I love this man. You are my role model.
@Eigensteve 3 years ago
Thanks so much!
@namhyeongtaek4653 3 years ago
@@Eigensteve OMG it's my honor😯. I didn't expect you would read my comment lol. I hope I could get in to UW this fall so that I could be in your class in person.
@jeewonkyrapark9153 2 years ago
Amazing. Thank you :)
@TURALOWEN 3 years ago
Thank you!
@tsylpyf6od404 9 months ago
7:45 Can it be combined with a Decision Tree? I think it would be a good idea, and I have found some research that has a similar idea
@juliocardenas4485 2 years ago
Beautiful
@its_me_kirankumar 3 years ago
UKposts recommended it. But i love it.
@kennjank9335 5 months ago
One of the most effective and useful introductory lectures on neural networks you can attend. It provides basic terminology and a good foundation for other lectures. HIGHLY RECOMMENDED. It would be helpful, Mr. Brunton, to say a little bit more about neurons. Is a neuron strictly a LOGICAL function point in a process (my simple Excel cell doing a logical function qualifies as a neuron with your definition), is it a PHYSICAL function point like a server, or is it both? Was there a reason you did not mention restricted Boltzmann machines? Thank you again, Sir, for the quality of this lecture.
@JorgeMartinez-xb2ks 5 months ago
A neuron is pure software, a computational unit that mimics the basic functions of a biological neuron. While software relies on specific hardware for execution, a neuron is not a simple server. Unlike an Excel cell, which takes a single input and produces a straightforward output, a neuron receives multiple inputs from other neurons, processes them, and generates an output based on the combined information.

Each input to a neuron is multiplied by a weight, a numerical value that represents the strength of the connection between the neurons. These weighted inputs are then summed together, and a bias value, representing an inherent offset, is added to the result. The resulting value is then passed through an activation function, which introduces non-linearity into the network's decision-making process. Activation functions, such as sigmoid and ReLU, transform the weighted input into the neuron's output, allowing the network to capture complex patterns and relationships in the data. ReLU is often used as an activation function because it requires less computational power than other activation functions, such as the sigmoid function.

Through a process called learning, artificial neurons adjust their weights over time, enabling the network to improve its performance on a given task. Algorithms like backpropagation guide this learning process, allowing the network to minimize errors and optimize its decision-making capabilities. Hope this helps.
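The weighted-sum, bias, and activation description above maps directly onto a few lines of code; a minimal sketch with arbitrary example values:

```python
import numpy as np

def relu(z):
    # Rectified linear unit: max(0, z), cheap and widely used
    return np.maximum(z, 0.0)

def neuron(x, w, b, activation=relu):
    # Weighted sum of inputs plus bias, passed through an activation
    return activation(np.dot(w, x) + b)

x = np.array([1.0, -2.0, 0.5])   # inputs from upstream neurons
w = np.array([0.4, 0.3, -1.2])   # connection weights
b = 0.1                          # bias
print(neuron(x, w, b))           # 0.0 (pre-activation is -0.7, clipped by ReLU)
```

A layer is just many such neurons sharing the same inputs, which is why the whole computation reduces to a matrix multiply plus a bias vector.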
@GarimaaThakur 4 years ago
Glad I found this channel! Loved everything about this video.
@Eigensteve 4 years ago
Glad you enjoy it!
@dejavukun 4 years ago
Thanks a lot to Steve and UKposts for recommending this great video
@rohitschauhanitbhu 4 years ago
Very informative
@vinster9165 3 years ago
UKposts read my mind this was exactly what I was curious about
@lucyoriginales 4 years ago
Thank you... 💋
@mbonuchinedu2420 4 years ago
Thank you Very much
@mahfuzulhaquenayeem8561 11 months ago
THANK YOU.....
@MistaWu 4 years ago
Thank you...
@DanWilan 3 years ago
Finally a good presentation
@Eigensteve 3 years ago
Thanks!
@KelvinWKiger 4 years ago
Ok, thank you.
@ccdavis94303 2 years ago
Subbed. TY