MIT Introduction to Deep Learning (2023) | 6.S191

1,915,803 views

Alexander Amini

1 day ago

MIT Introduction to Deep Learning 6.S191: Lecture 1
Foundations of Deep Learning
Lecturer: Alexander Amini
2023 Edition
For all lectures, slides, and lab materials: introtodeeplearning.com/
Lecture Outline
0:00 - Introduction
8:14 - Course information
11:33 - Why deep learning?
14:48 - The perceptron
20:06 - Perceptron example
23:14 - From perceptrons to neural networks
29:34 - Applying neural networks
32:29 - Loss functions
35:12 - Training and gradient descent
40:25 - Backpropagation
44:05 - Setting the learning rate
48:09 - Batched gradient descent
51:25 - Regularization: dropout and early stopping
57:16 - Summary
Subscribe to stay up to date with new deep learning lectures at MIT, or follow us on @MITDeepLearning on Twitter and Instagram to stay fully-connected!!

COMMENTS: 501
@sarveshprajapati3878 (1 year ago)
Thank you for making this amazing fast-paced boot camp on introduction to deep learning accessible to all!
@melttherhythm (1 year ago)
Best course I've seen in a while! Super friendly to self-teaching. Thank you!
@SuperJAC1969 (6 months ago)
This was an awesome and easy to follow presentation. Thank you. I have noticed that more and more professionals working in this field are some of the most lucid and eloquent speakers. Thanks again.
@dr.mikeybee (1 year ago)
Well done! These are the best descriptions of overfitting and regularization I've heard/seen. Your example of testing loss makes it clear why we take checkpoints. Every topic you cover has a great thought-provoking graphic, and each example is just right for the topic.
@billhab1 (1 year ago)
Hello, my name is Moro and I am enjoying your class from Ghana. A big thank you to all the organizers of such an intellectually stimulating lecture series.
@jamesannan4189 (6 months ago)
Just perfect!!! Can't wait for more amazing lectures from you. Well done!!!
@amitjain9389 (1 year ago)
Hi Alex, thanks for sharing the 2023 lectures. I've been following your lectures since 2020 and they have helped me immensely in my professional career. Many thanks.
@jazonsamillano (1 year ago)
I look forward to this MIT Deep Learning series every single year. Thank you so much for making this readily available.
@AAmini (1 year ago)
Thank you!!
@masternobody1896 (11 months ago)
@AAmini I like AI
@user-sg4lw7cb6k (8 months ago)
Great content! Informative, concise, and easy to comprehend. What a time to be alive! Thank you MIT for allowing us to watch high-quality teaching.
@labsanta (1 year ago)
Takeaways:
• [00:09] Introduction by Alexander Amini as a course organizer of Introduction to Deep Learning at MIT, alongside Ava.
• [00:42] The course will cover a lot of material in just one week and provide hands-on experience with software labs.
• [01:04] AI and deep learning have had a huge resurgence in the past decade, with incredible successes and problem-solving ability.
• [01:38] The past year has been the year of generative deep learning: using deep learning to generate brand new types of data that never existed before.
• [02:10] The course introduction video, which was synthetically generated by a deep learning algorithm, was played.
• [03:26] Deep learning can be used to generate full synthetic environments to train autonomous vehicles entirely in simulation and deploy them on full-scale vehicles in the real world.
• [04:03] Deep learning can generate content directly from the language we speak and imagine things that have never existed before.
• [05:04] Deep learning can be used to generate software and algorithms that can take language prompts to train a neural network.
• [06:40] Intelligence is the ability to process information to inform some future decision or action; artificial intelligence is the ability to build algorithms that can do exactly this.
• [07:18] Machine learning is a subset of AI, which focuses specifically on teaching machines how to process data and extract features through experiences or data.
• [07:44] Deep learning is a subset of machine learning, which focuses explicitly on neural networks to extract features in the data to learn and complete tasks.
• [08:11] The program is split between technical lectures and software labs, with updates this year in the later lectures and guest lectures from industry and academia.
• [09:13] Dedicated software labs will run throughout the week, and a project pitch competition will be held on Friday, with significant prizes for the winners.
• [12:13] The fundamental building block of deep learning is extracting and uncovering core patterns in data to use when making decisions.
• [15:11] The perceptron: a single neuron that takes inputs, multiplies them by corresponding weights, adds them together, applies a non-linear activation function, and outputs a final result.
• [17:00] In linear algebra terms, the perceptron equation can be expressed as a dot product of vectors. The sigmoid function is introduced as an example of a non-linear activation function.
• [18:04] More common non-linear activation functions, including the sigmoid function and the ReLU function, and why non-linear activations matter in deep learning.
• [19:28] Real-world data is highly non-linear, so models that capture those patterns need to be non-linear. Non-linear activation functions in neural networks allow for this.
• [21:01] A perceptron uses three steps to get its output: multiplying inputs by weights, adding the results, and applying a non-linearity. The decision boundary can be visualized as a two-dimensional line.
• [23:11] A multi-layered neural network can be built by initializing weight and bias vectors and defining forward propagation using the same three steps as the perceptron. Layers can be stacked on top of each other.
• [27:02] Each node in a layer applies the same perceptron equation with a different weight matrix; the equations are fundamentally the same.
• [28:52] Sequential models can be defined one layer after another, defining forward propagation of information at the layer level.
• [29:18] Deep neural networks are created by stacking layers on top of each other until the last layer, which is the output layer.
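The three perceptron steps and the "a layer is many perceptrons" idea summarized above can be sketched in a few lines of NumPy. This is my own minimal illustration of the equations described in the lecture, not the course's lab code:

```python
import numpy as np

def sigmoid(z):
    # non-linear activation that squashes z into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def perceptron(x, w, b):
    # step 1: multiply inputs by weights
    # step 2: add them together (plus a bias)
    # step 3: apply a non-linearity
    return sigmoid(np.dot(w, x) + b)

def dense_layer(x, W, b):
    # a dense layer is just many perceptrons sharing the same input:
    # each row of W holds the weights of one output neuron
    return sigmoid(W @ x + b)

x = np.array([1.0, 2.0])
w = np.array([0.5, -0.25])
print(perceptron(x, w, 0.1))                 # one neuron's output in (0, 1)

W = np.array([[0.5, -0.25], [0.1, 0.4]])
print(dense_layer(x, W, np.zeros(2)))        # a 2-neuron layer's outputs
```

Stacking `dense_layer` calls (the output of one fed as the input of the next) gives the multi-layer forward propagation described at [23:11].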
• [29:53] A simple neural network with two inputs (number of lectures attended and hours spent on the final project) is used to answer the question of whether a student will pass the class.
• [30:52] The neural network has not been trained yet; it needs a loss function to teach it when it makes mistakes.
• [32:16] A loss function quantifies the network's mistakes and is used to train it.
• [33:22] A loss function can also be referred to as an objective function, empirical risk, or cost function.
• [34:29] Different loss functions suit different types of outputs, such as binary cross-entropy for binary classification and mean squared error for continuous variables.
• [35:32] The neural network needs to find the set of weights that minimizes the loss function averaged over the entire dataset.
• [37:11] The optimal weights can be found by starting at a random place in the space of weights, evaluating the loss function, then computing the gradient of the loss, which points toward the highest loss; stepping against it descends toward the minimum.
• Gradient descent negates the gradient and takes a step in the opposite direction to decrease the loss: compute the partial derivative of the loss with respect to the weights, then update the weights opposite to the gradient.
• The gradient shows how the loss changes as a function of the weights, and computing it is critical to training neural networks.
• Backpropagation computes the gradient by propagating these gradients over and over again through the network, from output to input.
• Challenges in optimizing neural networks include setting the learning rate, which determines how big a step to take in the direction of the gradient.
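The loss-minimization loop in the bullets above can be made concrete with a toy example (my own illustration, with made-up data): fitting a single weight by gradient descent on a mean-squared-error loss, where the gradient is computed analytically (backpropagation automates this same computation for deep networks):

```python
import numpy as np

# toy data generated by y = 3x, so the optimal weight is 3
xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = 3.0 * xs

w = 0.0      # start at an arbitrary point in weight space
lr = 0.01    # learning rate: how big a step to take

for _ in range(500):
    preds = w * xs
    # mean squared error loss, averaged over the dataset
    loss = np.mean((preds - ys) ** 2)
    # gradient of the loss with respect to w
    grad = np.mean(2.0 * (preds - ys) * xs)
    # step in the direction opposite the gradient to decrease the loss
    w -= lr * grad

print(w)  # converges toward the optimal weight 3.0
```

With the learning rate set much higher the same loop overshoots and diverges, which is exactly the trade-off the lecture discusses next.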
• Setting the learning rate too low may converge slowly or get stuck in a local minimum, while setting it too high may overshoot and diverge from the solution. One option is to try out a range of learning rates and see what works best, but there are more intelligent ways to adapt to the neural network's loss landscape: adaptive learning-rate algorithms depend on how large the gradient is at that location and how fast the algorithm is learning.
• [47:24] The labs will cover how to put all the information from the lecture into a single picture that defines the model; for every piece in the model, an optimizer with a learning rate needs to be defined.
• [48:20] Gradient descent is computationally expensive to compute over an entire dataset, so mini-batching can be used to compute gradients over a small batch of tens or hundreds of data points.
• [50:30] Mini-batching allows for increased gradient accuracy, quicker convergence, larger learning rates, and parallelization.
• [51:41] Regularization techniques such as dropout and early stopping can be used to prevent overfitting, i.e. the model representing the training data much better than the testing data.
• [56:45] Training should stop at the middle point between overfitting and producing an underfit model.
• [57:12] Summary of the three key points covered in the lecture: the building blocks of neural networks, optimizing systems end to end, and (upcoming) deep sequence modeling with RNNs and the Transformer architecture.
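The mini-batching and dropout ideas above can also be sketched in NumPy (again my own toy illustration, not the lab code): the gradient is estimated from a small random batch each step instead of the whole dataset, and dropout randomly zeroes activations during training:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy dataset generated by y = 2x
xs = rng.normal(size=1000)
ys = 2.0 * xs

w, lr, batch_size = 0.0, 0.05, 32

for _ in range(300):
    # mini-batching: estimate the gradient from a small random batch,
    # which is far cheaper than using all 1000 points every step
    idx = rng.integers(0, len(xs), size=batch_size)
    xb, yb = xs[idx], ys[idx]
    grad = np.mean(2.0 * (w * xb - yb) * xb)
    w -= lr * grad

print(w)  # converges toward 2.0

def dropout(a, p=0.5):
    # during training, zero each activation with probability p;
    # scale the kept units by 1/(1-p) so the expected value is unchanged
    mask = rng.random(a.shape) >= p
    return a * mask / (1.0 - p)
```

At test time dropout is switched off; early stopping, the other regularizer mentioned, is simply halting this kind of loop once a held-out validation loss stops improving.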
@shriyanshsharma229 (1 year ago)
Thanks for this, Nick.
@RahulRamesh91 (1 year ago)
Do you use any tools to take notes with timestamp?
@labsanta (1 year ago)
@RahulRamesh91 Workflow: 1. Open Transcript.txt, 2. Write bullet points, 3. Copy and paste into the YT comments.
@Mathe_Baendiger (1 year ago)
@RahulRamesh91 chatgpt 😂
@1guruone (1 year ago)
Hi Nick, thanks for adding. Did you use AI/ML to generate it? Regards.
@roba9189 (1 year ago)
Thank you so much! This is the best explanation of deep neural networks that I could find on UKposts.
@sadiarashid7882 (10 months ago)
Thank you so much!!! Everything is so clearly explained and I finally understood how a neural network works. Stay blessed. 👏
@vinayaka.b1494 (1 year ago)
I'm doing computer vision research right now and love to watch these every new year.
@lantianyu1050 (5 months ago)
The best intro to deep learning lecture I've ever heard! Thank you so much!!!
@adbeelomiunu7816 (1 year ago)
I never thought deep learning could be explained so plainly; I thought it had to be complex since it's called deep learning... but you did justice to this, I must admit.
@NStillman (1 year ago)
Greetings from New Zealand. This is amazing. Thank you so much! So excited for these!
@user-eq9zj5bx9m (6 months ago)
Thank you for such an incredible job and for making this available to everyone!
@thecoderui (1 year ago)
This is the first time I have watched a course about deep learning. I want to say it is the best intro to this topic: very organized and clear. I only understood about 75% of the content, but I got what I need to know. Thank you.
@nikkione9901 (9 months ago)
Thanks for making this video ❤
@kushinvestment1851 (1 year ago)
Alexander Amini, you're a gem! I'm taking a Machine Learning course this semester and the lectures are already finished, but when I evaluated myself against the course goals and how much I understood of what machine learning is in general, and deep learning/neural networks specifically, I felt like I either did not attend the class or I'm not smart enough to know exactly what it does. So I ran straight to YouTube, came across your great lecture, and now I know what it is and can apply it to solve a real business-world problem. To be honest, this lecture series is really helpful and well worth attending seriously. A wonderful, easy, and great takeaway of this semester for me! Thank you so much!
@guruprakashram2868 (1 year ago)
In my opinion, what makes a lecture either interesting or boring is not just the content of the lecture itself, but also the lecturer's approach to presenting the material. A good lecturer is one who is able to empathize with the students and present the information in a way that is easy to understand, making an effort to simplify complex concepts. This is what I believe makes a lecture truly worthwhile and enjoyable. Alexander did an outstanding job in making the lecture engaging and captivating.
@AAmini (1 year ago)
Thank you! Glad you enjoyed it, next week will be even better 🙂
@sriram.a1407 (1 year ago)
@AAmini ❤
@JeanLuemusic (1 year ago)
It's the student's job to learn the fundamentals first. Learn how to walk before learning how to run.
@ddaa-te6rz (1 year ago)
person perfect
@acornell (1 year ago)
Awesome lecture and really easy to digest in terms of content, speed, and taking the small moments to re-iterate or go back a bit to bring everyone up to speed. Less lingo == better for new students. Nice work
@woodworkingaspirations1720 (6 months ago)
Beautiful presentation. Very clear and concise. Everything makes sense with just 1 "watch" iteration.
@micbab-vg2mu (1 year ago)
Thank you for the video - it is easy to understand even for non-IT experts.
@capyk5455 (9 months ago)
Amazing delivery and presentation, thank you for sharing this material with us.
@mdmodassirfirdaus4528 (1 year ago)
Thank you very much, Professor, for making this lecture series open to all. Thank you very much again from India.
@circuitlover853 (1 year ago)
Thanks for the great lecture , Mr. Alexander
@aroxing (1 year ago)
The clearest explanation I've ever heard. Thanks!
@ibrahimhasan6619 (1 year ago)
Thanks a lot Alexander! You are doing great! So excited to watch future lectures.
@justinkim7202 (6 months ago)
This lecture is exceptional. Keep them coming!
@AdAstraCan (1 year ago)
Thank you for making this available.
@flimdejong2030 (5 months ago)
Absolutely fantastic. Thank you!
@alexanderinga4430 (1 year ago)
Hello World!
@abdalazezali8440 (10 months ago)
Hello😊
@subhrajyotibasu830 (9 months ago)
It's not a hello world thing
@user-dp3ff7dy1l (8 months ago)
Hello human!
@utnapishtim307 (6 months ago)
No
@Abishek_Nair1999 (6 months ago)
@utnapishtim307 😂
@yousefabdelnaby3555 (1 year ago)
Thanks so much for your great explanation and, before that, for sharing the knowledge with all!
@Nobody313 (1 year ago)
I have watched this content since 2018 and I have always learnt something new. Congrats and thank you so much.
@sankalpvk18 (9 months ago)
Thank you so much for making this course accessible for free. I feel so lucky today 🙏
@jimshtepa5423 (1 year ago)
Great video! The MIT faculty has done an exceptional job of explaining deep learning concepts in a clear and understandable manner. Their expertise and ability to break down complex ideas into simple terms is impressive. It's evident that they are passionate about educating and inspiring the next generation of AI and machine learning professionals. Thank you for sharing this informative and engaging video. It's no surprise that it has received such positive feedback from viewers. Keep up the excellent work!
@seanleith5312 (9 months ago)
I stopped watching when he brought Osama on, disgusting, never coming back again.
@yashoswal7899 (1 year ago)
@Alexander Amini Thanks for such an amazing video. I am currently pursuing my Master's and this video came at the very right time. Thanks once again for your work and for publishing the material for students like us.
@haodongzhu8347 (1 year ago)
That sounds very awesome!!! We can see deep learning is changing our world!
@28nov82 (1 month ago)
Thanks for making this introduction session!
@deepaknarang7717 (1 year ago)
Great content! Informative, concise, and easy to comprehend. What a time to be alive!
@ramanraguraman (7 months ago)
Thank you, Sir. I appreciate you from the bottom of my heart for your services.
@MALAYAPH24 (1 year ago)
Thank you so much for a wonderful lecture. Indeed helpful for understanding AI.
@riyaprakash6000 (11 months ago)
Very informative and precise. Thank you very much.
@max333031 (1 month ago)
Thank you for this fantastic information about deep learning! It's really helpful!
@md.sabbirrahmanakash7083 (1 year ago)
I started it today and I will be continuing with you, because I have just started research work on image processing. Thank you.
@nepninja4154 (1 year ago)
Awesome explanation, really loving your way of teaching
@hassal4585 (2 months ago)
Thanks, I have learned a lot from your classes!
@aeronesto (3 months ago)
Such a well put together lecture! It was so easy to understand.
@choir2008 (1 month ago)
Thanks for sharing. Very inspiring.
@sawfhsawfh00 (11 months ago)
Thank you so much Mr. Amini (ممنون از شما - thank you)
@oussamabouaiss7928 (5 months ago)
One of the best courses I have ever seen, congrats.
@syedabdul8509 (1 year ago)
@48:03 The tape context closes when the indentation ends, so the line grads = tape.gradient(loss, model.trainable_variables) may give an error, since the tape is closed after exiting the with context.
@VRVitaly (1 year ago)
Amazing content and education. thank you.
@Djellowman (1 year ago)
Happy to say I knew everything that was discussed in this video! Looking forward to the next one.
@monsineenakapanant4993 (8 months ago)
Thank you for your wonderful explanation.
@VijayasarathyMuthu (1 year ago)
The structure of the course 🔥
@bingo242003 (7 months ago)
The start of my learning in this field ! Wish me luck 🍀
@DhirajPatra (1 year ago)
Wonderfully explained. Thanks a lot.
@hatemsabrey (8 months ago)
Thank you, Alexander and the team, for this great effort. I wanted to ask: what are the prerequisites for this course?
@ronaldagamaescobedo3980 (9 months ago)
Thanks so much, Alexander. It was a great explanation.
@aghilannathan8169 (2 months ago)
Actual legend for making all of this (lecture + labs + lab solutions) accessible and free.
@supergooglestar (7 months ago)
I really loved your lecture. Your lecture is so easy to understand. Thank you for posting this on UKposts
@BurcAKBAS (2 months ago)
Thank you Alexander, this is a quite capable fundamentals lesson.
@swatyk6881 (1 year ago)
Loved the class today. Is there any reading material associated with all that was covered, since lots of new concepts were introduced?
@user-wb2ob1du9i (1 year ago)
Great lecture, explained every aspect and flow of dealing with NNs. It was fun!
@fyk (1 year ago)
Amazing video! Thanks for sharing!
@muratdagdelen8163 (1 year ago)
You are awesome. Thank you very much.
@limuell.3421 (9 months ago)
This is the best lecture I've seen in UKposts about deep learning.
@terryliu3635 (25 days ago)
Omg!!! The courses are awesome!!!
@Lewis77681 (1 year ago)
Your lecture is really easy to understand🔥
@sanjgunetileke8836 (1 year ago)
This is an amazing lecture!! Thank you so much!
@user-qf2oo2ls6s (8 months ago)
Dear Alexander, thank you for your AI course on UKposts! It is the best among all of those on UKposts.
@nikhilsharma1106 (1 month ago)
The amount of effort that has been put into the presentation is highly commendable.
@wagsman9999 (9 months ago)
After watching just a few UKposts videos I have a neural network running on my computer (Python), built from scratch, and no fancy libraries (except NumPy). Forward propagation, non-linear activation, backward propagation, gradient descent... maybe 50 lines of code... that's it. It was able to train itself to recognize handwritten digits (0 - 9) in a couple of minutes. I'm completely blown away - can hardly imagine what serious networks accomplish. Looking forward to this series for a deeper understanding.
@MrTejibaby (1 month ago)
Excellent lecture!!! Thank you!!!
@vin-deep (10 months ago)
Best explanation ever!!!! Thank you.
@aaranyaksantra9933 (7 months ago)
Great explanation! Thank you very much for the knowledge.
@gowripriyathota438 (4 months ago)
Thank you so much. Your lecture helped me a lot.
@SSMDesignsandresearch (4 months ago)
Thank you, sir. The way you explain things is mesmerizing.
@vimukthirandika872 (1 year ago)
Thanks to this course I started my ML journey... Today I am doing an ML Engineer internship... Thank you MIT.
@isaacbawangisah6096 (9 months ago)
Bravo! This tutorial is exceptional.
@marktahu2932 (1 year ago)
You are so clear and the topic is presented so effectively - in one fell swoop you put into plain language what I have been using ChatGPT for. So many pennies have dropped and lights went on - thank you.
@naziagillani6640 (1 year ago)
Excellent. Many thanks for the very good video.
@khalidhasan2624 (2 months ago)
Thank you for your outstanding presentation.
@diary24e (1 year ago)
It's really informative and helpful for us 💙💙
@Isysnation (8 months ago)
Thank you MIT for allowing us to watch high-quality teaching.
@theinvisibleghost141 (8 months ago)
This one lecture contains everything, in depth.
@deep25Dec (1 year ago)
Always waiting for your videos.
@soumenghosh-qj7zl (10 months ago)
Hi @Alexander Amini, I am a graduate of the Master's in Computer Science and Engineering from KUET, Bangladesh. I did my thesis on protein secondary structure determination by RNNs (LSTM & GRU). It took me lots of time and effort to understand the basics of NNs. Moreover, I have a paper published at EICT 2021 in this field. However, as I watched your lecture today, I found you made those complex explanations very easy. I really appreciate your work. I know I still have much to learn about NNs, but if there is a chance to work with you or any way to reach you, I would be very grateful. Thanks. Soumen Ghosh.
@sanchaysat9944 (1 year ago)
Hi! It is a very interesting introduction video. I'm now working at a small company in my country as a DS/ML specialist. It's helping me improve my chances of getting a job in a foreign country and being part of the AI world. Thanks for sharing with us!
@jennifergo2024 (4 months ago)
Thanks so much for sharing materials.
@Dannydrinkbottom (1 year ago)
This is why I like MIT. Open-source knowledge.
@Chuspal (1 year ago)
I am 16 and from a rural part of South Asia, and I will be forever thankful for the resources available on UKposts thanks to Ivy League universities like MIT (6.S191), Harvard (CS50) and others. Thank you so much for making these resources publicly available free of cost. I owe a debt of gratitude to all.
@AA-fy7kn (1 year ago)
Awesome! Great way of teaching!
@jj2006h (8 months ago)
@AAmini Thank you very much for a detailed masterpiece. I am watching this video repeatedly to understand each second. Up to 30 min, I am clear.
@arunprasad77 (2 months ago)
Great information in simple explanations.
@spacecowboy7549 (1 year ago)
Great study material for beginners in deep learning.
@kanchangangwar4964 (1 year ago)
It is a very interesting lecture. Thank you for this.
@technowey (1 year ago)
Thank you for posting this. I'm a retired electrical engineer who spent much of my career doing software. I'm excited and motivated, as well as concerned, by AI breakthroughs.
@HilalShaath (1 year ago)
Alexander, I am a Kaggle expert (two bronze, one silver and counting). This lecture is the clearest explanation of deep learning that I have come across; thank you so much for sharing it. I hope you are considering writing a book about the topic. The clarity with which you explained this is remarkable. Best wishes for continued success.