Python + PyTorch + Pygame Reinforcement Learning - Train an AI to Play Snake

325,537 views

freeCodeCamp.org

1 day ago

In this Python Reinforcement Learning course you will learn how to teach an AI to play Snake! We build everything from scratch using Pygame and PyTorch.
💻 Code: github.com/python-engineer/sn...
✏️ Course developed by Python Engineer. Check out his UKposts channel: / @patloeber
🎨 Art by Rachel: rachel.likespizza.com/
⭐️ Course Contents ⭐️
⌨️ (0:00:00) Part 1: Basics of Reinforcement Learning and Deep Q Learning
⌨️ (0:17:22) Part 2: Setup environment and implement snake game
⌨️ (0:40:07) Part 3: Implement agent to control game
⌨️ (1:10:59) Part 4: Create and train neural network
🎉 Thanks to our Champion and Sponsor supporters:
👾 Raymond Odero
👾 Agustín Kussrow
👾 aldo ferretti
👾 Otis Morgan
👾 DeezMaster
--
Learn to code for free and get a developer job: www.freecodecamp.org
Read hundreds of articles on programming: freecodecamp.org/news

COMMENTS: 217
@murtazabadshah8747 · 24 days ago
Everyone's commenting that it's an excellent video, but IMO this tutorial is awful! The instructor does not explain the process; he's all over the place, going back and forth, and just rushes through the concepts. If you want to blindly follow an online tutorial, watch this video; if you want to actually learn the concepts, I would look somewhere else....
@30DaysMonkMode-ft1kf · 13 days ago
Exactly.
@ShtBall5 · 2 days ago
Any video recommendations?
@30DaysMonkMode-ft1kf · 2 days ago
@@ShtBall5 You can check this video: "Learn Pytorch for deep learning in a day. Literally."
@murtazabadshah8747 · 18 hours ago
@@ShtBall5 Yeah, I ended up watching this video, which helped me a lot in understanding the basics of the Q-learning algorithm and how the state and agent work: ukposts.info/have/v-deo/qKGprXqMapylxKc.html&ab_channel=Bits%26Neurons
@edwintjoa6099 · 1 year ago
Thanks for the awesome video! Really fun to see the agent improving over the games.
@HRDmonk · 9 months ago
Thank you so much for this tutorial. I have always wanted to introduce Snake to ML, and now I'm looking forward to learning more about PyTorch.
@mihailmihaylov988 · 1 year ago
A nice video. My only critique is that the presenter kept writing code for almost an hour without running it. That doesn't set a good example.
@user-de3oj1xw8u · 3 months ago
Need to be patient 😊
@DroopTheSnoot · 2 months ago
Debugging should be done every function or so.
@pedroklain9375 · 1 month ago
Bro is just too good at coding, he runs it in his head 😂
@mihailmihaylov988 · 1 month ago
@@pedroklain9375 It's not about what he can or cannot do. It's about the example he sets. After all, this is an instructional video.
@electricspeedruns6121 · 1 month ago
Friend, AI and regular programming code are a bit different @@mihailmihaylov988
@willchantemduang5871 · 2 years ago
I've always wanted to do this, thanks a lot for the tutorial.
@nesimtunc · 2 years ago
Always very high quality videos right when I need them :) Thanks a lot! Looking forward to finishing the video 😎
@Oncopoda · 2 years ago
Excited to try this!
@adrian46647 · 1 month ago
Awesome, it's so hard to find this kind of explanation of DQN. All clear, with a great balance between the theory and the coding for beginners in RL.
@khalidelgazzar · 5 months ago
Watched the first 4 minutes, and the game and the learning process are fantastic! 🎉 Will go on with the rest hopefully soon.
@khalidelgazzar · 4 months ago
Part 2 - 17:24
@khalidelgazzar · 4 months ago
21:30
@veronicasalazarpereira9933 · 11 months ago
I did it!!! Thanks for showing this.
@dabyttamusic7308 · 1 year ago
Super great video! I ran it, and after more than 2000 games the average score seems to have reached a plateau of about 30. It avoids the boundaries very well, but it hits itself when the tail gets too long; that move should be predicted in advance.
@user-fb8tr3kb8j · 6 months ago
To avoid the self-collision we have to add the positions of the snake's body parts to the game state somehow. In the current model it doesn't know where the body is, so it will never learn how to avoid it.
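A minimal sketch of one way to do that, assuming the tutorial's Point helper, BLOCK_SIZE constant, and is_collision(pt) method (the function name and radius here are illustrative); the extra values would be appended to the 11-value state, and Linear_QNet's input size grown to match:

    # Hypothetical helper: sample an N x N "danger map" around the head
    # (1 = wall or body there), so the body becomes visible to the network.
    def get_danger_grid(game, radius=2):
        head = game.snake[0]
        grid = []
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                pt = Point(head.x + dx * BLOCK_SIZE, head.y + dy * BLOCK_SIZE)
                grid.append(1 if game.is_collision(pt) else 0)
        return grid  # (2*radius+1)**2 extra inputs, e.g. 25 for radius=2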
@Hiyori___ · 1 year ago
Very interesting tutorial. I was familiar with Snake implementations, TensorFlow basics, and RL principles, so I could follow along. For some reason the plot doesn't appear, and I only get a small blank window where the plot should be. Probably a bug introduced while typing the code in. Other than that, everything works for me. Thank you very much!
@lostsoul4393 · 6 months ago
Just in case anyone else has this problem: you need to add plt.pause(.1) at the bottom of the plot function.
@stillrunning9841 · 4 months ago
Hey there, the same happened for me! Did you find the error? I'm still trying to find it :)
@lillymoreau7721 · 4 months ago
@@stillrunning9841 This code worked for me:

    import matplotlib.pyplot as plt

    plt.ion()

    def plot(scores, mean_scores):
        plt.clf()  # Clear the current figure
        plt.title('Training...')
        plt.xlabel('Number of Games')
        plt.ylabel('Score')
        plt.plot(scores, label='Scores')
        plt.plot(mean_scores, label='Mean Scores')
        plt.ylim(ymin=0)
        if scores:
            plt.text(len(scores) - 1, scores[-1], str(scores[-1]))
        if mean_scores:
            plt.text(len(mean_scores) - 1, mean_scores[-1], str(mean_scores[-1]))
        plt.legend()
        plt.draw()
        plt.pause(0.1)

@radiatian4908 · 3 months ago
Super late reply, but I had this issue and it turns out the video is missing two lines of code. In his files he also has

    plt.show(block=False)
    plt.pause(.1)

as the last two lines. Hope this works for you too.
@techarchsefa · 1 month ago
That is so smooth bro, thanks.
@sergiomollo · 1 month ago
Thanks for this video, it resolved all my doubts.
@pactube8833 · 2 years ago
Thanks to freeCodeCamp for making this possible.
@user-de3oj1xw8u · 3 months ago
A video as valuable as a playbook 👍🏻👍🏻👍🏻
@devnull711 · 1 year ago
Incredible work, thank you Patrick! PS: It is very funny to spot the typos/bugs before you do :)
@Fr4nk4000 · 1 year ago
Following this since I want to make a Pygame project of mine, one I've poured a lot of time into and have no idea what to make the game about, play itself. Wish me luck.
@jaibhagat7441 · 1 month ago
Have you made something?
@aliengineroglu8875 · 6 months ago
Thank you for your great work. I couldn't understand the equation you created in the video from the simplified Bellman equation: Q_new = reward + gamma * torch.max(model(next_state)). In this equation, model(next_state) gives us a probabilistic action prediction. I couldn't understand why we add one of the action probabilities to the reward. This is totally different from the Bellman equation. I would be very happy if someone could explain how the original Bellman equation was simplified this way. Thanks in advance to everyone.
@alanlam6302 · 6 months ago
Same here. I'd appreciate it if someone could provide a reference for this implementation.
@wangking7427 · 1 month ago
The Linear_QNet model outputs a vector of Q-values for the three actions. It's not a probability for each action but the Q-value of each action; we take the action with the highest Q-value at each step. The Q-value in this tutorial is the expected reward of the action.
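For reference, a minimal sketch of the simplified one-step Bellman target the video uses, mirroring the tutorial's QTrainer logic (the function and variable names here are illustrative):

    import torch
    import torch.nn.functional as F

    # One-step Q-learning target:
    #   Q_new = r                            if the episode ended
    #   Q_new = r + gamma * max_a' Q(s', a') otherwise
    def q_learning_loss(model, state, action_onehot, reward, next_state, done, gamma=0.9):
        pred = model(state)              # Q-values for the 3 actions in state s
        target = pred.clone().detach()   # detach: only pred should carry gradients
        q_new = reward
        if not done:
            q_new = reward + gamma * torch.max(model(next_state)).item()
        target[torch.argmax(action_onehot).item()] = q_new  # update taken action only
        return F.mse_loss(pred, target)  # loss the optimizer minimizes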
@dollarfree531 · 2 years ago
Very cool video! Highly useful!
@simpleepic · 7 months ago
Great tutorial.
@kevas777 · 7 months ago
Nice algo, but how do you solve the self-destruction problem, where the closest cell to move to is inside a loop of the snake's own body? I think the state would have to be the whole field with every body part, the head, the food, etc., but that is an endless number of unique states, and it would never learn. Or am I wrong?
@vikramganesan · 1 year ago
How would you apply UCB, optimistic initial values, and dynamic programming approaches in this model?
@manojkothwal3586 · 2 years ago
Brilliant 🔥🔥🔥🔥🔥🔥
@markkiryaflex · 2 years ago
I think this project is awesome.
@pyshine_official · 2 years ago
Nice effort, and it seems convergence is achieved to some extent!
@gattorwichar3984 · 2 years ago
This instructor is the best.
@lukasgamedev · 6 months ago
Hello! Is there a way to save the state of the neural model, so I can later load a trained enemy AI ready to be the player's opponent? Thank you for the video!
@TanmayBhatgare · 4 months ago
I think you can use torch.save(model.state_dict(), 'rl_model.pth') to save the model, and

    model = YourModel()
    model.load_state_dict(torch.load('rl_model.pth'))

to load it. Hope this helps.
@vinayakbhandage8319 · 2 years ago
You never fail to amaze us ✌
@cesarortegonavacerrada9065 · 1 year ago
Very good video. I need help with the homework: how can you avoid the loops?
@Miyuru_ · 1 year ago
Same problem, it's getting stuck in the same loop.
@hcc3904 · 1 year ago
@@Miyuru_ Overfitting... just add another reward condition, e.g. if it doesn't eat an apple within 10 seconds, give it -10 points.
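The course code already does something similar with a length-scaled frame budget; a minimal sketch of such a stall penalty inside the game's play_step(), assuming its frame_iteration counter:

    # End the episode with a negative reward if the agent stalls too long;
    # this discourages endless loops. The 100 * len(self.snake) budget is
    # one reasonable choice, not the only one.
    if self.is_collision() or self.frame_iteration > 100 * len(self.snake):
        game_over = True
        reward = -10
        return reward, game_over, self.score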
@codingfan · 1 year ago
Really interesting course!
@python0tutorial100 · 10 months ago
Thanks for the video, it was great, but I have one question if you can help: after I run the agent, around the 3rd game it gets stuck, and the console suddenly starts counting up the number of games without the game actually playing.
@yavarjn2055 · 16 days ago
A lot of the code needs more explanation. There is a disconnect between the theory and the implementation: we copy all the parameters to this and to that without understanding why. The memory part is not well explained.
@30DaysMonkMode-ft1kf · 13 days ago
Yeah. He isn't explaining, just repeating what he is writing.
@Coding_Destini · 1 year ago
What do I need to install before creating an environment? I'm confused.
@filoautomata · 4 months ago
What about using a GA to train the NN itself? It would make a very interesting comparison, no?
@cadewzan · 5 months ago
There's no need to wait a long time for training: in the game script you can just change the speed variable from 30 to 1000 so the snake moves much faster and trains itself in less time.
@MyArtsProduction · 4 months ago
He actually mentions that at 1:38:00.
@stefanb4340 · 1 year ago
This might be a stupid question, but how would one go about saving this trained model and accessing it for further training?
@brayanfernandes371 · 1 year ago
You can save the weights.
@sanghoututorial3878 · 1 year ago
@@brayanfernandes371 How?
@segovemoc4776 · 1 year ago
@Biglyp I actually wanted to ask a similar question. I save the weights and load them via self.model.load_state_dict(torch.load(model_saved_path)) and it works, but the problem is that the snake then underperforms. It learns much quicker than before, but it is still significantly worse than during training.
@segovemoc4776 · 1 year ago
Never mind, I found the answer in the later comments: you need to adjust the epsilon value to something very small after loading the model.
@gmancz · 8 months ago
@@segovemoc4776 It works, but you need to set the agent's game counter to a big value (e.g. 280): self.n_games = 280. As you remember, there is exploration vs. exploitation in the get_action() function: we update the epsilon value using the game counter and then draw a random number between 0 and 200. That line is what makes your loaded snake so bad.
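Putting the thread together, a minimal resume-training sketch, assuming the tutorial's Agent class, its model/model.pth save path, and its epsilon schedule (epsilon = 80 - n_games in get_action()):

    import os
    import torch

    agent = Agent()
    model_path = os.path.join('model', 'model.pth')
    if os.path.exists(model_path):
        agent.model.load_state_dict(torch.load(model_path))
        # With epsilon = 80 - n_games, a large game count disables most
        # random exploration for an already-trained snake.
        agent.n_games = 280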
@Radu · 2 years ago
Really nice!
@sidheshwartiwari9834 · 2 years ago
Look who is here, none other than the mighty Radu sir 🙏🏻
@Radu · 2 years ago
@@sidheshwartiwari9834 haha :-) funny!
@angelosorte5464 · 7 months ago
Is there a limit where it stops learning? I mean, will the quality of the intelligence plateau at some point, or will it keep improving more and more after those 12 minutes? Thanks.
@damienlmoore · 6 months ago
At 34 mins, it breaks out of the game and starts gobbling up your filesystem. At 762, it breaks out of the machine and comes for you. 😅
@goldenmarketing3179 · 10 months ago
Does this tutorial teach how to write the "inference" part myself, without using a library?
@ismailmatrix1 · 2 years ago
What's your VS Code theme? It looks gooood
@soraaoixxthebluesky · 9 months ago
I just want to know: after training, how can we load back a trained model?
@carolinab9945 · 3 months ago
Is it reinforcement learning even if you give some instructions about the movements?
@luthermillamuculadosreisec3844 · 1 year ago
Thank you
@GenkiKuri · 21 days ago
Awesome!!
@ckelsel8198 · 11 months ago
Thank you very much.
@vespervenom2343 · 1 year ago
I copied it code for code and it doesn't work. It gives me no errors but just runs and then ends with no result 🤦🏽‍♂️🤦🏽‍♂️🤦🏽‍♂️🤦🏽‍♂️🤦🏽‍♂️
@kabrailchamoun8974 · 2 years ago
Could you bring us a crash course on Blender (the 3D modelling program)?
@slamsandwich19 · 1 year ago
Python has nothing to do with 3D modeling though.
@muhammadnaufil5237 · 1 year ago
@@slamsandwich19 It does. Blender has a Python library called pyblender. I am looking forward to a good Blender tutorial and how to integrate it with Python.
@sunyog · 1 year ago
What are the prerequisites for the video?!
@xccds · 1 year ago
Very great course, thanks! A little question: in the QTrainer class, should it maybe be target = pred.clone().detach()?
@superfact8751 · 10 months ago
Hi bro, where are you from?
@user-zo4cx8yi3g · 3 months ago
Thanks, interesting!
@canerunafraid9491 · 7 months ago
The snake moves smoothly, but when it hits the first wall the interface closes, idk :/
@Lenslyfe · 2 months ago
It's not meant to wrap. That's the correct behaviour.
@fitybux4664 · 11 months ago
38:13 "It has to be [0,0,1]"
@smkzachatac · 6 months ago
So, my state array is apparently returning NoneType. Anyone know a fix?
@gukeskl3671 · 1 year ago
How can I get this terminal? :o
@sakshirokade6321 · 1 year ago
Which algorithm is used in this?
@S3R43o3 · 1 year ago
So far so good... but what about loading the old 'brain' if you exit the game and want to resume later? I'm a bit confused. I tried

    if os.path.exists('model/model.pth'):
        self.model.load_state_dict(torch.load('model/model.pth'))
        self.model.eval()

in the Agent init, but that doesn't work.
@vikramganesan · 1 year ago
Did you find a way to load the previously trained model?
@S3R43o3 · 1 year ago
@@vikramganesan Nope, sorry.
@RORoMiguel · 1 year ago
It seems like we need to remove the epsilon exploration when loading the model, otherwise it will keep making random moves.
@jeremiebouchard6950 · 1 year ago
@@RORoMiguel You're right, it works for me :) Thanks! First game: score of 33.
@polllloigyv · 8 months ago
I think it doesn't work because you wrote / instead of \.
@usernumber334 · 2 years ago
true
@spyrav · 2 years ago
thanks!
@eugenmaljas7237 · 1 year ago
I want to modify the program: how do I make 4 outputs? I would like to integrate the snake length.
@arnavgupta8376 · 1 year ago
I think you can do so if you change line 20, i.e. 'self.model = Linear_QNet(11, 1024, 3)', replacing the 3 with 4, and in line 90, 'final_move = [0,0,0]', add another zero. Maybe there are some more things you need to do.
@Ragnarok540 · 2 years ago
My snake became self-aware, any ideas to stop it from taking over the world?
@matthewwisdom426 · 2 years ago
😂😂😂😂😂 Mine has hacked Russia; it's now impersonating Putin and is about to start World War 3.
@sidheshwartiwari9834 · 2 years ago
🤣
@sidheshwartiwari9834 · 2 years ago
@@matthewwisdom426 lmao 🤣🤣 I am dying 😂😂
@mtijohn9274 · 2 years ago
yes
@dexterbarrot5226 · 1 year ago
What text editor?
@deepakpaira0123 · 1 month ago
How do you save the model...? Where can I find out?
@godswillhycinth9809 · 2 years ago
I'm a JavaScript developer who knows nothing about Python, but I must say I'm jealous 😭😭. This is really cool.
@Ragnarok540 · 2 years ago
Python is way easier than JavaScript. Give it a try.
@Loug522 · 2 years ago
Yeah, if you already know other languages (especially OOP ones), then Python shouldn't take more than a few hours to learn most if not all of its basics.
@delete7316 · 2 years ago
I'm jealous of you. Python is super easy to learn.
@NationalistVietnamese · 6 months ago
I used your code and trained it at speed 60000 (just modify the game.py file).
@pranjalshukla8096 · 1 year ago
18:58 For me it was not showing as (pygame_env)... I am a Windows user. (edit) It worked: I had to open a CMD outside of VS Code and then follow the conda steps for pygame_env; now I am getting "(pygame_env) C:\Users\........."
@bobjeff6779 · 1 month ago
How did you get it to say pygame_env?
@pranjalshukla8096 · 1 month ago
@@bobjeff6779 When you create a venv (virtual environment) you can name it; to make a venv with a different name, run python -m venv myproject
@googlyeyes3237 · 2 years ago
Can we speed the game up so much that the AI can play faster and learn faster? Basically, so it can finish 100 games in a minute or two.
@sidheshwartiwari9834 · 2 years ago
Absolutely, you just have to increase the render rate. Good question though.
@tlefevre2918 · 7 months ago
It does that when it is drawing the chart; I sped it up by making it only update the chart every 100 games. @@guillaumelotis530
@keethesh2270 · 6 months ago
Just change the tick speed to a higher one. It is at the top of the script and is called "SPEED".
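For instance (a hedged sketch; the tutorial's game file defines a SPEED constant, though its exact default may differ):

    # Top of the game script: the Pygame clock is ticked with this value,
    # so a larger number means more frames, and training steps, per second.
    SPEED = 1000  # raised from the small default used for human-watchable play

    # later, inside play_step():
    self.clock.tick(SPEED)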
@baslifico · 3 months ago
I think handing the fundamental flaw in the design to others as "homework" is a bit stiff... You're asking a neural network to solve an NP-hard problem.
@bipinmaharjan4090 · 2 years ago
Found a new hobby.
@philtoa334 · 2 years ago
Crazy.
@fizipcfx · 1 year ago
Why not just give the AI the full board information as the state?
@marcbringas1006 · 2 years ago
Where is he typing the commands? Is this CMD?
@science_horizonts · 1 year ago
Do you mean the source code of the program or the command to run it? If the latter, then yes, this is the command line (CMD), but you need to navigate in it to the folder where you have the project; CMD is somewhat like an explorer that moves through folders.
@cryptomoney6901 · 1 year ago
There is a small bug where the snake will eat itself if you press the key for the opposite of the direction it's going in. To prevent this from happening, change this code:

    if event.type == pygame.KEYDOWN:
        if event.key == pygame.K_LEFT:
            self.direction = Direction.LEFT
        elif event.key == pygame.K_RIGHT:
            self.direction = Direction.RIGHT
        elif event.key == pygame.K_UP:
            self.direction = Direction.UP
        elif event.key == pygame.K_DOWN:
            self.direction = Direction.DOWN

to:

    if event.type == pygame.KEYDOWN:
        # 2nd condition prevents the snake from eating itself!
        if event.key == pygame.K_LEFT and self.direction != Direction.RIGHT:
            self.direction = Direction.LEFT
        elif event.key == pygame.K_RIGHT and self.direction != Direction.LEFT:
            self.direction = Direction.RIGHT
        elif event.key == pygame.K_UP and self.direction != Direction.DOWN:
            self.direction = Direction.UP
        elif event.key == pygame.K_DOWN and self.direction != Direction.UP:
            self.direction = Direction.DOWN

@slamsandwich19 · 1 year ago
He fixed that later in the video.
@joe_hoeller_chicago · 2 months ago
Cool vid 😊
@planetoday9169 · 1 year ago
Can someone help me with how to create multiple snakes?
@feliuswyvern7189 · 11 months ago
So I had the program run over 2000 games and it couldn't get past the 80s mark. How would I get it to improve past that?
@taraskhan4755 · 9 months ago
There is no way to improve it unless you have coding knowledge.
@Miyuru_ · 1 year ago
Can someone tell me how to save the trained data?
@__________________________6910 · 2 years ago
omg patrick wow
@patloeber · 2 years ago
yeah :)
@joekanaan2548 · 2 years ago
Hi, I have a question. I followed along and wrote everything. When I run it in the terminal it works, but the plot doesn't: it gives me an empty, minimized white screen. If someone else has experienced this, please help.
@AlexandruTunschi1 · 2 years ago
There are 2 lines missing:

    plt.show(block=False)
    plt.pause(.1)

@RANDOM_DUD-qj3jd · 3 months ago
Where do these go? @@AlexandruTunschi1
@nescreation · 2 years ago
Wow
@piyushsrivastava7636 · 2 years ago
Can we create an AI bot to play a dice game?
@sidheshwartiwari9834 · 2 years ago
Well, it's already done; almost every Ludo game has a bot. In fact you don't even need AI or ML for it, as the conditions are so few that it can be hard-coded.
@RANDOM_DUD-qj3jd · 3 months ago
No windows opened when I ran it, and there weren't any errors either. How do I fix it?
@bobjeff6779 · 1 month ago
Did you get it?
@serverautism2268 · 1 year ago
After 6000 games it averages 33.4.
@yarinh8417 · 5 months ago
Does someone know how to improve the snake so it does not collide with itself or loop over itself?
@aliabbas-xs6qm · 4 months ago
Discard the AI and use a Hamiltonian cycle, kind of like Code Bullet did in his video. It won't be an AI, but it won't loop or kill itself.
@pepeCastillo · 2 years ago
Jobs is alive?
@dereinedudeda5298 · 4 months ago
Greetings from Germany
@user-wz2xn4ni2r · 1 year ago
I want it to play Minesweeper.
@omidelahi6196 · 1 year ago
I got this error: ValueError: not enough values to unpack (expected 3, got 2) on line 118 in agent.py
@ericb9056 · 1 year ago
I got that as well; I had forgotten to type out the first value of the corresponding return in the game file. Check that you have return reward, game_over, self.score at around lines 82 and 96.
@omidelahi6196 · 1 year ago
@@ericb9056 Thanks dude
@bindiberry6280 · 7 months ago
Do you legally own the AI you trained?!!
@mesh8349 · 5 months ago
yes
@xrhstos1330 · 1 year ago
Very interesting topic, but a very bad teacher. He explains almost nothing, and the little he does explain he covers very briefly, as if we already knew all of it and were just doing a revision.
@30DaysMonkMode-ft1kf · 13 days ago
Exactly. Do you know any good tutorials for this?
@chandramoulidasari3946 · 2 years ago
What was the theme?
@mohammedismail6872 · 1 year ago
Dumb question, I know, but what is the box he is using to code? Is it CMD, Windows PowerShell, etc.?
@melvinliew2426 · 1 year ago
It might be CMD, but for me it's the Anaconda prompt.
@washyb · 2 months ago
He's using some CMD on a Mac, I think.
@AhmadBakdash07 · 2 years ago
Not first 😑
@Peaceful-er4vf · 1 year ago
12:00
@patr2002 · 1 month ago
16:51
@patr2002 · 19 days ago
40:15
@patr2002 · 18 days ago
46:51
@patr2002 · 17 days ago
50:22
@aoebb5021 · 1 year ago
5:31
@Agesilas2 · 4 months ago
Set the video speed to x1.25 or x1.5, thank me later.
@unionid3867 · 5 months ago
The training time is very, very long.
@the_person · 2 years ago
0:00 white man jumpscare