A.I. Learns to Drive From Scratch in Trackmania

7,295,729 views

Yosh

1 day ago

I made an A.I. that teaches itself to drive in the racing game Trackmania, using machine learning. I used Deep Q-Learning, a reinforcement learning algorithm.
Again, a big thanks to Donadigo for TMInterface!
Contact:
Discord - yosh_tm
Twitter - / yoshtm1
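As a rough illustration of what the Deep Q-Learning mentioned in the description boils down to, here is a minimal sketch of its two core pieces, the Bellman target and epsilon-greedy action choice. The function names and constants are invented for illustration and are not from the video:

```python
import numpy as np

def q_target(reward, next_q, done, gamma=0.99):
    """Bellman target for one transition: r + gamma * max_a' Q(s', a')."""
    return reward + (0.0 if done else gamma * float(np.max(next_q)))

def epsilon_greedy(q_values, epsilon, rng=None):
    """With probability epsilon take a random action, else the greedy one."""
    rng = rng or np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))
```

In a full DQN these targets would supervise a neural network; here they are shown standalone to keep the sketch self-contained.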

COMMENTS: 2,600
@bgosl 2 years ago
Great video - I think your explanations and illustrations explain some tricky concepts in a super understandable way!

As to your issues at the end: I think maybe it's related to the way your rewards are structured. Looking at the illustration around 3:30, there's a massive reward associated with cutting a corner: while going along a straight bit of road, it's getting rewards like 1.4, 1.6, 1.7 - but once it cuts a corner, it suddenly gets 8.7 in one step. So it makes a lot of sense that it learns to always cut corners aggressively, since that increases reward by a lot. But going quickly on the straights, which it seems not to like doing, doesn't in itself carry much more positive reward. Since you're using discounted rewards to evaluate the expected reward of each action, you will see a slightly higher reward from moving further along - but relative to the rewards seen if it finds another corner to cut a little more, it's quite small. So it might just be favoring minor improvements to a corner-cut over basically anything else, including just pushing the forward button on a straight.

I think restructuring your rewards could help. An obvious improvement would be to give rewards not relative to the midline of each block, but to place rewards along the optimal racing line - but at that point, are you even learning anything? You're just saying "you will get an increased reward if you follow my predetermined path", which to me isn't really learning. An intermediate step would be to place the reward for each 90-degree corner at the inside corner of that block (maybe with a small margin from the actual edge): that should reduce the extreme impact of cutting corners versus going fast on straights, while still being quite far from just indirectly providing the solution.

Also: unless you just didn't mention it, I don't think you have a negative reward at each timestep? That's typical for a "win, but as fast as possible" scenario, which is the case here. It would make sense, too: going in the right direction but super slowly is kind of like going backwards, so it should also be penalized. That might even eliminate the need for negative rewards for going backwards: by proxy, going backwards always leads to taking more time, which leads to more negative reward. You might even have to remove the explicit negative rewards for going backwards, as going backwards and going slowly might otherwise see the same net reward, which would leave the agent puzzled/indifferent between the two. In the end, getting to the finish in less time leads to the maximum reward.

Finally, of course: introducing the brake button would give you possibly improved times - and might even let the agent learn some cool Trackmania tricks like drifting (tapping brake while steering) to go around corners faster. It does increase the action space though, which of course means longer training time. But something to consider if you want to iterate on this!

Regards, someone who went to UKposts to procrastinate from his reinforcement learning course and ended up using some of that knowledge anyway. I guess the algorithm now knows my interests a little _too well_.

PS: really well done on introducing exploring starts! When you got to that part of the video, I almost yelled "exploring starts!" at the screen, and then that's exactly what you decided to do. I'm curious whether that came from knowing that exploring starts are a thing in RL, or whether you came up with the concept just by thinking about it?
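The per-timestep negative reward suggested in the comment above can be sketched as a tiny reward function. Everything here (the `progress_gain` input, the penalty and bonus constants) is a made-up illustration, not the video's actual reward:

```python
def step_reward(progress_gain, finished=False,
                time_penalty=0.2, finish_bonus=100.0):
    """Progress along the track minus a constant cost per tick.

    Standing still now earns a negative reward instead of roughly zero,
    encoding "win, but as fast as possible" without hand-placed racing lines.
    """
    reward = progress_gain - time_penalty
    if finished:
        reward += finish_bonus
    return reward
```

Going backwards then needs no separate penalty: it only delays the finish, so it accumulates more per-tick cost by itself.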
@yoshtm 2 years ago
Thanks for taking the time to write such a long comment ahah, it deserves to be pinned :) I'll try to answer everything.

"it makes a lot of sense that it learns to always cut corners aggressively, since that increases reward by a lot" - Taking turns on the inside is the optimal strategy on this map, so I don't know if it's a problem to have a reward function that favors this. But yes, I also don't like the fact that the reward value varies so abruptly at the corner. As you say, it would probably be easier for the AI to understand rewards if their values were all of the same order of magnitude. Maybe it would be better to directly use the car's speed as a reward (faster = better), but that would not penalize some unwanted behaviors like zigzagging on a straight... (Some people also suggested penalizing the AI if it changes direction too frequently, which could avoid zigzags.)

"place rewards along the optimal racing line" - Yes, I'm pretty sure learning would be way faster with that, and the final result would be closer to what humans do. But as you say, I think it's not "AI learns by itself" anymore :) Of course, the more you show how humans normally play Trackmania, the easier it is for the AI to learn something. I used supervised learning in some other videos and the learning process is way easier and faster. But that's not what I wanted to do in this video; I wanted to leave the AI the freedom to explore any driving strategy, to see what it would choose by itself.

"I don't think you have a negative reward at each timestep? That's typical for a 'win but as fast as possible' scenario" - I don't understand what it would change. With the current reward function I'm using, the AI is already penalized by the fact that it gets much less reward than if it had chosen to go faster.

"introducing the brake button would give you possible improved times" - Yes, the brake gives an advantage, but it doesn't help much on this map: for example, my personal best is 4:44 without brake and 4:40 with brake. So I prefer to try beating the no-brake time before adding more complexity; it's already hard enough ^^ Also, I think it's pretty hard to use the brake and drift correctly in Trackmania, compared to a simple "release" approach.

"I'm curious if that was from knowing that exploring starts are a thing in RL" - Oh, I had no idea there was a name for that in the RL field, good to know ahah
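The "car speed as reward, with a penalty for changing direction too frequently" idea discussed in this reply could look like the sketch below; the penalty coefficient and the encoding of steering as a signed number are assumptions for illustration:

```python
def speed_reward(speed, prev_steer, steer, flip_penalty=0.5):
    """Reward = current speed, minus a cost when the steering input flips
    sign (left <-> right), which discourages zigzagging on straights."""
    flipped = prev_steer * steer < 0  # opposite steering directions
    return speed - (flip_penalty if flipped else 0.0)
```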
@Gotie_beats 2 years ago
@@yoshtm You're response is long to
@eyesofwaldo7015 2 years ago
@@Gotie_beats It is your- responce
@kavebe9629 2 years ago
@@yoshtm How about placing the rewards only on straights, so that the Q-value/reward doesn't depend on how sharply you take the corner? That could be a better fit for real-world rewards, since some turns should be driven wide and others sharp. I really liked your approach and video! Especially going with random starting points to minimize overfitting, instead of some "usual" dropout, was an awesome idea!
@dcode1 2 years ago
@@yoshtm Super interesting video! Not sure if this is possible with TMInterface, but maybe you could build a reward system that "precalculates" a reward value for all points on the track. You could separate the track surface into small sections and then do a breadth-first "discovery" of this grid, where the reward assigned to a section is incremented every time a new section is discovered. It's quite hard to explain, but I did something similar for my AI racing project: ukposts.info/have/v-deo/hadnfa1_ZKxnso0.html This was obviously not done with Trackmania, but maybe the concept can be transferred 🙂
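The breadth-first "discovery" idea from this comment can be sketched on a grid of drivable cells, where each cell's reward is its BFS depth from the start; the grid representation is invented for the example:

```python
from collections import deque

def bfs_rewards(track_cells, start):
    """Assign every drivable cell a reward equal to its BFS distance from
    the start cell, so reward grows monotonically along the track no matter
    how a corner is shaped. `track_cells` is a set of (x, y) tuples."""
    rewards = {start: 0}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in track_cells and nxt not in rewards:
                rewards[nxt] = rewards[(x, y)] + 1
                queue.append(nxt)
    return rewards
```

On an L-shaped strip of cells the reward increases by exactly 1 per cell, including around the corner, so cutting the corner no longer yields a sudden jump in reward.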
@noahhastings6145 2 years ago
*Turning* AI: "I got this." *Straights* AI: "🤷‍♂️ Guess I'll die"
@rvsheesh 1 year ago
Lmao😭🤣
@achtsekundenfurz7876 1 year ago
If the AI has a set "map" from every possible combination of inputs to the possible reactions, the "long straight path ahead" could be completely untrained, with an empty reaction (no gas, no steering). When it finally comes closer to the next turn, that input changes to something it already knows, but maybe the longest straight so far ended with a left instead of a right turn -- resulting in the AI driving off the left side on "purpose."
@zacxzcitoxd2193 1 year ago
@@achtsekundenfurz7876 shut up fucking nerd, u ruined the joke asshole
@IrishCaesar 1 year ago
@@rvsheesh i also hate straights
@shadow_GTAG157 1 year ago
True
@LeoJay 2 years ago
The AI getting scared and slowing down is kinda adorable lol
@svenjansen2134 2 years ago
And then it kills us.
@DarkJusn2020 2 years ago
I'm intrigued that something not even alive can be cute, and yet I 100% agree with this comment
@RNCHFND 2 years ago
It's very human
@utopes 2 years ago
And then they just freeze and hide behind a parent’s leg, as all AI do
@Therevengeforget 2 years ago
Then getting courage to go on... dying 3 seconds after
@Aoi_Haru763 1 year ago
"At one point, it even stops, as if it's afraid to continue. After a long minute, it finally decides to continue, and dies". Story of my life. I feel a connection between me and the AI. Empathy.
@newfreenayshaun6651 1 year ago
Cooler than the other side of the pillow. 😆
@JonathanGillies 1 year ago
Christ died and rose again to pay the punishment for the sins of those who would put their trust in him. Turn from your sins and cry to God for mercy, and you will be given everlasting life! But if not, you will fearfully perish. ;(
@Longchain69 1 year ago
Me on most games
@ameliorateepoch9917 1 year ago
Why is someone preaching religion on an AI video??
@JonathanGillies 1 year ago
@Ameliorate Epoch. Because there are people heading for eternal damnation here just as much as anywhere else, so I will do my best to warn you to flee from the wrath to come, and then at least you have been warned, and if you perish now, you will only have yourself to blame! ;( But please don't ignore my warnings!!!!! Our good deeds can contribute NOTHING to our salvation. When God judges us, he will look to see if we ever broke any of his commandments (like lying, stealing, fornication, hatred, disrespect, using God's name in vain, etc.), and if we have, then we will be pronounced as guilty and the punishment is ETERNAL damnation. He will NOT take into account ANY good deeds that we have done, because it was our duty to always do good anyway, so it is irrelevant. So EVERY one of us is by default heading for eternal damnation, because NONE of us have perfectly kept God's whole law. God is most holy, and perfectly just, and MUST punish EVERY sin that is committed against him. HOWEVER, (good news!) he also delights in mercy, and does not want any of us to have to be punished in a lost eternity forever, so he sent his Son into the world to be punished in the place of all who would put their trust in him and HIS righteousness ALONE for their salvation. So we must STOP putting our trust in our own good deeds to 'outweigh' our bad deeds, and instead put our ENTIRE trust in Jesus Christ's untainted righteousness ALONE. If we do this, and if we wholeheartedly and sincerely turn from our hatred of God and our love of sin, and cry out to God for mercy and forgiveness because of Christ's sacrifice on the cross, then God PROMISES to fully forgive our sins and give us a new nature that will love God and hate sin, unlike our old nature which hates God and loves sin. You can tell whether or not you have been truly saved by asking yourself whether you love God and are broken-hearted if you sin against him, OR do you still love your sins and hate God for not wanting you to do them. 
I hope I see you in Heaven one day. God bless!
@sygneg7348 1 year ago
Never have I felt so much emotion for a programmed robot, but here we are.
@p3mikka709 2 years ago
"but then, the AI got this run" music starts playing
@Atlas_V. 2 years ago
Wirtual vibe
@redholm 2 years ago
*En aften ved svanefossen starts playing*
@Atlas_V. 2 years ago
@@redholm You got it
@bawat 2 years ago
I'm just going to leave this here :P ukposts.info/have/v-deo/iZSQoaN6aJWruKs.html
@adubs. 2 years ago
@@bawat what the fuck
@DarkValorWolf 2 years ago
"and after 53 hours of learning, the AI gets this run" nice Wirtual reference there
@cvf4662 2 years ago
Next time yosh should call him just to say this legendary phrase
@dinospumoni5611 2 years ago
Wirtual's is actually a reference to Summoning Salt
@yassineaadad7716 2 years ago
@@dinospumoni5611 Summoning salt is actually a reference to jojo
@MotigEx 2 years ago
But which run did Hefest get???
@kazuala 2 years ago
Yes
@SelevanRsC 1 year ago
I love how at 7:01 the one car made such a good run that it was shocked at the end by how good it was, and got totally confused, lol
@esmolol4091 11 months ago
It wasn't shocked, it just didn't expect something completely different and didn't know how to cope with it.
@PM-wp6ze 11 months ago
@@esmolol4091 hence the word ‘shocked’
@DustyyBoi 6 months ago
@@esmolol4091 we should totally invent a word for that
@7cpm293 5 months ago
@@esmolol4091 "I'm not breathing, I'm just taking in air."
@TheStormyClouds 1 year ago
I'm so happy that you did the randomized spawn points and speeds. I was worrying that you might simply be teaching the AI how to play a single map by it learning just pure inputs rather than seeing the actual turns and figuring out what to do. I was incredibly impressed with how many made it through the map with all sorts of jumps and terrain types.
@neutralb4109 1 year ago
Have you played this game? I'm curious how much terrain type impacts overall control. Was the AI actually making real-time changes to its behavior, or was it just luck?
@TheStormyClouds 1 year ago
@@neutralb4109 I haven't played the game, but there's no way it was just luck. Just look at the types of jumps and round hills they go over as well. The AI was definitely making real time corrections as it noticed itself getting away from corners and towards edges. It definitely didn't know how to do those jumps, but it knew after going off the jump and getting messed up that it needed to correct its position. It's likely the same with the terrain types. It sees itself drifting out of position, so it corrects by steering more.
@neutralb4109 1 year ago
@@TheStormyClouds nice thanks for your time
@TheStormyClouds 1 year ago
@@neutralb4109 No problem
@TheStormyClouds 1 year ago
@LEO&LAMB It's a very very very complicated calculator LMAO. Deep learning and AI stuff is getting intense. This stuff is gonna look like a basic calculator compared to the AI we end up creating.
@lebimas 2 years ago
The fact that the model with random starting points achieved far more in 53 hours of training than the one with a single starting point did in 100 hours shows the value of choosing random samples across iterations
@breakfast-burrito 1 year ago
I was really concerned in the beginning about the AI training from the same spot; the switch to random spots was such a small change that made a massive difference. Blown away by how much more efficient it was when given a noisier input.
@silentguardian8349 1 year ago
Also shows the importance of diversity in life: different people have different starting points, and collectively they can be more efficient at accomplishing tasks than people who are all alike.
@Archimedes.5000 1 year ago
It shows that blind AI can't be generalized no matter what
@petertolgyesi6125 1 year ago
Maybe I would make a looped track. The AI can then start at random positions and has to do a full lap.
@LunnarisLP 8 months ago
Actually this just occurs very often with lots of models. I used to train an RL agent that used experience replay to reuse old data multiple times from an experience buffer. If the buffer was turned on too early, it kept looking at e.g. the same first 100 runs way too often, creating a massive bottleneck. Only after I delayed the experience replay until 3000 runs were in the buffer, and then added a priority to it (making it a bit less likely to pick the same run loads of times), did it actually show decent improvements over just creating new runs all the time. The method's use case is only when simulating the environment is relatively costly compared to training the neural network: if you can create loads of runs at hardly any cost, why bother reusing outdated data? But if it is costly, it's a cool method.
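A toy version of the delayed replay scheme described above: sampling stays disabled until the buffer holds a minimum number of transitions, and a recency-weighted draw makes early runs less likely to be replayed over and over. The class, capacities, and thresholds are invented for illustration:

```python
import random

class DelayedReplayBuffer:
    """Experience replay that stays 'off' until min_size items are stored,
    then samples with a bias toward recent transitions."""

    def __init__(self, capacity=10000, min_size=3000):
        self.data = []
        self.capacity = capacity
        self.min_size = min_size

    def add(self, transition):
        if len(self.data) >= self.capacity:
            self.data.pop(0)  # drop the oldest transition
        self.data.append(transition)

    def sample(self, batch_size):
        if len(self.data) < self.min_size:
            return []  # too early: keep generating fresh runs instead
        # Linearly increasing weights: newer transitions are drawn more often.
        weights = list(range(1, len(self.data) + 1))
        return random.choices(self.data, weights=weights, k=batch_size)
```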
@eL3ctric 2 years ago
Oh god yes, finally someone tackles the "my AI just learns the track layout" problem by adjusting the layout/starting position. Nice!
@Tom-cq2ui 2 years ago
That was my first concern when I started watching this video, but it was nice to see how it was addressed! I was surprised to see how well it worked too.
@Benw8888 2 years ago
He still trains and tests it on the same map, though... it's bad practice to test/evaluate on the same map/data an AI is trained on. It's possible the AI is still just memorizing possible 2-road arrangements; it's just learning more of those arrangements. Not that this is necessarily a bad thing, if you only care about simple rectangular maps like this one
@eL3ctric 2 years ago
@@Benw8888 yes it's obviously fitted to work on the area it's been trained on.
@dcode1 2 years ago
@@Benw8888 The reason for only using one track might be that each track has to be manually prepared. But it would still be awesome to see how the AI handles different "types" of tracks (non-rectangular ones). I made an AI racing video myself. I did not use Trackmania, but I came up with a system that automatically adds a "reward system" to the tracks, so I was able to train and test on multiple tracks. You can find it here: ukposts.info/have/v-deo/hadnfa1_ZKxnso0.html
@Benw8888 2 years ago
​@@dcode1 great video
@funx24X7 1 year ago
There have been instances of AI finding exploits in games that humans have not found or are incapable of performing. I would love to see a Trackmania AI trained to find insane shortcuts
@SixDigitOsu 1 year ago
This shows how sticking to the same thing doesn't make you improve; you just memorize it. Trying different things makes you improve.
@marijnregterschot7009 2 years ago
I think Trackmania is a great game for practicing machine learning. It has very basic inputs and the game is 100% deterministic. Most importantly, it's just satisfying to watch.
@yoshtm 2 years ago
Yeah the satisfying part is a great motivation :D
@stinkikackepups9971 2 years ago
Or Geometry Dash. It's also very simple in terms of inputs
@Orlanguru 2 years ago
Yeah, so much easier than ElastoMania :)
@polychoron 2 years ago
Why is 100% deterministic a good thing? I wouldn't think so.
@richarddogen7524 2 years ago
@@polychoron 100% deterministic means that, under the same conditions, the same actions will always produce the same results. If the game were not deterministic, i.e. random, you wouldn't get the same result from the same actions under the same conditions. A good example is random encounters in Pokemon or similar RPGs: you may or may not encounter something, even if your team is the same, you start in the same spot, and you walk forward for the same amount of time. Pokemon is random, since you can't tell the outcome in advance. In a deterministic version of Pokemon you would always encounter the monster at the same spot.
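Determinism in the sense explained above just means that replaying the same inputs always reproduces the same trajectory, which is what makes training and comparing runs reliable. A toy, made-up "physics" step shows the property:

```python
def simulate(inputs, speed=0.0, pos=0.0):
    """Replay a fixed throttle sequence through deterministic dynamics:
    the same inputs from the same start always give the same final state."""
    for throttle in inputs:
        speed = speed * 0.9 + throttle  # friction plus acceleration
        pos += speed
    return pos, speed
```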
@the_break1 2 years ago
I really wonder how fast this AI would pass A01, and what its reaction would be on the final jump. Really cool stuff!
@adamnielson42 2 years ago
Unfortunately none of the inputs seem to involve height, so it would likely need to be a modified AI
@ageno_493 2 years ago
Or A07
@whatusernameis5295 2 years ago
or ones that humans struggle with (or even can't do)
@DanielHatchman 2 years ago
@@whatusernameis5295 it wouldn't make it. Maybe one could, but not like this.
@jeremiahjorenby2275 2 years ago
I want to see it beat the author medal on A06
@Linguinesticks 1 year ago
This made me feel better about the machine learning course I dropped out of a year ago. While I don't think I'll ever understand the actual construction or inner workings of machine learning models, it was nice to notice the overfitting problem before the script mentioned it. That's always a pet peeve in machine learning videos, like there's one where someone plays through a game with an ML model, but retrains from the start at each new level because the neural network won't generalize.
@hyper-focus1693 1 year ago
Honestly this recaps humanity: learning, logic, trial & error, problem solving, anticipation, testing, deduction, and so much more. I loved it. I learned so many things that are way beyond the scope of the video. Keep it up. 💪
@xgtwb6473 11 months ago
What did you learn?
@DonatCallens 2 years ago
Suggestion: when you compare human runs versus AI runs, you immediately see a big difference, which is that humans make fewer corrections. The driving style of humans is infused with the biological constraint of energy preservation. I think we could greatly improve the AI's learning by adding a negative cost for the number of input changes it makes...
@fantasticphil3863 2 years ago
Or a negative cost when the car alternates direction more frequently than the track changes direction. Imagine the left/right input of the car as a sine wave with a higher frequency than the sine wave of whether the track is on a left or right turn; if so, the AI is penalized.
@rendomstranger8698 2 years ago
@@fantasticphil3863 No need to consider the number of turns. Just make the reward for distance higher than the punishment for turning left or right. Another improvement would be increasing the reward for distance as the AI gets closer to the record. This would result in the AI prioritizing speed in the early parts of the map, so that it learns more complex situations, while prioritizing distance during the later parts of the map. Especially if a large reward is implemented for breaking the record.
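The "cost per input change" idea from this thread can be sketched by counting how often the chosen action changes over an episode and charging a small cost for each change; the cost constant and the action encoding are arbitrary:

```python
def input_change_penalty(actions, cost_per_change=0.05):
    """Total penalty proportional to how often the action changes,
    approximating the 'energy cost' of corrections mentioned above."""
    changes = sum(1 for a, b in zip(actions, actions[1:]) if a != b)
    return cost_per_change * changes
```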
@StephenKarl_Integral 2 years ago
Also, humans are gamblers (by design): the outrageously "out of safety margins" behavior produces unbeatable performances, yet is unlikely to be reproduced endlessly under a changing context. One may argue AI does actually gamble when trying millions of various attempts, but the thing is, a human remembers "I have great chances to win this specific gamble at this portion of this track," while AI is designed to generalize... That's why most attempts at an AI seriously competitive with a human usually resolve into a specific learning model per context, i.e., one track, one model; another track, another model...
@gemapamungkas7296 2 years ago
y'all stop giving him ideas or we'll have Skynet someday in the future.
@StephenKarl_Integral 2 years ago
@@gemapamungkas7296 Even if it's an off topic excursion, may I just point out the principle of a skynet rise supposedly predating the doomfall of humanity : _An AI designed to predict the future, based on big data going rogue against humanity because the AI got aware of humans being the culprit in the death of this planet._ Such an AI *already exists,* actually, mutiple of them by various companies such as the owner of UKposts. It's a bit late to be afraid of skynet. The thing is, existence of real skynet is not to be feared. At the moment, the main objective of powerful figures controlling them is to *make money and assert dominance* over economics, politics and competition elimination. You have economic sanctions, wars, private companies alliances, shares, licensing, privileges and exclusivity, etc. (I won't be dragged in debates on the ways they use, I only explain the principle) As long as the goal is to *assert dominance,* skynets devs won't go deep in giving *emotions, sense of altruism or self preservation to such AI,* because all its purpose revolves around the usage of large human resources for the interest of the minority of influent wealthy people. And the devs know that, that, if someday, anyone of them tries to design an AI with a sense of _"justice based on feelings",_ that will be the very trigger *to kill all humanity.* My point is : the powerful companies don't want that, meaning, you, me, and the other guys giving advices here on how to make a more "human-like-AI" *will never get hired* by such companies, the "phylosophy" is just not on point. At the same time, we are all here talking about learning AI, but none of us are dev lead in the industry, we just want to make small scale application of AI learning, but at best it enters game lines of code, at worst, a fantasy essay in our private computer never making its way elsewhere. Having a video on YT is already much better, this is entertainment and snacks for the brain. 
Everyone has everything to lose (including you and me) in trying to make the most human-like AI that has access to big data and actually uses it to try to _save_ the planet. That won't happen. Anyway, most skynet disasters depicted in documentaries, movies, anime/mangas and other books/blog articles usually fail to grasp the complexity of such omnipotent global machine rebellion : resources and mantainance logisics. You need various metals and minerals to manufacture the machines, energy and fluids harvesting to make robots move, communications that appears global like SpaceX StarLinq are not, to disable them, you just have to physically destroy the server relays dispatched all over the world and they become inoperative. Simply put, you have chips in your smartphone and computer, thanks to millions of african human workers harvesting the required resources for you and your country. 10000 nuclear warheads exploding on the first 10000 large cities around the world is not enough to erase humanity, it will only impede the machines faction in a way 99% of their infrastructures, logistics and resources are compromised (call that a strategic critical error due to bad programming). And it is always possible to physically disable mechanical components of a machine. I'm always amazed how come (in Matrix and other distopias) machines got the billions tons of metal to manufacture the robots, and no human did care to check what's going wrong. I believe the skynet comment was just a pun (and I'm fine with that, it was funny), but I'm still hard pressed to point out it's still a serious matter where real humans are ruling the world in a way that is unknown to billions of others. You believe presidents or head of states are the powerful figures, you're deeply mistaken, they are mere replaceable puppets. You believe Russia is wrong attacking Ukraine, what you don't know is Ukraine head of states are the ones being childish in the whole thing. 
African countries among others are still poor for the similar reasons, where the private african company heads being the traitors of their own countries... I mean, skynet is a drama fantasy. You can find a little analogy with covid and ebola where a seemingly mass deadly virus could end humanity............ not even close. I'm sad for those who died and those at loss (I'm among them), but life doesn't end there, you must keep going. Likewise, you cannot find the correct course of actions to cure the world, _your_ world (or prevent a skynet rise - for those like me who have such concerns) if you don't understand how it works, what's behind the scene. All you could do is what was taught you through education and mass (social)media, where people are endlessly sharing the same wrong concept and conclusions of peripheral concerns : manipulation (and various AI are designed to raise people inside that illusion). There is no such thing as conspiracy, only reality that is not widely taught because that would disrupt the life stability of weathy countries. The thing is, today, those countries are in deep shit aswell, some greedy figures are late to step down and find a better way to get both interests and still exist (ie, not get bankrupt). At some point, you cannot but give away some of your power to the people, or you die prematurely.
@peekay120 2 years ago
I think the most interesting thing about these kinds of videos is that they really put into perspective just how insane our own brains are: a human player, even one who isn't good at racing games, would complete the track in a tiny fraction of the time the AI requires.
@Gappys5thTesticle 1 year ago
The most interesting thing is that we humans built the AI. We created intelligence out of sticks and stones
@presence5692 1 year ago
In a few years, a properly programmed AI will surpass the best people in a matter of hours at most. We can't beat computers in some regards; TAS proved it.
@andrew_kay 1 year ago
@@Gappys5thTesticle We didn't create any intelligence yet. This AI clearly doesn't have any clue what it's doing. It was like 10,000 blind cockroaches in a labyrinth.
@swagatrout3075 1 year ago
Yeah, but keep in mind that the AI was born and learned this much in about 60 hours, while a newborn baby given a controller can't. If it were given a proper 70-85 years of human life, I wonder what a mature AI it would become. Homo sapiens emerged about 200,000 years ago; that's us. I wonder if AIs could one day make their own AIs and have a civilization of their own, where they want to create some other, different kind of intelligence, maybe a biological one, hence creating humans.
@HanMestov 1 year ago
@Presence isn't TAS just slowing the game down or something like that in order to achieve frame-perfect runs? The human is still putting in the inputs, no?
@karmavil4034 1 year ago
What a lovely story 😍. I'm not just jealous of what you accomplished but also of how you did it: starting from the simple idea, the goal, the experimentation, evaluations and improvements, and an outstanding audio-visual documentation. This is pure gold! Thank you for sharing this topic and the inspiration
@avn6628 1 year ago
9:07 "After a long minute, it finally decides to continue... and dies."
@BryceNewbury 2 years ago
I really enjoyed the explanations of the different training methods paired with the excellent visuals. Keep up the good work, and I can’t wait to see what you try next!
@jeromelageyre5287 2 years ago
What a fun way to learn about machine learning and its variants! Very good video and montages! Very clear and accessible English! The return of yoshtm is more than a pleasure!
@firatagis8132 2 years ago
It might be fun to use different learning algorithms on the same map, exploring which one is good to use in which context, with Trackmania as the medium. It could be really instructive. And because different AIs would be racing each other, it could be really entertaining as well. Bracket style: each AI gets 50-100 in-game hours to learn the map, then the next round is a different map. But that sounds like a lot of computation time
@yungsigurd
@yungsigurd Рік тому
Absolutely incredible video. The amount of effort you put into this is honestly staggering. Keep up the amazing work bro 💓
@zyxyuv1650
@zyxyuv1650 Рік тому
This is one of the best videos ever made for explaining AI to beginners. I hope you make new videos *soon;* 10 months to a year is way too long a delay between videos, especially with so much interest in AI right now that you're missing out on, and people are really missing out on learning from you.
@petros4225
@petros4225 2 роки тому
I like how the AI figures out that by moving in a sinusoidal trajectory rather than a straight line, it covers more distance and thus generates more cumulative reward. Maybe you could penalize unnecessary steering somehow, to make it less wiggly 😜
@japanpanda2179
@japanpanda2179 2 роки тому
Or alternatively, calculate distance based on position on the track, as opposed to actual distance traveled
@tortareka2681
@tortareka2681 2 роки тому
You can also just train it until it's winning consistently, then base the reward on completion time and not survival time. Edit: commented this before watching the video, but when he changed it to this, the wiggle was significantly less.
@demonindenim
@demonindenim 2 роки тому
Actually, the reward system in this video is based on progress along the track, not the distance the car covers. That's why cutting corners provides such a big boost in reward: the car suddenly jumps from one section of the track to another, and the bits of track it cut off get added to the reward all at once, as shown at 3:30. This contributes to the sinusoidal driving, as the AI is constantly looking for corners to cut
@seeibe
@seeibe Рік тому
Wiggling at a certain speed actually leads to faster movement in trackmania
@lukas8385
@lukas8385 Рік тому
Actually, that is not what happens. That's why the AI learns to cut corners
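The corner-cutting effect described in this thread can be reproduced with a minimal sketch: if reward is the increase of projected arc-length along the track centerline, a diagonal cut through a corner banks the whole skipped stretch in one step. All names here are hypothetical; this is not the video's actual code.

```python
import math

def nearest_progress(centerline, pos):
    """Arc-length along the centerline of the point closest to `pos`."""
    best_d, best_s = float("inf"), 0.0
    s = 0.0
    for i in range(len(centerline) - 1):
        ax, ay = centerline[i]
        bx, by = centerline[i + 1]
        seg = math.hypot(bx - ax, by - ay)
        # project pos onto segment a-b, clamped to [0, 1]
        t = max(0.0, min(1.0, ((pos[0] - ax) * (bx - ax) + (pos[1] - ay) * (by - ay)) / seg**2))
        px, py = ax + t * (bx - ax), ay + t * (by - ay)
        d = math.hypot(pos[0] - px, pos[1] - py)
        if d < best_d:
            best_d, best_s = d, s + t * seg
        s += seg
    return best_s

# L-shaped track: cutting the corner jumps the projection forward, so one
# step can bank the reward of the whole skipped stretch at once.
track = [(0, 0), (10, 0), (10, 10)]
straight_step = nearest_progress(track, (5, 0)) - nearest_progress(track, (4, 0))
corner_cut = nearest_progress(track, (10, 3)) - nearest_progress(track, (7, 0))
```

Here one straight-line step earns about 1 unit of progress while the diagonal corner cut earns about 6, which is exactly why the agent hunts for corners to cut.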
@lvjurz
@lvjurz Рік тому
Fantastic video, great explanation of the concepts. It made quite a few things much clearer for me. Thanks for the interesting content.
@sabofx
@sabofx 2 роки тому
Best educational video on the practical implementation of deep learning I've seen on youtube. 🤩 And I've seen a lot! 🤭 Thank you for sharing your knowledge and experience 🤗
@ToToMania
@ToToMania 2 роки тому
I can't even think of how much time went into this video. Amazing visualizations, and a great AI of course. Very interesting to see the learning process. Great work!
@Caterblock
@Caterblock 2 роки тому
A01 but it's by an A.I.
@cparch1758
@cparch1758 2 роки тому
I'm curious how adding walls would have affected the learning speed. Add barriers around the track, and subtract from the "reward" every time it makes contact with a barrier
@niblet8955
@niblet8955 2 роки тому
I would expect it would choose a more stable, less aggressive style, albeit most likely with a slower time
@JustSomeoneRandom1324
@JustSomeoneRandom1324 Рік тому
I wonder what checkpoints and a bonus for getting there faster would do. Incentives to go as fast as possible, while punishing any that don't make it
@penny0G
@penny0G Рік тому
Nice! Thanks for the effort, these videos are always so interesting to watch.
@jimmygravitt1048
@jimmygravitt1048 Рік тому
For a complete layman in AI, this was dope. Well done. Introduced me to some concepts there.
@sjccsjcc
@sjccsjcc 2 роки тому
I enjoyed this so much, and the Wirtual reference made it better. Keep up the good work
@groovyball
@groovyball 2 роки тому
No
@rohanalias9053
@rohanalias9053 2 роки тому
@@groovyball Yes
@f1shyspace
@f1shyspace 2 роки тому
No
@deathfoxstreams2542
@deathfoxstreams2542 2 роки тому
It would be cool to see a speedrunning category based around learning AI
@deathfoxstreams2542
@deathfoxstreams2542 2 роки тому
@This is my Username no, because it's the AI doing the speedrun, not a person
@nachos5142
@nachos5142 2 роки тому
@@deathfoxstreams2542 hmmm Human Assisted Speedrun?
@ScherFire
@ScherFire 2 роки тому
@@deathfoxstreams2542 Yes, but a human has to create the environment on which to train the AI (the tool). Is it functionally any different from a human issuing predetermined inputs on every single frame of the game?
@exodusdonley77
@exodusdonley77 2 роки тому
@@ScherFire honestly I would say it's different yeah, spending 53 hours on a TAS for this created map would yield a far better result than teaching the AI to do it. Really, what the competition would be over is how well you've set up your training environment, and I think that would be interesting in its own right
@jamesorlakin
@jamesorlakin 2 роки тому
AWS DeepRacer?
@user-di4bt7qu2i
@user-di4bt7qu2i Рік тому
Fascinating video. Can't wait to see all of your posts. Thanks!
@lindsayguare6603
@lindsayguare6603 Рік тому
I think an additional input would have greatly helped performance, especially with respect to quick turns vs straightaways. If there was an input for distance to next turn instead of just which direction the next turn is, I think that would have helped!
@MAAZ_Music
@MAAZ_Music Рік тому
Yes I think this can beat his personal best
@Migweegin
@Migweegin 2 роки тому
Absolutely amazing production quality, and a great video overall. This channel deserves more subs!
@runningsloth3324
@runningsloth3324 2 роки тому
Damn, this AI really did learn to play Trackmania instead of just learning to play this one track. I see videos about machine learning in other games where it's sometimes obvious that the AI hasn't really learned to play the game, but just one map.
@TheStormyClouds
@TheStormyClouds Рік тому
Exactly. I hate machine learning that isn't true AI. That "learning" is just randomizing inputs until it finds the perfect inputs that make it happy, rather than actually learning how to play.
@serge.stecenko
@serge.stecenko 2 роки тому
Awesome video and visualizations! Thanks a lot, really enjoyed watching it.
@perero
@perero Рік тому
The visualization of your project is terrific. Wow.
@Kya10
@Kya10 2 роки тому
Incredible job as always! Very interesting to have more insight into how the process goes, and I'm honestly really surprised that the AI was still able to drive the final track with all those obstacles, boosters, etc.! And hey, for what it's worth, I think your English improved significantly since last time, so great job on that as well :D Always looking forward to more videos from you 😄❤
@maxanderson8872
@maxanderson8872 2 роки тому
Puts things into perspective when you keep in mind that a child could pick up the game and complete the track within a couple of tries, without even needing to consider the basic calculations of inputs and consequences
@MichaelPohoreski
@MichaelPohoreski 2 роки тому
Yup: a.i. = biological, actual intelligence, compared to the glorified table lookup of A.I., synthetic Artificial Ignorance.
@michaelleue7594
@michaelleue7594 Рік тому
Well, a big part of it is that it isn't really ever the same intelligence doing the driving more than once. You have thousands of first attempts, and the longest survivors "tell" the next generation how to do the track, but the next generation has never actually seen the track before. They're all new intelligences. They're better at communicating with the next AIs than children would be with the next children, but even so, there's a ton of information a child can see on the screen that the AIs just don't register at all, let alone manage to pass on.
@AleksandarIvanov69
@AleksandarIvanov69 Рік тому
It is scary to see the multiple cars together. It reminds me how a computer can be many things simultaneously without loss of productivity, while a human can only be one thing at a time.
@pelt1581
@pelt1581 Рік тому
I guess the thousands of overlapping cars are just a visualization of past learning runs, which were actually done one by one
@CHen-de6qf
@CHen-de6qf Рік тому
And how much more superior the human brain is compared to AI (as of now), achieving its best lap time after just a few tries
@AleksandarIvanov69
@AleksandarIvanov69 Рік тому
@@CHen-de6qf i don't know about superior... our innate imperfection leads us to err. Ask a speedrunner 😁
@hanshanshansans
@hanshanshansans Рік тому
@@CHen-de6qf I think seeing any superiority of human intelligence just from this experiment is very short-sighted. This AI was, in human terms, born for only this purpose, and hasn't experienced anything but this. Any human playing this game most likely already has several years of experience in their head. So the big question is: how would a newborn baby perform here?
@shivpawar135
@shivpawar135 Рік тому
In your brain you are doing hundreds of things at the same time.
@arthurledoux2337
@arthurledoux2337 Рік тому
Great video man, really well constructed, great editing and everything. It's super well explained; you make things that are hard to understand simpler. Keep going, you're the best
@yoshtm
@yoshtm Рік тому
Thank you very much ;)
@Encysted
@Encysted 2 роки тому
I am super impressed by you keeping on the same topic for so long, gradually improving your approach and production. It's really cool to see someone working on a really long-term project. Normally, I don't like those very long series, but this is cool because it's something I understand, you make it easier to understand, and you break each one down into bite-size chunks. I don't think I'd be able to cut very much with something that I'd probably be very invested in.
@nonbread7911
@nonbread7911 2 роки тому
Honestly one of the best AI videos I’ve seen
@srbasha74
@srbasha74 Рік тому
Wow!!! Beautifully explained and visualized. Thank you very much
@snowden1018
@snowden1018 Рік тому
This was fascinating. It feels like it feeds into human behaviour: things just work when we've repeated the same process unthinkingly, but then we hesitate and fear things outside those knowns. Really enjoyed this.
@Anaris84
@Anaris84 2 роки тому
Brilliant narration of your journey training your pet AI to drive! I like how you also talked about machine learning concepts as well and showed us how it can be put into practice.
@LJay205
@LJay205 2 роки тому
Very good visuals, this video must've been a ton of work. Commendable effort!
@wesb9546
@wesb9546 Рік тому
fascinating and so well presented...you deserve way more subs :)
@wingedsheep2
@wingedsheep2 Рік тому
Very cool experiment! I like how you show all the problems you run into and how you solve them. And how you visualise everything!
@achromath
@achromath 2 роки тому
Really really good stuff. I assume others have mentioned this, and it's an absolute beast to tackle computationally, but I think what would take this over the edge into really scary generalizability would be some dimension of image recognition frame-by-frame (or even a proxy of like overhead position?). If I understand correctly, this AI effectively tried to learn this course "blind", i.e. only knowing inputs and the rewards associated with those exact inputs. Then a bot that learned on one track could be dropped in another and not have to start from scratch, because the image context is there.
@vincents9285
@vincents9285 2 роки тому
Brilliant, congratulations and thank you! The video's presentation is great and accessible; it's really nice work. Have you considered making your program open source so the community could join in on your work?
@ikwed
@ikwed 2 роки тому
Was I the only one to use the google translator function
@Traquenard_
@Traquenard_ 2 роки тому
@@ikwed no it's a wonderful feature
@bread5240
@bread5240 Рік тому
Love the videos man, I can't wait for the AI to find some crazy trick that people start to use
@lepicier7920
@lepicier7920 Рік тому
Your videos are super interesting, and with a game like this it's the perfect combo!
@XxTheDifferencexX
@XxTheDifferencexX 2 роки тому
Reminds me almost of those micromouse competitions in Japan. As for the surfaces test, the AI was not going nearly fast enough to feel the effects of the surfaces.
@FrostKiwi
@FrostKiwi 2 роки тому
Really impressed with this one. Educational and inspirational! Many thanks from a fellow computer scientist from Belarus
@foobars3816
@foobars3816 2 роки тому
I hope Putin doesn't drag your country further into his war.
@TheMadSqu
@TheMadSqu 11 місяців тому
I love this series. It would be great if you would continue it. THX for your work.
@gustavosouzasoares
@gustavosouzasoares Рік тому
This video is so well produced, well done
@Nebula_ya
@Nebula_ya 2 роки тому
Here's an idea: try the A.I. on a full-speed map with forward always held, brake never used, and only left and right as inputs
@danihtoledo22
@danihtoledo22 2 роки тому
Would be great to see it learn a map with shortcuts; I wonder if the AI could learn to use them instead of going the normal way!
@skull1161
@skull1161 Рік тому
Theoretically, the AI that takes a shortcut for the first time will then be the one that did the best, so they will all do the shortcut in the next generation, because they all learn from what the best did.
@flosikfgsleijflsn6025
@flosikfgsleijflsn6025 Рік тому
This was very interesting. I'm looking forward to your next videos.
@hanjuhbrightside5224
@hanjuhbrightside5224 Рік тому
Never seen a video like this before, so I think this channel is really cool, I'm subbing
@ShcrTM
@ShcrTM 2 роки тому
Can everyone appreciate how the AI attempts a start trick at 12:36
@halbkorn
@halbkorn 2 роки тому
yes
@tezlashock
@tezlashock 2 роки тому
You could also add a neuron when confidence is low. That way, when it encounters a situation the existing neurons cannot understand, it has extra capacity to represent the new data
@Rkcuddles
@Rkcuddles Рік тому
More more more more!! This was sooo fun to watch! Are there methods that would improve this AI that are outside your expertise?
@SveenxHD
@SveenxHD 7 місяців тому
Dumb question: how do you even "hook up" your code to the game? How does it read inputs? I can't imagine it's just a background program reading what's on the screen visually? I always wondered about this when it comes to coding stuff in existing games.
@hugoanastacio3233
@hugoanastacio3233 2 роки тому
"and after 53 hours of learning, the AI gets this run" MAN! YOU'RE A LEGEND!
@dominikplatzhalter1083
@dominikplatzhalter1083 2 роки тому
I don't understand the reference. Can someone pls explain?
@hugoanastacio3233
@hugoanastacio3233 2 роки тому
@@dominikplatzhalter1083 It's a Wirtual meme; he's a Trackmania YouTuber/streamer
@FlightRecorder1
@FlightRecorder1 2 роки тому
I wonder if adding carrots for the fewest changes in actions would result in a faster time? It seemed to me that it was losing a TON of time flipping its wheels back and forth from left to right all the time.
@RAMfOR
@RAMfOR Рік тому
Huge and great work! That's amazing!!!
@ItsMeBeaufortSC
@ItsMeBeaufortSC Рік тому
The exploration stage of the AI is kind of like me when I start playing a new game: I don't go thru the tutorial, I don't use strategy, I just press random buttons and keys to see what they will do
@spoonikle
@spoonikle 2 роки тому
Changing the training parameters part way through by having set random starts was good. Next you should have set up a shortest path system, making the AI aware of its distance from the optimized racing line and target ideal speeds at sections in the map. As a human, I would never know my driving was “slow” unless I was shown a faster way. Driving is dangerous and just staying on the road is a challenge for the AI. Expanded input parameters that modify an already functional AI could further accelerate progress.
@duesenantrieb8272
@duesenantrieb8272 2 роки тому
As someone who studies AI right now... this is really interesting... being only in the 2nd semester, though, still leaves me with a lot of questions... for one, I can't see how to do stuff like that myself. Hope it comes with time
@KE-yj4ip
@KE-yj4ip Рік тому
Thanks, that was interesting ❤️ One thing that I thought of was how nice it is that we have vision, and I hope that we can give that to everyone
@AlphabetsFailMe
@AlphabetsFailMe Рік тому
Introducing the random starting point and the random events at the start of spawning was brilliant.
@tongpoo8985
@tongpoo8985 2 роки тому
This is amazing. I would love a code walkthrough on this.
@jairorodriguezblanco615
@jairorodriguezblanco615 2 роки тому
Hi, insanely interesting video. Took a couple of AI classes at college, and this is an incredibly visual application. I have a question, how far can the AI see to calculate the rewards? For example, in the long straight lines, can it see that it's a straight line for a long while, or will it drive carefully because it can't see after a few steps? Hope I worded my question correctly, English is my second language. Cheers!
@antonioscendrategattico2302
@antonioscendrategattico2302 Рік тому
​@LEO&LAMB T- the game itself? It's the game that produces the inputs... and the AI processes them.
@bogeyboiplays4580
@bogeyboiplays4580 Рік тому
@LEO&LAMB the ai?
@shivpawar135
@shivpawar135 Рік тому
The AI is seeing with the help of raycasting; you can find videos on YT about how an AI targets a player. You can change the ray size, so it could be anything.
@shivpawar135
@shivpawar135 Рік тому
@LEO&LAMB no man, it's just pure mathematics. You can tell it: if it's correct, add +1 every time; if not, don't add anything; then repeat whatever adds +1. That's a simple piece of code; if you understand a little bit of coding, you'll probably get what I said.
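The "AI sees with raycasts" idea mentioned in this thread can be sketched in a few lines: cast rays out from the car and report the distance to the first wall along each. A hypothetical toy example on a grid, not the video's implementation:

```python
import math

def ray_distances(grid, pos, heading, angles, max_dist=10.0, step=0.1):
    """Distance travelled along each ray before hitting a wall cell (1)."""
    distances = []
    for offset in angles:
        ang = heading + offset
        d = 0.0
        while d < max_dist:
            x = pos[0] + d * math.cos(ang)
            y = pos[1] + d * math.sin(ang)
            if grid[int(y)][int(x)] == 1:
                break  # ray hit a wall
            d += step
        distances.append(min(d, max_dist))
    return distances

# Toy corridor: 1 = wall, 0 = free space.
grid = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 0, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1],
]
# Three rays relative to a car at (3.5, 2.5) heading along +x:
# left, straight ahead, right.
dists = ray_distances(grid, (3.5, 2.5), 0.0, [-math.pi / 2, 0.0, math.pi / 2])
```

The vector of distances is what a network would consume as its observation; the ray count and spacing are tunable, as the comment says.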
@mrripper3780
@mrripper3780 Рік тому
I've watched this once or twice before but I keep coming back because it's so interesting
@Fantasticleman
@Fantasticleman Рік тому
I would love to see you release this as a game where we race your best AIs one day!
@GhostersSupreme
@GhostersSupreme 2 роки тому
Would it not have been better to do something more along the lines of "reward per second" instead of total reward? I think that could do a lot more for the speed aspect of the AI
@storm_fling1062
@storm_fling1062 2 роки тому
The AI would eventually just stop moving to get the most amount of points
@benjaminclay8332
@benjaminclay8332 2 роки тому
@@storm_fling1062 An easy fix, though: keep basing the rewards upon distance traveled, with a multiplier based on speed; traveling faster will yield more rewards, and stopping will have a multiplier of 0
@GhostersSupreme
@GhostersSupreme Рік тому
@@storm_fling1062 make the line disappear/become inactive as soon as it's crossed so it won't give duplicate points
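The fix proposed in this thread, distance-based reward scaled by speed so that stopping pays nothing, might look like the sketch below. The `max_speed` constant and the linear multiplier are assumptions for illustration, not the video's actual scheme.

```python
def shaped_reward(progress_delta, speed, max_speed=1000.0):
    """Track progress gained this step, scaled by normalized speed.

    Standing still yields a multiplier of 0, so the agent can't farm
    reward by idling, and covering the same stretch faster pays more.
    `max_speed` is a hypothetical normalization constant.
    """
    multiplier = max(0.0, min(speed / max_speed, 1.0))
    return progress_delta * multiplier

stopped = shaped_reward(5.0, 0.0)    # no reward while standing still
half = shaped_reward(5.0, 500.0)     # half speed, half reward
full = shaped_reward(5.0, 1000.0)    # full speed, full reward
```

Because the multiplier is bounded at 1, a run that covers the same track faster earns strictly more total reward without letting any single step blow up.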
@florianlemysterieux1689
@florianlemysterieux1689 2 роки тому
Once again, I found this video fascinating!
@droneflybzz4500
@droneflybzz4500 Рік тому
Great video. Nice explanation of problems of AI learning process!
@Coldrior
@Coldrior Рік тому
It's just fascinating to see how AI interacts with video games and learns to improve its performance. Nice video dude, take ur sub and like 👌👌
@ferdyg3520
@ferdyg3520 2 роки тому
Can you please make a more technical video showing how reading the game data is implemented? That would be really cool
@Ulariumus
@Ulariumus Рік тому
yes please
@user-ok4pk2mp3e
@user-ok4pk2mp3e 2 роки тому
Is there anything you could do to encourage the AI to drive faster on long pieces of track? If one of the inputs is the distance in front of it and another is its speed, maybe there's something you can do to reward it when both numbers are high.
@nonethelessK
@nonethelessK Рік тому
The second one is very simple: it learns only from mistakes. But more complex learning is about the speed at which you can reach the most favourable action, saving time and evolving beyond most human comprehension.
@Simon-ed6zc
@Simon-ed6zc Рік тому
Could you combine reinforcement with supervised learning? Give it a few of your attempts to emulate before starting the reinforcement phase to have some "instincts" baked in?
@Xamarin491
@Xamarin491 2 роки тому
Maybe you could give the Neural Network AI *your race* as an input? After the required learning to do one lap (or maybe a new AI), use your inputs as a baseline to improve on.
@wack1305
@wack1305 Рік тому
Batch training could be useful but it would require a large dataset of human inputs. For this scenario I don’t think it would be reasonable to create that
@medafan53
@medafan53 Рік тому
@@wack1305 It'd perhaps be an interesting online game concept. *Release day:* "God, this AI is so rubbish." *6 months in:* "I don't know what reviews are talking about, the AI isn't that bad actually." *2 years later:* "ARG! GOD DAMN IT! The AI are impossible to beat!"
@wack1305
@wack1305 Рік тому
@@medafan53 that would be really cool. A game that collects all data from players inputs and uses that to train AIs. You could do something like use only the top 10% of players or something like that. Super cool idea
@sebastianjost
@sebastianjost Рік тому
@mattio there are AIs out there that start training by mimicking some predefined actions and basically use that as a starting point. Some even learn to adapt to new observed behavior insanely quickly, requiring only 3-10 examples to learn a (simple) task. I don't exactly remember details or examples, but I'm sure they exist. You can probably find examples on the YouTube channel "twominutepapers"; I may also have come across them during private AI research, reading papers.
@sebastianjost
@sebastianjost Рік тому
@@medafan53 I've had this idea for years, just don't have time to develop a game to use it... Maybe in 5 years 😅🙈 I would also like a game where you have some kind of AI rival/ opponent which uses machine learning to learn at the same time you play the game, maybe even learn from your games directly to keep the game challenging as you progress. This could basically adjust the difficulty of the game automatically as required and keep the game interesting for longer. Machine learning AI opponents also don't really have a cap on their abilities. So even years after game release as players master the game, the AI could still keep up with them.
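The "pretrain on human demonstrations, then refine with RL" idea discussed in this thread is essentially behavioral cloning. A toy sketch, fitting a one-feature logistic policy to hypothetical (state, action) demonstrations; all names and the single-feature state are illustrative assumptions:

```python
import math

def pretrain_policy(demos, epochs=2000, lr=0.5):
    """Fit a tiny logistic policy to (state, action) demonstrations.

    `demos` maps a 1-D state feature (e.g. the signed angle to the next
    turn) to a binary action (0 = steer left, 1 = steer right).
    Returns (w, b) so that sigmoid(w*s + b) ~ P(steer right | s).
    """
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for s, a in demos:
            p = 1.0 / (1.0 + math.exp(-(w * s + b)))
            grad = p - a  # gradient of log-loss w.r.t. the logit
            w -= lr * grad * s
            b -= lr * grad
    return w, b

# Hypothetical demonstrations: the human steers toward the turn.
demos = [(-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1)]
w, b = pretrain_policy(demos)
right_prob = 1.0 / (1.0 + math.exp(-(w * 0.8 + b)))
```

After cloning, the policy already steers the right way most of the time, and reinforcement learning would only have to refine it rather than start from random inputs.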
@Beatsbasteln
@Beatsbasteln 2 роки тому
You know what might make sense? If every time a new generation of the AI runs, a new track is loaded, so that it learns a lot of strategies. I'd not try to create randomly generated maps for that, but actually just use the same maps that are chosen for Track of the Day, because they have been reviewed to be high quality and only a few of them act a little randomly. That way the AI would never overfit to a certain type of track or surface
@CrunchyTurtle
@CrunchyTurtle 2 роки тому
Would there be a way to automatically stop the program, download the Track of the Day, load it, and run the network without serious issues? If so, then this is an awesome idea; just having it grinding tracks in the background, like 20+ hours a day, would make a super cool AI. But the network would have to be a lot more advanced and have more inputs per second. After like a month that AI would be better than 90% of players
@Beatsbasteln
@Beatsbasteln 2 роки тому
@@CrunchyTurtle maybe the trackmania community should let multiple computers run for weeks to accomplish that
@CrunchyTurtle
@CrunchyTurtle 2 роки тому
@@Beatsbasteln yeah, having several thousand of these AIs running at once would produce super optimised movement, but it would make online competitions bad, as people would cheat with them
@Beatsbasteln
@Beatsbasteln 2 роки тому
@@CrunchyTurtle let's think of moral issues later and just enjoy watching the world burn in a bunch of magnificent runs. At some point the AI might expose itself with tons of inhuman nose bugs anyway
@mikelord93
@mikelord93 2 роки тому
I don't think that would work. You would need to program the rewards and punishments into all those tracks and that isn't feasible
@martinezcoboenrique1
@martinezcoboenrique1 Рік тому
That's incredible. Great video
@WhiteSarti
@WhiteSarti 11 місяців тому
This is so cool! I have a question for you: training the AI in this way is like training a blind man to recognize his way home, right? If so, would it be possible to give the AI a way to perceive the road in front of itself, as if it had eyes? You'd need to train a visual AI to recognize the different types of road first; could that then be combined with this AI? Thanks if you reply!
@cadaeib65
@cadaeib65 2 роки тому
12:28 When you finished saying "after 53 hours of training" you said "ai got this run" exactly the way I thought you would say it
@firaswaffle
@firaswaffle 2 роки тому
"The Return Of The King"
@firaswaffle
@firaswaffle Рік тому
yeah past me
@khaledsrrr
@khaledsrrr 8 місяців тому
Keep them coming
@johnjacobjingleheimerschmi6655
@johnjacobjingleheimerschmi6655 Рік тому
Idea for training the AI: make the rewards bigger if it gets there faster; its completed time would establish the base rewards. Reaching any checkpoint slower gives less, and quicker gives more, proportional to the time difference. Logically this would make the AI pick up speed and reduce the teetering on the straight sections. Can't wait to see the video on teaching an AI to jump gaps.
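The checkpoint idea above amounts to rewarding split times relative to a reference run. A minimal sketch; the linear `scale` factor and the choice of reference are illustrative assumptions:

```python
def checkpoint_bonus(split_time, reference_time, scale=10.0):
    """Bonus at a checkpoint: positive when faster than the reference split.

    `reference_time` could be the agent's own previous-best split for
    this checkpoint; `scale` converts seconds saved into reward units
    (a hypothetical choice).
    """
    return scale * (reference_time - split_time)

faster = checkpoint_bonus(9.0, 10.0)   # reached 1 s earlier than reference
slower = checkpoint_bonus(11.0, 10.0)  # reached 1 s later than reference
```

Because the bonus is signed, merely surviving to a checkpoint slowly is actively penalized, pushing the agent toward speed rather than caution.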
@DiggOlive
@DiggOlive 2 роки тому
Weighting immediate reward higher than long term reward, just like me!
@fredonice4201
@fredonice4201 2 роки тому
Love this series - Could you make a behind the scenes? I would love to see the whole progress as well.
@alphakakcmeddlakadoofahkii3362
@alphakakcmeddlakadoofahkii3362 2 роки тому
It would be super interesting if you did this again with the other machine learning approaches and compared the results. Great video though! 😄
@RadoHudran
@RadoHudran Рік тому
That's really cool! It feels like you apply human psychology to AI to teach it stuff. I also feel like playing Trackmania now