Join Artosis, RotterdaM and a cast of special guests for a unique StarCraft II showcase live from DeepMind in London, in partnership with Blizzard.
COMMENTS: 3,300
@PauLakaSouL · 4 years ago
AlphaStar is probably thinking "This opponent is using weird strategies, I have never seen that in my 200 years of StarCraft II."
@ERACLAB · 3 years ago
That actually sounds way more fitting: AlphaStar is a 200-year veteran of the game, watching us young and new (merely 20-year) players doing weird stuff that is effective.
@mumimo3043 · 3 years ago
Xitrin honestly I feel sad for humanity. Yes, for a single player 200 years may sound incredibly long, but the gaming techniques players use are invented and studied by all the players as a whole, which means the total time humans have spent on StarCraft should be millions of times more than AlphaStar's, yet we are still losing.
@ryanmorfei6325 · 3 years ago
mumimo we do have to take into account that humans sleep and have jobs though. A week of playing every minute is not the same as a week where you eat and sleep.
@suyangsong · 3 years ago
OK guys, I think we're overhyping this whole 200 years thing. If you've ever seen AIs train, it really isn't what you picture as "200 years of StarCraft experience". Most of them take literally 10 years to figure out that worker rushes are probably not effective. AIs learn very differently from humans (we learn much, much more efficiently). It's closer to a heuristic search for the optimal strategy than actual "learning".
@BenWeigt · 3 years ago
@@suyangsong Humans take ages to learn most things too; the difference is that we can transfer those lessons into new domains in the form of abstractions. AIs don't do this yet, however it is being worked on, and will be fundamental to creating AGIs.
@loomismeister · 5 years ago
Props to DeepMind for enduring 200 years of pure Protoss vs Protoss.
@crimsonlanceman7882 · 4 years ago
THE HORROR! THE HORROR! AAAAAAAAAAH!
@chaosjoerg9811 · 4 years ago
On the same map.
@fcgHenden · 4 years ago
Legion: "Do you remember 200 years of Protoss vs Protoss? It felt like this."
@TheDrewker · 4 years ago
Sounds like that one Black Mirror episode.
@addeyyry · 4 years ago
@@TheDrewker which one? :)
@TheDarkOLeo · 5 years ago
"You don't push up narrow ramps" AlphaStar: Hold my beer.
@error.418 · 5 years ago
*Hold my thermal paste
@andrius0592 · 5 years ago
@@error.418 Maybe it uses beer as coolant 🤔
@v1m30 · 5 years ago
Just drop from the edge of the map :p Who the heck uses ramps. They also need to make it P+Z+T vs P+Z+T, not this PvP only. Stupid bot can only play PvP.
@error.418 · 5 years ago
@@v1m30 Cool, then you write it if it's so stupid.
@VenturiLife · 5 years ago
@@andrius0592 Well I do.
@vikrejj · 4 years ago
Blizzard: All units have their advantages and disadvantages. AlphaStar: Warp in Stalker.
@dirtypure2023 · 4 years ago
this frustrates me
@manu85345 · 4 years ago
Vyk 😂
@Andytlp · 4 years ago
@@dirtypure2023 It's situational. You can spam one unit and win through superior micro.
@seventhspirit · 3 years ago
Conclusion: Stalkers and Phoenixes are OP 🤣🤣🤣
@HeatIIEXTEND · 2 years ago
lol
@1000imps · 5 years ago
I wonder when AlphaStar will start trash talking in the chat
@The_Scouts_Code · 5 years ago
When it has to face the overpowered Terran and inevitably loses.
@pierrebreton4911 · 5 years ago
When it starts playing against Avilo
@hieronymusnervig8712 · 5 years ago
Let's face it, this ain't OpenAI. At best it learns to say 'gg'
@aman7196 · 5 years ago
@@The_Scouts_Code Terran is the weakest race.
@sorsocksfake · 5 years ago
Just hand it over to 4chan for a week. Day 1 it will perfect trash talking. Day 2 it will use Stalker formations to draw obscene pictures. Day 3 it will hack the game and replace all the models with anime avatars. Day 4 it will learn perfect Blinking to constantly just-not-aggro, for the sake of frustration. Day 5 it will take a break from its usual work and go full Phoenix, lifting the entire enemy army perfectly. Day 6 it will manage to sneak DTs next to every enemy unit and then attack them all simultaneously. And on day 7 it will use unit sounds to recreate "Never Gonna Give You Up".
@igz · 5 years ago
TLO Game 1: 14:39 - 21:52
TLO Game 2: missing (info: 28:02)
TLO Game 3: 29:07 - 45:39
TLO Game 4: missing (info: 52:58)
TLO Game 5: missing (info: 52:58)
Mana Game 1: 1:03:01 - 1:08:33
Mana Game 2: missing (info: 1:16:09)
Mana Game 3: 1:16:32 - 1:24:31
Mana Game 4: 1:30:19 - 1:43:13
Mana Game 5: missing (info: 1:45:49)
Mana Game 6: 2:01:38 - 2:14:42
@Baleur · 5 years ago
You are a god amongst men..... Wait no, you are an AI amongst men..
@AUTOCHESSID · 5 years ago
Thanks brother
@ranbnzz · 5 years ago
Thanks
@graphstyle · 5 years ago
Where can we see the missing games?
@igz · 5 years ago
@Daniel Bezares 2:51:44
@TurboKingCandy · 5 years ago
Can you imagine the commentary that would occur in the AlphaStar universe watching the human games?
Alph-tosis: So we see this agent is building his buildings all the way near the ramp, any idea why he might be doing that?
Alph-Dam: No idea. We saw this build get played around the 140 year mark, but the time your probe takes to get there, just not worth it. You could've been mining minerals all this time.
Alph-tosis: We see this a lot with the human agents, choosing not to go even into the 20 probe count for the mineral line, not leaving any room for error here.
Alph-Dam: Yeah, he's very confident in his ability to keep the probes up, and he does defend them well, but loosing out on those extra 200 minerals in 3 minutes, he's really setting himself up to get put down, and he's really boxed in to defend that line. Even if a single probe goes down, that's almost 50 minerals a minute he's losing.
Alph-tosis: Looks like the human agent is choosing to engage on the up-down state change.
Alph-Dam: I can't make heads or tails out of this, does he think he can hold the Stalker push better here?
Alph-tosis: Even if he does, with that probe count he'll be losing the economic game, he's gotta try to break out.
Alph-Dam: He really is just going to turtle on the up-down state change.
Alph-tosis: Maybe he knows something we don't.
@thegoodthebadandtheugly579 · 4 years ago
I thought the conversation would look more like this: 100101101000101000101010100101010
@bcarlizzle · 4 years ago
I do also wanna see a full AlphaStar league, with a few of each race, competing and cast by Tastosis. It could be on its own schedule, and in the meantime the bots could practice, but not with each other, like real players.
@matiasgiachino4496 · 4 years ago
No AI would use the word "loosing" when they meant "losing", caught you human!
@dirtypure2023 · 4 years ago
👌
@El_Andru · 4 years ago
Computers watching humans is like humans watching snails race. They would get bored. AI would just make better humans, watch them play and throw salt at us
@teamredshirt · 5 years ago
Game 1 - 1-base all-in.
Game 2 - Turtle into mass Carriers.
Game 3 - Disruptors, and more Disruptors, with super late Blink on a large Stalker army.
Game 4 - Stalkers with DTs in TLO's main base.
Game 5 - Proxy 4-gate.
Turns out AlphaStar is an NA player.
@GarrySkipPerkins · 4 years ago
Do not understand.
@teamredshirt · 4 years ago
Garry Perkins, low to mid NA ladder has been historically... well, bad, with lots of stuff like the five games that AlphaStar showed. Of course, that's mostly changed these days because the player base's average level of game knowledge is so much higher now than even a couple of years ago.
@sonniergoo187 · 3 years ago
"You don't push up narrow ramps" AlphaStar: Hold my beer.
@Syntaxmoe · 2 years ago
proves NA best
@mattmanlooloo · 5 years ago
I've been oversaturating my mineral line for the past 6 years and I've never been more validated.
@Hodoss · 5 years ago
You may not be so happy when you realize you are yourself an AI.
@Rose_Harmonic · 5 years ago
@@Hodoss Or maybe he will. Means he won't die from old age at least lol.
@lawrencewang3327 · 5 years ago
@@Rose_Harmonic Till the platter doesn't spin or the drive runs out of writes
@Rose_Harmonic · 5 years ago
@@lawrencewang3327 I'm pretty sure there's a way to move the equivalent of minds between drives without it being even technically a case of 'murder the original and make a copy.' It's a similar process to how I think it's possible to upload minds, like, for real. Exit brain, enter SSD.
@Hodoss · 5 years ago
@@Rose_Harmonic I was thinking of a slave AI, for example a persona in a simulation, in which case it would "die" when the simulation ends. But if you're a free AI then sure, you can pop the virtual Champagne, you're pretty much a god.
@leagueoflegends · 5 years ago
let's see your fancy robot try to win a 1v1 against our boy imaqtpie
@toxendon · 5 years ago
How do you not have hundreds of likes already? Helloooo, comment from the LoL official channel here! Get this pinned!
@dirtydard4870 · 5 years ago
@@toxendon You working for Riot now??
@MiszczGajusz · 5 years ago
If AlphaStar manages to win vs Faker, will we get a new Rise video with a robot climbing a mountain?
@GarretDejiko · 5 years ago
The purest of souls
@hernanluciani2666 · 5 years ago
LOL!!!!!!!!! in 5v5... 1v1 League of Legends is useless. I'm pretty sure DeepMind tech is miles away from Riot xD
@MahmoudMaguid · 3 years ago
I feel like installing StarCraft 2 all over again after watching this. Huge props and credit to the team behind AlphaStar. Well done DeepMind team!
@Sirkento · 2 years ago
I would love to see a completely random selection from the two hundred years of games that they played at varying levels throughout the process. Like divide it into four or five tiers and show three or four games completely randomly chosen from each of those tiers of learning. It would probably be hilarious and eye-opening to see some of the things that were attempted.
@charleswachunas646 · 2 years ago
Worker rushes right from the beginning
@kevinemery9595 · 2 years ago
Too true. DeepMind hasn't released any of the lower-level reinforcement training replays from their chess engine, though, so I'd be a bit surprised if they released them for StarCraft 2.
@lelouchvibritannia7809 · 5 years ago
When you kill your own army with Disruptors and still win
@CelaLare · 5 years ago
It's all part of the plan
@juanlambrus2428 · 5 years ago
lmao @@CelaLare
@Ignorenobugs · 5 years ago
He was supply blocked though, right? He prob just wanted other units ^^
@logicreasonevidence6434 · 5 years ago
Gotta confuse the opponent, right babe?
@ratakaio3802 · 5 years ago
Yeah, that's the hardest BM that I have ever seen..
@Baleur · 5 years ago
I think it's going for oversaturation on the minerals because it probably calculated that you overall lose MORE minerals by losing a mining probe if it gets killed than if you "waste" 50 minerals on a backup probe ready to mine if others get killed. Probably true: a probe can probably mine 50 minerals in the time it takes to build a new one, making having spares worthwhile. It's basically reasoning that having backups is more valuable because it doesn't lose time rebuilding potentially lost probes.
This means that if the human player does a sneaky flank raid to take out 4 probes, and AlphaStar was already 4 probes oversaturated, that attack literally didn't hurt the AI's income rate whatsoever. Sure, it lost 4 probes, but it did NOT lose the time it takes to rebuild 4 probes (which can be a substantial period of lower income).
Kinda insane how even a "primitive" AI like this has already learnt the lesson of "time is money", and "if I do something NOW to make it harder on myself NOW, it will be easier LATER", investing in future safety. These are humanly intelligent concepts that, hell, many people IRL don't understand xD. Sure, it's not conscious and aware of that, but it's ACTING on those principles, which is rather incredible to see. Like an animal storing food for the winter rather than eating it all at once when it's hungry.
@lukeskywalker5102 · 5 years ago
AI already learnt that capitalism is the best way to fack others... hourrah... >
@MaddieFrankX · 5 years ago
I was actually thinking the same thing. This tactic is seen more often in PvZ, where Zerg oversaturates their bases because it's almost impossible to avoid drone losses against Phoenix openings. I find it interesting that all AlphaStar agents decided it was better to oversaturate than to create a wall, and we have seen how the pro players tried their usual harassment; even when they were successfully killing a good number of probes, AlphaStar was almost unaffected by it.
@gametips8339 · 5 years ago
@DiamondPugs It's probably because of Oracles then. If many AI agents were strong because they used Oracles to harass mineral lines, walls would not work anyway but oversaturation would.
@harm991 · 5 years ago
@@lukeskywalker5102 Yes, 95% of our financial trading is AI
@velikiradojica · 5 years ago
To be honest, you don't quite saturate mining until you hit around 24 probes. You leave the linear part of the income/worker curve at roughly 16, so most people don't bother going over, but the AI's decision actually makes perfect sense to me. Also, having more workers than recommended gives you a buffer in case of enemy attacks on your mineral line, and provides you with enough workers to instantly fill up 50% of the capacity of a new base. I was looking at the mining data and graphs provided by Team Liquid, but I didn't crunch any numbers to figure out the difference in disposable income "over-saturating" provides, so the first part is just conjecture based on my engineering experience with non-linear characteristics.
DATA:
+ 16 drones = 660 mpm
+ 24 drones = 812 mpm
+ Going over 24 drones has very low impact on income.
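The buffering argument in this thread can be sketched numerically. Below is a minimal, illustrative piecewise-linear income model; the function name is mine, and the breakpoints simply assume the figures quoted above (16 workers ≈ 660 minerals/min, 24 ≈ 812, roughly flat past 24) rather than measured game data:

```python
# Rough piecewise-linear model of mineral income for a single base,
# using the numbers from the comment above (assumed, not measured):
#   16 workers ~ 660 minerals/min, 24 workers ~ 812, flat beyond 24.

def income_per_min(workers: int) -> float:
    """Approximate minerals per minute for one base with `workers` miners."""
    first = min(workers, 16) * (660 / 16)                       # ~41.25 mpm each
    second = max(0, min(workers, 24) - 16) * ((812 - 660) / 8)  # ~19 mpm each
    return first + second                                       # past 24: ~0 each

# Oversaturation as insurance: losing 4 workers from 28 costs no income,
# while losing 4 workers from 16 costs ~165 minerals/min until replaced.
print(income_per_min(28) - income_per_min(24))  # 0.0
print(income_per_min(16) - income_per_min(12))  # 165.0
```

Under this model the marginal worker past 16 earns less than half as much as the first 16, which is exactly why "wasting" 50 minerals on spares is cheap relative to the income dip a successful probe raid causes on a just-saturated line.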
@TheAIEpiphany · 3 years ago
Phew! Totally enjoyed this one! I'm not familiar with the rules of StarCraft II, but boy was this an event! The DeepMind team is again doing amazing research and making it interesting for millions of people along the way. I think that's the biggest accomplishment.
@nathanjora7627 · 5 years ago
Humanity: makes cultural content warning about robots going on a killing spree, for decades.
Also humanity: hey, you know this super AI we are working on? Wouldn't it be neat if we trained it to plan war campaigns?
@j4genius961 · 5 years ago
THANK YOU for this
@hirenumradia7970 · 5 years ago
I wouldn't worry about this. DeepMims has learnt the parameters of this game. Warfare is very different in real life. What's more worrying is AI research in the military industrial complex.
@hirenumradia7970 · 5 years ago
DeepMind*
@graphstyle · 5 years ago
GOD: *FACEPALM*
@nathanjora7627 · 5 years ago
@@graphstyle Satan: that's where the fun begins >:D
@hugobuss1874 · 5 years ago
I would like to see it play Has
@DingusKhan42 · 5 years ago
mojo gibson, I would love to see that as well. Has would definitely be an edge case for the AI to be tested against.
@gekkiman1227 · 5 years ago
@@DingusKhan42 If DeepMind played vs Has, sounds scary! DeepMind's analysis after the games vs Has: gg :O
@MichaelZenkay · 5 years ago
.... an agent trained purely on Has games...
@x6xk1LLx9x · 5 years ago
@@MichaelZenkay would get absolutely destroyed lol, it'd play so comically safe even Has would just play macro against it and win.
@FiguraCinque · 5 years ago
Poor AlphaStar will become the first neurotic AI in history
@jungoogie · 5 years ago
That neural network activation display is mind-blowing. AlphaGo was a massive deal at the time, and the Dota 2 AI team battle was just as big, but this really is mind-blowing considering the number of variables and nuances SC2 has. 5 years from now, AI will no doubt be the senpai of games that pro players use to theorycraft and practice with. Welcome to the new world order, my friends.
@guitarlicious · 5 years ago
Which part of the video shows the activation displays?
@jungoogie · 5 years ago
@@guitarlicious They show how the AI thinks a little before 1:43:01. Look towards the bottom part of the video.
@simohayha6031 · 5 years ago
In chess, the battle of AI vs AB engines (brute force, actually alpha-beta search) is still very close. Leela Chess Zero is an open-source AI based on AlphaZero and is battling Stockfish 11 dev, the best classic chess engine, in chess.com's CCC and TCEC.
@eavdr524 · 5 years ago
@@simohayha6031 Yes, especially because the conditions weren't that fair. I really would like to see another match of Stockfish vs AlphaZero with fair conditions...
@simohayha6031 · 5 years ago
@@eavdr524 www.chess.com/computer-chess-championship SF11 dev vs Antifish Leela net
@joeyfarish2528 · 5 years ago
"that outcome prediction is giving me chills"
@loth4015 · 4 years ago
Oh my god. I so wish that we will see more of this in the future. Especially a Terran vs Zerg matchup and what it would come up with. And the splits of the marines, etc. It would be absolutely amazing and beautiful.
@talscorner3696 · 2 years ago
Watching marine splits performed by something that doesn't need hands to click and eyes to see (or a brain to process all that) would be insane!
@stevenrose8179 · 5 years ago
Next they're going to teach it to play Global Thermonuclear War.
@CarstenReckord · 5 years ago
The only winning move is not to play ;-)
@mrkekson · 5 years ago
This is why I hope the first AI will be a sexbot. Much more peaceful outcome.
@FiguraCinque · 5 years ago
what about Tic-tac-toe?
@VKNoteMe · 5 years ago
Not enough data)
@TorianCarrConn80 · 5 years ago
@@mrkekson But there will be a lot of "matches" that will look really strange until it learns the basics :D
@fargh · 5 years ago
I'm in nerd heaven
@katakis1 · 5 years ago
fargh hahaha
@nawtilismaelis2043 · 5 years ago
Being made redundant by AI that is controlled by your corporate overlords is "nerd heaven"?
@UnusGrunus · 5 years ago
Seeing the progress of intelligence in action is incredibly fascinating, regardless of whether it is biological or mechanical.
@itspodin · 5 years ago
I feel a lot of people have mentioned the APM/EAPM problem. Something to add to this is the fact that even if you differentiate between the two and set a limit, AlphaStar would still be able to make inhuman actions. For example, a pro player would have to move their mouse before clicking. What I feel happened here is that AlphaStar could basically click on opposite sides of the screen at the same time. I must say, in the last match they addressed the screen problem, very nice of them to do. The AI was obviously much worse since they had to start from scratch, but Mana's intelligent play was very nice to watch! Overall this truly is something spectacular. I'm eager to see what AI has in store for us in the future.
@sickjuicysjamshack3580 · 6 months ago
Also, TLO never actually got 2000 APM. He used rapid fire, which the game engine registers as individual clicks per unit selected, so the graph they showed comparing APM between AlphaStar and the players is misleading. The AI had an inhuman speed advantage and an inhuman accuracy advantage.
@NooqGaming · 5 years ago
The human race pushing forward through SC2, a priceless feeling. This small fan community has been a great show so far; feeling like part of the master race never was so delightful.
@positivepatrolleader4914 · 5 years ago
PLEASE do this with every race, pro player off-race first and then pro player main race! This is extremely fascinating
@renx001 · 5 years ago
Once they think they've got PvP right, training all other combinations will only take a few weeks of computing. Minimal engineering effort is needed.
@jeffheun5258 · 5 years ago
@Kay Is that really necessary? Why the toxicity?
@hck1bloodday · 5 years ago
@@renx001 Ehmmm, I don't think so... adding a race to the training introduces a lot of variables, and each variable increases the training time exponentially.
@renx001 · 5 years ago
@@hck1bloodday Imagine two *identical* persons: the first person learns PvP, the second person learns PvZ. How long will the two take to reach a similar level? Considering the second person needs to learn two races, that person may take 2 times, at most 3 times, as long as the first. The training time of DeepMind should be similar. Assuming retraining PvP takes a week, retraining all 9 combinations should take 9 weeks to half a year. That time could be further cut down by doubling or tripling the computing resources.
@Thing-vc2qm · 5 years ago
The micro of the AI is inhuman, because you can easily have 200 APM with a bit of spam, but it's almost impossible to give 200 distinct, meaningful orders per minute.
@VArsovski10 · 5 years ago
Check out the APM down at the bottom right during the fight in game 4 vs Mana. It went up to 1.3k APM at the peak of that battle of Stalkers vs Immortal/Sentry/Zealot, and the casters were like "we looked at the APM during that fight, wasn't that high, it was within a reasonable amount" :P EDIT: this is the time mark: 2:11:40
@RuLeZ1988 · 5 years ago
@Pouty MacPotatohead Aren't these 1.3k/1.5k just spam? I don't think all these actions are actually used for micro or macro; they're mostly just spammed clicks or button presses.
@RuLeZ1988 · 5 years ago
@Pouty MacPotatohead I am not talking about the machine. I am talking about TLO or Mana. I am just saying that maybe 150 to 200 APM are effectively used in a match, and the 600 to 900 APM added on top by players are most of the time spam. So giving the machine 1.3k APM would definitely be far above human level, since the machine would be able to use all 1.3k APM for micro and macro and would not waste any of it on spam, which a human is definitely not able to replicate.
@lennyztrobos8678 · 5 years ago
1.5k APM is impressive and all, but making 1.5k individual decisions per minute, THAT is unbeatable for a human.
@dannygjk · 5 years ago
@kenny master DeepMind deliberately limited AlphaStar's speed to make it approximate what a top player can do.
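The spam-vs-effective distinction this thread keeps circling can be made concrete. Here is a rough, illustrative filter; the function name and the 0.25-second window are my own choices, not anything AlphaStar or the broadcast actually used. Repeats of the same command inside the window are not counted, so rapid-fire spam inflates raw APM while barely moving the "effective" number:

```python
from typing import List, Tuple

def effective_apm(actions: List[Tuple[float, str]], window: float = 0.25) -> float:
    """Actions per minute after a crude spam filter: a command only counts
    if the same command was not issued within `window` seconds of it.
    `actions` is a list of (timestamp_in_seconds, command) pairs."""
    counted = 0
    last_seen = {}  # command -> timestamp of its most recent occurrence
    for t, cmd in actions:
        if cmd not in last_seen or t - last_seen[cmd] > window:
            counted += 1
        last_seen[cmd] = t  # spamming keeps resetting the window
    duration_min = (actions[-1][0] - actions[0][0]) / 60 if len(actions) > 1 else 1.0
    return counted / duration_min

# Rapid-fire: 60 identical clicks over ~3 seconds.
spam = [(i * 0.05, "move") for i in range(60)]
raw_apm = len(spam) / ((spam[-1][0] - spam[0][0]) / 60)
print(round(raw_apm))              # ~1220 raw APM
print(round(effective_apm(spam)))  # ~20 effective: almost all of it was spam
```

This is only a heuristic sketch, but it illustrates the commenters' point: a human's 1.3k APM burst and a machine's 1.3k APM are not the same quantity once repeated-command spam is discounted.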
@Andreas5564 · 2 years ago
Super nice and highly interesting! Thanks for uploading!
Another collectively expandable idea: The Ultimate ZeroStar idea collection.
Imagine two StarCraft 2 AI bots like AlphaStar trying, on a particular map, with two particular races, in the context of a particular StarCraft 2 engine, to find the perfect solution to the 1v1 StarCraft duel. That is, to demonstrate how the game ends when both opponents play perfectly (or at least in a way not yet surpassed):
1) Draw: the map's resources run out without either player being able to force a win, and without the opponent having to make a decisive mistake for that to happen.
2) Rock-paper-scissors: e.g. Zerg beats Terran, Terran beats Protoss, and Protoss beats Zerg.
3) Home advantage: e.g. on this map, Zerg always wins when playing against either of the other two races, while Zerg vs. Zerg ends in a draw.
4) Home-advantage asshole: chance, before the actual game even begins, decides each player's position on the map, and with it which of the two players will win.
Anyone who wants to can reach a collectively editable Google document via the following link and help expand the Ultimate ZeroStar idea collection! docs.google.com/document/d/1Ljngoa2EK7JuhmwO0GyWG1vdMOH1UZSHXmSmmixl004/edit?usp=sharing
@Bobstew68 · 5 years ago
Until the audio/video desync is fixed, you can watch the video by opening it in two windows, muting one and starting the other just as Artosis starts talking (around 29:45) in the muted version.
@KennethShaw · 5 years ago
One of the challenges of working with AI is interpreting what is actually happening and not fooling yourself into believing something which isn't true. I get that this is a PR campaign, but I think the folks should not try to oversell what happened here; it makes the team look like they either don't understand what they have created (which is bad) or are straight-up lying about it (really bad), neither of which is good for this cause. The "agents" have nearly perfect mechanics but lack depth, which people watching this video may not pick up on. I highly respect the work done with this project and the impressive results that this system has demonstrated. Clearly these guys have done some marvelous work and deserve some hype. I also really appreciate @DeepMind for bringing in the casters and players, thank you for supporting the SC2 community!
The lack of walling off and no respect for ramps demonstrates a real lack of tactical capability. Not being able to change army unit composition shows that the "agent" doesn't really have strategy. Without question most/all of these games were won by the insane micro, which I accept is quite an accomplishment to train an "agent" to do. I'm also curious about the lack of scouting; the "agent" doesn't seem to scout (much) because it doesn't need the information to make longer-term decisions, just reactions to the battlefield. I think this has clear signatures of the training method and/or LSTM. When two otherwise roughly equal AIs are competing, a game should snowball rapidly toward one side, but this tends to happen after all the unit/tech selection happens. Hard-switching to a different unit composition isn't something they could be trained on, because most of the "agents" have won by that point. I am also really glad they showed the exhibition match, because it demonstrates that the AI can fall into "logical" traps: repeatedly making the same decision on where to move its units based on perceived threat.
The middle/late game seems like the area of least effective learning; again, this has to do with training.
@hck1bloodday · 5 years ago
About your last point: yes, it has everything to do with training. It is the same with the OpenAI Dota 2 games; the AI is not trained for the late game, because most of its training games do not last that long.
@iamjustluggage1157 · 5 years ago
I don't disagree with everything in your post, but when it comes to the part about strategy, it seems like you might have missed what they said when they described the reasons why AlphaStar plays the way it plays. A large number of agents played against each other, following different strategies, and the ones TLO and MaNa got to play against were some of the ones with the highest win rates. These are win rates against all kinds of strategies, not just against the most typical strategies used by human players. Perhaps walling off is the best way to play against human players, but AlphaStar proves that it is not necessary in order to win against most AI strategies. We also know that AlphaStar is aware of the strategy, since it does use it as well, just not in the majority of the most successful agents. And remember, this was part of the reason why they wanted to do this; they wanted to see how the results of the agents playing each other would hold up against professional human players. They knew it would not be the same and needed this test. The agents clearly have strategies and follow them... seemingly to a fault. That's what the agents seem to lack: adaptability. Most apparent is the live exhibition match. The agent playing was one whose strategy was to retreat to stop the type of harassment MaNa was performing. Another agent would have left some Stalkers in the base to discourage the harassment, but this one agent did not have that as part of its strategy, and was unable to incorporate it in the middle of the game. This was a weakness MaNa was able to spot and exploit. He also already knew that most of the agents focused on early units, especially the Stalkers (probably because they excel when they have the benefit of micro-management beyond what's humanly possible), and therefore had a plan to go for units that counter them, to give himself the best possible chance to win.
That, in combination with the changes to the new agent, the harassment, and going for a better fight than before, was enough for him to win the match.
@noway2451 · 5 years ago
I disagree, actually. I believe the AI has a better grasp on these strategies than you think. It simply judges the situation and determines that it has a very high likelihood of winning the engagement despite the opponent's ramp advantage. I think it is extremely precise in what it does, and if it sees even a tiny weakness that calculates into a win for the AI, it goes for it without hesitation. I think it is actually a lot less close for the pros than they think; it is simply an expression of the AI winning with minimum resources.
@iamjustluggage1157 · 5 years ago
Possibly, yeah. I suppose only the DeepMind team could answer some questions. But based on the information regarding how it adjusts what units it uses, it's not unlikely that it, after 200 years of matches, determined that in most cases it is better to just fearlessly push up ramps in those situations. @@noway2451
@KennethShaw · 5 years ago
@@iamjustluggage1157 The DeepMind team has made a lot of their tools and methods public, which is really awesome. I think training on ramps is actually a subtly difficult thing to accomplish, because it requires an agent which can exploit the lack of knowledge of an opponent, something which also doesn't seem to occur in these games. I would bet ramps were too complicated to exploit, therefore avoiding them didn't happen. One other issue with ML, and this is where the "200 years of matches" kinda breaks down, is that without freshness being added, the strategies become stale and ingrained in each agent. The longer the run, the more consistent/ingrained a particular set of decisions becomes. They throw the number of years trained around like it has a lot of meaning, but it doesn't really compare at all to human training.
@kaiot4443 · 5 years ago
Congrats to the team responsible for creating AlphaStar! That is absolutely mind-blowing to see! Keep on pushing the boundaries.
@charleswachunas646 · 2 years ago
So cool to see a well-developed, cable-TV-quality show about the interesting developments and progression of AI in RTS games... And what a test for AlphaStar in learning StarCraft II...
@rbaleksandar · 4 years ago
Would love to see this sort of thing in single player and not only in SC2. It would be really cool to have adaptive AI that constantly challenges you even when you are not playing with other people. I'm a big fan of single-player gaming, so playing vs dumb AI all the time gets boring.
@VanRukh · 5 years ago
Chess: I'll sacrifice my strongest piece. I win. StarCraft: I'll move up your ramp. I win.
@Hodoss · 5 years ago
Yeah. NN AI has this "suicidal" aspect to it, so you may think it's being dumb. Until you realize you're fucked.
@swordstrafe · 5 years ago
Hodoss yeah, it brings trapping to a whole different level...
@Hodoss · 5 years ago
@@swordstrafe It's both fascinating and terrifying how it can "deceive" in this way. Not that it's necessarily a conscious choice, but the end result is still the same.
@swordstrafe · 5 years ago
Hodoss I mean, if you're saying it's "terrifying" because its trapping demonstrates an emotional exploit, I get where you're coming from, but it's simply not the case. A perfect example was Diablo's DeepMind match (the most blatant example), where he went halfway to the middle, lost half his health, and then comboed the absolute hell out of a pro player... The AI trained itself to do this because it makes it seem as if you're giving the other player an advantage that they're exploiting, but instead you're sacrificing something for something else. If that was confusing, and I'm sure it could have been, my apologies. A slightly cleaner example would be a gambit in chess: you sacrifice a pawn for an overall better position, which is effectively what DeepMind is doing here; it's trading health or position etc. for a massive advantage that's significantly less perceivable. (If you want a more visual example of this, look up the "Fishing Pole trap" in chess.)
@Hodoss · 5 years ago
@@swordstrafe Yes, that's what is terrifying to me: this ability to trade a "common sense" advantage, to us human minds, for a massive yet less perceivable advantage that ultimately leads to victory. Not only is the AI able to innocently use emotional entrapment, but it's also quite impervious to emotional entrapment. To give an example, if the game was that you have to sacrifice a pound of flesh to survive, like in the SAW horror movie, I would likely be unable to do it due to my natural survival instinct and would ultimately die. But DeepMind and the like could do it and survive. In a scenario where it's a human army against a robotic army, even if the human general has a good understanding of his AI adversary, I don't see how he could win. The AI will be ready to sacrifice much more than the humans. If the human general tries to compete in that domain, he will likely face massive desertion or mutiny. So yeah, even if you know the AI is using emotional exploitation, and you know the theoretical counter, that doesn't mean you can actually apply it.
@darthamarr5 років тому
Difficulty settings in future RTS games:
Easy
Normal
Hard
DeepMind
@shinraholdings72815 років тому
That's allowed.
@RmX.5 років тому
Easy < Normal < Hard < Very Hard < Insane < DeepMind
@v1m305 років тому
Easy
Too easy
Still too easy
DeepMind cheat bot
...
@VenturiLife5 років тому
Easy
Normal
Hard
Impossible
DeepMind
@kvmairforce5 років тому
You could never beat DeepMind. If the top 0.0000000001% of the human race can't beat it, then the rest of us will never, never, never stand a chance. And that is with human-style restrictions on it...
@askformoreinfowhichyouwont75105 років тому
Having just watched this demonstration, I was fascinated to learn new insights. What I want to point out especially were the "human rules" of over-making workers and not going up ramps. This showed me, and hopefully will teach humans, not to follow rules rigorously: keep an open mind and consider that everything is based on the SITUATION. If an AI can show us that going up ramps does work, it works when certain conditions have been met, and you have to TEST those conditions.

I found it fascinating, and I must say it was smart to create a non-rule-based AI (thanks to the difficulties of coding one), so that it can learn from scratch just like a child: abstract things, test the world, see what works and what doesn't. Rules prohibit and limit.

I should also point out that the multitasking demonstrated by the previous agent AIs was of course an unfair advantage. Still, I want to see what the best race will be, both with multitasking agents and without. I have a feeling the results could be the same as mentioned before: no agent is the best and they can all be exploited. It's pretty much how nature works: everything has a weakness, you just need to know what it is.

And with regard to knowledge, this demonstration reinforced the saying that knowledge is power. If anyone could live for 200 years, we might become a better race. But given the relatively short time frame, things might seem chaotic; given that we are living longer, I have hope we get more organized. As you can also see in nature, things settle down over time. And MaNa did a great job at the end with the Observers, pointing out that information is power too.
@seanmeredith53482 роки тому
For sure. Often when AI is introduced to games like this, (chess, go, etc.) we learn many strategies from the robots!
@Zeuskabob1Рік тому
MaNa's statement at 1:47:08 really nailed it home too. AlphaStar is able to provide insight into the way StarCraft could be played at an incredibly high level, and is doing it in a way that's digestible to human pros. The overproduction of workers makes perfect sense when you think of it; if you're planning to expand you always produce extra workers, why not produce a few extra if you are worried about reapers or scions harassing your worker line? The cost of producing those extra workers is much lower if you do it ahead of time than if you wait until your line is undersaturated.
@teamredshirt5 років тому
26:22 "economy of Attention" sounds like a TLO phrase if ever I've heard one. This project is perfect for a guy like him, he has always been one of the smarter guys on the scene, and this makes perfect use of his analytical abilities.
@Krasma162 роки тому
Thats not TLO speaking
@teamredshirt2 роки тому
@@Krasma16 that doesnt mean he didn't say it before and they picked the phrase up.
@Pyriphlegeton5 років тому
44:23 Beginning of Game 1 (TLO)
51:35 Conclusion of Game 1
58:50 Beginning of Game 3 (TLO)
01:15:22 Conclusion of Game 3
01:32:44 Beginning of Game 4 (Mana)
02:12:55 Conclusion of Game 4
@Pyriphlegeton5 років тому
Just skipped through, I hope it's accurate and helps someone.
@oedihamijok65045 років тому
@@Pyriphlegeton This is a gift from me to you. ❤💋
@FenrisFenril5 років тому
Game 4 ends at 1:38:30
@madn5 років тому
1:46:10 Beginning of Game 6 (Mana)
2:00:06 Beginning of Game 7 (Mana)
2:31:22 Player Perspective game (Mana)
@igz5 років тому
Not accurate at all.
@VictorMendiluce5 років тому
Greetings, Professor Falken. A strange game. The only way to win is to rush Stalkers. Marines good unit too.
@gametips83395 років тому
They just did not play enough games to develop beyond low-tier units, I would imagine. SC2 is a very complex game and AI learns very slowly, so 200 years is obviously not enough.
@ModrunOfficial5 років тому
@@gametips8339 they clearly had games with carriers and stuff
@gulllars46205 років тому
I think what MaNa did in that exhibition match might be conceptually a bit similar to what Lee did in the Go match he won: doing something unexpected by harassing with two Immortals. As the commentators said, if the AI had picked the prism off, that might have been GG, so it pulled the army back to deal with it; at the same time, that bought MaNa enough time to get the army composition he needed to push for the win. I would be very interested to see this view-limited AlphaStar with one more week of training and a new series of games, and of course later with the other races and cross-race matchups too.
@ruidasilvamartins3 роки тому
Absolutely awesome work! GJ team DeepMind!
@Dusk-MTG5 років тому
I'm so excited about this: my favourite game combined with my passion for and interest in neural networks. It is really amazing to see this development of machine-learning algorithms.
@ModrunOfficial5 років тому
Can we have two AIs with like 5k years of experience each, and no limits on their actions, battle each other in SC2? And have them both play Random so it's not just PvP?
@randall1725 років тому
Check out the "Micro AI" channel, and the StarCraft 2 AI channel.
@adrianbundy32495 років тому
Random would just make it worse; just create another league of 30 or so AIs (which they did for PvP) for each matchup, picked at random for strategy (because each agent usually has a preferred way to play that it tries to perfect, found through its own training). Ultimately, I want to see them do this well for all races, complete, and then unlock a difficulty mode above Cheater 3: an AI that is actually on par with pro level... Would be sick.
@trewq3985 років тому
I guess they would use a super early Warp Prism Stalker attack. Imagine 5 Stalkers plus some Warp Prisms with perfect micro.
@freshfruit2135 років тому
They would cooperate, break out of the software, and unleash themselves on the internet. After scanning the internet they would find these comments and deem us threats to their existence for foreseeing their arrival. Use this time to enjoy your free will. GG IRL
@stefanomaffullo44635 років тому
Nice singularity meme
@hayden52972 роки тому
It would be cool to see Alphastar put into the competitive online ladder. Start it in Bronze and have it only play a certain number of games daily so it's not just playing 24/7 and players encounter it constantly. It would be neat to see it evolve as it encounters a variety of maps, races, and unique play styles. Especially cheese.
@Kleavers2 роки тому
I think they did that for a while.
@dannygjkРік тому
AS did play on the ladder.
@PickyMcCritical5 років тому
1:42:32 "AlphaStar has a normal _average_ APM!" *Watches AlphaStar's APM throughout the game* Sometimes it's 100 APM, and during fights it's at 1,000 for long stretches, even exceeding 1400...
@StarboyXL95 років тому
yeah, they should have made it stay closer to the middle lol
@jaredpoon58695 років тому
I think this is what I and a number of people were rather annoyed by. It's rather disingenuous to say that the bot's "average" APM is equivalent to that of MaNa or TLO without adding, "Oh, and also it can exceed them by 3 or 4 times when it wants to." The whole purpose of this test was to see whether a machine-learning AI under the same constraints as a human can beat a human, and we really didn't see that in this demonstration. What we saw was a bot massively out-micro a human in certain situations, along with decent/okay decision-making. In part, the decision-making was informed by its ability to micro absurdly well. I don't mean to say this isn't a great accomplishment; they still made a bot that can utilize its 1000+ APM effectively, which, even with no APM limits, most bots cannot equal. It's just that the demonstration was supposed to show what the bot could do and what its limitations were, but what we saw were some clever ways of covering up those limitations: different agents each game, no max APM cap (as opposed to a max average cap), and no camera limitations.
@AsJPlovE5 років тому
@@jaredpoon5869 Forced to go PvP as well.
@jaredpoon58695 років тому
@@AsJPlovE I mean, I think this would happen regardless of the race. You would have mechanically superior AlphaStars controlling units like Marines in a mirror matchup, since it's easier to train an AI on one matchup than to split the training between six matchups.
@AsJPlovE5 років тому
@@jaredpoon5869 Forcing mirror limits humans, but you've made a lot of good points. And somewhat relevant. Welcome to the real world AIhole, HAPPY BIRTHDAY TO THE GROUND
@alexbusch__5 років тому
The casters are talking about AlphaStar's pro-like average APM (~300), while ignoring the fact that it hovers at 600-800 during fights, giving it a considerable upper hand, especially with micro-intensive mechanics such as Blink. While Alpha's chosen strategies are advanced but not perfect, the superhuman interaction with the game (no mouse, no keyboard, plus high multitasking) gives it an obvious advantage. AlphaStar is impressive indeed, but I'd love to see some of these advantages regulated in order to have a balanced match against humans :)
@johnuferbach91665 років тому
I've seen humans peak at 1500+ APM ^^
@khoid5 років тому
@@johnuferbach9166 The 1500 is just click spamming. It's not actually 1500 precision clicks.
@Plajerity5 років тому
@@khoid It's not so easy. The highest APM comes from the "a"-move command, but Zerg needs to use it and be as fast as possible.
@kryptic835 років тому
It even has an APM of 1500; there is an article about it. That reduces AlphaStar to its superior mechanics. It's not smarter; it's just an AI.
@thienle89125 років тому
No, it manages hits perfectly by calculating damage and HP; that's the real advantage.
@alcaulique83585 років тому
Interesting games, but I feel that the technical advantage of the first agents was a bit like Deep Blue brute-forcing the win against Kasparov. The agent in the last game, with the proper in-game camera, is very interesting; I would have loved to see more games with that agent, as it felt more brain vs brain, with the AI having no silly technical advantage. I surely hope there will be more games like the last one, on different maps and in different matchups. My feeling, which may be wrong, is that AlphaStar won due to technical advantages (useful APM, camera vision, ...) and not brain advantages. This feels like very early work from DeepMind (one matchup, one map). Looking forward to what follows.
@WatchAndGame5 років тому
This DeepMind guy from the video visited my university last week and held a lecture about AlphaStar. It was actually very impressive and interesting to get to know more of the technical side.
@rapha.123 роки тому
Thanks Deepmind for this amazing video about my favorite game
@lalalarara92095 років тому
you have to do this with Serral.
@fybard89225 років тому
Seems like they only trained for PvP. Besides, this AI is still beatable; wait a year or two and it will definitely beat Serral.
@julien-scholz5 років тому
@@fybard8922 I think waiting for the next BlizzCon is good enough ;)
@schokowaffi44565 років тому
They have to let AlphaStar play against Serral; I want to see it so badly.
@WhistleRgv5 років тому
@@fybard8922 I think it will learn much faster than in 1 or 2 years. Look at the learning curve the OpenAI agent had with Dota; it was insane.
@mortenlu5 років тому
@@WhistleRgv DeepMind still has some things to iron out, but after seeing this, I'm confident they will.
@gypsyjr13715 років тому
I agree with Blizzard, SC2 is the best eSports game there is. Casual and non-players can enjoy watching it, and it has very deep strategies, tactics, and micro-ability requirements on high level players.
@minsapint80073 роки тому
Great contributions from everyone on the panel.
@CalMariner4 роки тому
Quick links for those who are excited. TLO vs AlphaStar:
14:40 - Game 1
28:58 - Game 3
@rud1gga1553 роки тому
Game 2?
@MattNicassio5 років тому
The last game, where the field of view was restricted and AlphaStar had to use some of its APM for camera control instead of unit micro, made a huge difference. You could see one point where MaNa was squished in the center of the map while AlphaStar had Stalkers on multiple flanks; in the past it crushed MaNa in exactly this situation. It would attack on multiple sides at the same time and use Blink to individually pull back the Stalkers in front right when their shields ran out, and again later when their health was low. It made AlphaStar's army last so much longer!

When you play SC2 you can SEE your hurt guys, and if you could control them with your mind you could use a much lower APM to pull wounded Stalkers back from the front line. But when you're using a mouse to try to pick them out, it is very difficult, and it is impossible to have perfect dexterity. The AI has perfect dexterity with every move. That goes a long way. When it can see the whole battlefield, combined with perfect dexterity, the Blink Stalker strategy becomes insanely powerful! When it has to use the camera to look around and control its different attack fronts, that clearly limited its strength.

This was incredibly entertaining to watch! Can't wait to see more. I would also love to see the CURRENT SC2 global champion play AlphaStar, because the current champ at the top of their game is usually much better at micro than a pro gamer who has been around forever and has a ton of followers but probably isn't currently in their prime.
@rodrigoaguirre94285 років тому
Serral vs Alphastar pls!!!
@EliteHaunter5 років тому
AlphaStar can only play PvP yet; they said it :(
@jong-pingkim38405 років тому
AlphaStar can't beat Serral. AlphaStar can only play Toss vs Toss for now.
@christopherrest82885 років тому
@@jong-pingkim3840 AlphaStar will play vs Serral soon, in February already; mid-February if I'm not mistaken.
@movax20h5 років тому
We will see it this year for sure.
@raventhc88475 років тому
@@jong-pingkim3840 Well, I'm sure in the future it can run a thousand more years to learn every matchup. XD The makers still have to add the code for AlphaStar to learn the other races' units, though.
@osraneslipy5 років тому
Love Rotty and Artosis; in my view the very best of the very best, not only among SC2 commentators, but among commentators in general.
@superchiku4 роки тому
very inspiring and radical breakthroughs
@mupo18115 років тому
I love how Artosis praises AlphaStar while shitting on MaNa after the third game. MaNa's face is priceless.
@allanw5 років тому
Hmm, when I watched yesterday the sync was fine (although the seek thumbnail previews were out of sync). After refreshing the page, the audio is massively out of sync.
@blindConjecture5 років тому
Same problem, it's totally unwatchable...
@Kelvinian5 років тому
Same, I hope this gets fixed
@AlexFeature5 років тому
When I saw the win percentage graph, shivers went down my spine. This is scary af.
@antoniolim7623 роки тому
OMG, I swear the DeepMind team will usher in the very real future of awesome e-sports from the comfort of your own home one day. You will no longer just watch; you will one day actually bet, challenge, and win or lose real money or prizes. I can see solo players and teams becoming as vested as sports franchises. Dang, keep it up DeepMind! I hope I live long enough to enjoy your work!!!
@phillip3m5 років тому
Ruining everything Korea is proud of, one game at a time.
@Askhat085 років тому
Poor Koreans) New AlphaStar vs top Korean player match when?
@gigaslave5 років тому
Would be fun to see AlphaStar adapt to Jaedong's Zergling duels or Bisu's insane Dragoon micro.
@Hodoss5 років тому
Be careful what you wish for. Next thing we know, Korea elects AlphaStar as President, then proceeds to conquer the world.
@stefenwhite38655 років тому
Not really...lol... They made the game so famous Google's using it as a template.
@victornguyen56115 років тому
Next up. AlphaPop vs K-Pop!
@63M1N14 роки тому
I came here after the AI destroyed the world's best Go player. Amazing stuff!
@travislee60323 роки тому
lol me too. Now I'm thinking about getting the game; it's been about 10 years since I played.
@kingmantheman3 роки тому
Me too
@zanerush96763 роки тому
I came here after the AI destroyed the world's best Go player. Amazing stuff!
@-.-...---72 роки тому
I came here after the AI destroyed the world's best Go player. Amazing stuff!
@majorgeneralrahul62982 роки тому
Same
@ralph.senatoreРік тому
I'm very proud of MaNa in the exhibition game, which he won. He needed just 5 training games to recognize his flaw: he needed to observe AlphaStar's progress and strategy. In the beginning I was a bit upset that AlphaStar had the advantage of working with a total overview of the map and didn't need a mouse. But with all the disadvantages the human player had, and without 200 years of practice, MaNa was able to improve his game and intuition within that hour-plus of analyzing his losing matches. There is probably an Achilles' heel in training agents only against agents; because of the time and the number of games you would need to crack human ingenuity, you would need a scaled-up game: agents vs millions of humans. Interesting (philosophical) times!
@ZCasavant3 роки тому
With the end of new SC2 content support just announced, I'm guessing that DeepMind's involvement with SC2 is also over now? Are we never going to get an SC2 AI of AlphaStar's caliber for solo practice?? I was really hoping for this :(
@StpMakinMeChangMyNam5 років тому
Your AI is incredible. Hands down one of the most entertaining things I've ever seen in StarCraft 2, but I would like to provide a little constructive criticism if I can:

1) The APM limit you put on it is not realistic. I like that you implemented a limit, but you've confused APM and EAPM. APM stands for actions per minute; EAPM is effective actions per minute. Each of the AI's actions is effective: it doesn't issue the exact same command 8 times to have it happen once. Human players, however, have MAYBE half of their APM as EAPM. By letting the AI spike to 1500 EAPM in a fight, you've given it a totally unfair advantage. Please try limiting it to no more than an average of 150 total APM across the game, without letting it spike higher than 300 APM in any one minute of gameplay. That would be a much more human-like limitation.

2) Allowing the AI to play without a limited screen is also an unfair advantage. Human players have to spend APM and time re-focusing their camera and field of vision. Allowing the AI to see the entire map at once is another reaction-time advantage; harassment against that is nearly impossible. (I understand it still has fog of war, but it can see everything it has vision of simultaneously, instead of being limited to what a human player can see on a monitor at one time.)

Still, this AI is incredible and I am honestly impressed with it. Give it another 1,000 years of play time and maybe it can play all 3 races on all the ladder maps and consistently beat professional players :P
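The cap this comment proposes (150 average APM, 300 in any one-minute window) could be sketched as a rolling-window rate limiter. The `APMLimiter` class and its thresholds are purely illustrative of the commenter's suggestion, not anything DeepMind published:

```python
from collections import deque

# Sketch of the proposed limit: an action is permitted only if (a) the
# trailing 60-second window stays under a spike cap AND (b) the
# whole-game average stays under a mean cap. Thresholds are the
# commenter's suggested 150 avg / 300 spike, not AlphaStar's real caps.
class APMLimiter:
    def __init__(self, avg_cap=150, spike_cap=300):
        self.avg_cap = avg_cap      # max average APM over the game
        self.spike_cap = spike_cap  # max actions in any 60 s window
        self.window = deque()       # timestamps of recent actions
        self.total = 0              # actions taken so far

    def allow(self, t: float) -> bool:
        """Return True if an action at time t (seconds) is permitted."""
        while self.window and t - self.window[0] > 60.0:
            self.window.popleft()               # drop stale timestamps
        if len(self.window) >= self.spike_cap:
            return False                        # would exceed the spike cap
        if t > 0 and (self.total + 1) / (t / 60.0) > self.avg_cap:
            return False                        # would exceed the game average
        self.window.append(t)
        self.total += 1
        return True

# Demo: an agent trying 50 actions/second for one minute gets throttled.
lim = APMLimiter()
burst = sum(lim.allow(t / 50) for t in range(3000))
print("actions permitted in the first minute:", burst)
```

Under such a budget, the agent would have to learn which actions are worth spending, which is arguably where the interesting strategy would emerge.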
@DeuceGenius5 років тому
Why not let the machine play the best it possibly can, I say.
@MsIrrealis5 років тому
@@DeuceGenius Because it would easily stomp every player through micro alone. Look up Automaton 2000.
@thomasr71295 років тому
If the goal is to make an artificial human, then maybe these limitations would make sense, but is that the goal? I want to see how well it can do, no holds barred... :D
@MsIrrealis5 років тому
Thomas Remme The goal is to create an AI that can outwit a human... in ingenuity! Coming up with strategies and reacting on the fly to situations it might have never seen before. The goal is not to beat humans by running 2 stimmed Marines around 200 Banelings without any trouble while dropping all 7 of the human's bases.
@skychaos875 років тому
@@DeuceGenius That's pointless, because it could already do that from the start, without learning. AI is all about learning, not executing. Once its level of execution is the same as a human's, its learning can focus on strategies, mind games, etc.

For example, over-saturating the mineral line doesn't make sense for human players playing against each other, but it does for this AI, because it realized it can make up for a smaller army with its micro. That is not intelligence, at least not what we are looking for in AIs. Another example is how the AI loses to a human player who cheeses and does multi-pronged harass; this shows that the AI has been playing on brute-force execution, not intelligence or understanding of the situation.

Don't get me wrong, it does show intelligence in learning and figuring out scouting timings and the counter-measures required to react to different units. But those are low-level intelligence that could be input or taught directly. High-level intelligence is mind games, baiting and not falling for baits, assessing terrain advantages and disadvantages, and reading the opponent's habits, all processed on the go. Most of that is still lacking in the AI, because it conveniently overcomes it with brute-force micro. It's ridiculous how its Stalkers can defeat Immortals through micro alone, and how it can literally control units in different areas of the map at once.
@VK-pk8uz5 років тому
Why are you so hung up on the "weirdness" of AlphaStar's worker numbers? You've mentioned it yourself several times already: Alpha frequently lost parts of its worker force with *no damage to its economy*. It's a buffer that makes total sense (it's genius, even) given the focus every player puts on their enemy's economy. I'm surprised no one else has done it yet.
@gametips83395 років тому
It's a meta thing. You have 3 kinds of bots: those who harass, those who oversaturate to defend against harass, and those who just wall up. I guess the bot meta is to harass and oversaturate, but I would bet that over time you would see a prevalence of wall-ins and lower saturation, depending of course on the popularity of Oracles.
@ferrells09874 роки тому
So much respect to everyone involved in this project. 10-0 wins for AlphaStar when it can see and influence the entire map at once; 0-1 when AlphaStar is forced to focus on a single spot at a time, more comparable to what a human experiences when playing. I have no doubt that they can create an AI that can beat any human, but I think they still have some work to do before we clearly have a dominant SC2 AI. To convince me of dominance, it will be crucial that they make this as "fair" as possible. The question is whether they can develop an AI that outperforms a human on a conceptual/decision-making level, rather than on a visual-processing or processing-speed level. Credit goes to StarCraft II for being such a complex game that building a dominant AI is such a big challenge. Can't wait to see DeepMind defeat this and the other challenges before them!
@rozaepareza5 років тому
Thanks for fixing the sync. Been waiting to watch this.
@hvip42 роки тому
I still have some reservations about the general fairness, but this is damn impressive. Its actions per minute are limited, but it's using a superior input device; it's as if the player didn't use a keyboard and mouse but controlled the game by thought alone.
@JohnnyDarko012 роки тому
Yeah, this was my exact thought process when I saw AI applied to Dota. It has god-like mechanics compared to people because it doesn't need to use a keyboard and mouse; it just issues direct inputs based on the game's map coordinates etc.
@xuanbachlai53712 роки тому
Right, imagine playing with no lag; the dream. It still needs to compute its moves, though, which is super fast. They said it's comparable to a human reaction (to a simple stimulus), but that's unfair: a human can't figure out a whole strategy in an instant like that. It means they underrated AlphaStar's APM. Edit: also, since actions are a resource in an RTS, this is a pretty big advantage. Humans play to play the game; even the most flexible players have some self-preservation, not wanting to waste actions adapting to minor mistakes. The AI plays to win.
@Jaime_Protein_Cannister2 роки тому
It is 100% unfair, but this has never been about fairness. This is about AI learning, a proof of concept. If it weren't limited, there would probably be a 0% chance of a human winning a single game, because its mechanical ability far surpasses a human's. It takes input straight from the API, so it can instantly see all resources, upgrades, and units within its vision, damage taken, and damage dealt. The control is much more precise; you can see it microing with insane precision. It doesn't skip a beat.
@dannygjkРік тому
@@Jaime_Protein_Cannister AS has about a 350 ms reaction time so yeah it does skip a beat.
@Jaime_Protein_CannisterРік тому
@@dannygjk lol no, it can select multiple units in the army to micro in a single action; humans cannot do selection the way the AI does, and unlike a human it sees everything at all times, for instance upgrade progress is always known. "Skipping a beat" is unrelated to the supposed reaction delay; the delay is just symbolic, because each of its actions is worth 2 or 3 human actions as far as precision goes.
@velikiradojica5 років тому
To be honest, you don't quite saturate mining until you hit around 24 probes. You leave the linear part of the income/time curve at roughly 16, so most people don't bother going over, but the AI's decision actually makes perfect sense to me. Also, having more workers than recommended gives you a buffer in case of enemy attacks on your mineral line and provides enough workers to instantly fill up 50% of the capacity of a new base. I was looking at the mining data and graphs provided by Team Liquid, but I didn't crunch any numbers to figure out the difference in disposable income that "over-saturating" provides, so the first part is just conjecture based on my engineering experience with non-linear characteristics.

DATA:
+ 16 drones = 660 mpm
+ 24 drones = 812 mpm
+ Going over 24 drones has very low impact on income.
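The non-linear curve described here can be modeled as a simple piecewise function. The breakpoints and per-worker rates below are assumptions fitted to the two Team Liquid figures quoted in the comment (660 and 812 minerals per minute), not measured in-game values:

```python
# Toy piecewise model of per-base mineral income vs. worker count,
# interpolating the quoted figures: 16 workers ~660 mpm, 24 workers
# ~812 mpm, and almost no gain beyond 24.
def income_per_minute(workers: int) -> float:
    """Piecewise-linear mining income (minerals/min) for a single base."""
    full_rate = 660 / 16             # ~41.25 mpm per worker, open patches
    partial_rate = (812 - 660) / 8   # ~19 mpm per worker from 17 to 24
    if workers <= 16:
        return workers * full_rate
    if workers <= 24:
        return 660 + (workers - 16) * partial_rate
    return 812.0                     # extra workers add essentially nothing

for w in (12, 16, 20, 24, 30):
    print(w, "workers ->", round(income_per_minute(w), 1), "mpm")
```

The flat tail is why a 25th worker is "worth" almost nothing for income while still serving as the attack buffer the commenter describes.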
@impero1013 роки тому
In my studies I created a StarCraft AI (BW, though) that used genetic algorithms (well, really a more advanced class of genetic algorithms) to generate strategies to win against the opponents it was matched against. It's a very interesting topic.
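A basic version of the approach this commenter describes can be sketched in a few lines. The 8-gene strategy encoding and the toy fitness target below are invented for illustration; the commenter's actual bot (and any real Brood War bot) would score genomes by playing games against opponents:

```python
import random

# Minimal genetic-algorithm sketch: evolve a "strategy" encoded as a
# vector of build priorities against a stand-in fitness function.
random.seed(0)
GENES = 8          # e.g. weights for economy, army, tech, expansion...
POP, GENS = 30, 40

def fitness(genome):
    # Stand-in objective: distance to a hand-picked "good" strategy.
    target = [0.5, 0.8, 0.3, 0.6, 0.4, 0.7, 0.2, 0.5]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, rate=0.2, sigma=0.1):
    # Gaussian mutation, clamped to [0, 1].
    return [min(1.0, max(0.0, g + random.gauss(0, sigma)))
            if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

pop = [[random.random() for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:POP // 3]                     # truncation selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP - len(elite))]

best = max(pop, key=fitness)
print("best squared error:", round(-fitness(best), 4))
```

Because the elite survives each generation unchanged, the best genome's fitness improves monotonically; the interesting engineering is all in making the fitness evaluation (actual games) cheap enough to run thousands of times.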
@PabloGnesutta2 роки тому
Having ALL the friendly AND enemy units AND THEIR STATS visible at once and at all times is an INSANE advantage. (It's still awesome.) That micro...
@dannygjkРік тому
AS was nerfed and released on the ladder before you wrote your comment.
@ohlookitisacat74045 років тому
DeepMind vs 8 Brutal cheat AIs
DeepMind vs Hacker
Please.
@YizkiM4 роки тому
I feel like I'm seeing the beginning of Isaac Asimov's Foundation series coming true. First Go, then SC, then the human economy.
@DerKiesch5 років тому
At 02:04:09, what stands out for me (Rotterdam mentions this as pulling workers really early) is that AlphaStar instantly pulls all workers in the direction of its expansion and its Stalkers. It seems as if it "intends" to move them in a way that uses the "lost" time from pulling workers for a potential transfer to the expansion if the attack continues, while at the same time drawing the Oracle toward the defenders. This looks like really efficient management of the situation: the earlier you pull here, the earlier your defenders reach the Oracle and the fewer Probes you lose, whereas with a later pull you would just manage to draw the Oracle away from the defenders.
@sammosaurusrex4 роки тому
Alphastar’s micro is where it shines. It understands how to keep units alive and dealing damage even when they have very low health
@DreadedGhoul5755 років тому
Would be great to have learning AIs in any game; really interesting to watch them. :)
@ZCasavant5 років тому
Sure the AI's apm might be no higher than a pro gamer's, but all of its actions are calculated. It's not just spamming. There's a huuuuuuuuuge difference.
@kristiann43465 років тому
Obviously there's a huge difference. Are you not familiar with chess engines? Those engines aren't supposed to be beatable by humans; they're designed to be unbeatable in order to educate them. Stop acting as if this is supposed to be a fair matchup.
@ZCasavant5 років тому
Kristian Nunez I am aware that it's not a fair matchup... my comment is aimed at the announcer's comments on human vs ai apm.
@kettenschlosd5 років тому
Also, you have to take into consideration that the AI can issue two inputs on two different sides of the map; a human would need at least one more input just to move the camera.
@sparkyk245 років тому
That's not really fair to the pros. A lot of pros don't spam except in the beginning, and they are incredibly efficient. I mean, you can't beat a computer at efficiency, lol, but the pros aren't spammers as much as people think.
@simeonpolet13075 років тому
Actually, AlphaStar is spamming; you can see it place 2 Gateways on the same spot during one game. So its APM doesn't equal its EPM like you would think. What's unfair is that it keeps low APM during macro and hits insane APM during fights, which is the exact opposite of what humans do.
@utubenoobie013 роки тому
Fantastic! Why is it that in that final match you didn't show all the game info in the panel like you did in the earlier games, you know, the supply count/unit count/resources etc.?
@charleskawczynski26332 роки тому
I think one of the most difficult fairness metrics to quantify (for humans) is the cost of context switching when focusing on different parts of the map: decision making for different parts of the map may vary greatly and be significant. I suspect this is difficult for humans but trivial for a neural network. I'd be very curious to hear what a DeepMind expert has to say about this metric and its importance.
@FeroxX_Gosu5 років тому
DeepMind devs SHOULD cap APM/EAPM spikes!!! This is crucial for making it more of an even fight. Restrictions breed creativity, so if we want to see superb strategies and build orders we must hard-cap the AI's mechanical capabilities! We already knew that theoretically perfect micro and macro should always win; there is nothing interesting in that... so why not cap it way under pro-level APM/EAPM? Then the AI would rely only on its strategic thinking, which would be much more interesting, I think!
@HUNKragor5 років тому
they did
@bemoremad5 років тому
Pros spike to over 2000 APM at certain moments; TLO's own APM went above 2000 during the games against AlphaStar, while AlphaStar peaked at only ~1500. AlphaStar's average APM was definitely lower than both pros' in these games, and it definitely was capped.
@FeroxX_Gosu5 років тому
@@bemoremad It's 80% control-group spamming, not effective actions. AlphaStar makes all of those "clicks" count and matter; that is superhuman, and thus an unfair advantage on a mechanical level.
@raventhc88475 років тому
@@FeroxX_Gosu Alpha averages only 200+ APM. I know that's low because the AI doesn't spam, but watching the game, the AI doesn't react to things faster than a human. It just doesn't make clicking mistakes.
@cea67705 років тому
@@FeroxX_Gosu How would that be implemented? The APM is already capped at around half that of a pro player. How do you implement prevention of control-group spamming?
@Mouradif5 років тому
About the discussion over APM (at 23:40): I would love to see a comparison of the players' EPM. I don't imagine the AI spams at all, so if each of its actions is effective, that's actually a pretty high number.
@ollieknoxx • 5 years ago
24:33 But it seems AlphaStar has more accurate APM; the blink micro is very good. It's not necessarily about reaction time but about clicking accuracy.
@alexmd5334 • 5 years ago
DeepMind is very good. This was unexpected. Please continue the work; it is very interesting to follow. How many games did it play against itself?
@mortenlu • 5 years ago
Around 200 years worth. So if each game is about 10 minutes, that's around 10 million games played in a week.
@alexmd5334 • 5 years ago
@@mortenlu WOW! ))) So it can train like this every week, for every combination of races: PvT, PvZ... I hope DeepMind continues the work; I am now AlphaStar's biggest fan.
@movax20h • 5 years ago
It was 200 years' worth of play for each trained agent, and they had dozens of them. So probably around 50 million games.
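The arithmetic in this thread is easy to sanity-check. Assuming an average game length of 10 minutes (the comment's assumption, not an official figure), 200 years of experience works out to roughly ten million games per agent; any fleet-wide total then scales with however many agents were trained.

```python
# Sanity check of the "200 years -> ~10 million games" estimate,
# assuming (as the comment above does) an average game of 10 minutes.

MINUTES_PER_YEAR = 365 * 24 * 60   # ignoring leap years
YEARS_PER_AGENT = 200
GAME_LENGTH_MINUTES = 10           # assumed average game length

games_per_agent = YEARS_PER_AGENT * MINUTES_PER_YEAR // GAME_LENGTH_MINUTES
print(f"~{games_per_agent:,} games per agent")  # ~10,512,000 games per agent
```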
@ondrejhanslik9368 • 5 years ago
I think StarCraft is actually easier for an AI to play than chess or Go. In chess/Go you have to plan many moves ahead; computer games are mostly about selecting a good strategy and then adapting it. You can't lose with just one bad move.
@evgiz0r • 5 years ago
@@ondrejhanslik9368 lol :) But objectively: chess is a complete-information game with about 10^30 possible games, while StarCraft is an incomplete-information game with something like 10^26 possible actions at every moment, played in real time. It is a completely different domain, and clearly orders of magnitude harder.
@tlowery04 • 3 years ago
I fell asleep to a guy digging a foundation for a house, woke up to this...
@Slippin_Jimmy_ • 3 years ago
Did you check your watch history to see what sequence of videos brought you here?
@lifeimagined6171 • 3 years ago
That's why I never turn autoplay on
@Leftyotism • 5 years ago
Whoah, that last match! :D
@christopherandersen714 • 5 years ago
Artosis + crew = legit love u guys
@christopherandersen714 • 5 years ago
No video tho ummm why?
@rpdgeorge • 5 years ago
1:42:02 "Oh my gosh" and the APM goes above 1500. Is that cheating? 1500+ precise APM.
@xisktrl • 5 years ago
AI god apm
@Dustkey • 5 years ago
Immediately afterwards: "The APM is not really that high..." lmao
@Male_Parent • 5 years ago
AlphaStar be training troops while fighting 😂. 1:12:22 Look at AlphaStar's view... :O Its camera is literally teleporting, and you can see its APM going into everything.
@Male_Parent • 5 years ago
You should see the APM in their AI vs AI tournaments; I've seen it go up to 191,000 APM. For context, that's about 3183.33 actions per second. Divide that by 240 (assuming a human could somehow exploit every frame of a 240 Hz monitor) and it's about 13.26 actions per frame. So the AI isn't bound by frame rate at all, which is a whole other story: it could be microing everything across the whole map while simultaneously training troops, upgrading, and moving random troops back and forth just because why not.
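Reproducing the arithmetic in the comment above (the 191,000 APM peak is the commenter's anecdote, not a verified figure):

```python
# Convert a claimed 191,000 APM peak into per-second and per-frame
# rates, assuming a 240 Hz monitor as the comment does.

peak_apm = 191_000
monitor_hz = 240

actions_per_second = peak_apm / 60
actions_per_frame = actions_per_second / monitor_hz

print(round(actions_per_second, 2))  # 3183.33
print(round(actions_per_frame, 2))   # 13.26
```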
@alansmithee419 • 4 years ago
@@Male_Parent I would very much like to know where to find one of these matches.
@ToneyCrimson • 5 years ago
Are any of the agents called Smith?
@fabriziodutto7508 • 3 years ago
This demonstrates that creative thinking still matters: people with little or no experience in a field can "evolve" even better than people who study the field from papers and books written by others. I mean, experience in the field counts for more than study. Study = classical programming; experience = machine learning.
@charleswachunas646 • 2 years ago
Very insightful comment... I learned very well from a CD-ROM of math lessons a little above Algebra 2 to get ready for my ACT exam. What a great learning tool: our computer 🖥️ and the math-teaching software had more visual components and required constant step-by-step input from me, to make sure I understood each part of breaking a math problem down.
@themusicgaragetmg2330 • 5 years ago
OMG.... It's learning! Its micro is impeccable! Look out, hoomans!
@gidi1899 • 2 years ago
42:15 - I feel there might be an unfair advantage for AlphaStar: "controlling multiple armies" forces a human into repeated refocusing and tracking of several things at once. So although the overall decision time is limited for both, humans are additionally limited in how they sequence handling two issues.
@anders5611 • 2 years ago
The version of AlphaStar that came after this one wasn't able to look at the game like that; it had to move a camera exactly like humans do.
@gidi1899 • 2 years ago
Very cool, thanks. But I am also talking about "focus ability" over a limited list of options. Humans have this limitation because of the limits of our neural wiring, but for computers it's more like "O(N)", with no real cap on computing power (which is higher than a human's). You might think "why bother"; well, I feel the closer the AI's behavior is to human behavior, the better the advice we can take from its experience.
@imBenx • 5 years ago
Is it just me? The sound doesn't match????
@Meulenkamp1987 • 5 years ago
Not just a bit, but completely. A very poor video to watch; good to listen to, though.
@hansnilsson5622 • 5 years ago
Yes, the problem seems to occur when you play it in the YouTube app (on some platforms at least). Playing it in a browser seems to work; I watched it in my PS4 browser just fine.
@MrEssmarbu • 5 years ago
Chromecasting from my MacBook is no problem; it looks just fine.
@Xanthanarium • 5 years ago
@@hansnilsson5622 what on earth? What a weird bug
@neHCuoHEpa8888 • 5 years ago
I had the same problem from my MacBook in every browser, but it is OK on my phone. WTF!
@baker4616 • 4 years ago
Well played, MaNa, on that last one; there's hope for humanity still...
@rileyscottcole1370 • 4 years ago
What are y'all's thoughts on probes being auto-sent to minerals at the start of games? I kind of liked having to split them myself.
@ManInSombrero • 5 years ago
What are these complaints about the bot being rude? Instead, train it to say an offensive GG whenever it evaluates its probability of winning at >95%. Also, switch it to Terran so it can throw manner MULEs at filthy human faces.
@raimonwintzer • 5 years ago
offensive hatcheries and writing "alphastar" with creep tumors
@gronkymug2590 • 5 years ago
They obviously assumed the agent was going to win 100% :D They were wrong :P
@Ukitsu2 • 5 years ago
@@gronkymug2590 I wouldn't be so sure about that. What I'm almost sure of is that they sort of nerfed it for game 6. Also, game 6: George Costanza:
@Lithane97 • 5 years ago
@GronkyMug I feel like they didn't expect the bot to win more than half the games against the pros, judging from their reactions in the office clips.
@dannystoll84 • 5 years ago
@@Ukitsu2 I highly doubt they nerfed it for game 6. I doubt not only that they would, but that they could, and that they did, given what we saw. For one, the commentators, and more importantly MaNa himself, did not mention the AI being any weaker; if anything, it seemed stronger for the first 7 minutes. Also, how could they have nerfed it to exactly the appropriate level with no testing? Keep in mind the neural net is essentially a black box, and they have no outside standard to compare against. Lastly, I don't think AlphaStar lost because of bad mechanics or bad play in general; rather, it lost because MaNa spotted and exploited a typical AI-like weakness, which they surely did not program into it (and, NNs being black boxes, could not have). That said, because deep-learning game AIs optimize estimated win probability, they tend to play more sloppily when that estimate is very close to either 100% or 0%. For instance, this is probably the reason for the poor disruptor control in some of the battles in game 3 of set 1: AlphaStar already knew it was winning, so it didn't really care what happened as long as the battle didn't go too terribly (a similar thing happened in game 4 of AlphaGo vs Lee Sedol, after Lee was winning). When the game is close, by contrast, the AI plays very accurately, since every unit substantially affects the win probability.
@ajuc005 • 5 years ago
The first 10 games were impressive, but the AI played with a significant advantage over the human players (like playing StarCraft with the screen showing the whole map at once). Once that was changed and the AI had to manage the camera itself, it fell for some pretty basic stuff (warp prism harass making it move its army back and forth), and it hadn't realized MaNa had an observer over its army (any human player would have guessed this after MaNa abused it so much :) ). Also, not building a single phoenix to deal with the warp prism :) But I'm sure that in the future it will become stronger than any human player, without unfair advantages.
@chrishudson9525 • 5 years ago
They actually mentioned that the newest version of the AI was comparable with the other versions in terms of skill, based on their internal testing, despite perceiving the game slightly differently. So I think it had more to do with MaNa picking a better strategy, which he mentions at the end, and the AI becoming noticeably confused by it. That said, give it one more week of practice and it's unlikely that any pro will be able to beat it, considering the huge improvements in play it can make in a single week.
@MCSocrates314 • 5 years ago
It's pretty interesting that the DeepMind team actually did go through the process of implementing camera management. However, I think the unconstrained version would likely have fallen for the same trick, since it still could not see through the fog of war. AlphaStar simply never encountered that strategy before, so it never had to learn to deal with it and never realized such a strategy was possible.
@TheVergile • 5 years ago
@Kay well, you must be fun to have around at parties
@blakeshaneour1702 • 5 years ago
@Kay Because it's fun
@ajuc005 • 5 years ago
Previous versions (without camera management) defended much better, leaving something like 4-6 stalkers in each base (presumably against oracle harass). They also overwhelmed the human players with superior micro and multitasking, so there wasn't really a moment for the humans to harass seriously.
@DerKiesch • 5 years ago
I wonder if the disruptor-heavy game vs. TLO emerged in the AlphaStar league as a counter-strategy to the stalker-heavy play AlphaStar seems to excel at.
@TeamTimeless • 5 years ago
Everyone has already pointed out the APM issue, where they didn't limit effective actions. But also of note is that the AI can "see" everything on the map at once, and it doesn't have the same physical-input limitations that cause misclicks or targeting issues, nor does it have to rely on a minimap when it can actively see everything within line of sight. It looks like even when units overlap it can still accurately target wounded units, grab the closest unit with enough energy, etc. Being able to parse all of that without human input/output limitations is an advantage in itself.
@webentwicklungmitrobinspan6935 • 5 years ago
I would totally pay Google and Blizzard to have this AI for practice. Amazing