DeepMind’s AlphaGeometry AI: 100,000,000 Examples!

148,547 views

Two Minute Papers

3 months ago

❤️ Check out Weights & Biases and take their great courses for free: wandb.me/papercourse
📝 The paper "#AlphaGeometry: An Olympiad-level AI system for geometry" is available here:
deepmind.google/discover/blog...
Me on Twitter/X: / twominutepapers
📝 My latest paper on simulations that look almost like reality is available for free here:
rdcu.be/cWPfD
Or this is the orig. Nature Physics link with clickable citations:
www.nature.com/articles/s4156...
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Alex Balfanz, Alex Haro, B Shang, Benji Rabhan, Bret Brizzee, Gaston Ingaramo, Gordon Child, Jace O'Brien, John Le, Kyle Davis, Lukas Biewald, Martin, Michael Albrecht, Michael Tedder, Owen Skarpness, Richard Putra Iskandar, Richard Sundvall, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: / twominutepapers
Thumbnail background design: Felícia Zsolnai-Fehér - felicia.hu
Károly Zsolnai-Fehér's research works: cg.tuwien.ac.at/~zsolnai/
Twitter: / twominutepapers

COMMENTS: 329
@marshallmcluhan33 3 months ago
What a time to be a sine
@jakeroper1096 3 months ago
don’t go off on a Tangent now
@spookyrays2816 3 months ago
I chuckled
@Speedrunner.007 3 months ago
here before this blows up
@danisob3633 3 months ago
what a time to be A line
@MeesterG 3 months ago
What a time to be a five
@shadowdragon3521 3 months ago
This is a huge breakthrough. Right now LLMs tend to struggle with logic and reasoning, but that's about to change. This will open up so many possibilities!
@LincolnWorld 3 months ago
A lot of people tend to struggle with logic and reasoning too. At least half the population.
@martiddy 3 months ago
If AI is capable of mastering mathematics, it will create a snowball effect in every branch of science and engineering, and it will eventually master those as well.
@Anton_Sh. 3 months ago
@martiddy Damn. Is that good or bad? We don't know yet...
@memegazer 3 months ago
ukposts.info/have/v-deo/rZyoiaGYgqCYumQ.html
@asdfasdfasdf1218 3 months ago
@LincolnWorld That's why most scientific and technological advancement is done by the top 0.01% of the population. Almost all of humanity is "ignorable" in terms of human progress; it's only the top of the top that pushes the whole thing forward. In other words, the kind of AI that can really push things forward is not one mimicking the average human, but the top 0.01%.
@Baekstrom 3 months ago
Someone should show this to Roger Penrose. Those are the kinds of problems that he argued could never be solved by a classical computer.
@Gingnose 3 months ago
It would be cool if AI starts to see intricate geometries, in ways we never thought of, in things like art, architecture, and engineering principles. Geometry carries over into more than just other realms of mathematics. This achievement really showcases what is to come in the future.
@Gigizhvania 3 months ago
Too much AI idealism will kill our every function, and we'll end up bald and grey, with nothing left to do but observe other civilizations like ants.
@me_hanics 3 months ago
It would be cool, but as long as AI is "trained on data" I think it can only use ideas we have already invented. However, non-mathematicians (e.g. artists) may have come up with ideas long ago which went unnoticed and may be breakthroughs for seeing intricate geometries.
@Ken1171Designs 3 months ago
This is impressive, but the part I am most interested in is the recent trend of making models much smaller. We are also seeing a trend of making smaller models more efficient than larger ones. I have hopes this will eventually bring the power of GPT-4 to consumer-grade GPUs, installed locally. That would really be something.
@ryzikx 3 months ago
Models like Nous Hermes, Mixtral, and Solar already surpass GPT-3.5. We just need multimodal models like LLaVA to improve a bit more and we'll have GPT-4-level capabilities.
@Ken1171Designs 3 months ago
@ryzikx Yes, that would be the thing. However, the Nous Hermes Mixtral models I have found don't fit into my 24GB of GPU VRAM, so I can't use them locally. There is also the context-token limit, which creates a bottleneck on the size of the domain the AI can be used for. The majority of models are limited to only 4K tokens, so they soon forget what they were talking about. More recently, I have seen models with 8K tokens, but the models themselves weren't very capable. I have tried a few 30B models, but they were too slow on my GPU to be usable, so I am sticking to 13B. Which 13B model would you say is the most capable that can fit into 24GB VRAM?
@Mega4est 3 months ago
@Ken1171Designs From my experience, models below 30B are just not good enough. Even though you cannot fit those models fully on the GPU, you can offload some of the layers to speed up inference. I have been getting good results with quantised Mixtral 8x7B at a speed of 8-9 tokens per second by loading ~15 layers to the GPU and leaving the rest to the CPU. Not very fast, but the results are of better quality, and I would not recommend going lower than that.
@jacobnunya808 3 months ago
Consumer-grade GPUs, or an AI processor on things as small as smartphones.
@Ken1171Designs 3 months ago
@Mega4est This basically summarizes my original comment above: the models that can actually do something (30B and up) are too large for consumer-grade hardware. When I saw the articles on small 7B models being more efficient than their 30B or even 70B counterparts, that trend gave me some hope for the future. However, what worries me is that smaller models tend to be limited to small domains due to the context-token bottleneck. There is still some way to go. ^___^
@user-mm8ts9ht4l 3 months ago
This is not something I expected we would make significant progress on so soon.
@vectoralphaAI 3 months ago
This is really impressive. AGI will revolutionize the world when it happens.
@CatfoodChronicles6737 3 months ago
Or, in the wrong hands, it will take advantage of people's ignorance and give all the power to the prompter.
@stell4you 3 months ago
Maybe we can create our own god.
@JackCrossSama 3 months ago
Instead of a Paradise, we will create our own hell.
@nemonomen3340 3 months ago
True, but I think it’s worth noting that AI would drastically change the world even if real AGI never came.
@-BarathKumarS 3 months ago
How is this Olympiad solver gonna help though?
@feynstein1004 3 months ago
I'm literally watching Skynet being born. What a time to be alive indeed 😁
@ArlindoBuriti 3 months ago
Yeah bro... the idea of my novel is coming to light, where humans can only fight the AI with a personal jammer on their bodies, because without that it never misses.
@feynstein1004 3 months ago
@@ArlindoBuriti Sounds lit 😀
3 months ago
Not a huge worry soon >:).
@feynstein1004 3 months ago
Eh?
@TheFartoholic 3 months ago
What a time to still be alive!
@spookyrays2816 3 months ago
Thank you for making my go-to videos for when I'm eating my lunch. You really improve the quality of my day. Thank you!
@doyourownresearch7297 3 months ago
what a time to be a lunch
@usamazaheer9194 3 months ago
4:46 What impresses me more is that there is a class of human beings with such sheer brilliance that they can beat a trained neural network at logical deduction.
@quantumspark343 3 months ago
Humans are neural networks themselves...
@mysticalword8364 3 months ago
Well, if you have a billion people try to do something and take the absolute peak accuracy, it would be more like having a billion variants of the AI and cherry-picking the absolute best results for each specific domain. Which is fine, but has different implications, I think.
@nonameplsno8828 3 months ago
Not for long. Guess what happens just two more papers down the line?
@Mojkanal1234 3 months ago
If I understand correctly, it's high schoolers, which is even more impressive.
@swordofkings128 3 months ago
Right? I think we forget how impressive humans are compared to machine learning algorithms. Computers aren't limited by the flaws of our human bodies: they don't need to eat or sleep, they have no emotions compromising their performance, they don't need to be entertained, to relax, or to be motivated; they have no human needs. So it's no surprise that, given enough time and resources, a computer can be good at human-like mathematical logic, or anything, really. Given that humans work under the restrictions of being human, it is in a way more impressive that humans can still do better than machines, considering how insanely good modern ML has become.
@gix10000 3 months ago
This might be the breakthrough needed for AI to start designing General AI, imagine it comes up with a general mathematical theorem for intelligence which is more complete than what we've had up to now? And then that model can improve upon itself, and so on. The next 5 years are going to be amazing to watch
@alphablender 3 months ago
Man, thanks for your insights and speedy news; it's great.
@marcmarc172 3 months ago
Incredible! Specialized, but absolutely incredible! What a time to be alive!
@tuseroni6085 3 months ago
you just need a bunch of such specialized expert AI and a mixture-of-experts model to end up with ASI.
@marcmarc172 3 months ago
@@tuseroni6085 that's one approach that could work - good point!
@asdfasdfasdf1218 3 months ago
If an AI can master all math, then that means it can do any kind of deductive reasoning. This AI is specialized in the sense that it can only find auxiliary points for 2D geometry proofs, but perhaps they'll soon find a way to branch out to all other kinds of math.
@LetsGenocide 3 months ago
A terrifying amount of progress for a single piece of research! Can't wait for it to be applied to other AIs.
@ibrahim47x 3 months ago
Integrating these different models together, the multi-model approach is what will make this useful, GPT-4 will be able to do math really well using it.
@carlsonbench1827 3 months ago
listening to this is like racing down a bumpy road
@JoshKings-tr2vc 3 months ago
This is a beautiful paper. It made me think about the way we train AI and how to allow it to grow on its own.
@JoshKings-tr2vc 3 months ago
One example is Machine Vision being greatly improved by vectorized images.
@JoshKings-tr2vc 3 months ago
The synthetic data process they used really made this AI shine. The symbolic deduction and traceback method for the training is very intriguing. Similar to how our brains have common sense and certain concepts that are bent and molded for deduction. But we also learn from seeing the logic reasoning of tracing back the deductions. Awesome paper.
@jonathansung8197 3 months ago
For me, doing the mechanical work in maths class was learnable, but pulling "rabbits out of hats" was what I really struggled with when doing proofs at uni. Seeing this AI perform almost as well as a Gold IMO contestant is very impressive!
@DreckbobBratpfanne 3 months ago
I wonder if we will see a development where some major frontier general AI uses lots of specialized systems as tools, and then the next-gen frontier AI is trained on this entire structure, so it learns to do these things by itself, out of the box.
@freedom_aint_free 3 months ago
GPT-4 is quite bad at geometric problems, as I've tested it. I do a little coding exercise with it, and so far it has never gotten it right, nor has it gotten any better as far as I know:
1) I ask it to generate an isometric 3D maze, simple, with every wall of the same height and width.
2) Then I explain how it could do it:
2.1) Use a classical algorithm, say Kruskal's, to build a minimum spanning tree (MST) out of a random grid of M x N points.
2.2) Where points are connected there's a wall, and where they aren't there's a space in a cell (the opposite also works; overall it is about 50/50 between open and closed cells).
2.3) It needs to have an opening at the top left (entrance) and another at the bottom right (exit).
Those points (2.0-2.3) it gets right after a few back-and-forths.
3) I say that it should use a linear transformation to make it isometric and then extrude the walls upwards. Here it keeps getting it wrong. I've shown it dozens of images of simple isometric mazes, but as far as I know it never gets it right. If somebody was able to do it, please leave a message!
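The MST part of the recipe (steps 2.1-2.2) can be sketched directly. This is a minimal, hedged illustration; the function name `kruskal_maze` is my own, and the entrance/exit of 2.3 and the isometric shear of step 3 are left out:

```python
import random

def kruskal_maze(m, n, seed=0):
    """Build a random spanning tree (a 'perfect' maze) over an m x n
    grid of cells using Kruskal's algorithm with union-find."""
    rng = random.Random(seed)
    parent = list(range(m * n))       # union-find over cell indices

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    # Every wall between two orthogonally adjacent cells.
    walls = []
    for r in range(m):
        for c in range(n):
            if r + 1 < m:
                walls.append(((r, c), (r + 1, c)))
            if c + 1 < n:
                walls.append(((r, c), (r, c + 1)))
    rng.shuffle(walls)

    passages = []
    for a, b in walls:
        ra, rb = find(a[0] * n + a[1]), find(b[0] * n + b[1])
        if ra != rb:                  # knocking this wall down joins two regions
            parent[ra] = rb
            passages.append((a, b))
    return passages                   # exactly m*n - 1 openings
```

Since every accepted wall removal joins two previously separate regions, the result is a spanning tree: exactly M*N - 1 passages and no loops, which is what makes the maze "perfect".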
@vash2698 3 months ago
This would work well in a mixture-of-experts type model, or as a specialized agent in a swarm, right? GPT-4's limitations here would be mitigated by fine-tuning it to leverage a model like AG as a tool. As long as it can get as far as asking the question and validating the answer, it would likely be an effective approach.
@markonfilms 3 months ago
I've heard rumors of GPT-4 Turbo being distilled into an MoE with about 230(?)B parameters active, out of a total of something like 1.4 trillion.
@imsatoboi 3 months ago
What? Open source? Damn, I'm getting chills, because this will be able to do things that we maybe don't know about, since it learned from scratch (I think).
@nefaristo 3 months ago
6:38 About this AI being "still" relatively narrow: I think it's a good thing it stays that way while progressing in its own field. Since models are black boxes, to minimize the alignment problem we want narrow superintelligent AIs to communicate with each other (and with themselves; see chain-of-thought methods, etc.) in natural language, so that humans (and other AIs) can check on what's going on. I think it's a good trade-off between security and efficiency.
@tsarprince 3 months ago
Very, very frighteningly brilliant.
@xSeonerx 3 months ago
Awesome! Imagine what will happen some years in the future!
@BosonCollider 3 months ago
It may be worth mentioning that you can already use a classical computer algorithm to solve these problems. Tarski's axioms make elementary geometry decidable, so it is possible to check algorithmically whether a theorem follows from them, and there are plenty of existing algorithms that enumerate large numbers of theorems. The innovation here is making the system find short proofs in a way similar to a human, using said algorithms to enumerate practice problems.
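As a toy illustration of what such a symbolic enumeration loop looks like, here is a minimal forward-chaining sketch over made-up geometry-style rules. These are illustrative Horn rules of my own, not Tarski's actual axioms:

```python
def forward_chain(facts, rules):
    """Saturate a fact set by applying inference rules to a fixpoint:
    a toy stand-in for a symbolic deduction engine."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for fact in rule(known):
                if fact not in known:
                    known.add(fact)
                    changed = True
    return known

def parallel_symmetry(known):
    # par(a, b) -> par(b, a)
    return {("par", b, a) for (p, a, b) in known if p == "par"}

def parallel_transitivity(known):
    # par(a, b) and par(b, c) -> par(a, c)
    out = set()
    for (p, a, b) in known:
        if p == "par":
            for (q, c, d) in known:
                if q == "par" and c == b and d != a:
                    out.add(("par", a, d))
    return out
```

Real engines add heuristics and auxiliary-point construction on top; plain saturation like this blows up combinatorially, which is exactly the point about the size of the enumeration space.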
@denisbaudouin5979 3 months ago
Are you sure of that? Here, one difficulty is finding where to put the interesting points to be able to complete the proof, and it seems that Tarski's axioms don't say anything about that.
@denisbaudouin5979 3 months ago
And I am unsure whether enumerating a large number of theorems really helps; what you want is a way to find the correct one quickly.
@Spiritusp 3 months ago
I think you are underestimating the enumeration space. We can also see other non-AI methods in the comparison, including the Gröbner basis method.
@epicthief 3 months ago
This is probably the first video in a while that was way over my head; it's like pure math.
@hectoris9193 3 months ago
I know very little about AI, but an idea has been bouncing around my head for a while: what would happen if you made a dataset containing a ton of trained AI models, with each model labeled with a text description of what it can do, along with descriptions of how many dimensions and parameters it has? Would it be possible to make a text-to-model AI that lets you describe a functionality and have it churn out a model with weights close to what is needed to achieve that functionality?
@alansmithee419 3 months ago
It already takes huge quantities of data to train language models (like, internet-encompassing quantities). To train a model on text-to-model data would require millions (bare minimum conservative estimate to my mind) of examples of high-quality AIs, and we simply don't have that many. Not to mention architecture, training method, and data are all just as if not more important than parameter/node/layer count. I'm not an expert, so I could easily be wrong, but I don't think there's remotely enough data to achieve this usefully, if it could even be particularly useful at all.
@j3ffn4v4rr0 3 months ago
That's a super interesting idea, and I admit I'm also fairly uneducated about AI... but I suspect one hurdle to implementing your idea is that a model is essentially a "black box": we don't know what's inside. So writing a text description of "what it can do" would be prohibitively difficult. In fact, a model's capabilities might be fully described only by the model itself. But I'd be interested to know if I'm wrong about that.
@BlooFlame 3 months ago
langchain
@okirschner001 3 months ago
@j3ffn4v4rr0 You are correct. I use this sentence a lot too. This also implies that "description" can be exchanged with "understanding". There is also the problem of AIs trying to understand AIs to tackle the black-box problem: an AI that 100% understands another AI would basically be a copy. Any deviation introduces uncertainty (blurriness); the similarities to the uncertainty principle in physics are no coincidence. Interesting metaphysical concepts arise from the study of AIs. Everything that exists is just a different expression of the same fundamental concepts, just on a higher complexity plane. It fractals down and up all the way. We are creating what created us!
@hectoris9193 3 months ago
@alansmithee419 Do models normally require that much data? I would've thought a thousand or so might be reasonable to start off with, and Hugging Face is full to the brim with people making experimental models.
@user-if1ly5sn5f 3 months ago
5:20 no, fast thinking is the quick response like muscle memory and the slow thinking is working out a process like using a process to window away until its answer. The process is what matters kinda like building the blueprint. Math process is the blueprint and we build to the answer. The slow is better thought out basically, not just a first prediction with all info but a first prediction that uses the info to find the right answer. Understanding the process allows it to be smarter.
@capitalistdingo 3 months ago
I think he is referencing the terminology of a cognitive science book by Daniel Kahneman rather than physiological terminology. Sort of like how the term "metal" means something different to an astronomer than it does to chemists.
@yoverale 3 months ago
What a time to be alive!! 🤯
@pridefulobserver3807 3 months ago
New all-powerful mathematician... hmm i can smell the new physics in some years already
@asdfasdfasdf1218 3 months ago
All of science and engineering is applied mathematics, starting with some empirical observations at its base. If AI can master mathematics, it can essentially master the creation of any kind of technology.
@AnAncient76 3 months ago
Mathematics is not reality.
@asdfasdfasdf1218 3 months ago
@AnAncient76 Mathematics is reasoning and logic. In fact, math is simply another word for logic; the two are the same. So math is reality insofar as it concludes new things from previous things.
@AnAncient76 3 months ago
@asdfasdfasdf1218 Mathematics is a concept people use to explain reality. And obviously they can't explain it, because reality is not mathematics. Numbers and lines do not exist in nature. The same applies to "logic". The Universe does not know logic, because to know logic it would have to know non-logic. That would imply that the Universe can produce non-logic, which is wrong. The Universe, at its fundament, does not create non-logic, mistakes, etc. People do that. The Universe also does not think like people. Space and time are also human concepts, i.e. they do not exist.
@asdfasdfasdf1218 3 months ago
​@@AnAncient76 If you think mathematics is "numbers and lines," that means you don't know what mathematics is. Mathematics at its core is not numbers and lines, it's formal languages and model theory. You should educate yourself more before commenting these irrelevant things.
@tmdquentin5095 3 months ago
Can you please talk about the new LLM model called "Mixtral-8x7B"? Thanks!
@tim40gabby25 3 months ago
Forget the paper, my subtitles spelt Karol's name perfectly! What a time to be a guy.
@virgilbarnard4343 3 months ago
Yes, I’ve been obsessed with this paper… but to say it’s without human assistance is misleading; they developed a sophisticated set of methodologies and functional graph procedures to generate the training data from scratch, even setting a new state of the art in proof discovery in order to feed it.
@KnakuanaRka 3 months ago
For the infinite-number-of-primes question: that can actually be done as a direct construction (and may have been in the original, I need to check). After taking the product of a list of primes plus 1 and finding the prime factors of that, you can put these new primes into the list and repeat the process over and over to generate an unlimited number of primes.
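That repeated construction can be written out directly. A minimal sketch (the function name is mine):

```python
def euclid_extend(primes):
    """Return a prime not in `primes`, via Euclid's construction:
    any prime factor of (product of the list) + 1 is new, because
    that number leaves remainder 1 when divided by each listed prime."""
    n = 1
    for p in primes:
        n *= p
    n += 1
    d = 2
    while d * d <= n:      # find the smallest prime factor of n
        if n % d == 0:
            return d
        d += 1
    return n               # no small factor, so n itself is prime

primes = [2]
for _ in range(4):         # repeat the construction to grow the list
    primes.append(euclid_extend(primes))
# primes is now [2, 3, 7, 43, 13]
```

Note that the new primes do not arrive in increasing order (2 * 3 * 7 * 43 + 1 = 1807 = 13 * 139), which is exactly why the proof shows there are infinitely many primes without enumerating them in sequence.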
@vitruviuscorvin3690 3 months ago
Would this be able to work with tabular data? I've been looking for ways to have an LLM crunch some tabular data I have and do some actual math on it (nothing too complicated, though). LLMs have so far been incredible with text/string data, but when it comes to numbers, oh boy, hallucinations start easily.
@alectoireneperez8444 3 months ago
It’s specialized for geometry problems & finding proofs
@NirvanaFan5000 3 months ago
great paper for the start of 2024. can't even imagine where we'll be at by december. hard to believe chatgpt is barely a year old
@linkymcfinkelstein6763 3 months ago
Want more of this!
@lasagnadipalude8939 3 months ago
Next version: optimize for the fewest possible steps and put it in a mixture-of-experts agent swarm to make the AGI.
@daviddelaney363 3 months ago
I do like the "narrow" focus approach. Would you rather have a candle or a laser?
@yessopie 3 months ago
Euclid's proof is written incorrectly at 2:15. The point is not that "p is a new prime", but that the prime factorization of p is a set of new primes.
@tuseroni6085 3 months ago
Get AIs like this that outperform humans, make some for all domains, and make an AI which can detect the domain or set of domains relevant to a prompt, then bring in the relevant expert AI to solve it. If the question is multi-domain, break it down into the relevant domains, ask each expert to solve their part, and then put the answers together. You now have artificial superintelligence. I feel like research papers in the future will be: methodology: I gave the question to an AI; results:
@sabofx 3 months ago
A-MA-ZING !!!!
@guncolony 3 months ago
It still seems like a pretty narrow AI, but it is incredibly good at what it does. There's still a long road to making this more general, but if that is accomplished, you essentially get an AGI that can figure out how to solve a complex problem on its own. Imagine giving it access to a simulation (so the AI can check its solutions), and you could use it to develop drugs, computer algorithms, optimized mechanical designs... basically replacing a whole lot of science and engineering.
@michaelleue7594 3 months ago
Getting a machine to do a narrow task very well is something we've been doing for 200 years, so let's not put the cart before the horse here. Generalizing this to *anything* more than what it's doing now may not be possible at all, let alone all of the stuff you're talking about.
@Afkmuds 3 months ago
@michaelleue7594 But when combined with the others, it can act as a section of the brain for processing math 😏
@michaelleue7594 3 months ago
@@Afkmuds I guess, but at what point does frankensteining a bunch of individual models together look less like a reasoning model and more like just a regular computer?
@Afkmuds 3 months ago
@michaelleue7594 Which is the next step: a computer that can access itself. The file explorer is so stupid, for instance. Computers are about to be so much better at being computers.
@TheVonWeasel 3 months ago
I keep saying the way to AGI is to have a million specialized sub-AIs, one controller that knows the best one to pick for the job, and a translator that can facilitate communication between them for more complex tasks involving multiple disciplines.
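A toy sketch of that controller idea, with keyword matching standing in for a real learned router (all names and the scoring heuristic here are invented for illustration):

```python
def make_router(experts):
    """experts maps a name to (keywords, solver). The router picks the
    expert whose keywords best match the task text: a crude stand-in
    for a learned dispatcher in a mixture-of-experts setup."""
    def route(task):
        text = task.lower()
        scores = {name: sum(kw in text for kw in kws)
                  for name, (kws, _solver) in experts.items()}
        best = max(scores, key=scores.get)
        return best, experts[best][1](task)
    return route

# Hypothetical specialized solvers, represented as stubs.
experts = {
    "geometry": ({"angle", "triangle", "circle", "proof"},
                 lambda t: "dispatch to a geometry prover"),
    "arithmetic": ({"sum", "product", "integer", "digit"},
                   lambda t: "dispatch to an arithmetic solver"),
}
route = make_router(experts)
```

The hard parts this glosses over are exactly the translator and the multi-discipline decomposition; a real system would need learned routing, not substring counts.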
@donjerson 3 months ago
AGI is an AI that can create sub-AIs or morph itself.
@Jonny11299 3 months ago
Keep saying that bro. This rings true and is beautiful to consider.
@andrechaos9871 3 months ago
Sometimes I want to connect a lot of these AIs to my brain as new modules and tools. It hasn't had upgrades in a long time.
@KilgoreTroutAsf 3 months ago
Look at this paper title! Look at these graphics! Let's not discuss anything about the method! What a time to be alive!
@EstrangedEstranged 3 months ago
Thanks!
@BlooFlame 3 months ago
Wow, first the MoE revisited approach, and now, we get a narrow expert to augment the MoE with exceptional mathematical ability!? What is going on with all these optimizations in AI right now?
@infowarriorone 3 months ago
What I wonder is how AI can help decode previously undecipherable written languages from the past. No doubt people are already training AI to do that.
@marcfruchtman9473 3 months ago
This is great progress. However, you are incorrect @5:06. The AI doesn't "learn from scratch": the training data it is given is human-created symbolic data, produced by symbolic engines that humans built. So it is literally trained using data that humans created, which provides the baseline rules of geometry; it doesn't self-create those rules. The synthetic data is 100,000,000 examples of that data, which it can then use to answer more complex questions, or questions not included in the data set. It is still very exciting progress, because it can extrapolate from what it was trained on to problems it has not seen.
@spencer__ 3 months ago
I really appreciate the time you put in on this channel. I wish I could access the content you provide, but your drawn-out way of speaking makes it really grating for me to follow. Obviously this is your stylistic choice, but I wanted to give the feedback that the constant pauses in every sentence stop me from clicking on your videos despite my real interest in the content. Either way, thanks for your hard work :)
@jjjccc728 3 months ago
Play it faster. I do it at one point five times and it's fine.
@spencer__ 3 months ago
@jjjccc728 I already do; it's not so much the speed as the abruptness of each stop and the emphasis on every word. For a channel focused on fitting big ideas into a short form, so much time could be saved 😅
@nrusimha11 3 months ago
Do consider the possibility that it is not a stylistic choice. You may see your comment in a completely different light.
@harrybarrow6222 3 months ago
I took Maths and Physics at university. It always seemed to me that the logical presentation of a proof said very little about how the proof was first found. 🎩🐇 I came to the conclusion that we use perceptual abilities to discern the key features of the problem, and then use analogy with similar problems to suggest possible approaches. You might even have to break the task into a sequence of stages with sub-proofs for each. This by itself, of course, does not give you a watertight proof. You then have to go through the process of filling in the steps and details.
@Sekir80 3 months ago
Károly, this is amazing! Is there any way you could present this paper in Hungarian as well?
@luizpereira7165 3 months ago
Is geometry easier than algebra for AI? Can they make a general AlphaMath with this approach?
@lis7742 3 months ago
Off topic, but: I saw a video on how video games are made. In the ones where the graphics are well made, physics is simulated. What if we made an even better simulation: building a game, or a model of reality, out of quantum physics, layered and layered to get the reality we see? Would that be possible?
@civilianemail 3 months ago
If I'm interpreting your question correctly, you seem to be asking if we could build a simulation as complex as the universe we live in. Presumably making it indistinguishable from reality. Is that a good restatement of your question?
@AnthonyWilsonOlympian 3 months ago
Impressive, but what are the implications or applications?
@GarethDavidson 3 months ago
Wow so can we spin this up as an API in Docker, and make LLMs pose problems in the same distribution as its input data, and use the outputs? How much GPU RAM do we need?
@scitechtalktv9742 3 months ago
Could this be applied to the domain of Physics?
@spicycondiments3043 3 months ago
at 3:42 the AI claimed angle congruencies that don't visually add up. Are these mistakes on the AI's part or mine?
@okirschner001 3 months ago
On the black-box problem and stochastic parrots (originally an answer hidden deep in a thread here, so context is missing): An AI that 100% understands another AI would basically be a copy, or at least would have this "copy" integrated into itself. Any deviation introduces uncertainty (blurriness); the similarities to the uncertainty principle in physics are no coincidence. Interesting metaphysical concepts arise from the study of AIs. Everything that exists is just a different expression of the same fundamental concepts, just on a higher complexity plane. It fractals down and up all the way. We are creating what created us!
These fundamental similarities in concepts can also be seen in human neural networks, even though AI only models one layer of our multilayered thinking and biology. We will finally understand everything much better with and through AI. AI is the last invention humans will make.
People saying that LLMs are only stochastic parrots don't really understand what's going on. But I don't feel like writing a novel here. Basically, both sides of the argument describe the same thing through an incompleteness lens. It turtles all the way up and down.
The very first time I heard that story of the old lady who got laughed at by all the scientists, I knew in my heart that she was correct. The most ironic outcome is the most likely. But it was actually my intuition, and where intuition comes from, and where it ties into this bigger picture (and QM), is very fascinating. Since AlphaGo, the bigger picture has become much clearer to me. And we are just at the start. We are always just at the start. Prepare for the biggest ride ever known to mankind. (Sometimes I feel I should write a book; there are many concepts in my head I have never read anywhere else.) I am not even really starting here; I just wanted to reply with a few words, but I can't help myself.
@xXWhatzUpH8erzXx
@xXWhatzUpH8erzXx 3 months ago
The next step is developing an AI that can craft these expert AIs itself, or intrinsically learn to accomplish these tasks
@charliebaby7065
@charliebaby7065 3 months ago
Notice how people always refer to thinking in ways that are distinct from every previous attempt or methodology... as "thinking outside of the box". Hardy har har. Aaahhh, sigh. I love your vids. Thank you for your passion and your diligence
@BooleanDisorder
@BooleanDisorder 3 months ago
Is there a use for this model, or is it mostly a proof of concept?
@waadeland
@waadeland 3 months ago
2023: "It is just a fancy autocomplete" - human talking about AI. 2024: "It is just a fancy autocomplete" - AI talking about human
@ThanosSofroniou
@ThanosSofroniou 3 months ago
This is the greatest AI advancement within the past year
@juhor.7594
@juhor.7594 3 months ago
Makes me wonder how the problems in a mathematics olympiad are made.
@Adhil_parammel
@Adhil_parammel 3 months ago
Those who score 20 or above become a group of professors
@jhunt5578
@jhunt5578 3 months ago
Impressive to see it work off synthetic data.
@anywallsocket
@anywallsocket 3 months ago
The problem is how to reduce the domain to deductive and inductive techniques. Once you can do that, you can train a NN on anything, and it will be better than the best - why? Because it is fitting a function in the hyperspace, the limit of which is full memorization of the training data. We shouldn't be so blown away by this, as the NN doesn't know it's doing geometry, and if you change anything fundamental it will fail spectacularly.
@kanetsb
@kanetsb 3 months ago
That moment, when you're walking happily in the streets of computer technology and there's a singularity standing in the next dark alley, waiting to shank you...
@Kram1032
@Kram1032 3 months ago
3:52 should note that the many, many proof steps are actually a negative? It's basically doing a slightly more informed version of throwing spaghetti at the wall and seeing what sticks. And it kind of by definition makes no mistakes, as it only accepts generated proof commands if they actually manage to advance the proof state. Human-written proofs of these things are going to be quite a bit more concise.
@cvspvr
@cvspvr 3 months ago
the maths here is way over my head, but at 6:00, apparently it has found better solutions than humans
@emmastewart3581
@emmastewart3581 2 months ago
Can it create new, even more advanced maths problems for the future AI Olympiads?
@Dron008
@Dron008 3 months ago
Can it prove theorems and do the same for other areas of math?
@ivanleon6164
@ivanleon6164 3 months ago
Google: Here is a new paper and new techniques. OpenAI: Thanks Google, how can we copy this... again?
@User-actSpacing
@User-actSpacing 3 months ago
You forgot to say "what a time to be alive!"
@hypersonicmonkeybrains3418
@hypersonicmonkeybrains3418 3 months ago
Can it prove Fermat's Last Theorem?
@mindaza0
@mindaza0 3 months ago
How does this pulling-a-rabbit-out-of-a-hat part work?
@antoniobortoni
@antoniobortoni 3 months ago
This is HUGE, big, I mean, imagine having problem-solving capabilities in your pocket, wow... the exponential growth is happening. We imagine robots as idiots, but they can be poets and genius inventors. Just do the same for inventions, and all the possible inventions and discoveries ever can be possible now. Wow
@BryanLu0
@BryanLu0 3 months ago
I don't know if you can call the proof better. It requires lower-level deductions, but more steps. It's easier to understand, but provides less intuition into what is going on.
@krox477
@krox477 3 months ago
This is a huge discovery
@frank4425
@frank4425 3 months ago
They should use this program to train GPT-4
@thomasyang9517
@thomasyang9517 3 months ago
Was ChatGPT actually able to solve that USACO problem?
@AK-ox3mv
@AK-ox3mv 3 months ago
If AI knows math perfectly, it can know how to build anything from scratch
@prilep5
@prilep5 3 months ago
I dropped my papers
@user-gh9ik2vu1w
@user-gh9ik2vu1w 3 months ago
Wow. It seems we are in the prologue of a sci-fi book
@carloslemos6919
@carloslemos6919 3 months ago
They should compare the AI's vs humans' solution lengths; that is where intelligence lies.
@Ctrl_Alt_Sup
@Ctrl_Alt_Sup 23 days ago
Can AlphaGeometry solve the Riemann Hypothesis?
@asdfasdfasdf1218
@asdfasdfasdf1218 3 months ago
Mathematics is, in other words, deductive reasoning, a foundation of all actual knowledge. The other foundation is empirical observation. If AI can master mathematics, it would not take much before it can master the creation of new technology.
@AvastarBin
@AvastarBin 3 months ago
I wish they'd open-source the data, though. That would be much more helpful imo
@ShivanshSharma
@ShivanshSharma 3 months ago
But can it prove the Collatz conjecture?
@yyaa2539
@yyaa2539 3 months ago
I saw the first few seconds of the video and I don't know why I am SO sad 😢😢😢
@yyaa2539
@yyaa2539 3 months ago
See the stars come falling down from the sky, Gently passing, they kiss your tears when you cry. See the wind come softly blow your hair from your face, See the rain hide away in disgrace. Still I'm sad. For myself my tears just fall into dust, Day will dry them, night will find they are lost. Now I find the wind is blowing time into my heart, Let the rain fall, for we are apart. How I'm sad, How I'm sad, Oh, how I'm sad. The Yardbirds
@jmoreno6094
@jmoreno6094 3 months ago
Your speaking tone is a boxcar(t) function
@danielebaldanzi8383
@danielebaldanzi8383 3 months ago
4:50 Don't want to ruin the excitement, but this is incorrect. The gold medal isn't awarded just to the winner, but to a large number of contestants. Still impressive, but not as much.
@AttilioAltieri
@AttilioAltieri 3 months ago
imagine the developers' reaction after they figured out they created something smarter than them..
@thechadeuropeanfederalist893
@thechadeuropeanfederalist893 2 months ago
How long until an AI solves one of the Millennium Prize Problems? I guess only 2 years.
@waarschijn
@waarschijn 3 months ago
Two caveats: 1. A long proof is worse than a short proof. The official solutions to these problems are usually very short. 2. "Gold medal" != smartest participant. About 1 in 12 IMO participants get a gold medal.
@jacobnunya808
@jacobnunya808 3 months ago
AI has a way to go before it can learn and solve problems like humans can, even in narrower fields. Right now, even if it finds elegant solutions, it takes millions or billions of examples or simulations before it can do so. An example is self-driving: a human can become decent after 1 year (give or take), while an AI needs lifetimes of experience to become good. I am sure this will improve a lot in the coming years though.
@jareddias7932
@jareddias7932 3 months ago
@@jacobnunya808 Why does it matter when you can live 10,000 lifetimes in 1-3 days? It's literally irrelevant and will only improve
@jacobnunya808
@jacobnunya808 3 months ago
@@jareddias7932 Not everything can or will be simulated countless times. For narrow tasks this might work, but not everything. For example, Tesla FSD still cannot drive as well as a human despite being trained on much more data than a human. Over time AI will improve, and so will data collection and simulations, but right now the point still stands.
@sganicocchi5337
@sganicocchi5337 3 months ago
I wish I could train on 100 million synthetic geometry problems
@smetljesm2276
@smetljesm2276 3 months ago
Wow, people found such an efficient way to make geometry as complicated as they could! This looks nothing like the geometry we learned in school 😂😂😂
@rodericksasu6976
@rodericksasu6976 3 months ago
Shit, this is too good too fast 😨
@peetiegonzalez1845
@peetiegonzalez1845 3 months ago
OK, now I'm having a real hard time believing that this is anything close to what an LLM is supposed to be able to do. This is not a language predictor. It has short-term memory and seemingly logically consistent views of what it's talking about. What is actually happening here?
@vighneshkannan7896
@vighneshkannan7896 3 months ago
It's a combination of two models, from what I understand: one which can understand the semantics of the questions, and the other which can perform logical operations. I think they play off each other, in a kind of dance, to achieve an end result
@CalebTerryRED
@CalebTerryRED 3 months ago
This is an LLM-ish AI making guesses at what a method could be, and running each guess through a calculator that validates it using formal logic. The second part is something that has existed for a while; mathematicians use such tools kind of like how we use calculators: they do the tiresome calculations that are long and repetitive, but don't have any sort of intelligence to know which calculations need to be done. It's kind of like using the Wolfram Alpha plugin in ChatGPT, just taken to the next level: use an LLM to translate and understand the problem, but use regular calculator functions to do the math so that the LLM doesn't make basic mistakes
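The comment above describes a propose-and-verify loop: a symbolic engine derives only verified facts, and a language model suggests a new auxiliary construction whenever the engine stalls. Here is a minimal sketch of that loop, with all function names hypothetical; this is an illustration of the general idea, not DeepMind's actual API or rule set:

```python
def solve(premises, goal, propose_construction, deduce_step, max_rounds=10):
    """Alternate verified symbolic deduction with LM-proposed constructions.

    Hypothetical sketch: `deduce_step` stands in for a symbolic deduction
    engine (only sound, checkable steps), and `propose_construction` stands
    in for a language model suggesting an auxiliary construction when the
    engine runs out of deductions.
    """
    facts = set(premises)
    for _ in range(max_rounds):
        facts |= deduce_step(facts)           # symbolic engine: verified steps only
        if goal in facts:
            return True                        # proof found
        facts.add(propose_construction(facts)) # LM: "pull a rabbit out of a hat"
    return False                               # gave up within the budget
```

In AlphaGeometry the deduction role is played by a symbolic engine over geometric rules and the proposal role by a transformer trained on synthetic proofs; in this sketch both are just placeholder callables you supply.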
@peetiegonzalez1845
@peetiegonzalez1845 3 months ago
Thanks for your answers, but I still don't understand how it's possible for these models to do what they are currently capable of. As Terence Tao himself said quite recently, we are on the cusp of these models being actual, literal co-authors of new mathematical research. Hold on to your papers, indeed.
@peetiegonzalez1845
@peetiegonzalez1845 3 months ago
@@CalebTerryRED The results show way more contextual intelligence than your description would suggest. [citation needed]. I've watched probably every Two Minute Papers video in the last 2 years, but that's nowhere near enough to understand what's going on in the so-called "AI" world right now. Yes, I watch other videos and try to keep up with the research, but it's just coming so thick and fast right now that it's practically impossible to keep up unless it's literally your job to report or expand on this stuff.