#53 Prof. BOB COECKE - Quantum Natural Language Processing

12,205 views

Machine Learning Street Talk

1 day ago

Bob Coecke is a celebrated physicist; he has been a Physics and Quantum professor at Oxford University for the last 20 years. He is particularly interested in structure, which is to say logic, order, and category theory. He is well known for his work on compositional distributional models of natural-language meaning, and he is also fascinated by understanding how our brains work. Bob was recently appointed Chief Scientist at Cambridge Quantum Computing.
Bob thinks that the way systems interact in quantum mechanics carries over naturally to how word meanings interact in natural language, and he argues that this interaction embodies the phenomenon of quantum teleportation. (A toy sketch of the compositional idea follows this intro.)
Bob invented the ZX-calculus, a graphical calculus that reveals the compositional structure inside quantum circuits, expressing entangled states and protocols in a way that is visually succinct yet logically complete. Von Neumann himself didn't even like his own original symbolic formalism of quantum theory, despite its wide use!
We hope you enjoy this fascinating conversation, which might give you a lot of insight into natural language processing.
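A toy sketch of the compositional (DisCoCat) picture discussed in the episode, in Python/NumPy. This is only an illustration under assumed toy dimensions and random vectors, not Bob's actual tooling (his group's lambeq/DisCoPy libraries do the real thing): nouns are vectors, a transitive verb is an order-3 tensor, and the meaning of "subject verb object" comes from contracting the verb tensor with the subject and object vectors.

import numpy as np

dim_n, dim_s = 4, 2                        # toy noun-space and sentence-space dimensions
rng = np.random.default_rng(0)

bob = rng.random(dim_n)                    # hypothetical noun vectors
alice = rng.random(dim_n)
loves = rng.random((dim_n, dim_s, dim_n))  # transitive verb as a tensor in N (x) S (x) N

# Contract the subject and object "wires" into the verb tensor; the result is a
# vector in the sentence space S -- the meaning of "Bob loves Alice".
sentence = np.einsum('i,isj,j->s', bob, loves, alice)
print(sentence.shape)                      # (2,)

On a quantum computer the same wiring diagram becomes a circuit, which is the link to QNLP explored in the conversation.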
Pod: anchor.fm/machinelearningstre...
Tim Intro [00:00:00]
The topological brain (Post-record button skit) [00:13:22]
Show kick off [00:19:31]
Bob introduction [00:22:37]
Changing culture in universities [00:24:51]
Machine Learning is like electricity [00:31:50]
NLP -- what is Bob's Quantum conception? [00:34:50]
The missing text problem [00:52:59]
Can statistical induction be trusted? [00:59:49]
On pragmatism and hybrid systems [01:04:42]
Parlour tricks, parsing and information flows [01:07:43]
How much human input is required with Bob's method? [01:11:29]
Reality, meaning, structure and language [01:14:42]
Replacing complexity with quantum entanglement, emergent complexity [01:17:45]
Loading quantum data requires machine learning [01:19:49]
QC is a happy math coincidence for NLP [01:22:30]
The Theory of English (ToE) [01:28:23]
... or can we learn the ToE? [01:29:56]
How did diagrammatic quantum calculus come about? [01:31:04]
The state of quantum computing today [01:37:49]
NLP on QC might be doable even in the NISQ era [01:40:48]
Hype and private investment are driving progress [01:48:34]
Crypto discussion (moved to post-show) [01:50:38]
Kilcher is in a startup (moved to post-show) [01:53:40]
Debrief [01:55:26]
References:
Categorical Quantum Mechanics I: Causal Quantum Processes
arxiv.org/pdf/1510.05468.pdf
From quantum foundations via natural language meaning to a theory of everything
arxiv.org/pdf/1602.07618.pdf
Kindergarden quantum mechanics graduates (...or how I learned to stop gluing LEGO together and love the ZX-calculus)
arxiv.org/abs/2102.10984
The Mathematics of Text Structure
arxiv.org/abs/1904.03478
Foundations for Near-Term Quantum Natural Language Processing
arxiv.org/abs/2012.03755
Quantum Algorithms for Compositional Natural Language Processing
arxiv.org/abs/1608.01406
A Compositional Distributional Model of Meaning
www.cs.ox.ac.uk/people/stephen...
Mathematical Foundations for a Compositional Distributional Model of Meaning
arxiv.org/pdf/1003.4394.pdf
YT videos with Bob:
• Bob Coecke: From quant...
• Q2B 2020 | Quantum Nat...
• QNLP 2019: Why NLP is ...
• Bob Coecke: Quantum Na...
• Physics from Compositi...
• Bob Coecke: Quantum Na...

COMMENTS: 62
@shabamee9809 3 years ago
My girlfriend is a Linguist and I’m a Data Analyst, so together, we understood about 80% of the material. Super interesting topic! Love these videos :)
@shiva_kondapalli 2 years ago
The mathematician Coecke was referring to was René Thom, a brilliant topologist with deep geometric insight and formidable algebraic rigor. Topology is actually part of geometry: a topologist cares about the global "shape of a space", ignoring local properties like distances and curvature. Now, what is super cool is that catastrophe theory is a subset of a very deep area of mathematics called singularity theory. This has been applied by Sumio Watanabe in what is called singular learning theory, yes, learning theory! It recognizes that all deep learning models are singular, i.e. the parameter space (the space where weights and biases live) has sharp cusps and twists, and the map from parameter space to function space is not injective: different choices of parameters can realize the same function that a neural network computes. Watanabe uses the resolution of singularities, a result from algebraic geometry due to the Japanese mathematician Heisuke Hironaka, to desingularize these models. Not gonna pretend I know what that means, but Hironaka won the Fields Medal for it! That's how deep deep learning gets. And last but not least, Coecke mentions topology as being somehow fundamental. Hmm, well, let's see: homotopy theory is now considered a unifying theme in mathematics, showing up in number theory, physics, and even type theory! The topology of the space cut out by polynomial equations is intimately connected to its solutions in finite fields and the rational numbers; see the Weil conjectures for more on this. There exists a vast generalization of the very notion of a "topological space" due to Alexander Grothendieck, who calls it a "topos", Greek for "place". Now topology and entropy are also connected through a theory called information cohomology, a topological approach to information theory and entropy using cohomological tools from algebraic topology, category theory, and topos theory. For the final bang, this notion of topos due to Grothendieck marries the discrete with the continuous! In the words of Alexander Grothendieck himself on toposes: "It is the theme of toposes which is this "bed", or this "deep river", in which come to be married geometry and algebra, topology and arithmetic, mathematical logic and category theory, the world of the continuous and that of the "discontinuous" or "discrete" structures. It is what I have conceived of most broad, to perceive with finesse, by the same language rich of geometric resonances, an "essence" which is common to situations most distant from each other".
@EmadGohari 2 years ago
I study and do NLP for my job, trying this discrete vs. connectionist challenge every day. This is VERY interesting!!
@daveman683 3 years ago
This was a mind-blowing episode. I really love that this show is optimizing so well for the diversity of thought.
@oncedidactic 3 years ago
Tim your summaries are so on point it’s ridiculous. I was positively bouncing up and down with excitement. If the content was solely these introductory distillations, it would still be 5 stars. MLST is a treasure. 👌
@machinelearningdojowithtim2898 3 years ago
Thanks a lot!
@lenyabloko 2 years ago
I second that.
@universetwisters 3 years ago
This is the best gift to end my birthday with, thank you gentlemen
@sheggle 2 years ago
Happy belated birthday!
@stalinsampras 3 years ago
This episode is going to be a wild one lol.
@carlossegura403 3 years ago
Wow, really fascinating material! This has been one of my favorite episodes 🔥
@sabawalid 3 years ago
As always, great episode guys! Loved your questions.
@liiveinternationalinitiati5004 3 years ago
Thank you for this video, very insightful
@MattiasWikstrom 2 years ago
I found it very illuminating to listen to Professor Coecke. It is probably worthwhile listening to him many times and letting your understanding of all sorts of things improve as a result.
@CristianGarcia 3 years ago
The skeleton part was super trippy :o
@jaapterwoerds9850 2 years ago
I love that Dr Tim's grin in the introduction gives away the excitement about the episode that is about to come.
@tenzin8131 2 years ago
Here we go !!
@mindaugaspranckevicius9505 2 years ago
You made my day (Friday evening :)). One of my professors in ~1992 said that the world is hybrid, so the computations should be hybrid too. At that time the idea was CPU + analog, where you add 5V + 3V = 8V at the output. Is a qubit an analog or digital entity? We all want AGI :). I was already thinking that embedded(object)-embedded(verb)-embedded(subject) should work to represent meaning, especially after the Francois episode, and these parts could also be used elsewhere. So "Bob loves Alice" is almost the same as "[My colleague Mr.] Bob wants to spend the rest of his life with [beautiful] Alice" with the bureaucracy, but the meaning stays binary. I thought there is no restriction that only one set of discrete-continuous (type1-type2) may exist; there could be many in the system. I also agree that ambiguity is a huge obstacle too.
@fteoOpty64 2 years ago
"So "Bob loves Alice" is almost the same as "[My colleague Mr.] Bob wants to spend the rest of his life with [beautiful] Alice" with the bureaucracy but meaning stays binary." ---- Not so fast. This is just one interpretation of the initial statement. The issue here is that "love" has many meanings and implications depending on the "state" of each noun/person stated. The tacit knowledge might be captured by background knowledge of the particular persons involved. Plus the context might depend on the environment and mood of the conversation.
@katerynahorytsvit1535 3 years ago
Thank you for this episode! It is really amazing! And those thoughts that were exchanged about the academic world -- they are so true. The UK universities are commercial institutions in the first place, whereas real science pursues idealistic goals. It comes from the Platonic realm. It is sad that they prioritize money over Truth.
@sajidanaseer1361 2 years ago
Muslim scientists prioritize truth over money
@CandidDate 2 years ago
They just realized that the Truth is ineffectual in perpetuating progress. I mean, will AI calculate that without humanity, life on earth would be better off, so why not execute all humans, because machines are more environmentally friendly? No, some amount of falsehood is necessary to keep us all alive.
@SimonJackson13 3 years ago
Grammar is a data compression mechanism using the 1D serial nature of an audio stream. As a graph, the grammar can become topological.
@SimonJackson13 3 years ago
Really grammar is a slot existential before fill mechanism, so that routing to activation of understanding can happen before completion.
@SimonJackson13 3 years ago
A zeroth differential future estimator in a feedback loop allows open-loop gradient descent training.
@CristianGarcia 2 years ago
Listening to some of the meta comments here about science and technology was really interesting. That said, I didn't understand a single thing about his theory; it seems that you relate entities with each other in some structure and QM then resolves the system, but how you build the entities and their relations wasn't explained. I got the sense that none of the hosts could figure it out either 😅. Again, thanks for the awesome videos!
@MachineLearningStreetTalk 2 years ago
www.cambridge.org/core/books/picturing-quantum-processes/1119568B3101F3A685BE832FEEC53E52 😀
@Rockyzach88 3 months ago
Coming back to this video. One of the first videos I watched on this channel. Got this guy's book for Christmas after seeing it in another video (I think about category theory). Interested to see what insights it provides me.
@vtrandal 17 days ago
I'm here because of a sentence from a book: ==> All the chapters of "Quantum In Pictures" are also available as videos on Quantinuum's YouTube channel, featuring some special guests. And this video was on a playlist there. I am lost.
@mahrard 2 years ago
Quick note: The vector spaces you describe are continuous spaces, as they are modules over the real or complex fields. (Density) matrices are also vector spaces, which is why the monoidal category of vector spaces is closed.
@bob_coecke 2 years ago
Closed is an understatement. They all are compact closed. www.mscs.dal.ca/~selinger/papers/#dagger
@mahrard 2 years ago
@@bob_coecke true.
@SamJoseph 2 years ago
Such a great episode, but I have a little problem with the idea that a spoken sentence represents a single unambiguous internal thought. My intuition is that internally there are many thoughts and ideas struggling with one another, and it's difficult to say with any accuracy what precisely a human utterance means in terms of internal thoughts.
@CandidDate 2 years ago
So I have a couple of ideas: 1> He who discovers a new tractable and implementable idea will be stolen from by another researcher, and the thief will get all the credit and go down in history. Think Tesla and AC. 2> In the end, it is how the idea is used that eclipses the understanding of the idea. Me trying to invent a "thinking machine" is in the end completely useless because of (1) and because if I don't succeed, someone else will. It is a race to figure out how we can talk to a computer in NLP, and we will do it given enough time, but no amount of books will enable one to "own" the actual AI. There will be a "father" of AGI, but he won't be the genetic father, just a metaphorical one.
@michaelwangCH 3 years ago
In the past few years mathematicians and physicists have contributed a lot to the understanding of topological manifolds. But many problems are not GD-learnable because the related manifolds are discontinuous, disconnected, not Lipschitz and not convex - gradient methods are in this case suboptimal (locally optimal). We use correlation as a proxy for reasoning; sometimes it is true, but sometimes it is false - we have a similar problem with knowledge and understanding. Knowledge has to do with learning and memorization of content - here you will have many shortcuts - but understanding has to do with study, and there you will not have any shortcut. That is the reason why a PhD prog. in CS will take you 5 to 8 years - you start to study a specific topic and do less learning.
@segelmark 3 years ago
Tim, I always love your intros, but there was a bit too much jargon today for me to follow. I'm sure they are a lot of work, but they are soooo great!
@machinelearningdojowithtim2898 3 years ago
Thanks for the feedback
@kirsty_iso 2 years ago
Bob, talk to me about your rockstar dreams; you honestly seem like the type.
@VMac-eg7fb 3 years ago
Do you have a library for each simple, discrete concept and a master, global library to contain the simple, global libraries required to generate flowing thought patterns? The quantum computer could handle almost limitless simple global libraries. "Yes, I am a beginner"... but I thoroughly enjoyed this episode, ...just taking part of your inspiration. Why don't they build a dedicated space station solely to contain numerous banks of quantum computers? This would keep them extremely cold using the newest higher-temperature processors in the low-kelvin range, and would eliminate the need for all the tubing used in the cooling process back on Earth, allowing a higher density of processors.
@XOPOIIIO 3 years ago
Speech is one-dimensional and the environment is two-dimensional because the human mind is designed to find a path on the plane.
@lenyabloko 2 years ago
In what sense is speech one-dimensional? Do you mean sound? Even sound is not - it has a spectrum of frequencies.
@XOPOIIIO 2 years ago
@@lenyabloko In the sense that you read the text as a line.
@sedenions 3 years ago
See if you can interview Pei Wang (Artificial general intelligence)
@iamskeeet 2 years ago
Invite someone from LightOn. They have actual optical computing and a bunch of related papers.
@dinasina3558 2 years ago
Interesting. First I listened to the podcast, and from his speech I concluded he is an Indian guy.
@lenyabloko 2 years ago
Haha 😂. I think he is from Ireland or Scotland.
@rajuaditya1914 1 year ago
@@lenyabloko He's Belgian.
@lenyabloko 2 years ago
Please invite Ben Goertzel from SingularityNET
@MachineLearningStreetTalk 2 years ago
He's booked in for early Aug
@fast_harmonic_psychedelic 2 years ago
did gpt3 write your opening line
@sabawalid 3 years ago
So, Tim: what are the three comments? :)
@TimScarfe 3 years ago
😂😂
@fast_harmonic_psychedelic 2 years ago
Interaction dynamics is called Dialectics, not togetherness lol. It's dialectical thought / which flows from the fact that reality itself follows the dialectical laws.
@bob_coecke 2 years ago
With all that covid isolation, I am really glad that now there is again a chance for some dialectics.
@fast_harmonic_psychedelic 2 years ago
@@bob_coecke lol
@stalinsampras 3 years ago
First😋
@andybaldman 1 year ago
This seems like it was a big deal.
@muhammadaliyu3076 3 years ago
I like this guy when he talks but the problem is he doesn't listen at all.
@bobcoecke2866 3 years ago
I played too much loud music in my life. Killed my ears.
@muhammadaliyu3076 3 years ago
@@bobcoecke2866 Oh sorry to hear that
@bobcoecke2866 3 years ago
@@muhammadaliyu3076 No worries. :) It was a lot of fun!
@HATERPROOFisQ 2 years ago
Hi !!!! This is Jesus walky Talker over and out..........? uhgg hi : I AM : haterptoof.. Thanks Sir... how may I serve my Lord ?