The Long Story of How Neural Nets Got to Where They Are: A Conversation with Terry Sejnowski

185,876 views

Wolfram

1 day ago

Stephen Wolfram plays the role of Salonnière in this ongoing series of intellectual explorations with special guests. Watch all of the conversations here: wolfr.am/youtube-sw-conversat...
Originally livestreamed at: / stephen_wolfram
00:00 Start stream
5:26 SW starts talking
5:39 When did people first realize there were neurons in the brain?
6:30 Before the discovery of neurons, what did people think the brain was made of?
7:28 How did people figure out things in the brain had to do with electricity?
9:22 After the discovery of electrical signals in the brain, were there huge developments of people studying and dissecting the brain?
10:47 When did people building machinery and people studying the brain connect?
13:17 The big moment for neural nets, logical structuring, the brain might be computing like a computer
18:47 Developments in the 40s and 50s - perhaps neural nets were the original idea of what computers might be like
22:30 Developments in the 60s and 70s - computers can prove theorems, so what else can they do? Early development of machine translation projects.
29:09 Going back to McCulloch-Pitts 1943 and idealized artificial neurons; neuron structure and function in the human brain
35:04 Can artificial neural net weight matrices and biases capture characteristics of neurons and dendritic trees found in the brain?
47:26 Japanese advancements in neural networks in the 60s and 70s: Neocognitron, Linear predicates, Precursors of Hopfield networks
51:06 Early 60s resurgence of interest in artificial networks and intelligence.
1:03:07 Historical involvement of physicists in neural network development, communities coming together to figure out what was going on in the brain and how patterns of information can be stored in neural nets
1:18:50 Developments in the 80s and precursors to Boltzmann Machines
1:32:16 Developments in neural networks from the mid-1980s until 2011/2012
1:55:17 How did research change with the beginning of 2010s?
2:07:16 What's the next stage of neural nets? How is ChatGPT involved?
2:12:08 Looping Procedures and Learning Methods
2:17:28 Do you think there can be a human-understandable theory of what you're seeing with dimensions and mathematics?
2:23:40 2012 breakthrough
2:26:19 Discussing project with Gerald Tesauro
2:23:27 What does "solve the problem" mean?
2:36:40 Was Dennis Gabor's 1959 'Electronic Inventions and their Impact on Civilisation' the first proposal of random circuits with weight tuning (to learn a black box function)?
2:40:28 What does he think of Robert Hecht-Nielsen's Confabulation Theory as a top-down approach to the neocortex?
2:42:44 The HBP (Human Brain Project) is worth your attention in the future. It is an amazingly complex plan to simulate how the brain functions, starting by simulating the brain of a mouse.
2:45:26 Terry mentioned that current models use about 3 or 4 orders of magnitude less compute than the brain - how many orders of magnitude away in algorithmic development (backprop, etc.) does he estimate we are?
2:49:42 One of the challenges with neural networks is that they can end up solving problems in ways that aren't intuitive to humans, and this results in issues with both trust and unexpected behaviour in edge cases or extrapolation beyond the training set. Is there much progress in "guiding" neural networks so they behave in a more intentional, human-like way?
2:51:42 What is the future? Reviewing the history.
Follow us on our official social media channels.
Twitter: / wolframresearch
Facebook: / wolframresearch
Instagram: / wolframresearch
LinkedIn: / wolfram-research
Contribute to the official Wolfram Community: community.wolfram.com/
Stay up-to-date on the latest news from Wolfram Research through our blog: blog.wolfram.com/
Follow Stephen Wolfram's life, interests, and what makes him tick on his blog: writings.stephenwolfram.com/

COMMENTS: 107
@JasonCunliffe · 1 year ago
05:26 Start
@shafqatullah5032 · 1 year ago
Really great conversation! Eagerly looking forward to the next one on Future of AI/ML. Thanks @Stephen and @Terry!
@coolhead20 · 1 year ago
These are super informative. Thank you for sharing!
@masterraccoon2883 · 9 months ago
10/10, I was dreaming amazingly while listening to this. A subject I have no idea about, discussed by two incredibly smart individuals, while I was extremely sleepy. Amazing.
@alberth3356 · 1 year ago
excited to hear part 2 of this conversation: Back To The Future!
@sillystuff6247 · 1 year ago
Stephen is a walking, talking _Encyclopedia Galactica_. Wonderful to hear the history of AI.
@godynnel7680 · 1 year ago
Great interview by Stephen here; the best I've seen in terms of balancing giving airtime to the guest while also adding useful comments himself.
@wesleyhein3999 · 10 months ago
This was an exceptionally interesting interview. Two really smart people talking about a fascinating set of subjects. I don't normally watch something 3+ hours in one sitting but that's what I did with this one.
@d96002 · 1 year ago
Incredibly interesting conversation.
@JustinHedge · 1 year ago
Wonderful discussion.
@SR-hm7cf · 1 year ago
A trip down memory lane - and an intro to neural nets - great interview.
@BillBenzon · 1 year ago
I sometimes think of the brain as a "polyviscous" fluid. There are components with a very high viscosity, on the order of months and years, others with very low viscosity, on the order of milliseconds, and many levels in between. These components are all intermingled in the same physical space.
@dr.mikeybee · 1 year ago
Your depth-interviewing skills are extraordinary, Stephen. And Terry has an amazing mind into which to delve!
@charlesnutter127 · 10 months ago
AI seemed so simple before the discussion; afterward, I am unsure! Great discussion.
@taopaille-paille4992 · 1 year ago
It would be useful to have a few introductory words about the guest in the description, for those who may not know him.
@AlexShkotin · 1 year ago
Great! Is there any chance of a timeline? Thank you.
@SluttyPhone · 1 year ago
That ending bit - I knew it was the same NLP models from the past! Crazy.
@rQuadrant · 1 year ago
If you came here out of an overwhelming sense of needing to catch up, you won't be disappointed. From the 3:04:31 mark: S. Wolfram: "But it's so bizarre that the original neural net idea from whatever it is - 85 years ago or something - is still, that's what is running inside ChatGPT." T. Sejnowski: "It is! Shocking!" Listen to the rest on a long walk to get filled in between now and 85 years ago.
@joanmyron · 11 months ago
Very interesting.
@briancase6180 · 1 year ago
This is an essential discussion; it should be viewed far and wide. Thanks! OMG, NETtalk and DECtalk. I think I vaguely remember those. I had no real interest in AI at the time, but NETtalk made a big splash, if I recall correctly. The Ridge computer's claim to fame at the time was that it was one of the first true commercial RISC architectures, so it bested the VAX in price/performance. There was a lot of minicomputer competition for the VAX at the time (Pyramid Technology was another). Unfortunately, they didn't make the transition to a single chip fast enough. There were many computer companies that thought designing with SSI and MSI chips (small gate arrays and 7400 chips) would be competitive, but the window closed quickly (the Rational Ada machine was one example).
@Mentaculus42 · 16 days ago
1:30:07 → "Computing is now dirt cheap"❗️In Brian Greene's interview with Brian Schmidt, Schmidt said that each "training iteration" cost hundreds of millions of dollars for the LLMs! And the latest are getting up to $400,000,000.
@Jimserac · 9 months ago
This brings back such great memories. I was a programmer, and later a software engineer, from 1972 and the IBM 360 mainframe days; later taking a break to spend 3 semesters at U.R.I. taking courses in E.E., computer engineering, and advanced math; 4 years at a company programming high-precision industrial scanning gauges in 8080 and Z-80 assembly language; going past the late 70's and some military work; through the 1980's programming mainly "C" for International Data Sciences in Lincoln, Rhode Island, with my trusty copy of Kernighan and Ritchie on my desk (a young Mark Pesce sat in the cubicle next to me; he later moved to Australia); and finally 10 years at a financial services company programming an annuities design system in C and C++, which ended in 2004. Along the way, in the mid 1980's, I would hang out at Brown Univ., where an enormous amount of talk and excitement revolved around neural networks, in particular associative networks and various other types. I went to a seminar run by Leon Cooper, physicist and Nobel Prize winner, involving a startup he was working on for intelligent traffic light control, if I recall correctly. The feeling was that a breakthrough was imminent. We studied Bart Kosko, whose mathematics were impressive but then got lost in a maze of "Fuzzy Logic", and talk about the Japanese 5th Generation project, which seems to have created a storm of expectations but ended more like Shakespeare's "full of sound and fury, signifying nothing". In 2004, my job having been outsourced to India, I could see that "ancient geeks" in their mid-to-late 50's were of no interest to companies, and I switched careers, entering a College of Oriental Medicine in Florida for a whole new set of adventures which persist to this day in my quest to learn Qing-era Chinese. So a talk of this kind, which appears to fill in the gaps from the mid 1980's to the exciting stuff happening now, is of interest to me. Many thanks for the timeline breakdown!!
@MrMdb81 · 1 year ago
Terry bringing up the alien language from Arrival at the end of the interview - Stephen had a bit of a twinkle in his eye, I think, because didn't his son help create the language? That's great!
@yoyo-jc5qg · 5 months ago
The 1990s is not a decade I would just gloss over, because both big data and processing power were crucial to neural net advancement, and the 90s brought us the internet, Intel's major breakthrough Pentium Pro CPU, and Nvidia's first GPU, a coprocessor that specializes in computationally intensive tasks.
@Chirislamentation · 9 months ago
Len best man
@robertabitbol6454 · 11 months ago
A lot of ideas thrown around but, as usual, the essential question "How does AI work?" is never addressed.
@clockent · 10 months ago
Which AI? There is no single "AI".
@user-tn9dl8kl8e · 11 months ago
The real talent is resolute aspirations.
@silberlinie · 1 year ago
No, of course we can only stop the accumulation of computing nodes when we have reached the order of magnitude of the natural model - our brain. 2:01:42
@grfdeadfg · 10 months ago
Why The Fuck Was This In my "Relaxation & Sleep" Playlist...
@BillBenzon · 1 year ago
Chomsky was irrelevant to computational linguistics. Syntax was based on dependency grammar, not phrase structure. The worlds of AI and MT (machine translation) were separate well into the 1970s and 1980s. Different communities, journals, and conferences.
@jabowery · 1 year ago
The Chomsky hierarchy was relevant, but outside of the algorithmic information theory community it is still ignored. Even within AIT it is mainly referred to in an attempt at outreach to the language-modeling community, since Turing-complete languages are presumed in algorithmic information theory.
@qikura · 1 year ago
It was on autoplay after I fell asleep and woke up thinking I was listening to Bill Gates
@MassDefibrillator · 1 year ago
"Neural net" is a misnomer and is causing a great deal of confusion. Artificial neural nets (ANNs) have not resembled our knowledge of how the brain and neurons operate for at least 50 years. For example, we've known since at least the 90s that individual neurons are capable of simple computations, simple multiplications, etc., whereas the so-called artificial "neurons" take none of this on board and just act as simple linear thresholds. Realistically, artificial neurons and biological ones have virtually nothing in common.
@sparkyy0007 · 1 year ago
A single so-called simple neuron, broken down, has more internal functional complexity than a mid-sized city, hardly comparable to a hidden layer. You are correct; other than general circuit topography, they have very little in common.
@aadilansari5997 · 1 year ago
@sparkyy0007 & @MassDefibrillator, both of you: can I ask what you do in real life? Your comments are so good that I had to ask.
@athyneil · 9 months ago
@aadilansari5997 Internet experts
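[Editor's note: for context on the "simple linear threshold" unit the thread above contrasts with biological neurons, here is a minimal illustrative sketch in Python; the weights, inputs, and function name are arbitrary assumptions, not anything from the discussion.]

import numpy as np

# McCulloch-Pitts-style unit: a weighted sum of inputs passed through a hard threshold.
def threshold_neuron(x, w, b):
    return 1 if np.dot(w, x) + b > 0 else 0

# Arbitrary example: a 3-input unit that fires only when at least 2 inputs are active.
x = np.array([1, 0, 1])        # binary inputs
w = np.array([1.0, 1.0, 1.0])  # equal weights
b = -1.5                       # bias sets the firing threshold
print(threshold_neuron(x, w, b))  # prints 1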
@tatcyr206 · 11 months ago
One day, if AI comes far enough to copy this guy, I would deem that it has surpassed humans.
@Silly.Old.Sisyphus · 1 year ago
2:58:44 "What's understanding mean?" Only someone who doesn't understand what he's talking about could utter such a banal rhetorical question. Just because AI hasn't yet figured out how to make a machine that can truly understand language (although CYC gave it a pretty good go) doesn't mean you can step in with your pseudo-language snake oil and claim that it understands.
@silberlinie · 1 year ago
It's a shame that in this important dialogue they didn't want to discuss how, and whether, one could achieve ongoing learning after the training phase with the current NN architecture. Pitiful in the extreme.
@_ARCATEC_ · 1 year ago
Thank you Terry and Stephen. Great conversation, really enjoyed listening to it. 🤓👍•(()())•
@Runescapedocumentary · 10 months ago
5:26 thank me later