Scientists warn of AI collapse

703,429 views

Sabine Hossenfelder

1 day ago

Learn more about AI, math, and physics with courses such as Neural Networks on Brilliant! First 200 to use our link brilliant.org/sabine will get 20% off the annual premium subscription.
We’ve all become used to AI-generated art in the form of text, images, audio, and even videos. Despite its prevalence, scientists are warning that AI creativity may soon die. Why is that? What does this mean for the future of AI? And will human creativity be in demand after all? Let’s have a look.
🤓 Check out our new quiz app ➜ quizwithit.com/
💌 Support us on Donatebox ➜ donorbox.org/swtg
📝 Transcripts and written news on Substack ➜ sciencewtg.substack.com/
👉 Transcript with links to references on Patreon ➜ / sabine
📩 Free weekly science newsletter ➜ sabinehossenfelder.com/newsle...
👂 Audio only podcast ➜ open.spotify.com/show/0MkNfXl...
🔗 Join this channel to get access to perks ➜
/ @sabinehossenfelder
🖼️ On instagram ➜ / sciencewtg
#science #technews #ai #tech #sciencenews

COMMENTS: 6,100
@jalvrus 2 months ago
"The more it eats its own output, the less variety the output has"... sounds exactly like the YouTube recommendations algorithm.
@kenboydart 2 months ago
Good grief... I think you're right!
@n.v.9000 2 months ago
It sounds like 99.99% of humans... YouTube is already created output we use as input. AI has learned that behavior from us.
@SuliXbr 2 months ago
The YT algo is a reflection of your searches; my recommendation feed is always changing as I type in new searches for different content. But if you instead only live in the recommendations, clicking away and never doing your own new searches... sure, it will get stale and samey over time.
@Dr.JustIsWrong 2 months ago
Which now only recommends from my own 'watch later' and 'watched history' lists. 🙄 Oh, and utterly random music videos which I've never watched, ever.
@FRACTUREDVISIONmusic 2 months ago
Sounds like the problem with CBS Star Treks.
@femkeligtvoet8896 2 months ago
This reminds me of when I was playing with play-doh when I was young. You start out with many different colors, and somehow always end up with a big brown ball.
@Marc83Aus 2 months ago
Great analogy.
@JasonW. 2 months ago
Same happens with corn, radishes, and carrots.
@carlag9888 2 months ago
The LLMs are becoming inbred lol
@bleeckerstblues 2 months ago
Agreed, excellent analogy.
@willisbarth 2 months ago
Did it smell the same either way?
@Koperviking 1 month ago
I remember as a kid I used to record my voice and play it back on my speakers, which I then proceeded to record once more. By repeating it, I could hear how it slowly degraded until it was nothing more than a weird, synth-like sound.
@organfairy 1 month ago
My biggest problem with AI is that it needs to get its information from somewhere, and sometimes these sources can be slightly dodgy. I did an experiment where I asked ChatGPT about some very narrow subjects: the Danish organ player Peter Erling, the trio Klyderne, and the artist Jørgen Fonemy. These are subjects that I have some knowledge about and have actually written Wikipedia articles about. I could see that most of the answers I got from ChatGPT were based on the exact Wikipedia articles that I wrote! I have tried to write the truth in those articles, but if I didn't care whether things were correct, or worse, if I deliberately wanted to mislead people, then AI would base its answers on wrong data if there weren't multiple sources available. The problem I see with AI is that we trust it too much. Already there are people who believe that it is an omniscient, trustworthy source of all answers and that it will always be more correct than human knowledge, or just knowledge that we have googled or looked up in an old-fashioned book.
@illarionbykov7401 1 month ago
Thank you for posting that. I suspected something like that is true. I like to test LLMs with riddles and verbal puzzles. The first impression I got was that the best of the LLMs were brilliant, as they could solve some of the toughest puzzles correctly, puzzles famous for being difficult even for the sharpest humans, and they even had good answers to pointed follow up questions. Then I tried novel puzzles based on famous ones, but with the questions reworded (by me) in subtle ways which changed the correct answer, and then the LLMs usually defaulted to "pattern matching" and giving me answers which were correct answers to the original "pattern" puzzles, but wrong answers to the novel reworded versions of the puzzles they were answering at the moment. They are good at answering known questions with known answers which are already published or posted on the Internet, but have trouble with novel variations which have never been published before. They are not figuring out the answer, but giving their best guess based on what they've already seen in their dataset. OTOH, the best LLMs keep getting better at adapting to novel variations, month to month, so it's wrong to generalize based on results from more than a few months ago. Their abilities are progressing rapidly at the moment.
@sportsentertained 1 month ago
It's also bad at interpreting articles. It told me something related to tech that I knew to be false and it provided links to articles "proving" it was correct. I read the articles and ChatGPT misinterpreted the text of every single cited source. Complete garbage as a research tool.
@illarionbykov7401 1 month ago
@@sportsentertained which version did you use? I've read that they keep dumbing down ChatGPT to save on backend resources. The paid version is better than the free version, but still not as good as it was at the beginning (before it got flooded with new users)
@robertagren9360 27 days ago
Then start printing books.
@katsmiles6734 17 days ago
In other words, it's scraping data and possibly reorganising it slightly or cutting and pasting and not attributing where the data is from. Very sneaky.
@alieninmybeverage 2 months ago
3rd possibility: AI learns how to gaslight us, and we forget how many legs elephants have.
@arctic_haze 2 months ago
This is a real possibility. I already noticed that my brain accepts the AI generated images as real even as I know what problems they have.
@alieninmybeverage 2 months ago
@@arctic_haze agreed. While it was said tongue in cheek, there are many kinds of peripheral knowledge about which we are impressionable.
@arctic_haze 2 months ago
@@alieninmybeverage I think it already happens on Instagram. People are using filters aimed at making them look like AI generated photos (smooth and symmetrical faces).
@markdowning7959 2 months ago
Sabine is a particularly good AI avatar. 🤖
@allenshafer1768 2 months ago
Oh no
@markvoelker6620 2 months ago
Apparently in the original Matrix movies storyline, the reason why the machines needed to keep those troublesome humans around was not as an energy source (“batteries”) but as a source of creativity. But the writers thought that this idea was too complex so they substituted the battery idea instead.
@GabrielLeni 2 months ago
It's also in 'The Machine Stops'
@firecat6666 2 months ago
Too bad, that's a much better idea. Although with all this talk about creativity, and AI putting an end to creativity and whatnot, I've never seen anyone mention that creating doesn't only mean creating good stuff, it also means creating crap. It seems to me that the people behind all these AI programs usually want them to create good stuff and not crap, so to me it's no wonder that they tend to end up converging (to creating good stuff, I'd hope) if they're trained on their own creations. Even if the original idea for The Matrix was better, I'd find it hard to believe that after a while the machines would still need humans at all, after they had learned enough about how to have ideas, good and crap, from us humans (and obviously, over time their thought processes would converge in the direction of having better and better ideas). EDIT: forgot a comma
@rumination2399 2 months ago
Lol. The battery idea was the dumbest thing in the movie
@sh4dow666 2 months ago
I agree that the creativity idea would have been much better than the battery one, but ... all our knowledge about physics comes from inside the matrix, so maybe they just fabricated a different "physics engine" for it, so anyone escaping would be sufficiently confused to be easily captured?
@Rapscallion2009 2 months ago
Same in the Terminator universe. Well, almost. Skynet keeps useful people around to develop terminators and so on. In the early stages it actually preserves workers until they have built automated factories.
@dunmatta2670 2 months ago
That plastic analogy is probably the most succinct depiction of AI-generated content contaminating the environment, and it's why I always thought that human intervention in the use of computers is always necessary. We can fake human thinking to a degree, but getting the full complexity is still a pipe dream.
@LukaMagda1 24 days ago
I don't understand why we would want machines thinking for us in the first place.
@magonus195 12 days ago
​@@LukaMagda1sloth, indolence, and eventually totalitarian control
@trip_t2122 11 days ago
​@@LukaMagda1 I think we can be dumb as a species. Just the same way we develop bombs that can completely wipe us. But maybe we do it for the sake of it or because we're just curious 🤷
@MensHominis 11 days ago
@@LukaMagda1 There’s this rather grim meme (I can’t remember the source): “Years back we were thrilled about AI taking over all of our annoying work so we could all focus on self-improvement and self-fulfilment, all become artists and the likes. What has happened instead is that AI is now creating our art and our writing while we’re still cleaning toilets for a living.”
@primus0348 10 days ago
@@MensHominis Instead of doing what we imagined it to do, it does the exact opposite. How did we as a species fuck up the simplest idea that AI is supposed to be? We had one job and we made that concept into the worst thing possible.
@Rosie-uf5ox 1 month ago
I love that this underscores how complex human intelligence really is.
@squamish4244 1 month ago
It doesn’t seem to be that complex, however, given how quickly AI went from stupid to smart.
@martakrasuska2483 1 month ago
Or perhaps we just fell deep into the trap of believing that, as a society and civilisation, we have already learnt everything there ever was to learn about ourselves and our human consciousness. @@squamish4244
@A.waffle 1 month ago
Yes, we will believe anything 🤣
@man.horror 1 month ago
@@squamish4244 No, it's not smart at all in reality. It's not even actual AI; it is an algorithmic system. Give it more data and it will get sharper. That's how it's programmed. It has no ability to think or comprehend what it's outputting. A true AI that simulates the human mind by digital means would likely use algorithms as part of its system, but not as the entire basis. Today's "AI" is nothing but a generation system, and it's not able to think and uniquely create anything truly new, based on the limitlessness of the human mind. It can mash and mutate things due to its flaws of understanding, but it is not actually, willingly making something new. It copies and makes mistakes, which could be claimed to be creativity, which these algorithms have no actual ability to harness.
@squamish4244 1 month ago
@@man.horror Yes, the expert swoops in. Whatever. It's not that AI is that smart, it's that humans are not as smart as we thought we were. I'll take Max Tegmark's books over your two paragraphs here, thank you very much. Copium over 9000.
@realpdm 2 months ago
This could lead to a .. Nightmare on LLM Street....
@naparcasc 2 months ago
That’s way funnier than it has any right to be 😂😂😂
@aromaticsnail 2 months ago
Dad??? Did you get the milk?
@enriquea.fonolla4495 2 months ago
THAT WAS GENIUS!
@pvanukoff 2 months ago
Win.
@deltaxcd 2 months ago
LLMs are fine; that's what is known as training on synthetic data. It is done deliberately, and it is the reason why LLMs are getting better.
@robertruffo2134 2 months ago
As someone who used to play with photocopiers as a kid... A copy of a copy of a copy is always much worse and weirder than you might think. Small flaws amplify until all you get is a smudged blur.
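The copy-of-a-copy effect is easy to reproduce in a toy sketch (hypothetical code, not a model of any real copier): each generation blurs the page a little, and after enough generations a crisp bar fades toward grey.

```python
def copy_once(page):
    """One generation of copying: each pixel bleeds a little into its neighbours."""
    n = len(page)
    out = []
    for i in range(n):
        left = page[max(i - 1, 0)]
        right = page[min(i + 1, n - 1)]
        out.append(0.25 * left + 0.5 * page[i] + 0.25 * right)
    return out

# a crisp black bar on white paper, as 60 pixels
page = [0.0] * 20 + [1.0] * 20 + [0.0] * 20

copy = page
for _ in range(200):          # a copy of a copy of a copy ...
    copy = copy_once(copy)

contrast = max(copy) - min(copy)   # started at 1.0; each pass only ever lowers it
```

Each pass is a tiny blur, but two hundred passes smear the sharp edges into a smudge; the same compounding is at work when a generative model retrains on its own slightly degraded output.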
@phattjohnson 2 months ago
That's if you're using so-called "AI" exclusively. Using it sporadically, as merely another software tool in your creative arsenal, will give you the edge on those who flatly refuse to use it on principle. Anyway, there are financial incentives for big tech companies to ensure their AI is more accurate, faster, and easier to access than the competition. They're not just going to press the red button and let their AI run loose. It's all still a service that needs 24/7 support by HUMANS behind the scenes.
@justafriend5361 2 months ago
Especially if the original was the nth copy of a blueprint. Had this in highschool...
@QIKUGAMES-QIKU 2 months ago
Especially if that photocopy is of your butt😂
@QIKUGAMES-QIKU 2 months ago
​@@phattjohnsonBot 😂
@Also_sprach_Zarathustra. 2 months ago
You truly don't know how AI systems (& AGI) work. Real AI systems aren't photocopiers.
@sheshotjfk8375 2 months ago
I recognized this as a possible problem when I learned that they were training AI by allowing them to converse with people on Reddit. AI developers can now apparently pay a fee to be allowed to plug their AI into Reddit and have it learn by having conversations there. It occurred to me: "Wait a minute, won't the AIs end up conversing with each other and training each other? Won't this cause problems?"
@diegocrusius 28 days ago
to me the scariest thing is how quickly people advocated against themselves the moment they realized the potential with AI
@MichaelDembinski 2 months ago
A friend in the UK is a graphic designer; he says that over the past few months, more and more clients have been saying 'NO!' to AI-generated artwork - "it's too samey". They'd rather pay more for something original. Trouble is, AI has pushed down the rates; so while designers and artists are noting an uptick in requests for proposals, the money is much worse.
@typograf62 2 months ago
Yes, AI images look horrible. Too many details that make no sense, glittering stuff, imposing backgrounds, flaming skies, opulent clothing... Often I do not want to read the text; it just feels like candy all day.
@MCRuCr 2 months ago
AI will teach us what truly matters ... Human connection and true emotions is what we should care about. Spending time with your loved ones, (com-)passion etc.
@jackmiddleton2080 2 months ago
It just seems like there is so much competition in anything creative that whoever is paying can have people jump through whatever hoops they want. And why wouldn't you ask for original art instead of AI generated art if you have the leverage.
@zperdek 2 months ago
@@typograf62 The only way out of it is that designers have to use AI and start to manage it.
@ghasttastic1912 2 months ago
AI cannot get what a Roblox game thumbnail looks like. It can generate one, but it's not convincing at all. Even the other styles of Roblox thumbnail don't fit what AI generates.
@tandt7694 2 months ago
My experience with ChatGPT is that you can ask it 2 or 3 questions, get it to contradict itself, and when you point out the contradiction, it starts to ask if you are angry, and/or says IT'S taking a break from YOU to let you relax... 😊😮😂
@l.w.paradis2108 1 month ago
So it does have gaslighting down pat. 🤣🤣🤣
@tandt7694 1 month ago
@@l.w.paradis2108 That's exactly what happened. 💯
@officialpennsyjoe 1 month ago
Makes one wonder if the AI engineers had a lack of qualifications or a lack of critical thinking skills.
@JesseDLiv 1 month ago
The gaslighting has begun
@derrickmcadoo3804 1 month ago
Don't stare into the Dark Crystal. Has no one watched the movie?
@tygorton 1 month ago
Glad you started this conversation. There is also the theft component of generative AI. A YouTuber like yourself will get automatically copyright-struck for using 4 seconds of a clip in a 20-minute original video. Yet these generative AI companies can use entire social media platforms, with content painstakingly created by individuals across decades, to create their data sets. This is peak hypocrisy in which, as per usual, corporate "big money" is protected while the individual is left with no means of defending their content. Generative AI is 100% theft in my opinion.
@barbi111 18 days ago
I agree
@darkushippotoxotai9536 10 days ago
No art is truly "original". Artists are inspired by previous artists, who are inspired by their surroundings, and modify reality slightly based on their mental conception of what they want to highlight. Art is inherently derivative by the nature of human learning. Generative AI follows similar processes. It doesn't 'copy and paste' as people have claimed. It has a distinct concept, albeit less defined than a human's, of what it is asked to portray. AI art is inspired by, and not directly copying, actual works. If we start copyright-striking AI, it should follow that we strike virtually every other art piece.
@tygorton 10 days ago
@@darkushippotoxotai9536 You keep believing that. It's a tired and completely flawed argument. First off, there is a human TIME factor involved. A human artist must first put in the hundreds of hours of work to accomplish some level of mastery over their craft before they can even THINK about mimicking another artist's style. That process produces mutual respect. This entire component is lost with "AI" slop. There is so much more at play here but it's just not worth getting into in a comment section on UKposts for someone who has no actual desire to objectively weigh new perspectives. You want the AI future. Well, it's coming. Nothing will stop it. The tech overlords are investing trillions so you'll get your wish. I hope it is everything you want it to be.
@darkushippotoxotai9536 10 days ago
@@tygorton So, simply requiring more time and being less efficient, and sometimes even of a lower quality, is better because a human made it? Side note: I didn't really say mimicry, but rather drawing inspiration. Sure, AI can do that as well, but I was more so talking about inspiration, or to put it simply, pointers or definitions or illustrations of art. Humans do not make unprecedented or completely unique art; it's subconsciously drawing on other works and the surroundings of the artist. Almost the same as an AI, just very inefficient. As for intent, it's a human writing a prompt. An AI doesn't simply mash things into an image. How many artists do you know of who have drawn a Celtic man chasing a dog through a world made up of needles?
@tygorton 10 days ago
@@darkushippotoxotai9536 Enjoy the "efficiency". Like I said, your AI future is coming. It will be a world of emptiness filled with people who lack wisdom; the evidence of this is already permeating every aspect of our culture and it hasn't even started yet. Enjoy.
@grosvenorclub 2 months ago
A friend of mine who has been a musician since the late 1950s explained to me a few years back how there is very little originality in music these days, as much is dependent on preset rhythms, chords, etc., due to "electronic" devices. It can only get worse with so-called AI.
@la6136 1 month ago
Music will get worse in the future. Music production programs are all adding AI now. In the future it will be record labels using AI to write lyrics and produce beats, and then they will get a hologram to perform instead of a human so they don't have to pay artists.
@pigcatapult 1 month ago
@@la6136 hopefully there will always be indie bands
@patientzerobeat 1 month ago
It's not that music is getting worse. It's that WAY more people do it now, and a growing percentage of them are indeed cranking out unoriginal stuff because of the cheap tools that allow that to happen. There's still the same amount if not MORE original unique music being made now, but it can get lost in an ocean of cookie cutter creations. There is also the bias that happens when only the best stuff from the past has staying power; there is so much crap music from decades ago that is forgotten and/or unavailable. Whereas everything current is available right now obviously, and crap music from just a few years ago is way less likely to be forgotten and/or unavailable. And in this digital age, nothing ever really gets "erased". Who's going to digitize and put online some garbage unoriginal music from 1982?
@grosvenorclub 1 month ago
@@patientzerobeat Yes, I agree, but there were actually far more amateur musicians back in the '50s and '60s, as that was probably the last of the generations who relied on home entertainment (i.e. some number of members of a family played some sort of musical instrument; radio initially, and then TV, started to kill that off).
@SpiceFox 1 month ago
People have always used similar chord progressions. You use the same base stuff to make something new. If you can't find good music nowadays, you aren't looking hard enough.
@ZappyOh 2 months ago
1) AI is trained on data from the Internet. 2) AI outputs data to the Internet. 3) Goto 1... Hasn't anybody acquainted themselves with the topic of "inbreeding"?
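That "goto 1" loop can be sketched with a toy model (a hypothetical one-line "generator", nothing like a real LLM): fit a Gaussian to the training data, sample from it with a slight preference for likely outputs, and feed the samples back in as the next generation's training set.

```python
import random
import statistics

def train_and_generate(data, n, temperature=0.9):
    """Toy 'model': fit a Gaussian to its training data, then generate n samples.
    temperature < 1 mimics generators favouring likely outputs over rare ones."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, temperature * sigma) for _ in range(n)]

random.seed(42)
real = [random.gauss(0.0, 1.0) for _ in range(2000)]   # generation 0: human-made data

data = real
for _ in range(20):     # 1) train on the net, 2) post the output, 3) goto 1
    data = train_and_generate(data, 2000)

# after 20 generations the "variety" (the spread) has shrunk toward 0.9**20 ≈ 0.12
```

The temperature factor is an assumption standing in for the way generators under-sample rare outputs; that small bias, compounded generation after generation, is what drives the variety toward zero.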
@user-kw8kh8dg3h 2 months ago
Yep...After all, AI is a tool... It's like eating soup with a mesh strainer....
@user-kh7kx9en9l 2 months ago
Solve that problem by using an A.I. classifier to detect whether data is synthetic or not. Diversity isn't going down; it's just laziness when it comes to creating datasets.
@LtFoodstamp 2 months ago
There are solutions to this though. 1) AI scientists run AI through quality data. 2) AI scientists run AI through a comparison between quality data and its outputs to provide corrective comparison. 3) Give AI real vision (robotic eyes) so it can observe real-life examples from the real world. 4) Humans keep involvement in the process of determining what gets posted to the internet. If AI produces garbage, it's less likely to be selected. If it produces something accurate, it's more likely to be accepted. Survival of the fittest.
@alieninmybeverage 2 months ago
I took your advice and asked AI what "inbreeding" is. It replied: "SUAVE WHARRRRRGARBLE" and knowing is half the battle.
@SebSenseGreen 2 months ago
Never, ever, use a Goto statement!
@creatingwithlove 2 months ago
This is exactly what I was telling people the other day. Our greatest danger with AI isn't that it'll take over, but that at the moment we begin relying on it most, it will collapse, because it's going to end up cannibalizing itself.
@moose9211 2 months ago
Wouldn’t there be backups for ai to be set back is this were to happen?
@goodlookinouthomie1757 2 months ago
@moose9211 Ideally you'd think so but from what I understand nobody even knows how these things think any more so it's hard.
@AparnaGurudiwan 2 months ago
Why would it cannibalize itself ? I don't get it
@moose9211 2 months ago
@@goodlookinouthomie1757 guess we’ll have to wait and find out
@Szpagin 2 months ago
​@@AparnaGurudiwan AI being trained with AI-generated content.
@drachimera 29 days ago
Sabine, as a professional in the application of machine learning in medicine, I would like to thank you for making this video! It's understandable and it reaches a lot of people! There is the AI hype (which people should not believe, because it comes from executives and rookies) and there is the machine learning reality that veterans understand. This technology will be useful in automating some drudgery and common simple tasks... It's dogshit at doing anything truly valuable. What's most worrying is the very real threat, without laws, that this nonsense will create such a firehose of bullshit that we can't get through our email, find what we need on the web, tell the difference between fact and fiction, or generally think for ourselves!
@arifchagla8752 1 month ago
That’s really interesting.. because comparing it to ‘bad cinema’, most bad cinema is bad in the same way, if that makes sense. Overused tropes, predictable storylines, cliche characterisation. Is there someone that can expand on this thought?
@rogierb5945 10 days ago
'Overused tropes, predictable storylines, cliche characterisation.' They exist for a reason: because most people like them. All the bad stories of the past have been shed and only the really good ones remain. They inspire new generations of storytellers. Some new ideas might be added, but most new ideas will be shed because they aren't liked by the audience. Everything you see today is a 'tried and tested' formula. They have a proven track record throughout human history. Most people aren't particularly interested in originality; they want what they like, and storytelling history has already filtered most of the ideas which people like.
@arifchagla8752 10 days ago
@@rogierb5945 most people might like them, but there was a time when they didn’t. Production companies try to ‘play it safe’ and by doing so release stuff that leaves audiences feeling empty/unfulfilled. I recently watched a film, Challengers, it was not what I expected, not that film exactly, but maybe the answer lies to taking risks and creating something truly engaging and unique, then the trope cycle repeats. Is it self cleansing? Right now it really needs a cleanse I feel like
@nerdexproject 2 months ago
It's true, when you've worked enough with ChatGPT you can immediately recognize a ChatGPT text. It just always has a certain vibe that makes it distinguishable from human text.
@manutosis598 2 months ago
I saw a guy using ChatGPT in YouTube comments and I can confirm.
@NoFeckingNamesLeft 2 months ago
Corporate-lobotomy vernacular English
@destructionman1 2 months ago
Is your comment chatGPT? Is this?
@gmenezesdea 2 months ago
I fear it gets so good that we can't even pick up on those little flaws and quirks any more especially for videos. When those Sora videos were released the only one I could tell was AI was the woman walking on the street (her hand and face had weird details). I imagine the next ones will fool me better.
@TCakes 2 months ago
The key is for a human to utilize and modify AI generated content, not just copy/paste. Also realizing that some ai is better than others at specific tasks (gemini for emails, chatgpt for code, etc)
@joanlopez8769 2 months ago
You reminded me of Pandora, the music recommender that provided you with music according to the 👍 and 👎 that you gave to the songs it proposed. No matter if you started with Black Sabbath, Chopin or Yunchen Lahmo, eventually, after a couple dozen songs, you always ended up in a Coldplay loop.
@Lazarus1095 2 months ago
What dystopian hellscape is this?!
@shrimpkins 2 months ago
All roads lead to Coldplay.
@mipmipmipmipmip 2 months ago
This seems to be the algorithm YouTube Music uses for their "song radio" playlists 😢
@xizar0rg 2 months ago
This sounds like user error; I've had a sub to Pandora basically since it was still just the Music Genome Project and I've never heard Coldplay on my stations. Coldplay: Not even once.
@user-mz6iy5ip9o 2 months ago
Damn, this reminds me of YouTube. I've had to start making new accounts all the time because the algorithm is quickly devolving into recommending quite literally the same videos I've already watched over and over and over, and I can't find anything new or exciting. Music is by far the worst: I decide I want to go outside my usual taste and listen to nostalgic dirty-pleasure pop from my youth, and YouTube wants me to listen to my usual stuff again... It's absolutely gotten worse than it used to be, without a doubt.
@GaryJust 1 month ago
Short, to the point, informative. Thank you, Sabine.
@Ibhenriksen 1 month ago
It's kinda like the Google search engine. It started out sucking, then there was a time it was pretty good for finding stuff. Now it sucks again...
@icls9129 2 months ago
Adding random variation probably isn't as easy as it may sound because the randomness still has to follow certain rules. For example, no one is going to believe that elephant with two trunks.
@a_kazakis 2 months ago
I think you are mistaking randomness for imperfections. She is not saying images need to have faults in them. Diversity here means, for example, that some elephants are young, some adult; some are eating, some are sleeping, some are drinking; some are photographed at night; some are walking on grass, some on rock, etc. If you look at the AI samples provided, they all look exactly the same. Zero diversity.
@lolbajset 2 months ago
@@a_kazakis But that's his point... how will the AI know what's appropriate and what's not? How can it know to add diversity in lighting and background, and not in the number of trunks or skin color?
@drno87 2 months ago
AI models usually have some way of computing how likely they think different outputs are. A model that turns a written prompt into an image has some notion of how "close" an image is to the prompt. Instead of taking the closest image to the prompt, you might instead take another nearby image determined by some random number. Unfortunately, there isn't a good rule for defining the precise details of the randomization scheme. There's a lot of ad-hoc methods that work well for one group of prompts but fail for others.
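The "nearby output chosen by a random number" idea drno87 describes is, in spirit, temperature sampling. A minimal sketch (hypothetical scores, not any real model's API): convert the model's raw scores into probabilities with a softmax and draw an index. Low temperature collapses onto the single most likely output; temperature 1 preserves the model's own variety.

```python
import math
import random

def sample_with_temperature(scores, temperature=1.0):
    """Softmax the raw scores at the given temperature, then draw one index."""
    scaled = [s / temperature for s in scores]
    m = max(scaled)                                  # subtract max for stability
    weights = [math.exp(s - m) for s in scaled]
    r = random.random() * sum(weights)
    cum = 0.0
    for i, w in enumerate(weights):
        cum += w
        if r < cum:
            return i
    return len(weights) - 1

random.seed(0)
scores = [2.0, 1.0, 0.5]      # hypothetical scores for three candidate outputs

varied = [sample_with_temperature(scores, temperature=1.0) for _ in range(1000)]
greedy = [sample_with_temperature(scores, temperature=0.05) for _ in range(1000)]
# at temperature 1 every candidate still shows up; near 0, variety disappears
```

There is no principled rule for picking the temperature or the truncation scheme, which matches the point above that these randomization methods are ad hoc and prompt-dependent.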
@lip3gate 1 month ago
@@a_kazakis It makes no sense. There are already millions of photos of elephants from different angles, carrying out different activities, in different scenarios. If ALL the photos available on the internet (copyrighted or not) are not enough for the model to generate convincing photos, the problem is not a lack of diversity in the dataset.
@audreylin3466 1 month ago
​@@a_kazakis It reminds me of a children's art class. One kid will draw a house, car and tree; And a dozen other kids will copy them. There may be variations like an apple tree or a dog but they're all relatively alike.
@PhilMoskowitz 2 months ago
Garbage In/Garbage Out. I've been saying the same thing about both AI and Analytics for the past decade and a half. People only want to look at processes, algorithms, ease of use, speediness, raw power, TCO, design and pretty UI with both AI and Analytics. You rarely hear people talk about things like bias, data integrity and context. Those three things only come into conversation when AI and Analytics produce horribly incorrect results.
@KevinOlsen-cd9ez
@KevinOlsen-cd9ez Місяць тому
But understanding bias, data integrity and context would require...uh...you know, uh...like...thinking. we can't have that.
@lg2971
@lg2971 20 днів тому
Thank you for clearly articulating what many have been trying to point out.
@notBeggingMattandLissy2PlayRE4
@notBeggingMattandLissy2PlayRE4 Місяць тому
This is already true for me when writing. In the first few rounds it appears as if the AI is very creative but soon after some repetitions it becomes clear that the AI keeps repeating itself over and over again. This is one of the reasons I am not concerned too much. It appears that the human still has to input A LOT of guidance to make sure it doesn't repeat itself and actually gives you more interesting "mixes" instead of repetition.
@tamlynburleigh9267
@tamlynburleigh9267 2 місяці тому
You make a good point. Already I can usually pick the ‘style’ of AI generated images. They have a certain ‘style’ because they are in a sense too perfect, too smooth, too balanced. It is not something one could define in some cases, but the human brain is good at recognising patterns.
@TheMillionDollarDropout
@TheMillionDollarDropout 2 місяці тому
Tell that to the tons of people coming at a genuine human artist because they all thought he was lying and using an image generator.
@RustOnWheels
@RustOnWheels 2 місяці тому
Too smooth..? I find there is this blurriness that makes them so easily recognizable.
@MyAmpWamp
@MyAmpWamp 2 місяці тому
They often have this fractal-like composition. Many generated images with people have a bit of a painterly feeling, because most of the database was artists' pages like ArtStation. You can easily see Artgerm's style in many of the pretty-girl pictures. And the reason so many of the images are of young women is that young women are the most popular subject on these pages when it comes to people. In photography, young women are, I believe, also one of the most popular subjects in the human category.
@RustOnWheels
@RustOnWheels 2 місяці тому
@@MyAmpWamp this calls for Erik swamping AI with shirtless old men
@capnbarky2682
@capnbarky2682 2 місяці тому
There is no composition to AI art. Human artists will be selective about rendering in order to focus the viewer on certain things.
@Muxxyy
@Muxxyy 2 місяці тому
There's a third option: there may be soon a deliberate attempt to poison the content to make it unreadable for AI. There are already tools out there that scramble images just enough to make them confusing for AI to use as a training set.
@phattjohnson
@phattjohnson 2 місяці тому
Given how much these systems have already been trained, any 'poisoned' images would now likely be ignored as the noise they probably amount to.
@darkthunder301
@darkthunder301 2 місяці тому
@@phattjohnson If there's enough poison, then statistically it will reach the sample set of plenty of AI systems and lock them into garbage. If the poison is ignored, then that's a smaller sample space the AI has access to, and it becomes boring and derivative.
@ArcanePath360
@ArcanePath360 2 місяці тому
The only way I know of is setting your images' metadata to be erroneous. How can you scramble an image and still have it viewable to humans? Doesn't the AI access it the same way we would?
@rmidifferent8906
@rmidifferent8906 2 місяці тому
@@ArcanePath360 You can change a lot of pixels slightly without humans noticing any changes. The AI will see it, though, and learn accordingly.
@ArcanePath360
@ArcanePath360 2 місяці тому
@@rmidifferent8906 But if it's unnoticeable, what's the point?
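To the question of how an image can be "scrambled" yet still look normal: the perturbations are tiny per pixel, far below what the eye notices, yet systematic enough for a training process to pick up. Below is a toy sketch of the amplitude involved; it uses random perturbations for illustration, whereas real poisoning tools compute carefully optimized ones.

```python
import numpy as np

rng = np.random.default_rng(42)

# A stand-in "image": 64x64 grayscale, pixel values in [0, 1].
image = rng.random((64, 64))

# A tiny perturbation: +/- 2/255 per pixel -- under 1% of the value range,
# invisible to a human viewer, but present in every single pixel.
perturbation = (2 / 255) * np.sign(rng.standard_normal((64, 64)))
poisoned = np.clip(image + perturbation, 0.0, 1.0)

max_change = float(np.abs(poisoned - image).max())
# max_change is at most 2/255, i.e. no pixel moved by more than ~0.8%.
```

The point of the sketch is only the scale: a perturbation can touch every pixel and still be imperceptible, which is why the reply above is right that "unnoticeable" and "ineffective" are not the same thing.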
@RobertMarzullo
@RobertMarzullo 5 днів тому
Unfortunately it's only a matter of time before they either get better at the parameters that introduce random variables to simulate creativity, or achieve the singularity, which they are moving towards extremely fast. I believe we just have to keep creating original works of art. AI can never replace that. Draw and paint more originals, and let AI exist for the people who don't want the joy and fulfillment that true creativity brings them.
@mishalzeera8172
@mishalzeera8172 29 днів тому
After the autotune-as-an-effect started, with Cher singing "Believe", you have multiple generations of singers who mimic the sound of autotune artifacts quite naturally and spontaneously. Also, young people having plastic surgery to mimic the look of phone camera filters. We are also trainable, it turns out. I think that is an important element to consider when predicting the future of this stuff.
@lundsweden
@lundsweden 2 місяці тому
So basically, if you keep feeding the output back into the input, you could get a feedback loop.
@Greenmachine305
@Greenmachine305 2 місяці тому
Not exactly in this case, but I certainly get your point, in that the result is undesirable if one values health or positivity. To your point, I think a better description of Sabine's observation about the failing of AI would be "garbage in, garbage grows". Perhaps the creators should take heed of this and develop systems that augment the process to manage the generated information in a way that aligns with humanity's best interest. Less garbage is in everyone's best interest.
@SandersMacLane
@SandersMacLane 2 місяці тому
This would make an interesting experiment. Begin with a discrete distribution of objects which is peaked, like a Gaussian. Sample the entire distribution, gauging similarity as a dot product. Exclude the one most-dissimilar object each time the entire distribution is sampled. Ultimately you should sharpen the distribution until you get a spike at the most probable/identical objects.
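The proposed experiment is easy to simulate. Here is a minimal sketch using scalar "objects" and distance from the mean in place of a dot-product similarity (a simplifying assumption), repeatedly discarding the most dissimilar object:

```python
import numpy as np

rng = np.random.default_rng(1)

# Start with a peaked (Gaussian) population of "objects" (here: scalars).
population = list(rng.normal(loc=0.0, scale=1.0, size=500))
initial_spread = float(np.std(population))

# Each round, discard the single most dissimilar object
# (farthest from the current mean, standing in for lowest similarity).
for _ in range(450):
    mean = np.mean(population)
    worst = max(range(len(population)), key=lambda i: abs(population[i] - mean))
    population.pop(worst)

final_spread = float(np.std(population))
# The distribution sharpens toward a spike of near-identical objects:
# final_spread is a small fraction of initial_spread.
```

This reproduces the predicted outcome: pruning dissimilarity is exactly the kind of self-reinforcing selection that collapses diversity.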
@Greenmachine305
@Greenmachine305 2 місяці тому
@@SandersMacLane What field do you work in?
@fromfareast3070
@fromfareast3070 2 місяці тому
Sounds like Systems theory
@g0d182
@g0d182 2 місяці тому
😮😮....and or a prompting problem, using basic prompts and expecting deep answers
@ZackLee
@ZackLee 2 місяці тому
As an artist, this is a known issue in HUMANS. That's why the art solution is to look at the "old masters" as mentors before learning how to draw from more modern artists.
@jeanclaudethedarklord6205
@jeanclaudethedarklord6205 2 місяці тому
Really "love" how a tool for HUMAN expression is now replaceable by a fucking machine
@asdu4412
@asdu4412 2 місяці тому
Call me a snob, but I'm even more pessimistic about the decline of human taste than I am about the technical shortcomings of AI, which is a problem that reliance on AI for the production of images, text, music, etc. will likely exacerbate, but certainly didn't create. From my point of view, even before it started to become obvious how bad and samey AI art really was, it was already quite obvious how the stuff people wanted AI to create was junk in the first place: pop culture fanart and stuff that mimicked stereotypical pop culture tropes, done in a glossy, quasi-realistic style. The only "interesting" AI art occurred early, when AIs tended to fail at their task and produced bizarre unintentional surrealism. There was a famous image of a collection of completely unrecognizable objects that made the rounds a few years ago and which was (incorrectly) described as an attempt at reproducing the visual experience of someone having a stroke (whereas it was just AI image generation still being too primitive to successfully reproduce its models): that might well have been the aesthetic peak of AI art.
@DarkFox2232
@DarkFox2232 2 місяці тому
Or adopt a creative mentality. Next time you create, take a piece of paper, crumple it, and use it as a stippling tool. For following projects, paint paper with some thin color and let it dry. Put on a layer of transparent soap or similar material. Dry again. Then a layer of another color, followed by a different color. Repeat a few more times. The final layer should be black or white paint. Then use a scratching tool to "draw" with different pressure. Even the lid of a jar can be used as an artistic tool for painting, or the plastic body of an old pen as a spraying tool. The same applies to sculpting, dancing, music, ... Just let your mind free itself from the cage of mundane existence.
@truck6859
@truck6859 2 місяці тому
And then the true output comes from the human soul, which AI doesn't have.
@FragmentOfInfinity
@FragmentOfInfinity 2 місяці тому
@@truck6859 Correct. Eventually, with enough training and data purification, AI will have more soul than humanity.
@Jumptownwore
@Jumptownwore Місяць тому
Makes me think of the 3 body problem issue/chaos theory. The more variables, the faster chaos erupts.
@billdavis5483
@billdavis5483 7 днів тому
I think Frank Herbert might have already told us the eventual solution in Dune.
@johnelmer1556
@johnelmer1556 2 місяці тому
My experience with ChatGPT shows it to be a regurgitator. The test questions were in an area of X-ray physics that I know well, and it spewed out all the usual stuff with no insight, no deep understanding, no creativity, nothing that would indicate any form of curiosity.
@lukeskyvader3217
@lukeskyvader3217 2 місяці тому
Still enough to replace 98% of the current jobs ;)
@othercryptoaccount
@othercryptoaccount 2 місяці тому
3.5 or 4?
@Threemore650
@Threemore650 2 місяці тому
I think Meghan Markle gets it to write her speeches. It’s all wordsoup.
@Glacierlune
@Glacierlune 2 місяці тому
​@@lukeskyvader3217 I like how you said it like it actually happened but there isn't any evidence beyond some idiot repeating marketing material that couldn't be proven as lying even tho everyone knows they are making shit up.
@user-ni2rh4ci5e
@user-ni2rh4ci5e 2 місяці тому
Garbage in, garbage out. Put in the extremely usual stuff and expect something novel? GPT is basically bound to what you ask, mirroring the original input.
@ExploringAI42
@ExploringAI42 2 місяці тому
The one thing people should know about machine learning is: a trained model will only be as good as its training data. It's just learning (in theory) the pattern behind the data, which leads to a host of problems. The main issue is that it doesn't actually reason about the data. Let's say I train a model on several examples where pi is 3.14 and then one where it's 4. The model doesn't reason "you know... this one example seems to be wrong"; rather, it updates itself to make it slightly more likely it will give the wrong answer.

So how do you prevent models from training on information generated by another machine learning model? The current approach is to stick to information from before generative AI became dominant, but most of that information (for better or worse) is probably already part of the training datasets.

The main problem is a popular opinion in machine learning (and sadly AI) that, as an AI researcher, I have had to deal with: that the key to all AI problems is just larger models, more training data, and training in the "correct" way. "Look how far LLMs have come. Just imagine how much better they will be in a couple of years." But you run into the 90-10 principle: 10% of the effort for 90% of the results, and vice versa. It's why self-driving cars are taking a long time: there is a nearly infinite number of extremely rare cases in which the car needs to make the right decision. As such, current LLMs should be expected to plateau performance-wise unless new, smarter methods are found.
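The pi example above, in numbers: a model trained with squared error effectively predicts an average of its targets, so one bad label shifts the answer instead of being rejected. A toy illustration of that failure mode, not how an actual LLM stores facts:

```python
# Nine correct labels for "pi" and one bad one.
labels = [3.14] * 9 + [4.0]

# Under squared-error training, the optimal constant prediction is the mean
# of the labels -- the model has no mechanism to decide "that one is wrong",
# it can only be pulled toward it.
prediction = sum(labels) / len(labels)
print(round(prediction, 3))  # 3.226 -- nudged away from 3.14 by one bad example
```

One wrong example in ten shifts the answer by almost 0.1; no amount of averaging ever discards it.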
@JordanCorkins
@JordanCorkins 2 місяці тому
Thank you for your insight; I think I agree with this. In the case of LLMs, they clearly have a use case already that will not go away, but I don't think they can deliver on the promises being made. I do not see how to make them reliable enough to work in most business situations. I feel that many companies are looking for a way to implement them, almost making their engineers find a way to make them useful even if it makes no sense. The scaling already seems unsustainable, and while the "emergent" behaviors are very cool, nobody really understands how they relate to scaling (i.e., there is no defined ratio of x amount of compute/data for x more emergent behaviors).
@phattjohnson
@phattjohnson 2 місяці тому
It's not even machine 'learning'. It's 'just' scripted data consolidation, procedural compression and re-generation, and some other mumbo-jumbo that honestly has all been around since the conception of PCs. Just now we've got several modules all running simultaneously in one disjointed codeblock.
@octavioavila6548
@octavioavila6548 2 місяці тому
I'll do you one better. We will never solve this issue. It's a fundamental impossibility. We will never have self-driving cars. There is no exponential curve, no singularity. Forget it. We are very close to the best AIs will ever be
@JordanCorkins
@JordanCorkins 2 місяці тому
@@octavioavila6548 You base this on what exactly? Claiming AGI will never happen, and self driving will never happen is the same as the people who think we will have AGI in 2 years because of the hype. Nobody knows the limits or timeline, but I don't see why it would be impossible.
@goodlookinouthomie1757
@goodlookinouthomie1757 2 місяці тому
"Hold up. Something's wrong here. Not sure what it is but I feel like we should take a step back and go through it again" Said no AI ever, past, present or probably future.
@andrewhall7176
@andrewhall7176 18 днів тому
This actually is not that surprising when you think about it: these AIs are basically using huge amounts of data to approximate averages of various things, and with more iterations they extract more and more core features until they just have the same set of features they use all the time. It's like taking data scores and continually averaging them until you are left with one value.
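That averaging analogy can be taken literally: if each "generation" of data is built by averaging samples from the previous one (a cartoon of a model retraining on its own smoothed-out output), the variance collapses roughly by half per round. A sketch:

```python
import random
import statistics

rng = random.Random(7)

# Generation 0: diverse data from a standard normal distribution.
data = [rng.gauss(0.0, 1.0) for _ in range(1000)]
initial_var = statistics.pvariance(data)

# Each new generation is built by averaging random pairs from the previous
# one -- extracting "core features" and discarding the tails.
for generation in range(20):
    data = [(rng.choice(data) + rng.choice(data)) / 2 for _ in range(1000)]

final_var = statistics.pvariance(data)
# After 20 rounds, nearly all of the original diversity is gone.
```

Averaging a pair of independent samples halves the variance, so twenty rounds shrinks it by a factor of roughly a million: everything piles up on essentially one value.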
@jdmac44
@jdmac44 2 місяці тому
Worst case scenario, it'll be like Walmart moving in to a community, destroying the mainstreet businesses, everyone takes Wal-jobs, and then the Walmart closes because the local economy is crap simply leaving a ghost town with people who don't have the capital, business acumen or consumer base to reboot mainstreet.
@danre64
@danre64 2 місяці тому
Every email in the future will start with: "i hope this email finds you well" 😂
@edt6488
@edt6488 2 місяці тому
No, it has found me unwell! Please call an ambulance for me!
@spvillano
@spvillano 2 місяці тому
An excellent filter phrase... ;)
@sebastiankorner5604
@sebastiankorner5604 2 місяці тому
They even translate the phrase into German, where it makes even less sense: "Ich hoffe meine Nachricht erreicht Sie gut..." Lately, "erreicht Sie bei bester Gesundheit". Both are phrases not used in German.
@ashroskell
@ashroskell 2 місяці тому
As these AI errors flood the net, will they become more and more of the training data for other AI’s? Until images get increasingly mutated and standard emails all start with, “I hope this emu fondles your willy.”
@ashroskell
@ashroskell 2 місяці тому
@@edt6488: To which ChatGPT responds, "You’re an ambulance . . . Oh, wait. That didn’t work, did it?"
@gsvenddal728
@gsvenddal728 2 місяці тому
Wow... this is like ultra-high-speed "Groupthink"
@jovetj
@jovetj 2 місяці тому
Yup. And people fear this! LOL! (Not that herd mentality and groupthink aren't bad things among humans...)
@DJ_POOP_IT_OUT_FEAT_LIL_WiiWii
@DJ_POOP_IT_OUT_FEAT_LIL_WiiWii 2 місяці тому
This is not surprising. It's like trying to compress the same file again and again, it will inflate.
@PrivateSi
@PrivateSi 2 місяці тому
Soon with Forced Diversity Quotas too no doubt...
@Utoko
@Utoko 2 місяці тому
This is such nonsense. In terms of LLMs, it is the desired outcome, because you predict the most likely next token: you want the best answer by default, not just any answer. And yes, all models already have a "temperature" parameter, which regulates the unpredictability and the range of possible tokens that can be chosen. For images it's the same. The example in the paper is really bad: they use the same prompt and don't inject random noise. Yes, Midjourney as a consumer product has the issue, but the underlying models don't. You can have as much randomness, creativity and variance as you want. This video presents the increased accuracy they aim for as an issue, which it is not. Set temperature=0.6 or higher and you get your creative storytelling back.
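For readers unfamiliar with the temperature parameter mentioned here: it divides the model's scores before the softmax, so a low temperature concentrates probability on the top token (near-deterministic "best answer") while a high temperature spreads it out (more variety, more risk of mistakes). A minimal sketch with made-up scores:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores into probabilities, scaled by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 1.0]  # hypothetical next-token scores

cold = softmax_with_temperature(logits, 0.2)  # nearly all mass on the top token
warm = softmax_with_temperature(logits, 1.5)  # flatter: real chance of alternatives
```

With these numbers, `cold` puts over 99% of the probability on the first token, while `warm` leaves roughly a quarter of it for the alternatives; that knob is exactly the accuracy-versus-diversity trade-off the thread is arguing about.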
@denisematteau
@denisematteau 8 днів тому
I was using an AI image generator to come up with rug designs. The first few were great but it soon deteriorated to regurgitated images similar to what it already produced.
@istvanpraha
@istvanpraha Місяць тому
In my industry, it gives vague answers when you ask about laws. It keeps saying that things vary. They don't vary; there are just more than two answers. That's not the same as varying. Some customers go under option A and others under option B. You can't just tell them it varies.
@johnatyoutube
@johnatyoutube 2 місяці тому
As an AI scientist, we've been talking about this for years. Once the AI starts eating its own tail it will quickly optimize to a singularity of stupidity in its own echo chamber. The only way for AI to continue to work is to automatically label all AI output and ignore it for training. Or to manually post label it by humans. Humans are necessary for AI success in any case. It would be interesting for you to discuss both the labeling servant culture and its injustices as well as the impossibility of AGI if AI depends on human labeling.
@DKNguyen3.1415
@DKNguyen3.1415 2 місяці тому
Reminds me of the trend of compensating CEOs with stock.
@johnatyoutube
@johnatyoutube 2 місяці тому
​@@DKNguyen3.1415especially if the company is losing money and laying off workers.
@peznino1
@peznino1 2 місяці тому
"...it will quickly optimize to a singularity of stupidity..." Think you just optimized for word salad.
@DKNguyen3.1415
@DKNguyen3.1415 2 місяці тому
@@johnatyoutube Well, it's basically optimizing the short-term stock price at the expense of everything else so the CEO can cash out. Long-term viability, product quality, worker productivity, accurate book-keeping and finances, even the best interests of shareholders and real profits and revenue don't matter if sacrificing them can result in a stock payout before the consequences hit.
@anywallsocket
@anywallsocket 2 місяці тому
As a layman I disagree. You’re right if you don’t think outside the box, but we can use AI to sample evolutionary algorithms to generate networks for more AI models. This space is practically limitless.
@thebooksthelibrarian8530
@thebooksthelibrarian8530 2 місяці тому
2) More randomness in AI output might do away with the problem of repetitive output, but it might increase the mistakes. Instead of elephants with big heads or two heads, we might get elephants with two big heads.
@red.aries1444
@red.aries1444 2 місяці тому
Or we get more pink elephants or other colors or with red instead of green grass...
@robadkerson
@robadkerson 2 місяці тому
@@red.aries1444 that wouldn't be so bad if we can get AI to help us create real pink elephant and red grass DNA
@Rich-Oh
@Rich-Oh 2 місяці тому
Downside: elephants with two big heads.
Upside: two-big-headed elephants are all young and good looking.
@thebooksthelibrarian8530
@thebooksthelibrarian8530 2 місяці тому
@@red.aries1444 Actually, I would prefer green elephants. That's more environmentally friendly.
@matheussanthiago9685
@matheussanthiago9685 2 місяці тому
​@@Rich-Oh and white
@TheBigdog868
@TheBigdog868 Місяць тому
Doctor Frankenstein used snippets from a whole bunch of people to make his monster. I was told the experiment didn't turn out well for him either. 😂
@CogitoBcn
@CogitoBcn 26 днів тому
The problem is older than your video suggests. Automatic translation and even grammar correctors have been distorting human language (and reducing language variance) for decades, and we have incorporated their quirks into our day-to-day language.
@hunteralderman4867
@hunteralderman4867 2 місяці тому
I think a big part of the convergence is that people often are attracted to certain tropes and conventions when it comes to what they like, so AI produced images are actively being 'pruned and purified' by our preference of our existing cultural paradigms. What I think is really interesting is the feedback, where people's tastes of which conventions they like are in turn influenced by AI art.
@juanausensi499
@juanausensi499 2 місяці тому
Yep. It is easy to see that AI images tend to be standardized. What is not so easy to see is whether that is really a problem. People like standards. Just look at how actors and actresses look.
@thenonexistinghero
@thenonexistinghero 2 місяці тому
You couldn't be more wrong. The woke crap is purposefully programmed into it. Same for the censorship. Has nothing to do with preference and cultural paradigms.
@engelbertgruber
@engelbertgruber 2 місяці тому
means it is a problem of biological intelligence too ? 😂
@Marquis-Sade
@Marquis-Sade 2 місяці тому
@@juanausensi499They dont
@Marquis-Sade
@Marquis-Sade 2 місяці тому
@@engelbertgruberWhy?
@davemottern4196
@davemottern4196 2 місяці тому
This is exactly what I've been thinking since all of this exploded into popular awareness. It's like a giant ouroboros eating its own tail. I'm glad to see that people are talking about this. Editing to add: Will you critics please lighten up? I'm not anti-AI. I'm just agreeing with Sabine that this is a potential problem that should be studied. All new technologies have potential problems that need to be studied and understood. Pointing this out does not make me some kind of neo-luddite.
@2ndfloorsongs
@2ndfloorsongs 2 місяці тому
People have been eating their own tales since there were people, I'm not sure why AI is expected to be different. Most people aren't that creative, but a few are; most AIs won't be creative, but a few will. Same old, same old.
@kikijuju4809
@kikijuju4809 2 місяці тому
@@2ndfloorsongs Most AI will be X times better than the best human at creativity; you can't compete with machines.
@mr_pigman1013
@mr_pigman1013 2 місяці тому
AI inbreeding is real
@HarryNicNicholas
@HarryNicNicholas 2 місяці тому
remember when photography was going to destroy art?
@milferdjones2573
@milferdjones2573 2 місяці тому
On appearances: the science on that will show the AI photos shown are popular worldwide. But of course it becomes too much of the same, creating a desire for diversity. It's important to point out that there is actually a science to attraction, both in humans and in other species, and we need to push back on unscientific opinions, especially the claim that it's just one culture imposing its values, and the effort to make every appearance beautiful, which is impossible; our brains demand an ugly. Example: make overweight attractive, and healthy becomes ugly. Better to push the traditional view that attraction is only skin deep and to accept your appearance, great or bad, as unimportant to your value as a human being. And of course set beauty at the weights that are actually healthy. Note that some studies show a tad underweight might live longest.
@Youngmichaelthekid
@Youngmichaelthekid Місяць тому
I really like this take. Thank you for all the information.
@FixTechStuff
@FixTechStuff 6 днів тому
I called this a few months ago, good to see I'm not the only one who can see where this is heading.
@maphezdlin
@maphezdlin 2 місяці тому
Look at how people hate CGI in movies more and more, to the point that some movies refuse to do any. If you have ever read anything written by AI, you know it has the ability to make the most exciting subjects boring.
@cara-setun
@cara-setun 2 місяці тому
Can you name any of these movies?
@icyjaam
@icyjaam 2 місяці тому
Even Nolan uses very heavy CGI
@maphezdlin
@maphezdlin 2 місяці тому
@@cara-setun, Oppenheimer (2023), Skyfall (2012), Inception (2010), Mission Impossible: Ghost Protocol (2011), Mad Max: Fury Road (2015), The Dark Knight (2008), Casino Royale (2006), 1917 (2019), Top Gun: Maverick (2022)
@Felixr2
@Felixr2 Місяць тому
@@maphezdlin All of those movies used CGI. All of them. Many of the stunt scenes are mostly real footage, sure, but a lot of them are edited beyond recognition. Oppenheimer only lists 49 vfx artists on IMDB, but that's mostly because 80% of them weren't credited. Skyfall lists 578 vfx artists. Inception had 295. Mission Impossible: Ghost Protocol had 347. Mad Max: Fury Road had a whopping 742. The Dark Knight had 468. Casino Royale had only 161, which is in fact impressively low, but still not 0. 1917 had 422. Top Gun: Maverick had 455. For reference, Avatar: The Way of Water (2022), a movie we can hopefully all agree had immense amounts of CGI, credits 1113 vfx artists. The Hobbit: The Desolation of Smaug (2013) had 915. Most of the movies you mentioned had close to if not more than half of that. What did all these people do if there's no CGI?
@maphezdlin
@maphezdlin Місяць тому
@@Felixr2 OK, VFX and CGI are different. But you are right: the links I saw that said NO CGI lied. They should have said minimized CGI. Thanks for catching it.
@theprogram863
@theprogram863 2 місяці тому
Consider what generative AI actually is. It's designed to produce data which most closely resembles its training data. So distinctive and idiosyncratic ideas are actively selected against.
@sluggo206
@sluggo206 2 місяці тому
It's right there in the name: "GENERATIVE" AI.
@WeirdWizardDave
@WeirdWizardDave 2 місяці тому
The caveat being "unless you ask it to be distinctive and idiosyncratic". AI-generated content isn't random; it's the result of prompting. Short generic prompts will elicit generic content.
@SandersMacLane
@SandersMacLane 2 місяці тому
yes, a form of clustering and entropy reduction!!
@Nat-oj2uc
@Nat-oj2uc 2 місяці тому
​@@WeirdWizardDaveexcept it still won't produce original idea that is distinguishable from gibberish unless it's trained on sufficient data which is impossible in case of original ideas
@Chek94
@Chek94 2 місяці тому
@@WeirdWizardDave It will produce distinct and idiosyncratic content -- in a way that matches its training data.
@theEisbergmann
@theEisbergmann 27 днів тому
I hope it gives universities a boot to start bringing individuality back into academia. The number of students I've heard say "whatever, I'll just ChatGPT it and work over the bumps" is staggering.
@simplicity4904
@simplicity4904 Місяць тому
I’m not surprised by the finding; I dare say it is obvious. I say that because such a thought has crossed my mind, and I have often tried to convince others who are willing to listen beyond the hype: AI is "artificial" but not "intelligent". There are different ways to assess and critique AI - philosophically, linguistically, psychologically, biologically, etc. - and many thoughtful experts beyond the industry have challenged the claims of AI, in particular AGI. But for me it comes down to creativity and novelty: the latter AI lacks completely, and the former AI can only mimic. If you want to be impressed, watch a human child.
@morenofranco9235
@morenofranco9235 2 місяці тому
Great presentation, Sabine. I have always maintained that AI is like students cribbing exam answers. One student just has to copy one thing wrong, once. From then on it is a done disaster. When scientists hypothesised robots making copies of themselves - they never saw this far into the mess.
@bami2
@bami2 Місяць тому
You are 100% correct, and it's already happening. I noticed it first when I was looking up a certain niche question that had a bunch of AI-generated garbage in the search results, which somehow kept repeating a nonsensical "fact". I pinned it down to a single forum post made 10 years ago where somebody made a typo or something that made no sense, but this post was ingested into a machine learning dataset, and that dataset was used to generate a bunch of blog posts/websites which, because of the way LLMs write (long, dense sentences with very specific subjects), shot up high in search engine rankings. So now there are 20+ different sites all parroting this garbage information, which was then used in other datasets and ingested by most LLMs. Now, if I ask that specific question to any LLM, it will parrot out the same garbage, because there are 20+ "sources" all saying the same thing - all based on some stupid forum post made long ago by a real person who made a typo or didn't fully understand the English language.
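The dynamic described here (one typo amplified into 20+ "sources") resembles sampling drift: if each generation of content is produced by copying from existing content, variants can be lost in a generation but never regained, so the corpus eventually fixates on a single version, and nothing guarantees it's the right one. A toy simulation (corpus size and labels are made up):

```python
import random

rng = random.Random(3)

# 19 sources state the correct fact; 1 repeats an old typo.
corpus = ["correct"] * 19 + ["typo"]

# Each new "generation" of pages is written by copying from existing pages.
# Once a variant disappears from the corpus, it can never come back,
# so repetition eventually drives the corpus to a single version.
for generation in range(10_000):
    corpus = [rng.choice(corpus) for _ in range(len(corpus))]
    if len(set(corpus)) == 1:
        break

survivor = corpus[0]  # might be "correct" -- or the typo
```

For a small corpus, fixation happens within a few dozen generations; which version survives is a coin flip weighted by how often each got copied early on, not by truth.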
@sjonnieplayfull5859
@sjonnieplayfull5859 Місяць тому
An old comic saw this coming: Storm. In the album 'The Von Neumann Machine' they are sent out to intercept a planet on a collision course with Pandarve, only to find out it is a conglomerate of small Von Neumann machines that search for resources and then reproduce themselves, but whose code got corrupted because small flaws were reproduced a millionfold and grew larger over time. Guess AI programmers are not nerdy enough to read comics.
@fwiffo
@fwiffo 2 місяці тому
The popular image generation models prior to Stable Diffusion were GANs (generative adversarial networks). The way they worked was to have two different networks - one trained to generate images, and the other trained to classify images as real or fake. This forced the generator to learn to avoid the most identifiable characteristics and to generate a diverse set of images. Stable Diffusion was more effective and scalable for higher-resolution images, keeping the whole image globally coherent. But it's likely that reviving some adversarial techniques could help with the diversity issue.
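For readers curious what "two networks playing against each other" looks like mechanically, here is a deliberately tiny 1-D sketch: a linear "generator" tries to mimic data from N(3, 1) while a logistic "discriminator" tries to tell real from fake, with hand-derived gradient ascent steps. This is a cartoon of the GAN idea only; real image GANs use deep networks and many stabilization tricks, and none of the numbers here come from any actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator g(z) = a*z + b tries to mimic real data ~ N(3, 1).
# Discriminator D(x) = sigmoid(w*x + c) tries to tell real from fake.
a, b = 1.0, 0.0   # generator parameters (starts producing N(0, 1))
w, c = 0.0, 0.0   # discriminator parameters
lr = 0.05

for _ in range(2000):
    x_real = rng.normal(3.0, 1.0, size=64)
    z = rng.normal(size=64)
    x_fake = a * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    s_r = sigmoid(w * x_real + c)
    s_f = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - s_r) * x_real) - np.mean(s_f * x_fake))
    c += lr * (np.mean(1 - s_r) - np.mean(s_f))

    # Generator: ascent on the non-saturating objective log D(fake).
    s_f = sigmoid(w * x_fake + c)
    upstream = (1 - s_f) * w          # d log D / d x_fake
    a += lr * np.mean(upstream * z)
    b += lr * np.mean(upstream)

# The generator's offset b should drift from 0 toward the real mean of 3:
# the discriminator's feedback is what pulls the fake distribution
# toward the real one.
```

Even this toy shows the failure mode discussed in the replies: nothing stops the generator from shrinking its spread (`a`) and collapsing onto a single easy point the discriminator can't flag, which is exactly mode collapse.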
@Coach-Solar_Hound
@Coach-Solar_Hound 2 місяці тому
Actually one of the biggest issues with GANs that they were very prone to "Mode Collapse". During mode collapse rather than producing a diverse set of images, the adversarial network would hone in on specific features which were not recognized by the discriminator network. The result: a lower diversity in images which get produced. The reason why diffusion took off in the first place is that due to noise being used as a base, the diversity was higher, as the initial noise served as a "random seed" for the generation in a sense. Mode collapse can be avoided, but takes a lot more effort to avoid, and can lead to problems in many architectures. (Note, im not a researcher.) This is mostly from scant reading I've done here and there.
@andersonfaria8949
@andersonfaria8949 2 місяці тому
@@Coach-Solar_Hound You're absolutely right, but I'd like to add another point here: it's not just about mode collapse, the reason GANs end up losing degrees of freedom is overfitting. The ultimate trick to beat the discriminator is to draw exact copies of the dataset, and that's why you need to save "backups" and move back in time in training when you see important details being left out. Now, regarding diffusion vs. GANs, that's a broader discussion: GANs theoretically should excel at image generation, but the investment in diffusion (especially prompt-to-image) is way higher, so while GANs seem to be lacking, they could actually be a better solution overall. What you said about taking a "random seed" is also true for GANs: the generator will always take a random number and try to draw what it knows about the dataset from there. There's a really interesting video explaining all the details on the Computerphile channel: ukposts.info/have/v-deo/i6dqpm56g29pr2Q.html
@andersonfaria8949
@andersonfaria8949 2 місяці тому
Image controlling for GANs is still an active area of research, what we do today to influence latent space results is to move specific directions in latent space. To know where to move you can use dimensionality reduction techniques to find specific vectors controlling image relevant attributes (check the paper of GANSpace). Another option is to do img2img transfering style or mixing with prompting information
@8888Rik
@8888Rik 2 місяці тому
Your comment and the replies are extremely interesting.
@fwiffo
@fwiffo 2 місяці тому
@@Coach-Solar_Hound Yes, that's true, although there were a lot of developments going on to fix that. The biggest problem was either the generator or the discriminator getting too far ahead of the other, and the whole thing getting stuck. So the rate of learning of the two parts had to be balanced. There was another issue where the set of produced images was not representative of the training data because the generator favored generating "easy" images. For instance, if it was generating faces, it would avoid producing details like glasses or beards, or prefer to generate less angular faces (i.e. the output would overrepresent women). There are lots of types of regularization to be done, and techniques to help with those things. Adversarial learning, generally, is a really useful technique. So I think it's time to bring it back to diffusion. (I have done work on GANs personally, although it's been a few years).
@sarah-janelambert8962
@sarah-janelambert8962 17 days ago
I've been drawing attention to iterative decay of information in these systems for some time. Eventually we will just end up with a 'grey goo' situation similar to that suggested for uncontrolled nanobot replication.
@dmitriydanilov6367
@dmitriydanilov6367 24 days ago
Thank you for the video! I'm not an expert on AI; I just used to work with big data. It had never occurred to me that AI collapse due to low-quality input might be an issue, but it actually makes sense. I would like to point out that adding more randomness might not even be a viable solution to this problem, since all existing random number generators are in fact pseudo-random (the "random" result can be predicted if you know the algorithm and the seed). Guess we will see how it plays out in the future.
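The pseudo-randomness point is easy to see in practice: a seeded generator replays exactly the same "random" stream, so anyone who knows the algorithm and the seed can predict every value. A minimal illustration with Python's standard library (nothing here is specific to any AI system):

```python
import random

# Two generators seeded identically produce identical "random" streams:
# the whole sequence is fully determined by the seed and the algorithm.
a = random.Random(42)
b = random.Random(42)

stream_a = [a.random() for _ in range(5)]
stream_b = [b.random() for _ in range(5)]

print(stream_a == stream_b)  # the streams match exactly
```

Real systems mitigate this by seeding from hardware entropy sources, but the generated sequence itself is still deterministic once the seed is fixed.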
@BraydonAttoe-xs4yg
@BraydonAttoe-xs4yg 2 months ago
Surprised we aren't already forcing watermarks on AI content. Actually blown away. Like giving a kid a straw house and fireworks and not expecting a fire 😊
@adamshinbrot
@adamshinbrot 2 months ago
Who would force it? Who would enforce it? How?
@esbensloth
@esbensloth 2 months ago
How would you even watermark plain UTF-8 text like what LLMs produce and I am typing now?
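For what it's worth, there are academic proposals for watermarking plain text: bias the generator toward a pseudo-random "green list" of words at each step, chosen by hashing the previous word with a secret key, then statistically test a text for an excess of green words. A toy sketch of the idea (the word-level scheme and function names here are illustrative, not any deployed system):

```python
import hashlib

def green_words(prev_word, vocab):
    """Pseudo-randomly mark roughly half the vocabulary as 'green',
    keyed on the previous word (the hash stands in for a secret key)."""
    marked = set()
    for w in vocab:
        digest = hashlib.sha256((prev_word + ":" + w).encode()).digest()
        if digest[0] % 2 == 0:
            marked.add(w)
    return marked

def green_fraction(text, vocab):
    """Detector: what fraction of words came from the green list of their
    predecessor? Watermarked text scores near 1, ordinary text near 0.5."""
    words = text.split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    hits = sum(w in green_words(prev, vocab) for prev, w in pairs)
    return hits / len(pairs)
```

Real proposals work on model tokens and use a proper hypothesis test, and they are known to weaken under paraphrasing, which is part of why nothing like this has been forced on anyone yet.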
@BraydonAttoe-xs4yg
@BraydonAttoe-xs4yg 2 months ago
@esbensloth Use those intellectual problem-solving skills we humans have and deduce that I'm referring to the concept of a watermark. Or at least I figured those reading would have assumed that. My bad.
@BraydonAttoe-xs4yg
@BraydonAttoe-xs4yg 2 months ago
@@adamshinbrot People said the same thing before we had firefighters, roads, schools... etc.
@BaddeJimme
@BaddeJimme 2 months ago
If the real beneficiaries of mandatory watermarking turn out to be people that train AIs, then I'm against it.
@keithdafox2257
@keithdafox2257 2 months ago
A third possibility is that we decide to move on from brute-forcing LLMs and develop more efficient or different learning models. A human does not need to look at a billion images to learn how to draw. Even if we don't have AI capable of what we can do, that does demonstrate that there are better ways to design AI. Right now it's kind of brute force, and incredibly inefficient.
@Coach-Solar_Hound
@Coach-Solar_Hound 2 months ago
Except we perceive images for the entirety of our lives, every waking moment. The number of frames we see in a day is disputed, but you can quickly imagine how these pile up; even at 20 years of age we may be approaching around a billion images seen in our waking days, if not more. Small moments of perception (not necessarily visual) may leave an impact (emotional or otherwise). This then results in creativity.
@keithdafox2257
@keithdafox2257 2 months ago
@@Coach-Solar_Hound I never thought of that, but you do have a point there. Still, an AI can sift through many more frames on a specific topic than we can, yet it still takes a lot. And we also have an understanding of the world: I read somewhere about an AI system that first learned, via simulations, how physics works, understanding 3D objects and whatnot, and was then able to learn new topics much more efficiently than one without that grounding. But I don't recall the article, so who knows. I do feel like LLMs are kind of a brute-force method of training, but I also definitely don't understand them well enough, so who knows. It will be interesting.
@defaulted9485
@defaulted9485 2 months ago
@@Coach-Solar_Hound Correction: you only perceive images when your brain isn't dozing off. Your conscious brain learns one thing at a time and dumps the rest as noise. AI eats everything up because it's a server farm, processing 100 images per CPU per second in a server made of hundreds of CPUs. If you processed every piece of data the way an AI does, your brain would seize up and dump the rest of the information. That's not even counting tunnel vision, the role of peripheral vision, spectrum perception, the object of focus, and the other ways your brain discards information in the visible field of view to save memory. It's far different.
@Coach-Solar_Hound
@Coach-Solar_Hound 2 months ago
@@defaulted9485 That's fair, but our subconscious brain and perception are still filtering, categorizing, and receiving all of this data. It's just that our system for cataloging and interpreting visual data has had so many years of evolution that it has become this advanced and efficient. There's definitely a big difference in retention between active processing by the conscious brain and mere perception, but I was arguing more that the number of images we perceive through our lifetime is quite high. There are layers to this, and the importance of the abstract representations we're able to make and share is not to be understated. I also don't really know how much the unconscious brain influences the conscious one, but there is definitely a non-negligible impact; that advanced filtering and cataloging is what makes us so special as a species anyway. The lack of semantic understanding is the largest thing that currently sets NNs apart from us. In my interpretation, current image-based systems are really just advanced enough to mimic the following: encode visual data in some lower-level (compact) representation and recall from that representation back into visual data. Much akin to a memory.
@user-ks3gz2bs5e
@user-ks3gz2bs5e 2 months ago
@@defaulted9485 A computer learns one bit at a time; our brains learn many things at once, instantaneously. Our brains do not actually dump noise; they turn it down but keep working on everything received, from our senses to our memories to imagination, which is of course how we create.
@freecat1278
@freecat1278 1 month ago
I was just called by an AI telemarketer. I am sick & this was reflected in my voice when I answered the phone. The AI tried to relate to me by matching the quality of my voice. It sounded like a drill sergeant or concentration camp guard mocking me.
@Roboartist117
@Roboartist117 2 months ago
AI as it currently stands will become a generalization of people. If it doesn’t become a unique individual, or become better at imitating us, we’ll just learn to recognize them as we grow up and live with them.
@2bfrank657
@2bfrank657 2 months ago
I kind of wonder if this problem actually started with the widespread use of the internet. We went from communicating with books, which had to meet a certain standard before the expense of publishing could be justified, to zero-cost sharing of opinions on the internet, to having machines lap up these opinions and feed them back to us. Each of the above steps involving less rigour than that which precedes it.
@user-iv5gy3rc2b
@user-iv5gy3rc2b 2 months ago
You're on to something. Everybody is an expert on the internet, even 10-year-olds and meth heads. Used to require some credentials to publish and teach others or at least experience and actual knowledge as opposed to opinions.
@mikemondano3624
@mikemondano3624 2 months ago
Yes, the truth and lies are now on equal footing. The village idiots that we tolerated compassionately now have joined together to form political and social blocs. We might even begin to question Silicon Valley's idea that everything they come up with is purely good.
@mikemondano3624
@mikemondano3624 2 months ago
@@user-iv5gy3rc2b Opinions are fine so long as they are correct.
@Reach41
@Reach41 2 months ago
Books on flying saucers, ancient space aliens building the pyramids, etc. have been published for at least 70 years... I'll bet one could get their horoscope reading from an online AI today, and perhaps a tarot card reading.
@fastestdraw
@fastestdraw 2 months ago
I'd disagree: you only need to open a random Victorian book that isn't a "classic" to see how little rigour went into the majority of written work. It's survivorship and recency bias. Easy to remember the classics, but pulp fiction gets pulped. We don't exactly remember Victorian "here's a detailed description of this week's executions and grisly crimes" newspapers, but "highly embellished true crime podcasts" are exactly the same thing. Ditto with "news" that was basically made up, to the point that a lot of the British Empire's decisions in India were heavily influenced by people claiming the Earth was hollow, or claiming they had been to the country while writing entirely fictional accounts of it. People have made terrible decisions on bad information for a long time. The main change AI is causing is that you can no longer say "they probably didn't write three thousand pages and provide detailed illustrations of something obviously false".
@tullochgorum6323
@tullochgorum6323 2 months ago
AI can learn from itself when there is an objective outcome to measure. For example Chess, Go and Poker AI engines can improve by playing against themselves (though they also benefit from historical game records and playing against humans). Where there is no objective measure, such as art or creative writing, it's difficult to see how AIs can improve without human input.
@salvadoran_uwu
@salvadoran_uwu 2 months ago
Exactly, human input. That's why experts say one job that may arise after AI is "human trainer." I've seen many voice bots that need human input to improve their accents and pronunciation.
@hovertank307
@hovertank307 1 month ago
Art is intended to please humans. If we want AI to train on AI generated art, the set must first be curated by humans to contain images we find pleasing. If you let AI train on all images generated by AI, it will keep getting worse (unless some programmer figures out a trick around this)
@tullochgorum6323
@tullochgorum6323 1 month ago
@@hovertank307 As a coder myself, I'd hate to be given the task of developing an algo to rank the quality of visual art! Music may be more doable. Interestingly, the very first computer scientist, Ada Lovelace, predicted way back in 1843 that computers could generate music. Because it's based on relatively predictable patterns, there are generative music AIs that produce interesting results or that interact with human players. They may soon have commercial applications for less demanding fields like advertising jingles, where originality is not the aim. Hack commercial composers must be fearing for their jobs...
@hovertank307
@hovertank307 1 month ago
@@tullochgorum6323 yes, I would not even try it. I meant a trick to sidestep the need to write such an algorithm.
@ABH565
@ABH565 1 month ago
So basically AI needs a human brain. Take note, Matrix.
@rasmustorkel9568
@rasmustorkel9568 1 month ago
Great video. This problem may be a teething problem, though. After the computers started beating the top human Chess players, Go players like myself felt smug. We said things like "Chess is about crunching through possibilities. Go requires real intelligence." For years we annoyed chess players with this sort of talk, citing the inability of computers to beat the top human Go players as proof. This lasted about 18 years until AlphaGo came along in 2016. Important point: AlphaGo does not play like humans, except faster and more accurately. It came up with some genuinely novel moves. So, I would not take any problems that AI is now experiencing as an accurate predictor of what AI will be like in another two decades or so.
@illarionbykov7401
@illarionbykov7401 1 month ago
Amazingly, your comment is getting ignored. One of the biggest problems with AI is that mainstream reporting on AI has been woefully incomplete and ignorant for decades. Even most AI professionals know only the bits and pieces they work on. Very few people see the big picture, and our news media are largely responsible by failing to keep us informed. Simply catching people up on what's already been achieved in the AI field is a huge task.
@rasmustorkel9568
@rasmustorkel9568 1 month ago
@@illarionbykov7401 Yes. We should work on the assumption that in the long term AI will be limited by what humanity allows and not by what is technically possible. And then we should think about and discuss what we will allow. Clearly marking AI generated stuff, as Sabine suggested, is a good start but not nearly enough.
@BoogieBoogsForever
@BoogieBoogsForever 2 months ago
I think that if we program in randomness, it will introduce wacky, impossible, and obviously wrong elements. The problem lies in telling a program how to adjust and add randomness when it doesn't understand the original state, or how it has simplified and made things uniform. It doesn't understand what it does, so how can it introduce some oomph? How can it know when it's introduced too much? There are way too many parameters that can be tweaked.
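For context, generative models do already expose one knob like this: a sampling "temperature" that rescales the model's scores before sampling. Higher temperature flattens the distribution (more varied, wackier picks), lower temperature sharpens it toward the single most likely choice. A minimal sketch with toy numbers (the scores are made up, not from any real model):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Rescale scores by 1/temperature, then normalize into probabilities."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                 # toy "next-token" scores
cold = softmax_with_temperature(logits, 0.5)   # sharper: favors the top score
hot = softmax_with_temperature(logits, 2.0)    # flatter: more diverse sampling
print(cold)
print(hot)
```

The catch, which is the commenter's point: temperature only reweights options the model already represents. It cannot restore variety that earlier training rounds averaged away, and cranking it up too far just produces noise.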
@SKLightenUpNow
@SKLightenUpNow 2 months ago
Please, would you put the references for the sources you use under the video? A study from Japan, another from France: please give us the links! Thank you.
@ThehakPlay
@ThehakPlay 1 month ago
They are in the video, bro. Right under both of those studies are arXiv citations that you can easily google. If you aren't motivated enough to google them, you were not motivated enough to read and learn from an academic paper anyway.
@WillyWP
@WillyWP 2 months ago
I agree. I also know that the amount of input required to create accurate images on par with illustrations, photos, or graphic design, especially if they need to fit within a preexisting brand visual language and visually convey a specific concept, is ridiculous. Try feeding meeting notes and a description of a brand's visual language into an AI generator; you won't get anything that works anytime soon. Give a 250-word creative brief to a qualified professional and you will get something back right away. In this respect, AI is not close to outpacing the human brain.
@Clarkillustrations
@Clarkillustrations 1 month ago
Thank you for covering this! I've been saying this since I saw the first wave of interest in AI art.
@calimon00
@calimon00 2 months ago
I’ve been saying this to friends for about a year now. I’m glad I’ve finally run into an expert identifying and addressing this potential problem.
@covalentbond7933
@covalentbond7933 2 months ago
I hope your intuition leads to wealth and happiness bro, make sure to use it well
@11lvr11
@11lvr11 1 month ago
Same
@Mihi967
@Mihi967 2 months ago
Very interesting findings; they suggest that while initial models create biases, refined models may also create an averaging bias.
@djellisdee
@djellisdee 9 days ago
Some call this phenomenon "dead internet theory": some companies (Barracuda Networks) report that only 36% of modern internet traffic is actual human traffic, with the other 64% being automated (e.g. bots, GenAI, spam). There is only so much original human-created content you can train these huge AI models on.
@brichan1851
@brichan1851 8 days ago
This is something touched on in Halo regarding Cortana, and other “smart” A.I.s. This is known, in the Halo universe, as “rampancy.”
@Mike__G
@Mike__G 2 months ago
This issue has occurred to me for quite a while. I have worked with Big Data extensively and had brief real world experience with AI development. AI’s reuse of AI-generated data seems highly likely to result in a “creativity asymptote.”
@PanduPoluan
@PanduPoluan 2 months ago
The issue is that "creativity" is totally the wrong word for what AI does. An AI is currently a glorified summarisation machine with weighted forecasting ability. It has no capacity to become creative; it can only extrapolate, with zero understanding of what it is extrapolating. AI bros will defend AI tooth and nail to pull in more funding before they bail out. Just like crypto and NFTs, GenAI is the "scam du jour".
@rishyrish6508
@rishyrish6508 2 months ago
It's already happening on YouTube: the same videos reuploaded with different thumbnails, usually with one or two words changed.
@pauljs75
@pauljs75 1 month ago
This is probably a reflection of the filtering used to keep AI presentable to the public. Filtering limits the bandwidth of the input data, and successive runs through a filter narrow the scope even further. It's like running noise through a feedback loop that also has a filter on it: with enough passes, you may end up with a pure sine-wave tone. There's also something about limiting rule sets that acts a lot like quantization in generative sound design with random input values: if the rules quantize down to one note, that one note is all you're going to get, regardless of how random the input values are. Weird analogy, but I swear some very basic AI behavior can be observed in something like generative music; certain rules, like picking scales or rhythm patterns, will make a melody fit a genre just by letting a computer do its thing. Sure, images and language are more complicated, but similar nuances of emergent behavior seem to be there.
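That feedback-loop picture can be simulated directly: fit a distribution to some data, sample a fresh "training set" from the fit, refit, and repeat. Even in this one-dimensional toy (a Gaussian refit each generation; the deliberately tiny sample size exaggerates the effect), the spread shrinks generation after generation, which is the variance-loss half of what the model-collapse papers describe. A sketch using only Python's standard library:

```python
import random
import statistics

def next_generation(data, rng, n_samples):
    """Fit a Gaussian to the data, then 'publish' a new dataset sampled
    from the fit -- the analogue of training on your own output."""
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)  # finite-sample fit underestimates spread on average
    return [rng.gauss(mu, sigma) for _ in range(n_samples)]

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(10)]  # the original "human" data
initial_spread = statistics.pstdev(data)
for _ in range(200):                             # retrain on model output, 200 generations
    data = next_generation(data, rng, 10)
final_spread = statistics.pstdev(data)
print(f"spread: {initial_spread:.3f} -> {final_spread:.3f}")
```

With only 10 points per generation the spread collapses toward zero quickly; with realistic dataset sizes the same drift is slower, but it stays one-way unless fresh outside data is mixed back in.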
@xeroforhire
@xeroforhire 8 days ago
This is why I've never had any real fear of the so-called singularity. It will never get to a point where it can improve on itself, because all things break down over time. Nothing gets better.
@chw1tt
@chw1tt 2 months ago
Exactly! I've been wondering about the potential for this problem. Thanks for pointing it out.
@tobykelsey4459
@tobykelsey4459 2 months ago
One potentially positive side-effect of this "averaging effect" of AI output - if it continues - is that creative people who want to distinguish their output from the common generative stuff will be forced to be more individualistic and idiosyncratic to be distinct and valuable. Of course if generative output is then trained on their later output this becomes an "arms race".
@manutosis598
@manutosis598 2 months ago
Best outcome: we get to laugh at Obama pissing on Mr Beast's skibidi sigma toilet, and it doesn't steal jobs.
@tchaika222
@tchaika222 1 month ago
AI is geared to come up with a solution using the smallest amount of computations possible. It means that ignoring diversity and details is part of its basic make-up. It can't capture interesting quirks and details and spurt them out once in a while, in some outputs but not others. It also means that if it found one way to get to an acceptable solution, it will only try to get there quicker the next time around. If you've experienced getting stuck in a rut with ChatGPT, that's why.
@rightfootlefthand
@rightfootlefthand 20 days ago
It's basically positive feedback: the circuit always rails. Analogous to if you only eat fast food and consume soft drinks - junk in, junk out.
@mhayato3
@mhayato3 2 months ago
This reminds me of something that happened in the bicycle industry: sales were declining, so they "creatively" "invented" the 29" wheel and stopped producing 26".
@petter9078
@petter9078 2 months ago
Sounds like something Apple could do.
@seraph4581
@seraph4581 2 months ago
29" wheels are better though, especially for climbing, due to physics: bigger lever = less effort needed. 29" wheels are actually just wider 700c wheels, which road bikes had been using for decades at that point.
@gedeonducloitre-delavarenn8106
@gedeonducloitre-delavarenn8106 2 months ago
How does the size of the wheel improve efficiency? Why not then go up to 35", 40", or even 50" wheels? Why didn't we stick with penny-farthings?
@p60091
@p60091 29 days ago
@@gedeonducloitre-delavarenn8106 More momentum at lower speed, better for going further, with diminishing returns. Penny-farthings were fixed-gear, difficult to ride, difficult to balance, and easier to break, among other issues.
@fen4554
@fen4554 2 months ago
As an '80s-'90s kid, I just wanted to say your thumbnail looks like artwork for a Game Boy game with that left stripe.
@EyMannMachHin
@EyMannMachHin 2 months ago
For some reason I noticed that right away when Sabine started using this picture format, but was afraid to ask. 🤣
@GANONdork123
@GANONdork123 2 months ago
I thought I was the only one lol
@hherpdderp
@hherpdderp 2 months ago
To go further, companies will have to start paying people to create high-quality data: art, photography, text like stories or reviews, solutions to logic and math problems. Rather than just scraping the web.
@SXZ-dev
@SXZ-dev 1 month ago
In computer science, the consensus, I feel, is that it CAN help you considerably, but students shouldn't be given access to it: they need to learn to work without it so they don't become dependent on it. Second, AI puts much more stress on the code-review process, which now needs to be done more carefully than before. When we do use it, we typically use it like Google, always reviewing what it outputs carefully. I think this will soon become the norm in science generally: it has the potential to aid researchers, but it stresses the peer-review process (which was already problematic), and the wave of students using it is worrisome because they're not really being forced to know their subjects deeply on their own. Scientific topics can also be harder to review. I can easily tell when AI-generated code is invalid, because my tests, compiler, and other tools help me catch its errors; scientists don't have these guardrails and need to rely on their own parsing of the information. So they need to be doubly aware that AI can hallucinate pure nonsense out of the blue.
@johndemeritt3460
@johndemeritt3460 2 months ago
I learned about these problems a LOOOONG time ago! I was raised on the acronym "GIGO"... then again, my father was a computer programmer with Monsanto back in the 1950s and could program computers in MACHINE language. Eventually, he was able to learn the brand-new "high-level" languages of FORTRAN and COBOL. I've since turned my attention to sociology and can see where mutually constructed social realities have crept into computer programs, and those social constructs are especially strong in AIs.
@JustMe-ty2rp
@JustMe-ty2rp 2 months ago
Haven't heard the words 'FORTRAN and COBOL' in a loooong time lol. I spent some time a few decades ago toying with the idea of learning machine-level code (and the 'high level' (LOL) FORTRAN & COBOL) - but thankfully I decided against it. What a waste of time that would have turned out to be XD
@WisdomThumbs
@WisdomThumbs 2 months ago
My interest is piqued. What do you mean by social realities and social constructs? And what examples do you recall of them infiltrating computer programs?
@sh4dow666
@sh4dow666 2 months ago
@@JustMe-ty2rp I don't think it would have been a waste: while hardware-adjacent programming is more niche these days, it's still relevant in some areas (where very high performance is needed), and understanding the general principles is valuable even when using modern languages, as long as performance isn't completely irrelevant.
@GreenPantsAllDay
@GreenPantsAllDay 2 months ago
I'd also like to know more about the social constructs within AI.
@1000orchids
@1000orchids 2 months ago
@@WisdomThumbs I am a social scientist, so let me explain some of these terms, if you are interested. One take on "social reality" refers to a system of values, as well as economic and cultural norms, that an individual shares with their kin and the people they interact with. It forms a particular understanding of the world. According to Pierre Bourdieu, even our taste is cultivated: you learn to appreciate some things, people, and values, and dismiss others. There is always negotiation, of course - you might dislike things that your parents cherish - but you are affected by them, which is why you might decide to take a different path. "Social construct" looks more closely at social values, hierarchies, and structures that cement over time and become naturalised. When you hear people saying "it's in human nature to prey upon other humans!", that is a social construct. It speaks volumes about the social reality of the individual who utters such a statement: they've probably grown up in a dog-eat-dog environment, and their understanding of the world is shaped by that experience. Now, when it comes to AI, there is an issue of bias: if (white) computer scientists train an AI on a sample featuring pictures of predominantly white people, the AI will begin to identify "human" and "beauty" with whiteness, and that's a problem. Strictly speaking, it's not the AI that is racist; the problem lies with the executive decisions of the team that trained it, and with the uniformity of the sample itself.
@Fido-vm9zi
@Fido-vm9zi 2 months ago
I absolutely love reading the comments and knowledge shared by people. It seems like a computer or program doesn't really know the world, or have discernment. Still pretty interesting and useful.
@aktchungrabanio6467
@aktchungrabanio6467 2 months ago
Comments are bullshit though
@Fido-vm9zi
@Fido-vm9zi 2 months ago
@@aktchungrabanio6467 Some.
@julesy6922
@julesy6922 28 days ago
One thing I saw someone bring up in reference to an assured AI model collapse is that these AI models require more and more training data, forever, in perpetuity, and there just isn't ever going to be enough unique data in the world.
@91Vault
@91Vault 1 month ago
Businesses are going to realise they've been sold a faulty product based on hype.
@LaminarRainbow
@LaminarRainbow 2 months ago
I wonder if the generated elephants look so similar because generated images usually match fixed sample sizes (512x512, 1024x1024), which only leaves so much room for varied compositions. I wonder if, with larger models in the future, we might see this change a bit.
@phattjohnson
@phattjohnson 2 months ago
That example was from 2 years ago too. I've been playing around with "AI" art generation lately.. you do get the odd extra finger or third leg (giggle) but that's half the charm of it :P
@Ambienfinity
@Ambienfinity 2 months ago
It's becoming a real pollution issue now, with candidates titivating their CVs and students bolstering their theses with AI generated crap, which is already showing signs of becoming increasingly generic. It will inevitably settle down. Most people are developing a very good nose for AI, and as Sabine's examples show, it's starting to look increasingly like all those annoyingly garish CGI Marvel movies.
@zacheray
@zacheray 8 days ago
There’s a chaos setting in Midjourney. I imagine that modifier will be available in everything. It works well for randomness although the weirdness is a bit much
@rubenmahrla9800
@rubenmahrla9800 1 month ago
I have been using Gemini's free version since its release, and I have noticed a sharp decline in reliability and a massive uptick in refusals to answer questions, i.e. ignoring questions about very public information such as laws etc.
@murex0909
@murex0909 2 months ago
I love listening to your channel; you explain the most complex subjects in a clear, easy, and simple way. Thank you, and keep up the great channel. Love it!
@TheEVEInspiration
@TheEVEInspiration 2 months ago
1:10 Why so surprised? It's well known that in echo chambers all differentiating opinion/perception gets eliminated. And since AI follows the input data it's given, it will converge to a consensus in order to establish its rules. This is also why "learning the rules" works; the randomness is just there to make it less sensitive to small input variations.
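The echo-chamber convergence is easy to model: if everyone repeatedly replaces their opinion with the average of their neighbours' (the classic DeGroot-style setup), all differentiating opinion disappears and the group lands on a single consensus value. A tiny sketch (the opinion numbers are arbitrary):

```python
def average_round(opinions):
    """One round of opinion exchange on a ring: each member moves to the
    average of themselves and their two neighbours."""
    n = len(opinions)
    return [(opinions[i - 1] + opinions[i] + opinions[(i + 1) % n]) / 3
            for i in range(n)]

opinions = [0.0, 0.2, 0.9, 0.4, 1.0]  # initially diverse views
for _ in range(100):
    opinions = average_round(opinions)
print(opinions)  # everyone has converged to (almost exactly) the same value
```

Because the update preserves the group average while shrinking every disagreement, the final consensus here is just the mean of the starting opinions; no new variety can ever appear.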
@numberones9831
@numberones9831 14 days ago
Thank you for telling the truth! Most people seem to be really naïve about it
@chrishoyt7548
@chrishoyt7548 18 days ago
Indeed, thank you. Chris