10.3: Neural Networks: Perceptron Part 2 - The Nature of Code

150,104 views

The Coding Train

1 day ago

This is a follow-up to my Perceptron Video ( • 10.2: Neural Networks:... )
This video is part of Chapter 10 of The Nature of Code (natureofcode.com/book/chapter-...)
This video is also part of session 4 of my Spring 2017 ITP "Intelligence and Learning" course (github.com/shiffman/NOC-S17-2...)
Source Code from my first Perceptron Coding Challenge: github.com/CodingTrain/Rainbo...
Simple Perceptron code examples:
p5.js: github.com/shiffman/The-Natur...
Processing: github.com/shiffman/The-Natur...
Support this channel on Patreon: / codingtrain
To buy Coding Train merchandise: www.designbyhumans.com/shop/c...
To donate to the Processing Foundation: processingfoundation.org/
Send me your questions and coding challenges!: github.com/CodingTrain/Rainbo...
Contact:
Twitter: / shiffman
The Coding Train website: thecodingtrain.com/
Links discussed in this video:
My video on the map() function: • 2.5: The map() Functio...
My video explaining object overloading: • 8.5: More on Objects -...
My Perceptron Coding Challenge: • 10.2: Neural Networks:...
Session 4 of Intelligence and Learning: github.com/shiffman/NOC-S17-2...
Perceptron on Wikipedia: en.wikipedia.org/wiki/Perceptron
Source Code for all the Video Lessons: github.com/CodingTrain/Rainbo...
p5.js: p5js.org/
Processing: processing.org
For More Coding Challenges: • Coding Challenges
For More Intelligence and Learning: • Intelligence and Learning
Help us caption & translate this video!
amara.org/v/7wh0/
📄 Code of Conduct: github.com/CodingTrain/Code-o...

COMMENTS: 159
@adario7 · 6 years ago
_"Life is just one big refactoring"_ ~Daniel Shiffman, 2017
@rickmonarch4552 · 4 years ago
x'D Yep
@numero7mojeangering · 6 years ago
The math of the map function is: function map(value, minA, maxA, minB, maxB) { return (1 - ((value - minA) / (maxA - minA))) * minB + ((value - minA) / (maxA - minA)) * maxB; }
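For anyone who wants to run the formula above, here is a cleaned-up sketch in plain JavaScript (separate from the p5.js source; the example values are my own) that factors out the normalized position t:

```javascript
// The commenter's formula is a linear interpolation (lerp) between
// minB and maxB, where t is the input's normalized position in
// [minA, maxA]. Factoring t out makes that structure visible.
function map(value, minA, maxA, minB, maxB) {
  const t = (value - minA) / (maxA - minA);
  return (1 - t) * minB + t * maxB;
}

// Example: remapping a pixel coordinate in [0, 400] to [-1, 1],
// as done for the perceptron inputs in the video.
console.log(map(0, 0, 400, -1, 1));   // -1
console.log(map(200, 0, 400, -1, 1)); // 0
console.log(map(400, 0, 400, -1, 1)); // 1
```

This matches the unconstrained behavior of p5.js's map(): values outside [minA, maxA] extrapolate past [minB, maxB] rather than being clamped.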
@sanchitverma2892 · 5 years ago
no one cares
@somedudeskilivinghislife3739 · 5 years ago
I care.
@sanchitverma2892 · 5 years ago
@@somedudeskilivinghislife3739 oof
@somedudeskilivinghislife3739 · 5 years ago
@@sanchitverma2892 but no lie, Numero7 Mojeagering is a nerd.
@xrayer4412 · 4 years ago
thank you for taking your time
@anteconfig5391 · 6 years ago
Now I truly understand the need for the bias. Thank You.
@MrSleightofhand · 1 year ago
I know this is an older video, and I think you had something like this on the whiteboard at one point, but I'm not sure it was fully explained how the weights/inputs and lines are related. So if anyone is confused, hopefully this helps.

You can take the equation for a line you probably learned in school, y = mx + b, and rearrange it into this form: 0 = mx - y + b. Here m is the slope of the line, which we can say is m = rise/run. So: 0 = (rise/run)x - y + b. Then if we multiply everything by run we get 0 = run(rise/run)x - y(run) + b(run), which simplifies to 0 = x(rise) - y(run) + b(run).

But rise, -run and b(run) are all just arbitrary numbers, so call them p, q and r. Then the general equation for a line is 0 = px + qy + r(1). Obviously the multiplication r(1) could just be r, but it shows how everything is related: the inputs are x, y and 1, with the coefficients p, q and r being the weights. So a simple perceptron models a line because it's essentially a function which computes the points on a line. More specifically, the points (x, y) where the right-hand side equals zero are on the line; points where the value is negative are on one side of the line, and points where the value is positive are on the other.

(Apologies if I'm being overly pedantic here. I think you did a great job explaining potentially confusing topics in an easy to understand way, as you do in general on all your videos. This just struck me as one spot where there might be confusion, and I love this kind of thing so I can't help myself.)
@hugomocho8745 · 5 years ago
I wish I could have a teacher just like you. Just thank you so much, learning never seemed so fun :)
@josedomingocajinaramirez5086 · 5 years ago
Thanks man! I'm a student of physical engineering in México, and I'm learning a lot with your videos! You're great! Thanks a lot!
@halomary4693 · 3 years ago
AWESOME LESSON - THANK you so much for all the painstaking effort to make the videos.
@MrRobbyvent · 3 years ago
it's very enlightening - it's all about abstraction and you can train it to do anything!
@jonathanmartincivriancamac9950 · 2 years ago
after so many tries, thanks to you and 3blue1brown, now I have done my first perceptron, thank you! :D
@robinranabhat3125 · 6 years ago
I only know basic Python, yet I understood your videos. YOU ARE THE REAL MAN
@FederationStarShip · 4 years ago
Around 19:50 you start coding it to draw the current version of the line. That's quite a nice way to do it, by making it guess at two distinct points. I spent a while doing it algebraically from the weights alone. I never thought of using the predict/guess functionality here!
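For anyone curious about the algebraic route this comment mentions, here is a small sketch (plain JavaScript; the function name is my own, not from the video) that recovers the slope and intercept of the decision boundary from the weights, assuming the weight order [w0, w1, w2] for inputs [x, y, bias]:

```javascript
// The boundary is where w0*x + w1*y + w2*bias = 0.
// Solving for y (valid when w1 != 0, i.e. the line is not vertical):
//   y = (-w0 / w1) * x - (w2 * bias) / w1
function lineFromWeights(w0, w1, w2, bias = 1) {
  return {
    slope: -w0 / w1,
    intercept: -(w2 * bias) / w1,
  };
}

// Weights (3, -1, 2) encode 3x - y + 2 = 0, i.e. y = 3x + 2.
const line = lineFromWeights(3, -1, 2);
console.log(line.slope, line.intercept); // 3 2
```

Drawing the line by calling guess at two x values, as the video does, neatly sidesteps the vertical-line case where w1 is 0.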
@lucaxtal · 6 years ago
Loving your channel!! Great job!!! Processing is really cool for prototyping.
@TheCodingTrain · 6 years ago
Thank you!
@aleidalimacias9841 · 2 years ago
Hello! I'm from Mexico, your videos are great. I just subscribed and I'm amazed how you make it easy to learn all these concepts. You're doing a really good job and you are helping a lot of people!
@TheCodingTrain · 2 years ago
Thank you!
@tanmayagarwal8513 · 3 years ago
Thank You SOOO much!! I made a perceptron of the same kind which has an accuracy score of 1.0. OMG!! I can't imagine!! I made a perceptron! Thank you sooooo much!!
@sky96line · 6 years ago
best video in series.. kudos.
@ronaldluo475 · 1 year ago
Watching this today; this information is timeless.
@realcygnus · 6 years ago
superb content.....as per usual
@ac2italy · 5 years ago
Linear regression: you explained gradient without mentioning it! Great.
@algeria7527 · 6 years ago
Really, good job, well done, keep up doing the good stuff.
@coolakin · 6 years ago
you're such a delicately beautiful whiteboard scribe. love it
@user-eh4nz9rl9g · 5 years ago
Love your videos, you are awesome!!
@jackball9081 · 4 years ago
YOU ARE JUST WONDERFUL
@EliasBurgos93 · 6 years ago
Watching this video I did not understand much, but reading it from the book is clearer. It usually happens the other way around, I understand the videos better than the book, but in this case it is easier to read it written than to watch it in video.
@loubion · 6 years ago
Thank you so much, ML is finally understandable for me, even if it's not explained in my native language. Really, infinite thanks
@znb5873 · 3 years ago
This is the hugest whiteboard I've seen in my life!
@kamilbolka · 6 years ago
Great Video!!! again...
@raonioliveira8758 · 4 years ago
I am probably a bit late to this, and correct me if I am wrong, but it didn't work because of c, not the bias. It worked anyway because the way to solve it is the same. But when you have a line like ax + by + c, you have to account for the c when you train the perceptron (adding a bias worked as if you were adding a c). I hope I was able to explain it.
@lorenzopazzification · 6 years ago
Can you make a function that changes the learning rate over time on its own, without using any user input (sliders and so on)?
@hfe1833 · 5 years ago
I hope you will make another book for this
@cameronnichols9905 · 6 years ago
I was trying to think about a way to have machine learning with tic-tac-toe. Maybe you could do something on this? I was thinking having different weights for every possible placement of the X or O, depending on what is currently on the board.
@sonnymarinho · 6 years ago
Guy... Thanks for your video! You're awesome! =]
@zlotnleo · 6 years ago
Since you do training in draw(), it overtrains on the input data, and any unseen data is unlikely to be classified correctly in the general case. In this case it works because the line's equation is the same as the calculation in the perceptron. Also, splitting the dataset would allow you to estimate the accuracy and hence analyse whether any changes you make are statistically significant.

On an unrelated note, may introducing higher powers of the inputs into the equation produce useful results? It's clear it would improve classification of points to either side of a parabola, but what would be the best way to generalise it to work with an arbitrary curve?
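The split this comment proposes could look something like the sketch below (the names are my own; it assumes a perceptron object with a guess(inputs) method like the one in the video, and points carrying x, y and a label of 1 or -1):

```javascript
// Shuffle the points, keep a fraction for training, and hold the
// rest out so accuracy is measured on data the perceptron never saw.
function trainTestSplit(points, trainFraction = 0.8) {
  const shuffled = [...points].sort(() => Math.random() - 0.5);
  const cut = Math.floor(shuffled.length * trainFraction);
  return { train: shuffled.slice(0, cut), test: shuffled.slice(cut) };
}

// Fraction of held-out points the perceptron labels correctly.
function accuracy(perceptron, testSet) {
  let correct = 0;
  for (const p of testSet) {
    if (perceptron.guess([p.x, p.y, 1]) === p.label) correct++;
  }
  return correct / testSet.length;
}
```

You would train only on the train set inside draw() and report accuracy(perceptron, test); a gap between training and test accuracy is the overfitting the comment is warning about.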
@nbgarrett88 · 5 years ago
I freaking love the Rogue NASA shirt... #Resist
@calebprenger3928 · 5 years ago
Love your videos. Better than funfunfunction. That's saying a lot.
@torny6650 · 6 years ago
The Coding Train, could you do some basic example of unsupervised learning?
@snackbob100 · 4 years ago
QUESTION: You have a point in a data set of 10 points: point1 = [x, y]. For point 1, the error is calculated and the weights are updated. For point 2, does the algorithm take the previously updated weights and then update those, with this process happening for every point in the data set?

If this is the case, surely the order of the data points matters for the final result? For example, the weights are first adjusted for point 1, and then adjusted again for point 2. Could this mean that the adjustment for point 1 is now redundant, as point 2 has nudged the weights out of favour for point 1 and into the favour of point 2? E.g.: point 1 = incorrect classification; weights adjusted due to error on point 1; point 1 = correct classification; point 2 takes the updated weights; point 2 is incorrect; weights update; point 2 is correct, point 1 is incorrect.
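The update the question describes is indeed sequential; here is a minimal sketch (my own naming, not the video's code) of one training pass, where each point updates the same weight vector in turn, so later points see the adjustments made for earlier ones:

```javascript
// Activation: the video's sign function (treating 0 as positive).
function sign(n) {
  return n >= 0 ? 1 : -1;
}

// One online-training epoch: each point nudges the shared weights.
function trainEpoch(weights, points, lr = 0.1) {
  for (const p of points) {          // order matters within an epoch
    const inputs = [p.x, p.y, 1];    // third input is the bias
    const sum = inputs.reduce((s, v, i) => s + v * weights[i], 0);
    const error = p.label - sign(sum);
    for (let i = 0; i < weights.length; i++) {
      weights[i] += error * inputs[i] * lr;   // in-place update
    }
  }
  return weights;
}
```

Within one pass the order does matter, as the question suspects, and an update for point 2 can temporarily break point 1. But for linearly separable data, repeating epochs still converges to some separating line regardless of order (the perceptron convergence theorem); which particular line you end up with can differ.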
@loic.bertrand · 4 years ago
There's a dead link in the description for "Source Code from my first Perceptron Coding Challenge:" ^^
@RahulSharma-oc2qd · 2 years ago
At 15:49, we could get 1 as output too, if we chose a threshold function with a negative value. In that condition zero would be greater than the threshold and it would fire an output of 1. Am I missing something here?
@epicmonckey25001 · 6 years ago
Hey Dan, I had a thought about your line function, will it still work if you input the formula for a parabola? Keep up the good work, -Alex
@MoDMusse · 6 years ago
Nope, doesn't work, but don't know why
@TheCodingTrain · 6 years ago
Will discuss more next stream!
@orchisamadas2222 · 6 years ago
The update equations for the weights will change if your function is a parabola. Taking the derivative with respect to m will now give you x^2, so maybe changing the update to error*(input^2) will work.
@ramseshendriks2445 · 6 years ago
well a line is a line and not a graph
@mrrubixcubeman · 6 years ago
Shouldn't an input of (0,0) get output as 1 because of the activation function? I thought that after summing everything you saw if it was above or below 0 and then gave it a value of 1 or -1.
@Mezklador · 6 years ago
Hey Mr. Shiffman! Do you think - at the end of this video - that the gap between the 2 lines represents the error value between the training set and the formula?
@NathanK97 · 6 years ago
no the perceptron just found a function that satisfied the condition.... with more points closer to the line it would be a lot more accurate
@Mezklador · 6 years ago
Yeah, thank you, but I've understood that: as the second line gets close to the "primary" line, the perceptron is getting more accurate. Right. But at the end, the space between those 2 lines, as it seems at the end of this video, could be a set of data that represents the error margin between the perceptron and the dataset, couldn't it? I'm asking because in machine learning there are also concepts of accuracy, confidence and error rate, used to fine-tune algorithms...
@williamsokol0 · 3 years ago
Hmm, is it possible to make the learning rate different per weight? It seems like the bias grows much more slowly than the others naturally.
@magneticking4339 · 3 years ago
20:20 What if the dividing line is vertical?
@PaladinPure · 6 years ago
I have a question, do you do any ActionScript tutorials?
@filipanjou2296 · 6 years ago
You didn't have to scale down the m value of the line function. Dividing 3 by 10 doesn't "scale it down" but totally changes the slope of the function. (Also, thanks for another great video!)
@TheCodingTrain · 6 years ago
Thanks for this important clarification!
@gufi7000 · 6 years ago
Dear Senpai Dan/Shiffman/Daniel/TheCrazyCoderFromP5/TheCodingTrain, I really like your videos! I attend the HTL-Braunau (Higher Technical School - Braunau) to learn coding. You are one reason why I want to learn the fascinating world of coding. Your videos are very funny but informative... You do your things with love and this is why I like your style! And one day I want to visit wherever you are and meet you to talk about coding things and your crazy but good ideas. I hope you will read this one day and say: "WoW... I changed someone's life." Kind regards, David F. P.S.: Sorry for my bad English (I'm a 15-year-old Austrian boy)
@Kino-Imsureq · 6 years ago
;) u did gud
@S4N0I1 · 6 years ago
gufi7000 Hey David, greetings from Simbach 😀
@gufi7000 · 6 years ago
S4N0I1 Hi 🙃
@annac887 · 6 years ago
This model can be used for data result proximity prediction by using more complex mathematics to create algorithms that have very low incorrect information feedback. Thanks for the video.
@snackbob100 · 4 years ago
Also, is this an example of gradient descent?
@PaulGoux · 3 years ago
Not sure if you are going to read this, but the simple perceptron repo is missing.
@ZIT116rus · 6 years ago
Can't figure something out. Why should the formula (w0*x + w1*y + w2*b) equal zero?
@Kino-Imsureq · 6 years ago
btw why not use 1 instead of bias?
@XKCDism · 6 years ago
Are you going to cover genetic algorithms combined with neural networks?
@TheCodingTrain · 6 years ago
Yup!
@XKCDism · 6 years ago
Awesome
@MrGenbu · 4 years ago
Why the mapping between -1 and 1 and then multiplying again by the width and height? I don't get why he didn't generate them as in the last video.
@dominiksmeda7203 · 3 years ago
In my case I had to multiply the learning rate for the bias by 100 to make it work quickly. Does someone know why?
@geoffwagner4935 · 7 months ago
this must be how a robot knows when he's really crossed the line now
@carsonholloway · 4 years ago
21:24 - Can somebody explain to me why it's equal to zero?
@MrGenbu · 4 years ago
In the perceptron drawing you can see the inputs get multiplied by weights and summed together; then you compare that to a threshold (the "activation function"), which makes it an inequality: wx + wy + wb > 0. When you draw it you can just make it an equality; it does not matter.
@isaacmuscat5082 · 3 years ago
Sort of late, but I had trouble with this too. guessY() is supposed to return the y position of the classifier (the line of the perceptron). Since the range of the activation function is between -1 and 1, the absolute center, the divider between labeling a point green or red, is where the activation function (sign in this case) outputs 0. Therefore the perceptron's decision boundary (the line of the perceptron) is the line on which the perceptron's prediction is 0, the specific value at which a point is neither green nor red (although we label the point as green if the activation function outputs a value >= 0). Hope that helps anyone coming here late.
@pradeeshbm5558 · 5 years ago
Can you please make a video explaining the Newton-Raphson method of optimization?
@TheCodingTrain · 5 years ago
Please suggest here! github.com/CodingTrain/Rainbow-Topics/issues
@julian.2031 · 6 years ago
Maybe you could code a "Revelation 12 Sign" searcher? Would be nice.
@kamilbolka · 6 years ago
I have a question: how do you get the display density in Processing so all my shapes stay the same size when I change the window resolution?
@agfd5659 · 6 years ago
Why don't you take a look at the Processing reference page: processing.org/reference/
@TonyUnderscore · 5 years ago
I would like to ask some questions which you didn't cover in your video. This program you made is meant to work with randomly generated inputs and "learn" from these, because you also give it the correct answer for each input. This process, however, is repeated every time, and because of that the machine has to "learn" everything from scratch every time.

Is it possible to train it in a way that it saves its data, so that if you decide to input numerous specific values it will already know which is right and which is wrong? Basically, I want to know if there is a way for the neural network to actually teach itself and then keep the "knowledge" it has obtained, instead of making more accurate guesses over and over again until you restart it. If anyone replies, keep in mind that I am extremely new to this, so try explaining everything as much as possible.
@DannyGriff97 · 5 years ago
Isn't this basically the same concept as a discriminant function? Similar to saving the "weights" as a discriminant function.
@lil_zcrazyg1917 · 4 years ago
@DannyGriff97 Oh my! I'm great at discrimination, do you think I could be of use here?
@DannyGriff97 · 4 years ago
Lil_ZcrazyG not that kind of discrimination here ;)
@macsenwyn5004 · 3 years ago
float f(X) says unexpected token x
@julianabhari7760 · 6 years ago
Why does the formula that the neuron is trying to learn have to be equal to zero? The formula you wrote down was "w0(x) + w1(y) + w2(b) = 0" My question is why is it equal to 0?
@blasttrash · 6 years ago
I think it doesn't matter whether it's equal to zero or some other number. Hope someone can correct that for me.

ax + by + c = 0 can also be represented as ax + by + c = d, as you suggested. But if you take d to the LHS it becomes ax + by + (c - d) = 0. One could argue that (c - d) in and of itself is another constant, so we could call (c - d) k, and the equation becomes ax + by + k = 0, which has the same form as ax + by + c = 0.

The value of the constant (or the bias, which we usually give as 1) doesn't really matter, as it is only there to deal with the (0,0) case he explained in the last video. Let's take an example. Say the desired equation is x + y + 1 = 0. Now say that for our algorithm we fed inputs as (0,0,2) instead of (0,0,1), meaning we changed the bias to 2 instead of 1 (because we are crazy :P). The learning starts and we end up with something like 2x + 2y + 2 = 0 (assuming that learning gives us the exact line, implying there is a lot of data, so that we don't end up with some other line that ALSO classifies our data). And 2x + 2y + 2 = 0 is the same as x + y + 1 = 0.

So the bias can be any number other than zero (why? because of the last video), and the bias value will not affect whether we get the final line. The bias affects the other weights, however. With a 0.5 bias in the previous example we could end up with a line 0.5x + 0.5y + 0.5 = 0 or 0.25x + 0.25y + 0.25 = 0, which are all the same as x + y + 1 = 0. So what I am trying to say is that the bias can be anything other than 0, so equating ax + by + c = 0 is pretty much the same as ax + by + c = d (any arbitrary d). Hope I am right and hope I helped. :D :P
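The scaling point in this comment is easy to verify; here is a tiny sketch (plain JavaScript, my own names) showing that weights (1, 1, 1) and (2, 2, 2) classify every test point identically, because only the sign of the weighted sum matters:

```javascript
// The classification depends only on the sign of w0*x + w1*y + w2*b,
// so scaling all weights by the same positive factor never changes it.
const sign = (n) => (n >= 0 ? 1 : -1);
const classify = (w, x, y, b = 1) => sign(w[0] * x + w[1] * y + w[2] * b);

const points = [[3, -5], [-2, 0.5], [10, -11.1], [0, 0]];
for (const [x, y] of points) {
  console.log(classify([1, 1, 1], x, y) === classify([2, 2, 2], x, y)); // true
}
```

Scaling by a negative factor would flip every sign, which is why the argument only covers positive rescalings of the same line.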
@xianfenghor6635 · 6 years ago
I also keep thinking about this question. Can anyone kindly explain this?
@zendoclone1 · 6 years ago
The reason is "because math". With the equation w0(x) + w1(y) + w2(b) = 0 we can rearrange to w0(x) + w2(b) = -w1(y), which then becomes y = -w0(x)/w1 - w2(b)/w1.
@TheONLYFranzl · 6 years ago
The function x*w0 + y*w1 + bias has an output which is either >= 0 or < 0: set1 contains all the points leading to an output >= 0, set2 contains all the points leading to an output < 0.
@gunjanbasak8431 · 5 years ago
"w0(x) + w1(y) + w2(b) = 0" -> This is the equation of a line. You can write it in this way -> "ax + by + c = 0" or "y = mx + c". The actual equation for the straight line in this example is "w0(x1) + w1(x2) + w2(b) = 0". Here 'y' is the output of the neural network. 'x1', 'x2' and 'b' are inputs of the neural network. And 'w0', 'w1' and 'w2' are the weights for the inputs. You may be confused by the 'y' notation, because he used 'y' to denote different things in different diagrams. In the equation of the straight line he used it for the Y-coordinate, and in the perceptron he used it for the output of the perceptron. Hopefully that makes sense.
@FredoCorleone · 1 year ago
How does he arrive at the conclusion that the sum w0•x + w1•y + w2•b must be zero?
@pow3rstrik3 · 6 years ago
If you are going to refactor, please change the x_ and y_ to x and y and just use this.x = x and this.y = y. (Referring to the constructor of Point.)
@TheCodingTrain · 6 years ago
Thanks for this feedback!
@bennet615 · 1 year ago
I wouldn't lie: even though I really grasped the concept, the coding part was hard to follow in this and the previous video in the series, respectively.
@marcusbluestone2822 · 4 years ago
Why does w0x + w1y + w2b = 0? It's not working for my code
@zaynbaig3157 · 6 years ago
I am making a video game, should I use p5.js or Processing? P.S. you are awesome man!
@zaynbaig3157 · 6 years ago
Fulgentius Willy Thanks! I will take that into consideration.
@zunairahmed9925 · 6 years ago
Which programming language do you use? And any suggestions for learning it?
@marufhasan9365 · 6 years ago
He is using a language called Processing, which is built on Java. I haven't learned this language yet so I can't give you any advice, but if you only want to learn Processing just for this series then I think it is not necessary. If you know Java you should be able to follow this tutorial; learning Java would be the more practical choice in that case, if you don't know it already. But if you find Processing cool then go right ahead and fulfill your curiosity.
@TheFireBrozTFB · 6 years ago
Make y = radical(x)
@patrickhendron6002 · 2 months ago
Perceptr-AI-n 🙂
@blackfox848 · 5 years ago
Imagine me taking 1 whole day to convert this into the Java programming language :) I even learned the Processing language while doing it (WOW! I am proud of myself)
@jeffvenancius · 1 year ago
It's interesting how it looks like that mutation algorithm.
@morethanyell · 6 years ago
24:36 CAPTCHA of Daniel Shiffman
@FredoCorleone · 1 year ago
Also the rise-over-run analogy doesn't make sense, because he ends up with x•w0/w1, and that's run over rise...
@charbelsarkis3567 · 6 years ago
Can the line be a curve?
@MattRose30000 · 5 years ago
Charbel Sarkis a single perceptron can only solve linear separation, so no. Dan explains this in the next video. Try changing f(x) from 2*x + 1 to x*x + 1 and you will see that it doesn't find a solution.
@calebprenger3928 · 4 years ago
I think your perceptron code link is broken. :(
@calebprenger3928 · 5 years ago
What really should have been done on this lesson is the training data should differ from the data for guessing.
@TheCodingTrain · 5 years ago
Great point!
@calebprenger3928 · 5 years ago
I think i may have meant to comment on the first video. Sorry :(
@marionnebuhr4598 · 5 years ago
Why isn't university like this?
@coffeecatrailway · 6 years ago
float x, y; Point(float x, float y) { this.x = x; this.y = y; }
@MarosZilka · 6 years ago
x_, y_ was so ugly to me that I started looking for a comment like this...
@asharkhan6714 · 6 years ago
Hello, I'm in 9th grade and I'm having some problems in learning calculus. So, can you recommend me some resources where I can learn calculus easily?
@TheCodingTrain · 6 years ago
Hello! Love hearing from high school viewers! I would recommend 3Blue1Brown's calculus series and also maybe Khan academy videos?
@asharkhan6714 · 6 years ago
The Coding Train Thank you, I checked out 3blue1brown's essence of calculus series and it's amazing.
@PrasadMadhale · 6 years ago
I tried out this perceptron example in JavaScript using p5.js and it worked properly. But I was not able to visualize the line which shows the algorithm's current guess. If anyone has completed this tutorial in p5.js, would you be willing to share the code?
@TheCodingTrain · 6 years ago
Take a look here: github.com/shiffman/The-Nature-of-Code-Examples-p5.js/tree/master/chp10_nn
@PrasadMadhale · 6 years ago
That helped. Thanks a lot!
@ConstantineTvalashvili · 6 years ago
25:02 \m/
@Chevifier · 2 years ago
That moment when the AI guesses the line correctly but the line you make is wrong. (Can't figure out where I wrote something wrong) 😂
@Chevifier · 2 years ago
Fixed: in the Point I was checking if x > lineY instead of y > lineY lol
@psinha6502 · 5 years ago
Can you make videos of such coding in Python?!
@joraforever9899 · 6 years ago
I don't think that a line is a good representation of an equation; what if the equation contained the square of x or the root of x? The line would represent only the end points of the equation.
@MadSandman · 6 years ago
eQuation
@renanemilio1943 · 6 years ago
geometry dash coding challenge!!! plis
@lil_schub · 6 years ago
It would be really cool if u could do this series with java :D
@lorca3367 · 6 years ago
cure 44 Processing is built on Java and the code is basically Java
@lil_schub · 6 years ago
no, java and javascript are 2 totally different languages
@lorca3367 · 6 years ago
I'm confused, is this Java?
@lil_schub · 6 years ago
no, its javascript
@lorca3367 · 6 years ago
nah im pretty sure its java
@hjjol9361 · 6 years ago
You again? Why do I watch your videos every day??? I don't know.
@Nixomia · 6 years ago
Brick Breaker Game Coding Challenge
@monkeysaregreat · 6 years ago
I coded a version of this in Python using matplotlib (github.com/ynfle/perceptron#perceptron). Can you take a look? It seems unable to get close to the actual line, and it seems to have a consistent change in weight.
@monkeysaregreat · 6 years ago
It works when y = x, but not other numbers
@monkeysaregreat · 6 years ago
It was just a bug regarding the mapping of the points
@casanpora · 3 years ago
You don't know how much I appreciate this, thanks!!!
@ilie1697 · 6 years ago
does anyone have the source code for this...I tried doing it on my own -> messed up -> tried fixing -> gave up -> cried -> and now begging for the source code
@shaunaksen6076 · 5 years ago
Here you go: github.com/ShaunakSen/Data-Science-Updated/tree/master/Math%20of%20Intelligence/The%20Coding%20Train/Simple%20Perceptron/CC_SimplePerceptron2
@cassandradawn780 · 3 years ago
press 4 if you're on computer (not in comments, just press 4)
@howzeman · 6 years ago
best minute ukposts.info/have/v-deo/fHepfZl7oYarwpc.htmlm45s
@pedrovelazquez138 · 4 years ago
So this is the line trying to learn... boy, it's really not doing a very good job. 😂😂😂😂😂😂
@xzencombo3400 · 6 years ago
Will you make something creative and stop this machine learning xD?
@Tiara48z · 6 years ago
xZen Combo will you go do that on your channel?