Lesson 4
Linear Regression | Mean Squared Error (MSE)
Hey! Welcome back, ML enthusiasts. Today we are going to dive deep into two of the most important and most misunderstood topics: mean squared error and linear regression.
What, misunderstood?!
Haha, yes. Linear regression is made tough in other articles and video courses out there, but don't worry, I'll make it simple for you. So excited to learn? That's the spirit, let's go...
As we learned before, a model is something that learns from data, and there are lots of complicated model types out there.
Q. OMG, does this mean we have lots of interesting ways to learn from data?
Ans. The answer is a big yes.
So today we are going to start with a very simple and familiar model, linear regression, which will open the gateway to more sophisticated methods.
It has long been known in real estate that housing prices increase with plot size (price per sq ft).
OK then, let's verify this fact by building a model.
Now, in the picture above you can clearly see three things:
1. x-axis: the input feature, which is the plot size in square feet.
2. y-axis: the output, which is the house price.
3. dots: these denote the training data given to our model.
Q. What does a single dot represent?
Ans. Here each red dot represents a house with its X and Y coordinates,
where X is the house size in sq ft and Y is the house price.
Q. How do we know where to place a dot?
Ans. Umm... that's a good question.
As we learned earlier, in supervised learning we train a model by giving it both input data and output data. So each house has coordinates X (house size in sq ft) and Y (house price), and can be denoted as house(x, y). Put in the values of x and y and you get the position of each dot.
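To make this concrete, here is a tiny Python sketch of the training data as (x, y) pairs. The numbers are made-up illustrative values, not real housing data.

```python
# Toy training set: each house is a point (plot size in sq ft, price in $).
# These numbers are invented for illustration only.
houses = [
    (1000, 200_000),
    (1500, 280_000),
    (2000, 360_000),
    (2500, 450_000),
]

# Each tuple gives the position of one dot on the graph.
for size, price in houses:
    print(f"dot at x={size} sq ft, y=${price}")
```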
Now draw a little line...
This line is now a model that predicts housing prices.
So, as we can clearly notice in the graph, as the plot size increases, the house price also increases.
Now if we give our model a house of, let's say, size X sq ft, the model predicts its price for us 😃😃.
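Here is a minimal sketch of what "fitting a little line" means, using the closed-form least-squares formulas for slope and intercept. The data values are made-up toy numbers, not real prices.

```python
# Fit the line price = m * size + b by ordinary least squares.
# Toy data (invented values for illustration).
xs = [1000, 1500, 2000, 2500]              # plot sizes in sq ft
ys = [200_000, 280_000, 360_000, 450_000]  # house prices in $

x_mean = sum(xs) / len(xs)
y_mean = sum(ys) / len(ys)

# Closed-form least-squares estimates:
#   m = sum((x - x_mean)(y - y_mean)) / sum((x - x_mean)^2)
#   b = y_mean - m * x_mean
m = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
     / sum((x - x_mean) ** 2 for x in xs))
b = y_mean - m * x_mean

def predict(size_sq_ft):
    """Predicted price for a house of the given size."""
    return m * size_sq_ft + b

print(predict(1800))  # prints 330800.0 for this toy data
```

Giving the fitted model a new house size (here 1800 sq ft) returns its predicted price, exactly as described above.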
So the problem is solved, and I am pretty sure you now have a full understanding of how this linear regression model works.
Now let's cover some of the mathematics...
According to our high school algebra, the equation of a line is:
y = mx + b
where y is the house price,
m is the slope,
b is the y-intercept,
x is the house size in sq ft.
But in machine learning it looks like this:
y' = b + w1x1
where y' is the label (the predicted value),
b is the bias (the y-intercept),
w1 is the weight (the slope m, renamed),
x1 is a feature.
For multiple linear regression, we simply add more weighted features:
y' = b + w1x1 + w2x2 + ... + wnxn
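This equation translates directly into code. Below is a small sketch; the weights, bias, and the "bedrooms" feature are hypothetical values chosen just to show the shape of the computation.

```python
# The ML form of the line: y' = b + w1*x1 + w2*x2 + ... + wn*xn
def predict(features, weights, bias):
    """Weighted sum of the features plus the bias term."""
    return bias + sum(w * x for w, x in zip(weights, features))

# One feature (simple linear regression): size in sq ft.
print(predict([1800], [166], 32000))       # y' = b + w1*x1

# Two features (multiple linear regression): size and bedroom count.
# The second weight/feature pair is a made-up example.
print(predict([1800, 3], [166, 5000], 32000))
```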
Q. How do we know if we have a good line?
Ans. Well, for that we need the notion of loss.
Loss is basically a penalty for a bad prediction. The prediction is perfect if the loss is zero,
and greater otherwise. Our aim is to keep the loss as small as we can.
Loss in Linear Regression
In the image above, the points farther from the line have some moderate loss, the points exactly on the line have exactly zero loss, and the points just touching the line have near-zero loss.
Note: loss is always positive or zero.
Q. How might we define loss in linear regression?
Ans. Well, that's something we have to think about in a slightly more mathematical way...
loss = (observed outcome) - (model outcome)
     = observation (y) - predicted value (y')
so, Loss = y - y'
In ML, we square this and call it the L2 loss:
L2 = (y - y')^2
MSE (mean squared error) is the average squared loss over the whole dataset:
MSE = (1/N) * Σ (y - y')^2
MSE is the most commonly used loss in ML.
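The MSE formula above is a one-liner in code. Here is a sketch with invented labels and predictions; note the last point sits exactly on the line, so its individual loss is zero.

```python
# Mean squared error: average of the squared losses (y - y')^2
# over the whole dataset.
def mse(y_true, y_pred):
    return sum((y - yp) ** 2 for y, yp in zip(y_true, y_pred)) / len(y_true)

# Made-up observations and model predictions for illustration.
y_true = [200_000, 280_000, 360_000]
y_pred = [210_000, 275_000, 360_000]  # last prediction is perfect: loss 0

print(mse(y_true, y_pred))
```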
Q. Which of these loss functions is used for linear regression?
Ans. The last one, MSE.
Q. Can you give us a linear regression PDF?
Ans. Yeah, sure, drop your emails.
Hope you guys understood every concept; if not, do mention it in the comments below 👇👇
Thanks for visiting Asaanhai / lucky5522 😀😀