Linear Regression Analysis
Introduction to Linear Regression Analysis
Linear regression is a widely used supervised learning algorithm, and its main advantage is its simplicity of implementation. It is the natural choice when we need to predict numerical values from historical data.
Suppose we have 20 years of population data and want to predict the population for the next 5 years, or we have product purchase data and want to find the best selling price by varying product-related features; linear regression is the right choice for tackling these kinds of problems. There are, of course, other regression algorithms for predicting numerical values, such as polynomial regression, stepwise regression, lasso regression, and ElasticNet regression.
Even so, linear regression is the most commonly used method for this kind of problem, because it needs less computational power than other regression methods and it is a good way to find the relationship between attributes, for example, whether an increase in temperature leads to an increase in cold-drink sales.
Linear regression means predicting the scores of one variable from the scores of a second variable. The variable we are predicting is called the criterion variable and is referred to as Y. The variable we base our predictions on is called the predictor variable and is referred to as X. When there is only one predictor variable, the prediction method is called simple regression. The aim of linear regression is to find the best-fitting straight line through the points; this best-fitting line is called the regression line.
hθ(x) = θ0 + θ1x

The above equation is the hypothesis equation,
where:
hθ(x) is the value Y (which we are going to predict) for a particular x (that is, Y is a linear function of x)
θ0 is a constant
θ1 is the regression coefficient
X is the value of the independent variable
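As a minimal sketch in Python (the function name and the example θ values here are illustrative, not from the original post), the hypothesis is just a line:

```python
def hypothesis(theta0, theta1, x):
    """Simple linear regression hypothesis: h_theta(x) = theta0 + theta1 * x."""
    return theta0 + theta1 * x

# Illustrative values: intercept 2.0, slope 0.5
print(hypothesis(2.0, 0.5, 10))  # prints 7.0
```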
Properties of the Linear Regression Line
The linear regression line has the following properties:
- The line minimizes the sum of squared differences between observed values (the y values) and predicted values (the hθ(x) values computed from the regression equation).
- The regression line passes through the point defined by the mean of the X values and the mean of the Y values.
- The regression constant (θ0) is equal to the y-intercept of the regression line.
- The regression coefficient (θ1) is the average change in the dependent variable (Y) for a 1-unit change in the independent variable (X). It is the slope of the regression line.
The least squares regression line is the only straight line that has all of these properties.
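For simple regression, these θ values have a closed-form solution: θ1 = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(xᵢ − x̄)² and θ0 = ȳ − θ1·x̄. Here is a minimal sketch (the function and variable names are mine) that computes them and checks the point-of-means property:

```python
def least_squares_fit(xs, ys):
    """Closed-form simple linear regression: returns (theta0, theta1)."""
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    # Slope: covariance of x and y divided by the variance of x
    theta1 = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) \
             / sum((x - x_mean) ** 2 for x in xs)
    # Intercept: forces the line through the point of means
    theta0 = y_mean - theta1 * x_mean
    return theta0, theta1

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 5]
theta0, theta1 = least_squares_fit(xs, ys)
# The fitted line passes through (x_mean, y_mean) = (3.0, 4.0)
assert abs((theta0 + theta1 * 3.0) - 4.0) < 1e-9
```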
Goal of Hypothesis Function
The goal of the hypothesis is to choose θ0 and θ1 so that hθ(x) is close to Y for our training data. While choosing θ0 and θ1, we have to consider the cost function J(θ) and pick the values that give a low value of J(θ).
The below function is called the cost function; the cost function J(θ) is just a squared error function:

J(θ0, θ1) = (1 / 2m) Σ ( hθ(x⁽ⁱ⁾) − y⁽ⁱ⁾ )², summed over i = 1 … m

where m is the number of training examples.
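As a minimal sketch (the function name `cost` is mine), the cost function translates directly into Python:

```python
def cost(theta0, theta1, xs, ys):
    """Squared error cost: J(theta) = (1 / 2m) * sum((h(x_i) - y_i)^2)."""
    m = len(xs)
    return sum((theta0 + theta1 * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)
```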
Let’s Understand Linear Regression with Example
Before working through the example, let me summarize what we have learned so far.
Suppose we have data that looks like this:
| No. | Year | Population |
|-----|------|------------|
| 1 | 2000 | 1,014,004,000 |
| 2 | 2001 | 1,029,991,000 |
| 3 | 2002 | 1,045,845,000 |
| 4 | 2003 | 1,049,700,000 |
| 5 | 2004 | 1,065,071,000 |
| 6 | 2005 | 1,080,264,000 |
| 7 | 2006 | 1,095,352,000 |
| 8 | 2007 | 1,129,866,000 |
| 9 | 2008 | 1,147,996,000 |
| 10 | 2009 | 1,166,079,000 |
| 11 | 2010 | 1,173,108,000 |
| 12 | 2011 | 1,189,173,000 |
| 13 | 2012 | 1,205,074,000 |
Now our task is to answer the questions below:
| No. | Year | Population |
|-----|------|------------|
| 1 | 2014 | ? |
| 2 | ? | 2,205,074,000 |
Let me draw a graph for our data
Python code for the graph
# Required Packages
import plotly.plotly as py
from plotly.graph_objs import *
from datetime import datetime

# Sign in to the plotly service with your own credentials
py.sign_in("username", "API_authentication_code")

# One datetime per year, 2000 through 2012
x = [datetime(year=y, month=1, day=1) for y in range(2000, 2013)]

data = Data([
    Scatter(
        x=x,
        y=[1014004000, 1029991000, 1045845000, 1049700000, 1065071000,
           1080264000, 1095352000, 1129866000, 1147996000, 1166079000,
           1173108000, 1189173000, 1205074000])
])

plot_url = py.plot(data, filename='DataAspirant')
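Note that this snippet uses the older `plotly.plotly` API, which requires a plotly account and API key. If you just want a quick local plot without signing in, a matplotlib sketch (my substitution, not part of the original post) does the same job:

```python
import matplotlib.pyplot as plt

years = list(range(2000, 2013))
population = [1014004000, 1029991000, 1045845000, 1049700000, 1065071000,
              1080264000, 1095352000, 1129866000, 1147996000, 1166079000,
              1173108000, 1189173000, 1205074000]

plt.plot(years, population, marker='o')
plt.xlabel('Year')
plt.ylabel('Population')
plt.title('Population by year')
plt.show()
```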
- Now we will find the most suitable values for θ0 and θ1 using the hypothesis equation.
- Here x is the year, and hθ(x) is the predicted population for that year.
- Once we have found θ0 and θ1, we can predict the value for any year.
- Keep in mind that we first find θ0 and θ1 from our training data.
- Later we use these θ0 and θ1 values to make predictions on the test data.
Don’t worry too much yet about how to find the θ0 and θ1 values; in the linear regression implementation in Python post, I have explained how to find θ0 and θ1 with a nice example and the coding part too. A rough sketch of the idea on our population data follows.
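As a quick sketch, assuming the closed-form least squares fit shown earlier (the variable names here are mine, not from the original post), θ0 and θ1 can be estimated from the population table and used to answer both questions:

```python
years = list(range(2000, 2013))
population = [1014004000, 1029991000, 1045845000, 1049700000, 1065071000,
              1080264000, 1095352000, 1129866000, 1147996000, 1166079000,
              1173108000, 1189173000, 1205074000]

n = len(years)
x_mean = sum(years) / n
y_mean = sum(population) / n
theta1 = sum((x - x_mean) * (y - y_mean) for x, y in zip(years, population)) \
         / sum((x - x_mean) ** 2 for x in years)
theta0 = y_mean - theta1 * x_mean

# Question 1: predicted population for 2014
print(theta0 + theta1 * 2014)

# Question 2: year in which the population would reach 2,205,074,000
# (invert the hypothesis: x = (y - theta0) / theta1)
print((2205074000 - theta0) / theta1)
```

The first print answers question 1 directly; the second inverts the hypothesis to estimate the year at which the fitted line reaches the given population.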
I hope you like this post. If you have any questions, then feel free to comment below. If you want me to write on one particular topic, then do tell it to me in the comments below.
Related Courses:
Do check out unlimited data science courses
| Title & links | Details |
|---------------|---------|
| Machine Learning A-Z: Hands-On Python & R In Data Science | Course Overall Rating: 4.6 |
| Machine Learning: Classification | Course Overall Rating: 4.7 |
| Practical Machine Learning | Course Overall Rating: 4.4 |