Gradient descent is a first-order iterative optimization algorithm used to find a local minimum (or maximum) of a function. To find a minimum, one takes steps proportional to the negative of the gradient of the function at the current point; to find a maximum, one instead takes steps proportional to the positive of the gradient. Here gradient descent is used to fit a linear regression. The dark blue line is found by the closed-form linear regression formula, while the light blue line is found by iteratively performing gradient descent. The choice of learning rate affects how many iterations it takes to reach a good approximation, and whether one can be found at all: if the learning rate is too high, the minimum can be overshot entirely.
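
For a line y = m·x + b fitted to n points by minimizing the mean squared error, the quantities the code computes can be written as follows (a sketch of the standard derivation, with η denoting the learning rate):

```
% Mean squared error of the line y = m x + b over n sample points
E(m, b) = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - (m x_i + b) \right)^2

% Partial derivatives, accumulated as mGradient and bGradient in the code
\frac{\partial E}{\partial m} = -\frac{2}{n} \sum_{i=1}^{n} x_i \left( y_i - (m x_i + b) \right)
\qquad
\frac{\partial E}{\partial b} = -\frac{2}{n} \sum_{i=1}^{n} \left( y_i - (m x_i + b) \right)

% One descent step with learning rate \eta
m \leftarrow m - \eta \, \frac{\partial E}{\partial m}
\qquad
b \leftarrow b - \eta \, \frac{\partial E}{\partial b}
```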

```
GradientDescent = function()
{
    // Mean squared error of the line y = m * x + b over the sample points.
    this.computeErrorForLine = function( points, m, b )
    {
        var error = 0.0;
        for( var i = 0; i < points.length; ++i )
        {
            var p = points[ i ];
            error += Math.pow( p.y - ( m * p.x + b ), 2 );
        }
        return error / points.length;
    };

    // Performs a single gradient descent step and returns the updated b and m.
    this.stepGradient = function( points, m, b, learningRate )
    {
        var bGradient = 0;
        var mGradient = 0;
        var n = points.length;
        for( var i = 0; i < n; ++i )
        {
            var p = points[ i ];
            // Partial derivatives of the mean squared error with respect to b and m.
            bGradient += -( 2 / n ) * ( p.y - ( ( m * p.x ) + b ) );
            mGradient += -( 2 / n ) * p.x * ( p.y - ( ( m * p.x ) + b ) );
        }
        // Step against the gradient, scaled by the learning rate.
        return { b: b - ( learningRate * bGradient ), m: m - ( learningRate * mGradient ) };
    };

    // Runs numIterations gradient steps from the initial guess (m, b).
    this.gradientDescent = function( points, b, m, learningRate, numIterations )
    {
        for( var i = 0; i < numIterations; ++i )
        {
            var g = this.stepGradient( points, m, b, learningRate );
            b = g.b;
            m = g.m;
        }
        return { b: b, m: m };
    };
};
```
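
As a usage sketch, fitting points that lie exactly on y = 2x + 1 should recover m ≈ 2 and b ≈ 1. The sample points, learning rate, and iteration count here are made up for illustration, and a minimal equivalent of the step function is inlined so the snippet runs on its own:

```
// Sample points on the line y = 2x + 1 (illustrative data, not from the demo).
var points = [ { x: 0, y: 1 }, { x: 1, y: 3 }, { x: 2, y: 5 },
               { x: 3, y: 7 }, { x: 4, y: 9 } ];

// One gradient descent step, mirroring the stepGradient method above.
function stepGradient( points, m, b, learningRate )
{
    var bGradient = 0, mGradient = 0, n = points.length;
    for( var i = 0; i < n; ++i )
    {
        var p = points[ i ];
        bGradient += -( 2 / n ) * ( p.y - ( m * p.x + b ) );
        mGradient += -( 2 / n ) * p.x * ( p.y - ( m * p.x + b ) );
    }
    return { b: b - ( learningRate * bGradient ), m: m - ( learningRate * mGradient ) };
}

// Start from m = 0, b = 0 and iterate; 0.05 is small enough to converge here.
var m = 0, b = 0;
for( var i = 0; i < 2000; ++i )
{
    var g = stepGradient( points, m, b, 0.05 );
    m = g.m;
    b = g.b;
}
console.log( m.toFixed( 3 ), b.toFixed( 3 ) ); // close to 2 and 1
```

A learning rate above roughly 0.15 makes this particular example diverge instead of converge, which is the overshooting behavior described above.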