C - Neural network for linear regression


I found a great source that matched the exact model I needed: http://ufldl.stanford.edu/tutorial/supervised/linearregression/

The important bits are as follows.

You have a plot of x -> y. Each x-value is the sum of "features", or as I'll denote them, z.

So the regression line for the x -> y plot is h(sum(z_i)), where h(x) is the regression line (a function).

In the NN, the idea is that each z-value gets assigned a weight in a way that minimizes the least squared error.
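
In code, the hypothesis I have in mind is just a weighted sum of the features, something like this (a minimal sketch; the names hypothesis, z, w, and n are mine, not from the Stanford post):

#include <stdio.h>

/* h: the hypothesis as a weighted sum of the features z_i;
   the weights w_i are what the network is supposed to learn */
double hypothesis(const double z[], const double w[], int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += w[i] * z[i];
    return sum;
}

int main(void) {
    double z[] = {1.0, 2.0, 3.0};
    double w[] = {0.5, -0.2, 0.1};
    printf("h = %g\n", hypothesis(z, w, 3));
    return 0;
}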

The gradient function is used to update the weights to minimize the error. I believe I may be propagating it incorrectly when I update the weights.

So I wrote the code, but the weights aren't being correctly updated.

I may have misunderstood the spec in the Stanford post, and that's where I need help. Can anyone verify that I have correctly implemented this NN?

My h(x) function is a simple linear regression on the initial data. In other words, the idea is that the NN adjusts the weights so that the data points shift closer to the linear regression.

for (epoch = 0; epoch < 10000; epoch++){
    //loop over the number of games
    for (game = 1; game < 39; game++){
        sum = 0;
        int temp1 = 0;
        int temp2 = 0;
        //loop over the number of inputs
        for (i = 0; i < 10; i++){
            //compute sum = x
            temp1 += inputs[game][i] * weights[i];
        }
        for (i = 10; i < 20; i++){
            temp2 += inputs[game][i] * weights[i];
        }
        sum = temp1 - temp2;

        //compute error
        error += .5 * (5.1136 * (sum) + 1.7238 - targets[game]) * (5.1136 * (sum) + 1.7238 - targets[game]);
        printf("error = %g\n", error);

        //backpropagate
        for (i = 0; i < 20; i++){
            weights[i] = sum * (5.1136 * (sum) + 1.7238 - targets[game]); //possible error here
        }
    }
    printf("epoch = %d\n", epoch);
    printf("error = %g\n", error);
}

Please check out Andrew Ng's Coursera course. He is a professor of machine learning at Stanford and can explain the concept of linear regression better than pretty much anyone else. You can learn the essentials of linear regression in the first lesson.

For linear regression, you are trying to minimize the cost function, which in this case is the sum of squared errors, (predicted value - actual value)^2, and this is achieved with gradient descent. Solving this problem does not require a neural network, and using one is rather inefficient.
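
To make that concrete, here is a minimal C sketch of such a cost function (the names cost, x, y, and m are illustrative assumptions; the 1/(2m) scaling used in Ng's course is omitted for simplicity):

/* sum of squared errors for the line h(x) = theta0 + theta1 * x
   over m data points */
double cost(const double x[], const double y[], int m,
            double theta0, double theta1) {
    double j = 0.0;
    for (int i = 0; i < m; i++) {
        double err = (theta0 + theta1 * x[i]) - y[i];  /* predicted - actual */
        j += err * err;
    }
    return j;
}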

For this problem, only 2 values are needed. If you think of the equation of a line, y = mx + b, there are 2 aspects of the line you need: the slope and the y-intercept. In linear regression you are looking for the slope and y-intercept that best fit the data.

In this problem, those 2 values can be represented as theta0 and theta1. theta0 is the y-intercept and theta1 is the slope.

This is the update function for linear regression:

theta := theta - alpha * (1/m) * sum from i = 1 to m of (h(x_i) - y_i) * x_i

Here, theta is a 2 x 1 dimensional vector with theta0 and theta1 inside of it. What you are doing is taking theta and subtracting the mean of the sum of errors multiplied by the learning rate alpha (usually small, e.g. 0.1).

Let's say the real, perfect-fit line is at y = 2x + 3, and our current slope and y-intercept are both at 0. Therefore, the sum of errors will be negative, and when theta is subtracted by a negative number, theta will increase, moving the prediction closer to the correct value. And vice versa for positive numbers. This is a basic example of gradient descent, where you are descending down a slope to minimize the cost (or error) of the model.
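
Here is a hedged C sketch of that exact scenario, fitting y = 2x + 3 starting from theta0 = theta1 = 0 (the variable names, data points, learning rate, and iteration count are my own illustrative choices):

#include <stdio.h>

int main(void) {
    /* toy data generated from the "perfect" line y = 2x + 3 */
    double x[] = {0.0, 1.0, 2.0, 3.0, 4.0};
    double y[] = {3.0, 5.0, 7.0, 9.0, 11.0};
    int m = 5;

    double theta0 = 0.0, theta1 = 0.0;  /* start with slope and intercept at 0 */
    double alpha = 0.1;                 /* learning rate */

    for (int iter = 0; iter < 1000; iter++) {
        double grad0 = 0.0, grad1 = 0.0;
        for (int i = 0; i < m; i++) {
            double err = (theta0 + theta1 * x[i]) - y[i];  /* predicted - actual */
            grad0 += err;         /* error term for the intercept */
            grad1 += err * x[i];  /* error term for the slope */
        }
        /* simultaneous update: subtract alpha times the mean of the errors */
        theta0 -= alpha * grad0 / m;
        theta1 -= alpha * grad1 / m;
    }

    printf("theta0 = %g, theta1 = %g\n", theta0, theta1);
    return 0;
}

On this toy data the loop settles near theta0 = 3 and theta1 = 2, recovering the y-intercept and slope of the original line.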

This is the type of model you should be trying to implement instead of a neural network, which is more complex. Try to gain an understanding of linear and logistic regression with gradient descent before moving on to neural networks.

Implementing a linear regression algorithm in C can be rather challenging, especially without vectorization. If you are looking to learn how the linear regression algorithm works and aren't set on using C to build it, I recommend using MATLAB or Octave (a free alternative) to implement it instead. After all, the examples in the post you found use the same format.

