
[Machine Learning] Logistic Regression: Principles and a Java Implementation


Contents:
1. Probability-Based Machine Learning Algorithms
2. Principles of the Logistic Regression Algorithm
2.1 The Separating Hyperplane
2.2 The Threshold Function
2.3 Class Probabilities
2.4 The Loss Function
3. Model Training with Gradient Descent
4. Java Implementation

1. Probability-Based Machine Learning Algorithms

Machine learning algorithms can be grouped into four broad families: probability-based, distance-based, tree-based, and neural-network-based. A probability-based algorithm, at its core, computes the probability that each sample belongs to each class and then fits the model by maximum likelihood estimation. Its loss function is the negative log-likelihood.

Probability-based algorithms include Naive Bayes, Logistic Regression, Softmax Regression, and Factorization Machines, among others.

2. Principles of the Logistic Regression Algorithm

2.1 The Separating Hyperplane

Logistic Regression is a linear binary classifier. Its separating hyperplane is a linear function:

$$Wx + b = 0$$

Here $x$ is a sample's feature vector with $m$ features, and $W$ is the $1 \times m$ weight matrix. The hyperplane splits the data into two classes: samples on the positive side are labeled 1, and samples on the negative side are labeled 0.

2.2 The Threshold Function

A threshold function maps a sample's distance from the separating hyperplane to a class. Logistic Regression uses the sigmoid function:

$$f(x) = \frac{1}{1 + e^{-x}}$$

[Figure: graph of the sigmoid function, an S-shaped curve rising from 0 to 1 with $f(0) = 0.5$.]
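Two facts about the sigmoid are worth noting here. Writing $\sigma$ for this function, as the rest of the post does: it squashes any real input into $(0, 1)$, which is what allows its output to be read as a probability, and its derivative has a closed form that makes the gradient in section 3 come out cleanly:

$$\sigma(0) = \tfrac{1}{2}, \qquad \lim_{x \to \infty} \sigma(x) = 1, \qquad \lim_{x \to -\infty} \sigma(x) = 0$$

$$\sigma'(x) = \sigma(x)\bigl(1 - \sigma(x)\bigr)$$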

For a sample $x$, its signed distance $D$ from the separating hyperplane (up to the constant normalizing factor $1/\|W\|$) is:

$$D = Wx + b$$

2.3 Class Probabilities

If sample $x$ belongs to the positive class, its probability is:

$$p(y = 1 \mid x, W, b) = \sigma(Wx + b) = \frac{1}{1 + e^{-(Wx + b)}}$$

The probability of a negative-class sample is:

$$p(y = 0 \mid x, W, b) = 1 - p(y = 1 \mid x, W, b) = \frac{e^{-(Wx + b)}}{1 + e^{-(Wx + b)}}$$

Combining the two cases into one expression (substituting $y = 1$ or $y = 0$ recovers the two formulas above), the probability of class $y$ is:

$$p(y \mid x, W, b) = \sigma(Wx + b)^{y}\,\bigl(1 - \sigma(Wx + b)\bigr)^{1 - y}$$

2.4 The Loss Function

Suppose the training set has $n$ samples $\{(x_1, y_1), (x_2, y_2), \cdots, (x_n, y_n)\}$. Its likelihood function is:

$$L_{W,b} = \prod_{i=1}^{n} \left[ \sigma(Wx_i + b)^{y_i} \bigl(1 - \sigma(Wx_i + b)\bigr)^{1 - y_i} \right]$$

The loss function of Logistic Regression is the averaged negative log-likelihood:

$$l_{W,b} = -\frac{1}{n} \sum_{i=1}^{n} \left[ y_i \log\bigl(\sigma(Wx_i + b)\bigr) + (1 - y_i) \log\bigl(1 - \sigma(Wx_i + b)\bigr) \right]$$
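To see how this follows from the likelihood: taking the logarithm turns the product into a sum, and negating and averaging gives a quantity to be minimized:

$$l_{W,b} = -\frac{1}{n} \log L_{W,b} = -\frac{1}{n} \sum_{i=1}^{n} \log\left[ \sigma(Wx_i + b)^{y_i} \bigl(1 - \sigma(Wx_i + b)\bigr)^{1 - y_i} \right]$$

Expanding each term with $\log(a^p b^q) = p \log a + q \log b$ yields the expression above.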

Training the model means finding the optimal weight matrix $W$ and bias $b$, which turns training into the minimization problem:

$$\min_{W,b}\; l_{W,b}$$

3. Model Training with Gradient Descent

In this post, the loss is minimized with gradient descent.

Step 1: initialize the weight matrix $W = W_0$ and the bias $b = b_0$.

Step 2: repeat the following process:

Compute the gradient-descent direction for each parameter:

$$\partial W_i = -\left.\frac{\partial l_{W,b}}{\partial W}\right|_{W_i}$$

$$\partial b_i = -\left.\frac{\partial l_{W,b}}{\partial b}\right|_{b_i}$$

Choose a step size $\alpha$.

Update the parameters:

$$W_{i+1} = W_i + \alpha \cdot \partial W_i$$

$$b_{i+1} = b_i + \alpha \cdot \partial b_i$$

Step 3: check whether the termination condition has been met (typically that the maximum number of iterations has been reached, or that the loss has effectively stopped decreasing), as sketched below.
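The Updata method in section 4 simply runs for a fixed number of iterations and never tests convergence explicitly. As a sketch of what such a test could look like, the skeleton below stops once the loss improves by less than a tolerance tol; the tolerance, the oneUpdate step and the lossFn supplier are hypothetical stand-ins for illustration, not part of the original code:

import java.util.function.DoubleSupplier;

public class EarlyStopSketch {
    // Run at most maxCycle gradient-descent steps, stopping early once the
    // change in loss drops below tol. oneUpdate performs a single update of
    // the weights; lossFn evaluates the current loss l_{W,b}. Both are
    // hypothetical hooks, not part of the original post's code.
    public static void run(int maxCycle, double tol, Runnable oneUpdate, DoubleSupplier lossFn) {
        double prevLoss = Double.MAX_VALUE;
        for (int i = 0; i < maxCycle; i++) {
            oneUpdate.run();                    // one gradient-descent step
            double loss = lossFn.getAsDouble(); // loss after the step
            if (Math.abs(prevLoss - loss) < tol) {
                break;                          // converged: loss stopped decreasing
            }
            prevLoss = loss;
        }
    }
}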

Let $x_i^j$ denote the $j$-th feature of sample $x_i$, and $w_j$ the $j$-th component of the weight matrix $W$. Taking $w_0 = b$ (equivalently, giving every sample a constant feature equal to 1, which is exactly what the Java code below does), the gradient of the loss with respect to the $j$-th component is:

$$\partial w_j = -\frac{1}{n} \sum_{i=1}^{n} \bigl(y_i - \sigma(Wx_i + b)\bigr)\, x_i^j$$
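This compact form follows from the chain rule together with the derivative identity $\sigma' = \sigma(1 - \sigma)$ noted in section 2.2. Writing $z_i = Wx_i + b$:

$$\frac{\partial l_{W,b}}{\partial w_j} = -\frac{1}{n} \sum_{i=1}^{n} \left[ \frac{y_i}{\sigma(z_i)} - \frac{1 - y_i}{1 - \sigma(z_i)} \right] \sigma(z_i)\bigl(1 - \sigma(z_i)\bigr)\, x_i^j = -\frac{1}{n} \sum_{i=1}^{n} \bigl(y_i - \sigma(z_i)\bigr)\, x_i^j$$

The descent direction is the negation of this gradient; it is exactly what the Delt_W array in the Java code below computes before the weights are stepped by the learning rate.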

4. Java Implementation

Full code and sample data: /shiluqiang/Logistic_Regression_java

First, load the sample features and labels.

import java.io.*;

public class LoadData {

    // Load the sample features. Every column except the last is parsed as a
    // feature; the last column (which holds the label in the data file) is
    // overwritten with the constant 1, acting as the bias feature (w0 = b).
    public static double[][] Loadfeature(String filename) throws IOException {
        File f = new File(filename);
        FileInputStream fip = new FileInputStream(f);                    // build the FileInputStream
        InputStreamReader reader = new InputStreamReader(fip, "UTF-8");  // build the InputStreamReader
        StringBuffer sb = new StringBuffer();
        while (reader.ready()) {
            sb.append((char) reader.read());
        }
        reader.close();
        fip.close();
        // Convert the stream contents to a string
        String sb1 = sb.toString();
        // Split into lines; the line count is the number of rows
        String[] a = sb1.split("\n");
        int n = a.length;
        System.out.println("Number of rows: " + n);
        // The number of tab-separated fields in the first line is the number of columns
        String[] a0 = a[0].split("\t");
        int m = a0.length;
        System.out.println("Number of columns: " + m);
        double[][] feature = new double[n][m];
        for (int i = 0; i < n; i++) {
            String[] tmp = a[i].split("\t");
            for (int j = 0; j < m; j++) {
                if (j == m - 1) {
                    feature[i][j] = 1.0; // bias feature in place of the label column
                } else {
                    // trim() guards against a trailing '\r' on Windows line endings
                    feature[i][j] = Double.parseDouble(tmp[j].trim());
                }
            }
        }
        return feature;
    }

    // Load the sample labels from the last column of the data file.
    public static double[] LoadLabel(String filename) throws IOException {
        File f = new File(filename);
        FileInputStream fip = new FileInputStream(f);
        InputStreamReader reader = new InputStreamReader(fip, "UTF-8"); // same encoding the file was written with
        StringBuffer sb = new StringBuffer();
        while (reader.ready()) {
            sb.append((char) reader.read());
        }
        reader.close();
        fip.close();
        String sb1 = sb.toString();
        String[] a = sb1.split("\n");
        int n = a.length;
        String[] a0 = a[0].split("\t");
        int m = a0.length;
        double[] Label = new double[n];
        for (int i = 0; i < n; i++) {
            String[] tmp = a[i].split("\t");
            Label[i] = Double.parseDouble(tmp[m - 1].trim());
        }
        return Label;
    }
}
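For reference, Loadfeature assumes a tab-separated text file whose last column holds the label. The hypothetical three-sample data.txt below has two features per sample (the values are made up for illustration); when the feature matrix is built, the label column is overwritten with the constant 1 so that it acts as the bias feature:

0.5	1.2	1
1.3	0.6	1
-0.7	-1.1	0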

Next, optimize the Logistic Regression model with gradient descent.

public class LRtrainGradientDescent {
    int paraNum;        // number of weight parameters
    double rate;        // learning rate
    int samNum;         // number of samples
    double[][] feature; // sample feature matrix
    double[] Label;     // sample labels
    int maxCycle;       // maximum number of iterations

    public LRtrainGradientDescent(double[][] feature, double[] Label, int paraNum,
                                  double rate, int samNum, int maxCycle) {
        this.feature = feature;
        this.Label = Label;
        this.maxCycle = maxCycle;
        this.paraNum = paraNum;
        this.rate = rate;
        this.samNum = samNum;
    }

    // Initialize every weight to 1.0
    public double[] ParaInitialize(int paraNum) {
        double[] W = new double[paraNum];
        for (int i = 0; i < paraNum; i++) {
            W[i] = 1.0;
        }
        return W;
    }

    // Predicted probability of the positive class for every sample
    public double[] PreVal(int samNum, int paraNum, double[][] feature, double[] W) {
        double[] Preval = new double[samNum];
        for (int i = 0; i < samNum; i++) {
            double tmp = 0;
            for (int j = 0; j < paraNum; j++) {
                tmp += feature[i][j] * W[j];
            }
            Preval[i] = Sigmoid.sigmoid(tmp);
        }
        return Preval;
    }

    // Sum of squared errors between labels and predictions (used for monitoring)
    public double error_rate(int samNum, double[] Label, double[] Preval) {
        double sum_err = 0.0;
        for (int i = 0; i < samNum; i++) {
            sum_err += Math.pow(Label[i] - Preval[i], 2);
        }
        return sum_err;
    }

    // Train the LR model; note that the iteration count and learning rate come
    // from the method arguments rather than the fields set in the constructor.
    public double[] Updata(double[][] feature, double[] Label, int maxCycle, double rate) {
        // Number of samples and number of features
        int samNum = feature.length;
        int paraNum = feature[0].length;
        // Initialize the weight matrix
        double[] W = ParaInitialize(paraNum);
        // Iteratively optimize the weight matrix
        for (int i = 0; i < maxCycle; i++) {
            // Predictions with the current weights
            double[] Preval = PreVal(samNum, paraNum, feature, W);
            double sum_err = error_rate(samNum, Label, Preval);
            if (i % 10 == 0) {
                System.out.println("Prediction error at iteration " + i + ": " + sum_err);
            }
            // Residuals between labels and predictions
            double[] err = new double[samNum];
            for (int j = 0; j < samNum; j++) {
                err[j] = Label[j] - Preval[j];
            }
            // Descent direction: (1/n) * sum_i (y_i - sigma(W x_i + b)) * x_i^j
            double[] Delt_W = new double[paraNum];
            for (int n = 0; n < paraNum; n++) {
                double tmp = 0;
                for (int m = 0; m < samNum; m++) {
                    tmp += feature[m][n] * err[m];
                }
                Delt_W[n] = tmp / samNum;
            }
            // Step along the descent direction
            for (int m = 0; m < paraNum; m++) {
                W[m] = W[m] + rate * Delt_W[m];
            }
        }
        return W;
    }
}

The sigmoid function:

public class Sigmoid {
    public static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }
}
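As written, sigmoid never returns NaN: for very negative x, Math.exp(-x) overflows to Infinity and the quotient evaluates to 0.0, which is the correct limit. A common refinement (an addition here, not taken from the post) still avoids the intermediate overflow by branching on the sign of x, so that Math.exp is only ever called on non-positive arguments:

public class StableSigmoid {
    // Algebraically identical to 1 / (1 + exp(-x)): for x < 0,
    // sigmoid(x) = e^x / (1 + e^x), and e^x stays within (0, 1).
    public static double sigmoid(double x) {
        if (x >= 0) {
            return 1.0 / (1.0 + Math.exp(-x));
        }
        double e = Math.exp(x);
        return e / (1.0 + e);
    }
}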

Storing the Logistic Regression model parameters and test results:

import java.io.*;

public class SaveModel {

    // Write the weight vector to a file as a single tab-separated line
    public static void savemodel(String filename, double[] W) throws IOException {
        File f = new File(filename);
        FileOutputStream fip = new FileOutputStream(f);                  // build the FileOutputStream
        OutputStreamWriter writer = new OutputStreamWriter(fip, "UTF-8"); // build the OutputStreamWriter
        // Number of elements in the model matrix
        int n = W.length;
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < n - 1; i++) {
            sb.append(String.valueOf(W[i]));
            sb.append("\t");
        }
        sb.append(String.valueOf(W[n - 1]));
        writer.write(sb.toString());
        writer.close();
        fip.close();
    }

    // Write the predicted probabilities to a file, one per line
    public static void saveresults(String filename, double[] pre_results) throws IOException {
        File f = new File(filename);
        FileOutputStream fip = new FileOutputStream(f);
        OutputStreamWriter writer = new OutputStreamWriter(fip, "UTF-8");
        // Number of predictions
        int n = pre_results.length;
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < n - 1; i++) {
            sb.append(String.valueOf(pre_results[i]));
            sb.append("\n");
        }
        sb.append(String.valueOf(pre_results[n - 1]));
        writer.write(sb.toString());
        writer.close();
        fip.close();
    }
}

The main class:

import java.io.*;

public class LRMain {
    public static void main(String[] args) throws IOException {
        // Data file: tab-separated features with the label in the last column
        String filename = "data.txt";
        // Load the sample features and labels
        double[][] feature = LoadData.Loadfeature(filename);
        double[] Label = LoadData.LoadLabel(filename);
        // Parameter settings
        int samNum = feature.length;
        int paraNum = feature[0].length;
        double rate = 0.01;
        int maxCycle = 1000;
        // Train the LR model
        LRtrainGradientDescent LR = new LRtrainGradientDescent(feature, Label, paraNum, rate, samNum, maxCycle);
        double[] W = LR.Updata(feature, Label, maxCycle, rate);
        // Save the model
        String model_path = "wrights.txt";
        SaveModel.savemodel(model_path, W);
        // Test the model
        double[] pre_results = LRTest.lrtest(paraNum, samNum, feature, W);
        // Save the test results
        String results_path = "pre_results.txt";
        SaveModel.saveresults(results_path, pre_results);
    }
}
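The LRTest class that main calls is not listed in the post. Judging from the call lrtest(paraNum, samNum, feature, W) and the PreVal method above, a minimal sketch consistent with the rest of the code would be:

public class LRTest {
    // Sketch of the missing test class: computes the predicted probability of
    // the positive class for every sample, using the trained weights W.
    public static double[] lrtest(int paraNum, int samNum, double[][] feature, double[] W) {
        double[] pre_results = new double[samNum];
        for (int i = 0; i < samNum; i++) {
            double tmp = 0;
            for (int j = 0; j < paraNum; j++) {
                tmp += feature[i][j] * W[j]; // W x + b, with the bias folded into W
            }
            pre_results[i] = Sigmoid.sigmoid(tmp);
        }
        return pre_results;
    }
}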
