
Kaggle Competition | House Price Prediction (House Prices)

A typical machine learning workflow looks like this:

Data collection → data exploration → data preprocessing → model training → model evaluation → performance improvement → deployment


This project applies that workflow to predict house prices.

1. Data Collection

The data for this project is provided by the Kaggle platform, so no collection is needed. In practice, company data usually lives in databases or local files, and you would have to gather it yourself.

2. Data Exploration

Data exploration usually covers data quality analysis, feature analysis, and chart-based analysis.

① Structure of the data source

Load the data and inspect the structure of the training and test sets:

import pandas as pd
import numpy as np

train_data = pd.read_csv("D:/kaggle项目数据/House-Prices-advance-regression-techniques/train.csv")
test_data = pd.read_csv("D:/kaggle项目数据/House-Prices-advance-regression-techniques/test.csv")
train_data.head()

test_data.head()

train_data.shape
# (1460, 81)
test_data.shape
# (1459, 80)

As shown, the training set has 81 columns and 1460 rows, while the test set has 80 columns and 1459 rows (the test set lacks the SalePrice target).

② Check the data types in the data source:

train_data.dtypes.value_counts()
# object     43
# int64      35
# float64     3
# dtype: int64

43 of the features are of type object.

train_data.select_dtypes('object').head()

In the preprocessing stage, the object-type features will be converted to numbers, using either label encoding or one-hot encoding depending on the feature.
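For a concrete sense of the difference, here is a minimal sketch (the category values are hypothetical, not taken from this dataset): label encoding maps each category to an integer, while one-hot encoding creates one 0/1 column per category.

import pandas as pd
from sklearn.preprocessing import LabelEncoder

s = pd.Series(['Gable', 'Hip', 'Gable', 'Flat'])  # hypothetical categorical values
print(LabelEncoder().fit_transform(s))  # label encoding: [1 2 1 0]
print(pd.get_dummies(s))                # one-hot encoding: one 0/1 column per category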

③ Inspect missing values in the data source

Define a helper function to summarize the missing values:

def MissingValue(df):
    miss_value = df.isnull().sum()
    miss_percentage = miss_value / df.shape[0]
    miss_df = pd.concat([miss_value, miss_percentage], axis=1)
    miss_df = miss_df.rename(columns={0: 'MissingValue', 1: '%MissingPercent'})
    miss_df = miss_df.loc[miss_df['MissingValue'] != 0, :]
    miss_df = miss_df.sort_values(by='%MissingPercent', ascending=False)
    return miss_df

Missing values in the training data:

MissingValue(train_data)
# Missing values:
#               MissingValue  %MissingPercent
# PoolQC                1453         0.995205
# MiscFeature           1406         0.963014
# Alley                 1369         0.937671
# Fence                 1179         0.807534
# FireplaceQu            690         0.472603
# LotFrontage            259         0.177397
# GarageType              81         0.055479
# GarageYrBlt             81         0.055479
# GarageFinish            81         0.055479
# GarageQual              81         0.055479
# GarageCond              81         0.055479
# BsmtExposure            38         0.026027
# BsmtFinType2            38         0.026027
# BsmtFinType1            37         0.025342
# BsmtCond                37         0.025342
# BsmtQual                37         0.025342
# MasVnrArea               8         0.005479
# MasVnrType               8         0.005479
# Electrical               1         0.000685

Several features in the training set are missing in almost every row (PoolQC, for example, is missing in 99.5% of records).

Missing values in the test data:

MissingValue(test_data)
# Missing values:
#               MissingValue  %MissingPercent
# PoolQC                1456         0.997944
# MiscFeature           1408         0.965045
# Alley                 1352         0.926662
# Fence                 1169         0.801234
# FireplaceQu            730         0.500343
# LotFrontage            227         0.155586
# GarageCond              78         0.053461
# GarageYrBlt             78         0.053461
# GarageQual              78         0.053461
# GarageFinish            78         0.053461
# GarageType              76         0.052090
# BsmtCond                45         0.030843
# BsmtExposure            44         0.030158
# BsmtQual                44         0.030158
# BsmtFinType1            42         0.028787
# BsmtFinType2            42         0.028787
# MasVnrType              16         0.010966
# MasVnrArea              15         0.010281
# MSZoning                 4         0.002742
# BsmtFullBath             2         0.001371
# BsmtHalfBath             2         0.001371
# Functional               2         0.001371
# Utilities                2         0.001371
# GarageCars               1         0.000685
# GarageArea               1         0.000685
# TotalBsmtSF              1         0.000685
# KitchenQual              1         0.000685
# BsmtUnfSF                1         0.000685
# BsmtFinSF2               1         0.000685
# BsmtFinSF1               1         0.000685
# Exterior2nd              1         0.000685
# Exterior1st              1         0.000685
# SaleType                 1         0.000685

Conclusion: 19 features in the training set contain missing values, compared with 33 features in the test set.

④ Distribution analysis

Examining distributions is one way to explore numeric data. Below we analyze the distributions in the training set (the training set has 35 int64 features and 3 float64 features; the test set has 26 int64 and 11 float64).

List the numeric features:

train_data.select_dtypes(['int64', 'float64']).columns
# Index(['Id', 'MSSubClass', 'LotFrontage', 'LotArea', 'OverallQual',
#        'OverallCond', 'YearBuilt', 'YearRemodAdd', 'MasVnrArea', 'BsmtFinSF1',
#        'BsmtFinSF2', 'BsmtUnfSF', 'TotalBsmtSF', '1stFlrSF', '2ndFlrSF',
#        'LowQualFinSF', 'GrLivArea', 'BsmtFullBath', 'BsmtHalfBath', 'FullBath',
#        'HalfBath', 'BedroomAbvGr', 'KitchenAbvGr', 'TotRmsAbvGrd',
#        'Fireplaces', 'GarageYrBlt', 'GarageCars', 'GarageArea', 'WoodDeckSF',
#        'OpenPorchSF', 'EnclosedPorch', '3SsnPorch', 'ScreenPorch', 'PoolArea',
#        'MiscVal', 'MoSold', 'YrSold', 'SalePrice'],
#       dtype='object')

train_data.select_dtypes(['int64', 'float64']).shape
# (1460, 38)

As shown, there are 38 numeric features. The task is to model the relationship between these features and the target "SalePrice", which makes this a supervised regression problem; linear regression or ridge regression are natural candidates (a tiny ridge sketch follows).
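As a reminder of what ridge regression adds: it is ordinary linear regression plus an L2 penalty on the coefficients. A minimal self-contained sketch (the toy numbers are made up purely for illustration):

import numpy as np
from sklearn.linear_model import Ridge

# Toy data: two hypothetical numeric features predicting a price.
X = np.array([[5, 1200], [7, 2000], [6, 1500], [8, 2400]])
y = np.array([120000, 250000, 170000, 300000])
ridge = Ridge(alpha=1.0)  # alpha sets the strength of the L2 penalty
ridge.fit(X, y)
print(ridge.predict([[6, 1600]]))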

Use visualization to check the linear relationship between a few features and the target:

import seaborn as sns

g = sns.pairplot(x_vars=['OverallQual', 'GrLivArea', 'YearBuilt', 'TotalBsmtSF'],
                 y_vars=['SalePrice'], data=train_data, dropna=True)
g.fig.set_size_inches(15, 10)

[Figure: scatter plots of each feature variable against the target value]

The plots show some outliers in the features "GrLivArea" and "TotalBsmtSF". They are left untouched here, since this part is only exploratory (a sketch of one possible treatment follows).
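Were one to treat them, a common approach for this dataset (an assumption on my part; the article itself leaves the rows in place) is to drop the few very large houses that sold for unusually little:

# Sketch only - NOT applied in this article.
outlier_idx = train_data[(train_data['GrLivArea'] > 4000) &
                         (train_data['SalePrice'] < 300000)].index
train_data_no_outliers = train_data.drop(outlier_idx)  # hypothetical cleaned copy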

A second way to spot outliers is to draw box plots:

import matplotlib.pyplot as plt

plt.rcParams['font.sans-serif'] = ['SimHei']  # render CJK labels correctly
plt.rcParams['axes.unicode_minus'] = False    # render minus signs correctly
box_1, box_2, box_3, box_4 = (train_data['OverallQual'], train_data['GrLivArea'],
                              train_data['YearBuilt'], train_data['TotalBsmtSF'])
labels = 'OverallQual', 'GrLivArea', 'YearBuilt', 'TotalBsmtSF'
plt.figure(figsize=(8, 8))  # create the figure
plt.title("Box plots of selected features", fontsize=20)
plt.ylabel('Distribution', fontsize=15, color='red')
plt.boxplot([box_1, box_2, box_3, box_4], showfliers=True, showmeans=True,
            sym='*', labels=labels)
plt.show()

Use a heatmap to view the correlations among the 38 numeric features:

plt.figure(figsize=(30, 15))
# Note: pandas >= 2.0 requires train_data.corr(numeric_only=True)
# while the object columns are still present.
sns.heatmap(train_data.corr(), cmap='coolwarm', annot=True)
plt.show()

3. Data Preprocessing

The previous section explored the dataset in depth; this section preprocesses it. For the common preprocessing methods, see my earlier article on data preprocessing for machine learning (薛定谔的小胖橘: 读完本文,让你快速掌握数据预处理丨机器学习).

① Drop features with large amounts of missing data

As we saw earlier, both datasets contain features that are mostly missing, so we drop every feature with more than 47% of its values missing (an equivalent programmatic version follows the code):

train_data = train_data.drop(['PoolQC', 'MiscFeature', 'Alley', 'Fence', 'FireplaceQu'], axis=1)
test_data = test_data.drop(['PoolQC', 'MiscFeature', 'Alley', 'Fence', 'FireplaceQu'], axis=1)
train_data.shape
# (1460, 76)
test_data.shape
# (1459, 75)
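The same drop list can be derived from the MissingValue helper instead of hardcoding the five names (an equivalent sketch, run before the drop above):

miss_df = MissingValue(train_data)
cols_to_drop = miss_df[miss_df['%MissingPercent'] > 0.47].index.tolist()
# ['PoolQC', 'MiscFeature', 'Alley', 'Fence', 'FireplaceQu']
train_data = train_data.drop(cols_to_drop, axis=1)
test_data = test_data.drop(cols_to_drop, axis=1)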

② Encode the categorical features

Preprocessing requires replacing object-type values with numbers. Here we apply label encoding to the categorical features:

categorical_feature_mask = train_data.dtypes == object
categorical_cols = train_data.columns[categorical_feature_mask].tolist()

from sklearn.preprocessing import LabelEncoder

labelencoder = LabelEncoder()  # instantiate the encoder
train_data[categorical_cols] = train_data[categorical_cols].apply(
    lambda col: labelencoder.fit_transform(col.astype(str)))
train_data

Apply the same treatment to the test set (note that fitting the encoder separately on each set can assign different integers to the same category; a safer variant is sketched after the code):

categorical_feature_mask_test = test_data.dtypes == object
categorical_cols_test = test_data.columns[categorical_feature_mask_test].tolist()
test_data[categorical_cols_test] = test_data[categorical_cols_test].apply(
    lambda col: labelencoder.fit_transform(col.astype(str)))
test_data
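A safer variant (a sketch, not what this article does) fits each encoder on the union of the train and test columns, so both sets share one coding scheme:

# Sketch: consistent label encoding across train and test.
for col in categorical_cols:
    le = LabelEncoder()
    le.fit(pd.concat([train_data[col], test_data[col]]).astype(str))
    train_data[col] = le.transform(train_data[col].astype(str))
    test_data[col] = le.transform(test_data[col].astype(str))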

Check the missing values in the training and test sets again:

Training set:

MissingValue(train_data)
#              MissingValue  %MissingPercent
# LotFrontage           259         0.177397
# GarageYrBlt            81         0.055479
# MasVnrArea              8         0.005479

Test set:

MissingValue(test_data)
#               MissingValue  %MissingPercent
# LotFrontage            227         0.155586
# GarageYrBlt             78         0.053461
# MasVnrArea              15         0.010281
# BsmtFullBath             2         0.001371
# BsmtHalfBath             2         0.001371
# BsmtFinSF1               1         0.000685
# BsmtFinSF2               1         0.000685
# BsmtUnfSF                1         0.000685
# TotalBsmtSF              1         0.000685
# GarageCars               1         0.000685
# GarageArea               1         0.000685

Three features in the training set still have missing values; fill them with their column means:

train_data['LotFrontage'] = train_data['LotFrontage'].fillna(train_data['LotFrontage'].mean())
train_data['GarageYrBlt'] = train_data['GarageYrBlt'].fillna(train_data['GarageYrBlt'].mean())
train_data['MasVnrArea'] = train_data['MasVnrArea'].fillna(train_data['MasVnrArea'].mean())

Next, use the correlation matrix to pick out the features that most influence the sale price for model building.

# Correlation matrix with the sale price
k = 15
plt.figure(figsize=(20, 10))
corrmat = train_data.corr()
# Select the 15 features most correlated with SalePrice
cols = corrmat.nlargest(k, 'SalePrice')['SalePrice'].index
cm = np.corrcoef(train_data[cols].values.T)
sns.set(font_scale=1.25)
hm = sns.heatmap(cm, cmap='coolwarm', cbar=True, annot=True, square=True,
                 fmt='.2f', annot_kws={'size': 10},
                 yticklabels=cols.values, xticklabels=cols.values)
plt.show()

train_data = train_data[cols]
train_data

cols
# Index(['SalePrice', 'OverallQual', 'GrLivArea', 'GarageCars', 'GarageArea',
#        'TotalBsmtSF', '1stFlrSF', 'FullBath', 'TotRmsAbvGrd', 'YearBuilt',
#        'YearRemodAdd', 'MasVnrArea', 'GarageYrBlt', 'Fireplaces',
#        'BsmtFinSF1'],
#       dtype='object')

These are the 14 features selected from the training set (excluding the target itself).

Select the same features from the test set:

test_data = test_data[cols.drop('SalePrice')]
test_data

Check the missing values in the selected test data:

MissingValue(test_data)
#              MissingValue  %MissingPercent
# GarageYrBlt            78         0.053461
# MasVnrArea             15         0.010281
# GarageCars              1         0.000685
# GarageArea              1         0.000685
# TotalBsmtSF             1         0.000685
# BsmtFinSF1              1         0.000685

Six features have missing values; fill them with column means as before (a loop version follows the code):

test_data['GarageYrBlt'] = test_data['GarageYrBlt'].fillna(test_data['GarageYrBlt'].mean())
test_data['MasVnrArea'] = test_data['MasVnrArea'].fillna(test_data['MasVnrArea'].mean())
test_data['GarageCars'] = test_data['GarageCars'].fillna(test_data['GarageCars'].mean())
test_data['GarageArea'] = test_data['GarageArea'].fillna(test_data['GarageArea'].mean())
test_data['TotalBsmtSF'] = test_data['TotalBsmtSF'].fillna(test_data['TotalBsmtSF'].mean())
test_data['BsmtFinSF1'] = test_data['BsmtFinSF1'].fillna(test_data['BsmtFinSF1'].mean())
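The six assignments can equally be written as one loop (an equivalent sketch):

for col in ['GarageYrBlt', 'MasVnrArea', 'GarageCars',
            'GarageArea', 'TotalBsmtSF', 'BsmtFinSF1']:
    test_data[col] = test_data[col].fillna(test_data[col].mean())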

Verify that test_data has no remaining missing values:

MissingValue(test_data)
# Empty DataFrame
# Columns: [MissingValue, %MissingPercent]
# Index: []

All missing values have now been handled.

4. Model Training and Evaluation

In the preprocessing stage we selected features from the training data and filled in the missing values. In this part we build models on the preprocessed data.

We will train and evaluate linear regression, gradient boosting regression, decision tree regression, support vector regression, random forest regression, and LightGBM.

First split the data and standardize it (a leakage-free Pipeline variant is sketched after the code):

from sklearn.linear_model import LinearRegression, SGDRegressor, Ridge
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

X_train, X_test, y_train, y_test = train_test_split(
    train_data.drop('SalePrice', axis=1), train_data['SalePrice'],
    test_size=0.3, random_state=101)
y_train = y_train.values.reshape(-1, 1)
y_test = y_test.values.reshape(-1, 1)
sc_X = StandardScaler()
sc_y = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)   # transform only: fit the scaler on the training data alone
y_train = sc_y.fit_transform(y_train)
y_test = sc_y.transform(y_test)
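For reference, wrapping the scaler and model in a scikit-learn Pipeline makes the fit/transform distinction automatic (a sketch of the idea, not used in the rest of this article; X_tr, X_te, y_tr are hypothetical unscaled splits):

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

# X_tr, X_te, y_tr are hypothetical unscaled splits from train_test_split.
pipe = make_pipeline(StandardScaler(), LinearRegression())
pipe.fit(X_tr, y_tr)        # the scaler is fitted on the training fold only
preds = pipe.predict(X_te)  # and merely applied when predicting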

The standardized training data:

X_train
# array([[ 1.37391375,  2.5236405 ,  0.31454703, ...,  0.46520725,
#          0.58626866,  0.07421157],
#        [-1.53923947, -0.2856721 ,  0.31454703, ..., -0.54640203,
#         -0.97000815, -1.01837681],
#        [ 0.64562544, -0.01580724,  0.31454703, ...,  1.2660646 ,
#         -0.97000815, -1.01837681],
#        ...,
#        [-0.08266286,  0.08903234,  0.31454703, ...,  0.04370339,
#          0.58626866, -0.83476756],
#        [ 0.64562544,  0.03855402,  0.31454703, ...,  1.09746305,
#          0.58626866,  1.86950201],
#        [-0.81095116, -0.70308897, -1.02489906, ..., -0.84145473,
#         -0.97000815,  1.58388762]])

Linear regression model:

lm = LinearRegression()
lm.fit(X_train, y_train)
lm.intercept_
# array([6.60022817e-17])
lm.coef_
# array([[ 0.29434388,  0.31107005,  0.05109985,  0.06398884,  0.11932473,
#          0.02209143, -0.044909  ,  0.0334707 ,  0.07675313,  0.09456834,
#          0.05714788,  0.01910605,  0.04584189,  0.139     ]])

# Predict on the held-out split
predictions = lm.predict(X_test)
predictions = predictions.reshape(-1, 1)
plt.figure(figsize=(15, 8))
plt.scatter(y_test, predictions)
plt.xlabel('Y Test')
plt.ylabel('Predicted Y')
plt.show()

Compute the mean absolute error (MAE), mean squared error (MSE), and root mean squared error (RMSE):

from sklearn import metrics

print('Mean absolute error:', metrics.mean_absolute_error(y_test, predictions))
print('Mean squared error:', metrics.mean_squared_error(y_test, predictions))
print('Root mean squared error:', np.sqrt(metrics.mean_squared_error(y_test, predictions)))
# Mean absolute error: 0.29105407971784325
# Mean squared error: 0.29995756024517584
# Root mean squared error: 0.5476838141164807
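The same three metrics are printed for every model below; a small helper (a convenience sketch, not part of the original code) would avoid the repetition:

def report_metrics(y_true, y_pred):
    """Print MAE, MSE and RMSE for one model's predictions."""
    print('Mean absolute error:', metrics.mean_absolute_error(y_true, y_pred))
    print('Mean squared error:', metrics.mean_squared_error(y_true, y_pred))
    print('Root mean squared error:', np.sqrt(metrics.mean_squared_error(y_true, y_pred)))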

Gradient boosting regression model

from sklearn import ensemble

param = {'n_estimators': 500, 'max_depth': 4, 'min_samples_split': 2,
         'learning_rate': 0.01, 'loss': 'ls'}  # newer scikit-learn: loss='squared_error'
clf = ensemble.GradientBoostingRegressor(**param)
clf.fit(X_train, y_train.ravel())  # ravel() avoids the column-vector warning

# Predict
clf_pred = clf.predict(X_test)
clf_pred = clf_pred.reshape(-1, 1)
plt.figure(figsize=(15, 8))
plt.scatter(y_test, clf_pred, c='brown')
plt.xlabel('Y Test')
plt.ylabel('Predicted Y')
plt.show()

Compute MAE, MSE, and RMSE:

print('Mean absolute error:', metrics.mean_absolute_error(y_test, clf_pred))
print('Mean squared error:', metrics.mean_squared_error(y_test, clf_pred))
print('Root mean squared error:', np.sqrt(metrics.mean_squared_error(y_test, clf_pred)))
# Mean absolute error: 0.22758042839523698
# Mean squared error: 0.1166928968659867
# Root mean squared error: 0.3416034204541674

Decision tree regression model

from sklearn.tree import DecisionTreeRegressor

dtreg = DecisionTreeRegressor(random_state=100)
dtreg.fit(X_train, y_train)

# Predict
dtr_pred = dtreg.predict(X_test)
dtr_pred = dtr_pred.reshape(-1, 1)
plt.figure(figsize=(15, 8))
plt.scatter(y_test, dtr_pred, c='green')
plt.xlabel('Y Test')
plt.ylabel('Predicted Y')
plt.show()

Compute MAE, MSE, and RMSE:

print('Mean absolute error:', metrics.mean_absolute_error(y_test, dtr_pred))
print('Mean squared error:', metrics.mean_squared_error(y_test, dtr_pred))
print('Root mean squared error:', np.sqrt(metrics.mean_squared_error(y_test, dtr_pred)))
# Mean absolute error: 0.3327904077116327
# Mean squared error: 0.2297983841710415
# Root mean squared error: 0.4793729072142496

Support vector regression model

from sklearn.svm import SVR

svr = SVR(kernel='rbf')
svr.fit(X_train, y_train.ravel())  # ravel() avoids the column-vector warning

# Predict
svr_pred = svr.predict(X_test)
svr_pred = svr_pred.reshape(-1, 1)
plt.figure(figsize=(15, 8))
plt.scatter(y_test, svr_pred, c='red')
plt.xlabel('Y Test')
plt.ylabel('Predicted Y')
plt.show()

Compute MAE, MSE, and RMSE:

print('Mean absolute error:', metrics.mean_absolute_error(y_test, svr_pred))
print('Mean squared error:', metrics.mean_squared_error(y_test, svr_pred))
print('Root mean squared error:', np.sqrt(metrics.mean_squared_error(y_test, svr_pred)))
# Mean absolute error: 0.23401679589999028
# Mean squared error: 0.1899647870349416
# Root mean squared error: 0.43584950044131243

Random forest regression model

from sklearn.ensemble import RandomForestRegressor

rfr = RandomForestRegressor(n_estimators=500, random_state=0)
rfr.fit(X_train, y_train.ravel())  # ravel() avoids the column-vector warning

# Predict
rfr_pred = rfr.predict(X_test)
rfr_pred = rfr_pred.reshape(-1, 1)

# Prediction scatter plot
plt.figure(figsize=(15, 8))
plt.scatter(y_test, rfr_pred, c='orange')
plt.xlabel('Y Test')
plt.ylabel('Predicted Y')
plt.show()

Compute MAE, MSE, and RMSE:

print('Mean absolute error:', metrics.mean_absolute_error(y_test, rfr_pred))
print('Mean squared error:', metrics.mean_squared_error(y_test, rfr_pred))
print('Root mean squared error:', np.sqrt(metrics.mean_squared_error(y_test, rfr_pred)))
# Mean absolute error: 0.23723064460015603
# Mean squared error: 0.15361242417115376
# Root mean squared error: 0.3919342089830304

LightGBM model

import lightgbm as lgb

model_lgb = lgb.LGBMRegressor(objective='regression', num_leaves=5,
                              learning_rate=0.01, n_estimators=3000,
                              max_bin=55, bagging_fraction=0.8,
                              bagging_freq=5, feature_fraction=0.2319,
                              feature_fraction_seed=9, bagging_seed=9,
                              min_data_in_leaf=6, min_sum_hessian_in_leaf=11)
model_lgb.fit(X_train, y_train.ravel())  # ravel() avoids the column-vector warning
lgb_pred = model_lgb.predict(X_test)
lgb_pred = lgb_pred.reshape(-1, 1)
plt.figure(figsize=(15, 8))
plt.scatter(y_test, lgb_pred, c='orange')
plt.xlabel('Y Test')
plt.ylabel('Predicted Y')
plt.show()

Compute MAE, MSE, and RMSE:

print('Mean absolute error:', metrics.mean_absolute_error(y_test, lgb_pred))
print('Mean squared error:', metrics.mean_squared_error(y_test, lgb_pred))
print('Root mean squared error:', np.sqrt(metrics.mean_squared_error(y_test, lgb_pred)))
# Mean absolute error: 0.24608389926907812
# Mean squared error: 0.16035698718036656
# Root mean squared error: 0.40044598534679626

Model selection:

After repeated tests, LightGBM gave the smallest mean squared error, so it is used for the final predictions. (In the single run shown above, gradient boosting actually posted a lower MSE; the scores shift from run to run, which is why a more systematic comparison such as the cross-validation sketch below is useful.)
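Those repeated tests can be made systematic with k-fold cross-validation (a sketch assuming the scaled training arrays from above; not part of the original workflow):

from sklearn.model_selection import cross_val_score

# Negative MSE over 5 folds; values closer to 0 are better.
scores = cross_val_score(model_lgb, X_train, y_train.ravel(),
                         scoring='neg_mean_squared_error', cv=5)
print(scores.mean())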

a = pd.read_csv("D:/kaggle项目数据/House-Prices-advance-regression-techniques/test.csv")
test_id = a['Id']
a = pd.DataFrame(test_id, columns=['Id'])
test = sc_X.transform(test_data)  # transform only: reuse the scaler fitted on the training data
test_prediction_lgbm = model_lgb.predict(test)
test_prediction_lgbm = test_prediction_lgbm.reshape(-1, 1)
test_prediction_lgbm = sc_y.inverse_transform(test_prediction_lgbm)  # back to dollar prices
test_prediction_lgbm = pd.DataFrame(test_prediction_lgbm, columns=['SalePrice'])
result = pd.concat([a, test_prediction_lgbm], axis=1)
result
#         Id      SalePrice
# 0     1461  120406.054237
# 1     1462  146885.188697
# 2     1463  182996.972833
# 3     1464  187575.811284
# 4     1465  195659.740376
# ...    ...            ...
# 1454  2915   74649.079033
# 1455  2916   99345.696717
# 1456  2917  171166.838983
# 1457  2918  123106.813798
# 1458  2919  246580.913084

# Save the result
result.to_csv('submission.csv', index=False)

Uploading the submission to Kaggle gives the final score. [Screenshot of the Kaggle result not preserved]

~~~~ Writing takes effort — if you enjoyed this article, click "upvote" or "like", or follow the author for occasional posts on data analysis and data mining! ~~~~
