
Principal Component Analysis for Extracting Good Features

Date: 2024-02-23 19:02:18


Principal Component Analysis

PCA, as the name suggests, finds the most important directions in the data and uses them in place of the original data. Concretely, suppose the dataset is n-dimensional and contains m samples (x_1, x_2, …, x_m). We want to reduce these m samples from n dimensions to n' dimensions, such that the resulting n'-dimensional dataset represents the original dataset as well as possible. Going from n down to n' dimensions necessarily loses some information, but we want that loss to be as small as possible.
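The idea above can be sketched with a small SVD-based toy example (a minimal sketch, not from the original; the data and variable names are illustrative). The samples live in n = 3 dimensions but lie close to a plane, so reducing to n' = 2 dimensions loses very little:

```python
import numpy as np

rng = np.random.default_rng(0)
# m = 200 samples in n = 3 dimensions, lying close to a 2-D plane
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 3)) + 0.01 * rng.normal(size=(200, 3))

# center the data, then take the SVD: principal directions are the rows of Vt
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

n_prime = 2                                 # reduce from n = 3 to n' = 2
Z = Xc @ Vt[:n_prime].T                     # the reduced n'-dimensional representation
X_hat = Z @ Vt[:n_prime] + X.mean(axis=0)   # reconstruction from n' dimensions

# the loss from dropping a dimension is tiny because the data is nearly planar
loss = np.mean((X - X_hat) ** 2)
print(loss)
```

Because the discarded direction carries almost no variance, the mean squared reconstruction error stays near the noise level, which is exactly the "small loss" the text describes.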

Adult Income Analysis

Dataset

import pandas as pd

file = "adult/adult.data"
data = pd.read_csv(file, header=None,
                   names=["Age", "Work-Class", "fnlwgt", "Education",
                          "Education-Num", "Marital-Status", "Occupation",
                          "Relationship", "Race", "Sex", "Capital-gain",
                          "Capital-loss", "Hours-per-week", "Native-Country",
                          "Earnings-Raw"])
# drop blank lines
data.dropna(inplace=True)
data.iloc[:5]

data["Hours-per-week"].describe()

data["Education-Num"].median()

data["Work-Class"].unique()

data["LongHours"] = data["Hours-per-week"] > 40
data[:5]

Selecting Good Features

# X and y are assumed to be a non-negative numeric feature matrix and a binary
# label built from data above (illustrative, not shown in the original), e.g.:
# X = data[["Age", "Education-Num", "Capital-gain", "Capital-loss", "Hours-per-week"]].values
# y = (data["Earnings-Raw"] == " >50K").values
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2

# choose the three best features
transformer = SelectKBest(score_func=chi2, k=3)
Xt_chi2 = transformer.fit_transform(X, y)
print(Xt_chi2[:5])

# the highest scores are Age, Capital-gain, Capital-loss
print(transformer.scores_)

from scipy.stats import pearsonr

def multivariate_pearsonr(X, y):
    # return per-column Pearson correlations and p-values, in the
    # (scores, p_values) shape that SelectKBest expects
    scores, p_values = [], []
    for column in range(X.shape[1]):
        cur_score, cur_p = pearsonr(X[:, column], y)
        scores.append(cur_score)
        p_values.append(cur_p)
    return (np.array(scores), np.array(p_values))

transformer = SelectKBest(score_func=multivariate_pearsonr, k=3)
Xt_pearson = transformer.fit_transform(X, y)
print(Xt_pearson[:5])
# selected: Age, Education-Num, Hours-per-week
print(transformer.scores_)

Model Building and Evaluation

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score  # sklearn.cross_validation was removed in modern scikit-learn

clf = DecisionTreeClassifier(random_state=14)
scores_chi2 = cross_val_score(clf, Xt_chi2, y, scoring="accuracy")
scores_pearson = cross_val_score(clf, Xt_pearson, y, scoring="accuracy")
print("Chi2 performance {0:.1f}%".format(np.mean(scores_chi2) * 100))
print("Pearson performance {0:.1f}%".format(np.mean(scores_pearson) * 100))

The final chi2 accuracy is 0.85, while the Pearson accuracy is 0.75.
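One thing to keep in mind when writing a custom score function like `multivariate_pearsonr` (a hedged aside, not from the original): SelectKBest ranks features by the raw score, so a feature with a strong *negative* Pearson correlation is ranked below weakly positive ones unless the score function returns absolute values. A toy illustration with synthetic data (all names are illustrative):

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_selection import SelectKBest

rng = np.random.default_rng(42)
y = rng.integers(0, 2, size=300)
X = np.column_stack([
    y + 0.1 * rng.normal(size=300),    # strong positive correlation with y
    -y + 0.1 * rng.normal(size=300),   # strong negative correlation with y
    rng.normal(size=300),              # pure noise
])

def signed_pearsonr(X, y):
    # signed correlations, like multivariate_pearsonr in the text
    scores = np.array([pearsonr(X[:, c], y)[0] for c in range(X.shape[1])])
    return scores, np.zeros(X.shape[1])

def abs_pearsonr(X, y):
    # same scores, but ranked by magnitude
    scores, p = signed_pearsonr(X, y)
    return np.abs(scores), p

# with signed scores, the strongly negative feature is ranked last
signed = SelectKBest(score_func=signed_pearsonr, k=1).fit(X, y)
absed = SelectKBest(score_func=abs_pearsonr, k=2).fit(X, y)
print(signed.get_support())
print(absed.get_support())
```

With signed scores and k=1 only the positively correlated column survives, while the absolute-value variant with k=2 keeps both informative columns and drops the noise.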

Website Ad Analysis

Dataset

import numpy as np
import pandas as pd
from collections import defaultdict

file = "ad-dataset/ad.data"

def convert_num(x):
    try:
        return float(x)
    except ValueError:
        return np.nan

# every feature column is parsed as a float (or NaN)...
converters = defaultdict(convert_num)
for column in range(1558):
    converters[column] = convert_num
# ...and the final column is the label: "ad." -> 1.0, otherwise 0.0
converters[1558] = lambda x: 1. if x.strip() == "ad." else 0.

ads = pd.read_csv(file, header=None, converters=converters)
print(ads.shape)
ads.dropna(inplace=True)
print(ads.shape)
print(ads[:5])

Selecting Features with PCA

X = ads.drop(1558, axis=1).values
y = ads[1558]
print(X.shape, y.shape)

from sklearn.decomposition import PCA

np.set_printoptions(precision=3, suppress=True)  # set before printing so it takes effect
pca = PCA(n_components=5)
Xd = pca.fit_transform(X)
print(Xd[:5])
pca.explained_variance_ratio_

Model Building and Evaluation

from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

clf = DecisionTreeClassifier(random_state=14)
scores_pca = cross_val_score(clf, Xd, y, scoring="accuracy")
print("The accuracy is {0:.4f}".format(np.mean(scores_pca)))

%matplotlib inline
from matplotlib import pyplot as plt

# only two classes: ad or not
classes = set(y)
colors = ["red", "green"]
for cur_class, color in zip(classes, colors):
    mask = (y == cur_class).values
    plt.scatter(Xd[mask, 0], Xd[mask, 1], marker="o", color=color, label=int(cur_class))
plt.legend()
plt.show()

The final accuracy is 0.9326.
