
Python Machine Learning: Polynomial Regression and Model Generalization 006 - Validation Sets and Cross-Validation

Posted: 2022-04-30 20:58:15


Cross-validation

Load the dataset and split it with train_test_split

# Cross-validation
import numpy as np
from sklearn import datasets

digits = datasets.load_digits()
X = digits.data
y = digits.target

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=666)
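A quick sanity check of the split (the digits set holds 1,797 images of 8×8 pixels, i.e. 64 features per sample; test_size=0.4 keeps roughly 60% of the samples for training):

print(X.shape)        # (1797, 64)
print(X_train.shape)  # about 60% of the samples
print(X_test.shape)   # about 40% of the samples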

# More on the definition of distance:
# search for the best p of the Minkowski distance
from sklearn.neighbors import KNeighborsClassifier
import time

best_score = 0.0
best_k = 0
best_p = 0
start = time.time()
for k in range(2, 11):
    for p in range(1, 6):
        knn_clf = KNeighborsClassifier(n_neighbors=k, weights="distance", p=p)
        knn_clf.fit(X_train, y_train)
        score = knn_clf.score(X_test, y_test)
        if score > best_score:
            best_k = k
            best_p = p
            best_score = score
print('best_p=', best_p)
print('best_k=', best_k)
print('best_score=', best_score)
runtime = time.time() - start
print(runtime)

best_p= 4
best_k= 3
best_score= 0.9860917941585535
25.07357954978943

Using cross-validation

Selecting k and p by the test-set score above effectively tunes the hyperparameters against the test set, so that score can be over-optimistic. Cross-validation instead scores each candidate on held-out folds of the training data and keeps the test set untouched until the very end.

# Use cross-validation
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_val_predict

knn_clf = KNeighborsClassifier()
print(cross_val_score(knn_clf, X_train, y_train))
# print(cross_val_predict(knn_clf, X_train, y_train))

[0.99537037 0.98148148 0.97685185 0.97674419 0.97209302]
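The five numbers are the accuracies on the five validation folds. As a rough sketch of what cross_val_score does here, assuming recent scikit-learn's default of 5 stratified folds for classifiers (older versions defaulted to 3):

# Manual equivalent of cross_val_score for a classifier (assumption:
# default StratifiedKFold with 5 splits, no shuffling).
from sklearn.model_selection import StratifiedKFold

skf = StratifiedKFold(n_splits=5)
fold_scores = []
for train_idx, val_idx in skf.split(X_train, y_train):
    clf = KNeighborsClassifier()
    clf.fit(X_train[train_idx], y_train[train_idx])          # fit on 4 folds
    fold_scores.append(clf.score(X_train[val_idx], y_train[val_idx]))  # score on the held-out fold
print(fold_scores, np.mean(fold_scores))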

best_score = 0
best_k = 0
best_p = 0
for k in range(2, 11):
    for p in range(1, 6):
        knn_clf = KNeighborsClassifier(n_neighbors=k, weights="distance", p=p)
        knn_clf.fit(X_train, y_train)
        scores = cross_val_score(knn_clf, X_train, y_train)
        score = np.mean(scores)
        if score > best_score:
            best_k = k
            best_p = p
            best_score = score
print('best_p=', best_p)
print('best_k=', best_k)
print('best_score=', best_score)

The resulting parameters:

best_p= 2
best_k= 2
best_score= 0.9851507321274763

Using the best parameters

The mean cross-validated score is slightly lower than the earlier test-set score, which is expected: it was not obtained by tuning against the test set. Now train the model with the cross-validated parameters and evaluate it once on the test set:

best_knn_clf = KNeighborsClassifier(weights='distance', n_neighbors=2, p=2)
best_knn_clf.fit(X_train, y_train)
print(best_knn_clf.score(X_test, y_test))

0.980528511821975

# Revisiting grid search (Grid Search):
# define the set of parameters to search over
from sklearn.model_selection import GridSearchCV

param_grid = [{
    'weights': ['distance'],
    'n_neighbors': [i for i in range(2, 11)],
    'p': [i for i in range(1, 6)]
}]
grid_search = GridSearchCV(knn_clf, param_grid, verbose=1, cv=3)  # the grid-search object
grid_search.fit(X_train, y_train)

Fitting 3 folds for each of 45 candidates, totalling 135 fits
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 135 out of 135 | elapsed: 26.6s finished

GridSearchCV(cv=3,
             estimator=KNeighborsClassifier(n_neighbors=10, p=5, weights='distance'),
             param_grid=[{'n_neighbors': [2, 3, 4, 5, 6, 7, 8, 9, 10],
                          'p': [1, 2, 3, 4, 5], 'weights': ['distance']}],
             verbose=1)
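The sequential run above took about 26.6 s. GridSearchCV also accepts an n_jobs parameter to spread the candidate fits over CPU cores; a minimal variant, not part of the original run:

# Same search, parallelized over all available cores (n_jobs=-1);
# results are identical, only the wall-clock time changes.
grid_search = GridSearchCV(knn_clf, param_grid, n_jobs=-1, verbose=1, cv=3)
grid_search.fit(X_train, y_train)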

print(grid_search.best_score_)

0.9833023831631073

print(grid_search.best_params_)

{'n_neighbors': 2, 'p': 2, 'weights': 'distance'}

print(grid_search.best_estimator_)

KNeighborsClassifier(n_neighbors=2, weights='distance')

best_knn_clf = grid_search.best_estimator_
print(best_knn_clf.score(X_test, y_test))

0.980528511821975

print(cross_val_score(best_knn_clf, X_train, y_train, cv=5))
print(cross_val_score(knn_clf, X_train, y_train, cv=5))

[0.99074074 0.98148148 0.99074074 0.97674419 0.98604651]
[0.99537037 0.96759259 0.98611111 0.95813953 0.97674419]
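cross_val_predict, imported but left commented out earlier, returns an out-of-fold prediction for every training sample; together with a confusion matrix it shows which digits the tuned model still confuses. A minimal sketch, not part of the original output:

# Each training sample is predicted by a model that never saw it during fitting.
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

y_cv_pred = cross_val_predict(best_knn_clf, X_train, y_train, cv=5)
print(confusion_matrix(y_train, y_cv_pred))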
