Running Bayesian-optimization-based hyperparameter tuning for XGBoost in Python produced the following result:
It raised this error:
Traceback (most recent call last):
  ...
    suggestion = acq_max(
  File "/usr/local/python3/lib/python3.8/site-packages/bayes_opt/util.py", line 65, in acq_max
    if max_acq is None or -res.fun[0] >= max_acq:
TypeError: 'float' object is not subscriptable
The relevant code is as follows:
def _xgb_logistic_evaluate(max_depth, subsample, gamma, colsample_bytree, min_child_weight):
    import xgboost as xgb
    params = {
        'objective': 'binary:logistic',  # binary classification via logistic regression
        'eval_metric': 'auc',
        'max_depth': int(max_depth),
        'subsample': subsample,  # 0.8
        'eta': 0.3,
        'gamma': gamma,
        'colsample_bytree': colsample_bytree,
        'min_child_weight': min_child_weight
    }
    cv_result = xgb.cv(params, self.dtrain, num_boost_round=30, nfold=5)
    return 1.0 * cv_result['test-auc-mean'].iloc[-1]

def evaluate(self, bo_f, pbounds, init_points, n_iter):
    bo = BayesianOptimization(
        f=bo_f,            # objective function
        pbounds=pbounds,   # search space
        verbose=2,         # 2: print everything; 1: print only new maxima found; 0: print nothing
        random_state=1,
    )
    bo.maximize(
        init_points=init_points,  # number of random exploration steps
        n_iter=n_iter,            # number of Bayesian optimization iterations
        acq='ei'
    )
    print(bo.max)
    res = bo.max
    params_max = res['params']
    return params_max
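Note that the optimizer samples every entry of `pbounds` as a float, which is why the objective casts `max_depth` with `int()`. A minimal sketch of a matching search space; the bounds below are illustrative assumptions, not the values used in the original run:

```python
# Hypothetical search space for the objective above; the ranges are
# illustrative, not the ones actually used in the original tuning run.
pbounds = {
    'max_depth': (3, 10),
    'subsample': (0.5, 1.0),
    'gamma': (0.0, 5.0),
    'colsample_bytree': (0.5, 1.0),
    'min_child_weight': (1, 10),
}

# The optimizer samples floats even for integer-valued parameters,
# hence the int(max_depth) cast inside the objective function.
sampled_max_depth = 6.73
print(int(sampled_max_depth))  # 6
```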
The explanation given on Stack Overflow:
This is related to a change in scipy 1.8.0: one should use -np.squeeze(res.fun) instead of -res.fun[0].
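The shape change can be demonstrated without scipy: `np.squeeze` collapses a 1-element array to a scalar and accepts a plain float as well, so negating its result works for both return shapes. This is a sketch of the idea, not the actual bayes_opt patch:

```python
import numpy as np

# Before scipy 1.8, minimize() could return res.fun as a 1-element array;
# from 1.8 on it can be a plain float, so res.fun[0] raises
# TypeError: 'float' object is not subscriptable.
fun_pre_18 = np.array([0.5])
fun_post_18 = 0.5

# np.squeeze handles both shapes, which is why the fix replaces
# -res.fun[0] with -np.squeeze(res.fun).
print(-np.squeeze(fun_pre_18))   # -0.5
print(-np.squeeze(fun_post_18))  # -0.5
```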
/fmfn/BayesianOptimization/issues/300
The comments in the bug report indicate that reverting to scipy 1.7.0 fixes this.
UPDATE: the fix has been merged into the BayesianOptimization package, but the new maintainer is unable to push a release to PyPI (/fmfn/BayesianOptimization/issues/300#issuecomment-1146903850).
Therefore, uninstall the current scipy 1.8.1 and roll back to scipy 1.7.0:
[root@DeepLearning bin]# pip3 uninstall scipy
......
Successfully uninstalled scipy-1.8.1
[root@DeepLearning bin]# pip3 install -i https://pypi.tuna./simple scipy==1.7
Successfully installed scipy-1.7.0
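If pinning scipy is the chosen workaround, a small runtime guard can catch an accidental upgrade before the tuner crashes mid-run. This is an illustrative sketch; the helper names here are made up, not part of any library:

```python
def version_tuple(v):
    # '1.8.1' -> (1, 8, 1); non-numeric suffixes are ignored for this check
    return tuple(int(p) for p in v.split('.')[:3] if p.isdigit())

def scipy_is_affected(version):
    # scipy >= 1.8.0 changed minimize() so that res.fun can be a plain
    # float, which breaks bayes_opt releases that index it as res.fun[0]
    return version_tuple(version) >= (1, 8, 0)

print(scipy_is_affected('1.7.0'))  # False
print(scipy_is_affected('1.8.1'))  # True
```

Placed at the top of the tuning script (e.g. `scipy_is_affected(scipy.__version__)`), this makes the incompatibility fail fast with a clear message instead of the obscure TypeError.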
After the rollback, the Bayesian optimization tuning program ran successfully.
References:
seul233. "TypeError: 'float' object is not subscriptable when using Bayesian optimization with a random forest in Python". CSDN blog. .03
Stack Overflow: /questions/71460894/bayesianoptimization-fails-due-to-float-error