Course Introduction
Course: Python3入门机器学习 经典算法与应用 入行人工智能 (Python 3 Machine Learning: Classic Algorithms and Applications)
Lectures: 4-5, 4-6
Instructor: liuyubobobo
Overview
- Part 1: Choosing hyperparameters
- Part 2: Grid search
Course Notes
Part 1: Choosing hyperparameters
Hyperparameters: parameters that must be decided before the learning algorithm runs, such as k in KNN.
Model parameters: parameters learned from the data during training.
How do we find good hyperparameters?
- Domain knowledge
- Empirical values (rules of thumb)
- Experimental search (demonstrated below)
Exploring this with sklearn:
# The usual setup: load the digits dataset, split it, and fit a KNN classifier
import numpy as np
from sklearn import datasets
digits = datasets.load_digits()
X = digits.data
y = digits.target
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,random_state=666)
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier(n_neighbors=3)
knn_clf.fit(X_train, y_train)
knn_clf.score(X_test, y_test)
The hyperparameter k
This is simply the k value passed in at construction time (n_neighbors above). We can search a range of candidates and keep the best one:
best_score = 0.0
best_k = -1
for k in range(1, 11):
    knn_clf = KNeighborsClassifier(n_neighbors=k)
    knn_clf.fit(X_train, y_train)
    score = knn_clf.score(X_test, y_test)
    if score > best_score:
        best_k = k
        best_score = score
print('best_k =', best_k)
print('best_score =', best_score)
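Note: if best_k lands on the boundary of the search range (10 here), it is worth extending the range and searching again, since a better k may lie just outside it.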
The hyperparameter weights
sklearn's KNN has this built in: with weights="distance", each neighbor's vote is weighted by the inverse of its distance, so closer neighbors count for more; the default "uniform" gives every neighbor an equal vote.
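To make this concrete, here is a minimal sketch of inverse-distance-weighted voting (a toy illustration with made-up neighbors, not sklearn's internals):
from collections import Counter

# hypothetical 3 nearest neighbors of a query point, as (distance, label) pairs
neighbors = [(0.5, 'a'), (1.0, 'b'), (2.0, 'b')]

votes = Counter()
for dist, label in neighbors:
    votes[label] += 1.0 / dist  # each neighbor's vote is weighted by inverse distance

# 'a' wins with weight 2.0 against 1.5 for 'b', despite being outnumbered 2 to 1
print(votes.most_common(1)[0][0])
A side benefit: with plain majority voting an even k can produce ties, while distance weights almost always break them.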
best_method = ""
best_score = 0.0
best_k = -1
for method in ["uniform", "distance"]:
    for k in range(1, 11):
        knn_clf = KNeighborsClassifier(n_neighbors=k, weights=method)
        knn_clf.fit(X_train, y_train)
        score = knn_clf.score(X_test, y_test)
        if score > best_score:
            best_k = k
            best_score = score
            best_method = method
print('best_k =', best_k)
print('best_score =', best_score)
print('best_method =', best_method)
The p of the Minkowski distance
The Minkowski distance is (sum_i |a_i - b_i|^p)^(1/p); Euclidean distance is the special case p = 2, and Manhattan distance is p = 1. sklearn's KNN exposes this p as a hyperparameter.
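A quick sanity check of the formula (the helper minkowski_distance is a hypothetical illustration, not course code):
import numpy as np

def minkowski_distance(a, b, p):
    # (sum_i |a_i - b_i|^p) ** (1/p)
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

a = np.array([0.0, 0.0])
b = np.array([3.0, 4.0])
print(minkowski_distance(a, b, 1))  # 7.0: Manhattan distance
print(minkowski_distance(a, b, 2))  # 5.0: Euclidean distance (the 3-4-5 triangle)
The search below fixes weights="distance" and sweeps p from 1 to 5: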
%%time
best_p = -1
best_score = 0.0
best_k = -1
for k in range(1, 11):
    for p in range(1, 6):
        knn_clf = KNeighborsClassifier(n_neighbors=k, weights="distance", p=p)
        knn_clf.fit(X_train, y_train)
        score = knn_clf.score(X_test, y_test)
        if score > best_score:
            best_k = k
            best_score = score
            best_p = p
print('best_k =', best_k)
print('best_score =', best_score)
print('best_p =', best_p)
Part 2: Grid search
Hyperparameters can interact with one another, so how can a single run find the best combination of them? sklearn already wraps this up: GridSearchCV searches over a grid of hyperparameter values directly.
(Setup is the same as in Part 1: load the digits dataset and split it into X_train, X_test, y_train and y_test.)
# First define the parameter grid: a list of dicts,
# each dict describing one group of parameter combinations to search.
param_grid = [
    {
        'weights': ['uniform'],
        'n_neighbors': [i for i in range(3, 11)]
    },
    {
        'weights': ['distance'],
        'n_neighbors': [i for i in range(1, 11)],
        'p': [i for i in range(1, 6)]
    }
]
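The first group contains 1 × 8 = 8 candidate combinations and the second 1 × 10 × 5 = 50, so the search below evaluates 58 combinations in total, each scored by cross-validation on the training data.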
knn_clf = KNeighborsClassifier()
from sklearn.model_selection import GridSearchCV
grid_search = GridSearchCV(knn_clf, param_grid)
%%time
grid_search.fit(X_train, y_train)
knn_clf = grid_search.best_estimator_
knn_clf.score(X_test,y_test)
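Besides best_estimator_, GridSearchCV also exposes the winning configuration and its score directly (standard sklearn attributes):
print(grid_search.best_params_)  # the best hyperparameter combination found
print(grid_search.best_score_)   # mean cross-validated score of that combination
Note that best_score_ comes from cross-validation on the training data, so it usually differs slightly from the test-set score above.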
%%time
# n_jobs: how many CPU cores to use for the computation (-1 means all available cores)
# verbose: pass a number to control how much progress output is printed; larger values print more
grid_search = GridSearchCV(knn_clf, param_grid, n_jobs=-1, verbose=2)
grid_search.fit(X_train, y_train)
Reflections
The sheer variety of hyperparameters can be bewildering, so the most important thing is to sort out which ones matter most; for KNN, I think k carries the greatest weight for accuracy. Another issue worth attention is overfitting, which is still somewhat unclear to me.
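(One connection worth drawing: the manual loops in Part 1 picked hyperparameters by maximizing the test-set score, which risks overfitting the hyperparameters to that particular test set; GridSearchCV avoids this by scoring each combination with cross-validation inside the training data.)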