I want to use a random forest classifier on imbalanced data, where X is an np.array representing the features and y is an np.array representing the labels (90% 0-values and 10% 1-values). Because I was unsure how stratification works inside cross-validation, I also cross-validated manually with StratifiedKFold to see whether it makes a difference. I expected the results to differ somewhat, but still be similar. Since that is not the case, I suspect I am using one of the methods incorrectly, but I don't understand which one. Here is the code:
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split
from sklearn.metrics import f1_score
rfc = RandomForestClassifier(n_estimators=200,
                             criterion="gini",
                             max_depth=None,
                             min_samples_leaf=1,
                             max_features="auto",
                             random_state=42,
                             class_weight="balanced")
X_train_val, X_test, y_train_val, y_test = train_test_split(X, y, test_size = 0.20, random_state = 42, stratify=y)
I also tried the classifier without the class_weight parameter. From here on I compare the two approaches using the f1 score:
cv = cross_val_score(estimator=rfc,
                     X=X_train_val,
                     y=y_train_val,
                     cv=10,
                     scoring="f1")
print(cv)
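One detail worth noting here: when cross_val_score receives an integer cv and the estimator is a classifier, it internally builds a StratifiedKFold with that many splits and no shuffling, so the comparison above is between unshuffled and shuffled folds. A minimal sketch (using hypothetical synthetic data and a LogisticRegression stand-in, not the data or model above) showing that cv=10 and an explicit unshuffled StratifiedKFold produce identical scores:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Hypothetical imbalanced stand-in data: 90% class 0, 10% class 1,
# deliberately ordered by class to mimic an unshuffled dataset.
rng = np.random.default_rng(42)
Xs = rng.normal(size=(200, 5))
ys = np.array([0] * 180 + [1] * 20)

clf = LogisticRegression(max_iter=1000)

# For a classifier, cv=10 resolves to StratifiedKFold(n_splits=10, shuffle=False).
scores_int = cross_val_score(clf, Xs, ys, cv=10, scoring="f1")
scores_skf = cross_val_score(clf, Xs, ys,
                             cv=StratifiedKFold(n_splits=10), scoring="f1")

print(np.allclose(scores_int, scores_skf))  # identical fold specifications
```

So the manual loop below differs from the cross_val_score call above not in stratification, but in the shuffle=True / random_state=42 arguments.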
The 10 f1 scores from cross-validation are all around 65%. Now the stratified KFold:
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
for train_index, test_index in skf.split(X_train_val, y_train_val):
    X_train, X_val = X_train_val[train_index], X_train_val[test_index]
    y_train, y_val = y_train_val[train_index], y_train_val[test_index]
    rfc.fit(X_train, y_train)
    rfc_predictions = rfc.predict(X_val)
    print("F1-Score: ", round(f1_score(y_val, rfc_predictions), 3))
The 10 f1 scores from StratifiedKFold give me values around 90%. This is where I get confused, because I don't understand the large deviation between the two approaches. If I simply fit the classifier to the training data and apply it to the test data, I also get an f1 score of about 90%, which leads me to believe that the way I am applying cross_val_score is incorrect.
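One way to isolate whether the gap comes from the fold definitions rather than the scoring is to pass the very same shuffled StratifiedKFold instance to cross_val_score: with identical splits and identical random_state, the two pipelines should produce identical f1 scores. A minimal sketch with hypothetical synthetic stand-in data (smaller n_estimators and n_splits just to keep it quick):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Hypothetical stand-in data with the 90/10 class balance described above.
rng = np.random.default_rng(42)
X_train_val = rng.normal(size=(300, 4))
y_train_val = np.array([0] * 270 + [1] * 30)
rng.shuffle(y_train_val)

rfc = RandomForestClassifier(n_estimators=50, random_state=42,
                             class_weight="balanced")
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

# Same splitter object handed to cross_val_score ...
cv_scores = cross_val_score(rfc, X_train_val, y_train_val,
                            cv=skf, scoring="f1")

# ... and used in the manual loop, refitting per fold.
manual_scores = []
for train_idx, val_idx in skf.split(X_train_val, y_train_val):
    rfc.fit(X_train_val[train_idx], y_train_val[train_idx])
    preds = rfc.predict(X_train_val[val_idx])
    manual_scores.append(f1_score(y_train_val[val_idx], preds))

print(np.allclose(cv_scores, manual_scores))  # same folds, same scores
```

If the two sets of scores match under a shared splitter but diverge when cross_val_score is given the integer cv=10, the difference is coming from the fold construction (shuffled vs. unshuffled), not from cross_val_score itself.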