1. How the Algorithm Works
The algorithm first iterates over every value of every feature. For each feature value, it counts how often that value appears in each class, identifies the class in which it appears most often, and records how often it appears in the remaining classes.
For example, suppose one feature in the dataset can take the value 0 or 1, and the dataset has three classes. Among the individuals whose feature value is 0, class A contains 20 such individuals, class B contains 60, and class C contains 20. An individual with feature value 0 is therefore most likely to belong to class B, but 40 individuals with feature value 0 do not belong to B. Assigning every individual with feature value 0 to class B thus has an error rate of 40%, since 40 of those 100 individuals actually belong to A or C. The computation for feature value 1 is analogous, as is the computation of the most likely class and error rate for every other feature value.
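The per-value counting step above can be sketched directly. The labels list below is hypothetical, constructed to match the counts in the example (20 in A, 60 in B, 20 in C for feature value 0):

```python
from collections import Counter

# Hypothetical class labels of the individuals whose feature value is 0,
# matching the counts in the text: 20 in A, 60 in B, 20 in C.
labels = ['A'] * 20 + ['B'] * 60 + ['C'] * 20

counts = Counter(labels)
most_frequent_class, majority_count = counts.most_common(1)[0]

# Every individual with this feature value that is NOT in the majority
# class counts as an error.
error = sum(counts.values()) - majority_count
error_rate = error / sum(counts.values())

print(most_frequent_class)  # B
print(error)                # 40
print(error_rate)           # 0.4
```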
After counting every feature value's occurrences in each class, we compute each feature's total error by summing the errors of its individual values. The feature with the lowest total error is chosen as the single classification rule (One Rule, hence OneR) used for all subsequent predictions.
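The feature-selection step reduces to a sum and an argmin. A minimal sketch on made-up error counts (the feature names and numbers here are purely illustrative):

```python
# Illustrative per-value error counts for two hypothetical features.
# For each feature, the OneR error is the sum of its values' errors.
errors_by_feature = {
    'feature_0': {0: 40, 1: 10},   # total error 50
    'feature_1': {0: 25, 1: 35},   # total error 60
}

# Sum the per-value errors for each feature
total_errors = {f: sum(v.values()) for f, v in errors_by_feature.items()}

# The feature with the lowest total error becomes the single rule
best_feature = min(total_errors, key=total_errors.get)
print(best_feature, total_errors[best_feature])  # feature_0 50
```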
2. Source Code
from collections import defaultdict
from operator import itemgetter
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
dataset = load_iris()
X = dataset.data
y = dataset.target
attribute_means = X.mean(axis=0)  # column-wise (per-feature) means
# Discretize: values below the feature mean become 0, values at or above it become 1
X_d = np.array(X >= attribute_means, dtype='int')
def train_feature_value(X, y_true, feature_index, value):
    # Count how often each class occurs among samples with this feature value
    class_counts = defaultdict(int)
    for sample, y in zip(X, y_true):
        if sample[feature_index] == value:
            class_counts[y] += 1
    # The most frequent class becomes the prediction for this feature value
    sorted_class_counts = sorted(class_counts.items(), key=itemgetter(1), reverse=True)
    most_frequent_class = sorted_class_counts[0][0]
    # Every sample with this value that belongs to another class is an error
    incorrect_predictions = [class_count for class_value, class_count in class_counts.items()
                            if class_value != most_frequent_class]
    error = sum(incorrect_predictions)
    return most_frequent_class, error
def train_on_feature(X, y_true, feature_index):
    # Build a predictor (feature value -> class) and the total error for one feature
    values = set(X[:, feature_index])
    predictors = {}
    errors = []
    for current_value in values:
        most_frequent_class, error = train_feature_value(X, y_true, feature_index, current_value)
        predictors[current_value] = most_frequent_class
        errors.append(error)
    total_error = sum(errors)
    return predictors, total_error
# random_state controls the random split; fixing it to a constant value
# makes every run produce the same split (and hence the same result)
Xd_train, Xd_test, y_train, y_test = train_test_split(X_d, y, random_state=None)
all_predictors = {}
errors = {}
for feature_index in range(Xd_train.shape[1]):
    predictors, total_error = train_on_feature(Xd_train, y_train, feature_index)
    all_predictors[feature_index] = predictors
    errors[feature_index] = total_error
# Pick the feature with the lowest total error as the One Rule
best_feature, best_error = sorted(errors.items(), key=itemgetter(1))[0]
model = {'feature': best_feature, 'predictor': all_predictors[best_feature]}
def predict(X_test, model):
    # Look up each test sample's value of the chosen feature in the predictor
    variable = model['feature']
    predictor = model['predictor']
    y_predicted = np.array([predictor[int(sample[variable])] for sample in X_test])
    return y_predicted
y_predicted = predict(Xd_test, model)
# Accuracy: fraction of test samples predicted correctly, as a percentage
accuracy = np.mean(y_predicted == y_test) * 100
print("The test accuracy is {:.1f}%".format(accuracy))
3. Run and Results
$ python3 example.py
The test accuracy is 63.2%