Mind Map Community: A Fast Introduction to Machine Learning with scikit-learn
A fast introduction to machine learning in Python: get started with the scikit-learn library while writing almost no code, with detailed explanations of the machine learning algorithms and code for classic examples.
Edited 2020-12-27 23:11:00 | Machine Learning | Python in Practice
Getting Started
1. First Steps in Machine Learning
Pitfalls when learning machine learning
Focusing exclusively on Python
Sinking into machine learning theory
Staying detached from real machine learning projects
What is machine learning?
Supervised learning (the focus of this book): learn a target function that predicts outcomes; the training set contains both inputs and outputs
Unsupervised learning (inductive learning): e.g. build k-means cluster centers to group the data
Machine learning in Python
A predictive-modeling machine learning project has six basic steps
Define the problem
Understand the data
Prepare the data
Evaluate algorithms
Improve the model
Deploy the results
Principles for learning machine learning
Work through small projects that implement the algorithms; the positive feedback reinforces what you learn
Tips for learning machine learning
What this book does not cover
Notes on the code
2. The Python Machine Learning Ecosystem
Python
An object-oriented, dynamically typed, interpreted language
SciPy
A Python library stack for mathematics and engineering; builds on the array library NumPy, the 2D plotting library Matplotlib, and the data-analysis library Pandas
scikit-learn
A deliberately conservative, tightly scoped library for classification, regression, clustering, dimensionality reduction, model selection, and data preprocessing
Setting up the environment
Install Python
Install SciPy
Install scikit-learn
A more convenient way to install
Setting up the environment with Anaconda
# Check that the supporting libraries are installed
import scipy, numpy, matplotlib, pandas, sklearn

def print_hi():
    print('scipy:{}'.format(scipy.__version__))
    print('numpy:{}'.format(numpy.__version__))
    print('matplotlib:{}'.format(matplotlib.__version__))
    print('pandas:{}'.format(pandas.__version__))
    print('sklearn:{}'.format(sklearn.__version__))

if __name__ == '__main__':
    print_hi()
3. Your First Machine Learning Project
The "Hello World" project of machine learning
The Iris flower multiclass classification problem
Import the data
Import the libraries
Load the dataset
Summarize the data
Dimensions of the data
Peek at the data itself
Statistical summary
Class distribution
When classes are imbalanced, model accuracy suffers; rebalance the samples (over-/under-sampling)
Visualize the data
Univariate plots
Multivariate plots
Evaluate algorithms
Split out a validation dataset
80% training data + 20% validation data
Evaluation method
10-fold cross-validation on the training set: 9 folds train the model, 1 fold evaluates it
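The fold rotation described above can be sketched in plain Python (a minimal illustration of the idea only; the helper name kfold_indices is ours, and in practice scikit-learn's KFold does this):

```python
def kfold_indices(n_samples, n_splits):
    """Yield (train_idx, test_idx) pairs: each fold is the test set exactly once."""
    fold_sizes = [n_samples // n_splits + (1 if i < n_samples % n_splits else 0)
                  for i in range(n_splits)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))          # the held-out fold
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size

# 10 samples, 5 folds: every sample appears in exactly one test fold
folds = list(kfold_indices(10, 5))
```

Averaging the model's score over the test folds gives the cross-validated estimate.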
Build models
Linear algorithms (LR and LDA)
Nonlinear algorithms (KNN, CART, NB, and SVM)
Select the best model
sklearn.model_selection.cross_val_score()
Make predictions
knn.fit(X_train, Y_train)
predictions = knn.predict(X_validation)
4. A Crash Course in Python and SciPy
Python crash course
Basic data types and assignment
Control flow
Compound data types
Functions
The with statement
Simplifies exception handling; the object must support the context-management protocol
NumPy crash course
Creating arrays
Accessing data
Arithmetic operations
Matplotlib crash course
Line plots
Fill plot() with ndarray data, set xlabel and ylabel, call show() to draw
Scatter plots
Fill scatter() with ndarray data, set xlabel and ylabel, call show() to draw
Pandas crash course
Series: a one-dimensional array with labels, analogous to numpy.ndarray and Python's list
DataFrame: a two-dimensional array with row and column labels
Understanding the Data
5. Importing Data
CSV files
Header row
Comments in the file
#
Delimiter
,
Quoting
None
The Pima Indians dataset
Load the data with the Python standard library
Load the data with NumPy
Load the data with Pandas
read_csv() returns a DataFrame on import
6. Understanding the Data
Take a quick look at the data
Dimensions of the data
Attribute types
Descriptive statistics
Class distribution (for classification problems)
read_csv(...).groupby('class_attribute').size(): check whether the classes have roughly equal counts
Correlations between attributes
read_csv(...).corr(method='pearson') computes the pairwise Pearson correlation of the attributes; drop one of any pair whose correlation is close to 1 to preserve the performance of linear and logistic regression
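The coefficient that corr(method='pearson') reports can be computed by hand for two columns; a minimal sketch (the function name pearson is ours):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))       # covariance numerator
    sx = sqrt(sum((a - mx) ** 2 for a in x))                   # spread of x
    sy = sqrt(sum((b - my) ** 2 for b in y))                   # spread of y
    return cov / (sx * sy)

# perfectly linearly related attributes have correlation ≈ 1
r = pearson([1, 2, 3, 4], [2, 4, 6, 8])  # ≈ 1.0
```

A value near ±1 flags a redundant attribute pair; a value near 0 means no linear relationship.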
Distribution analysis
read_csv(...).skew(): skew away from a Gaussian distribution; 0 means no skew
7. Data Visualization
Single-variable charts
Histograms
Histogram of each attribute: read_csv(...).hist(); matplotlib.pyplot.show()
Density plots
read_csv(...).plot(kind='density')
Box-and-whisker plots
read_csv(...).plot(kind='box')
Multi-variable charts
Correlation matrix plot
Pairwise attribute correlations: plt.figure().add_subplot(111).matshow(read_csv(...).corr(), vmin=-1, vmax=1)
Scatter-plot matrix
Pairwise attribute relationships and distributions, useful for multiple linear regression: pandas.plotting.scatter_matrix(read_csv(...))
Preparing the Data
8. Data Preprocessing
Why preprocess data?
Format the data
Load the data
Arrange the data into the algorithm's inputs and outputs
Transform the input data
scikit-learn
fit and multiple transform
combined fit-and-transform
Plot and summarize
Summarize and display how the data changed
Rescale the data
Nominal scale
Ordinal scale
Interval scale
Ratio scale
MinMaxScaler rescales and standardizes the attributes, improving the accuracy of distance-based algorithms such as k-nearest neighbors
Load the data
Split the data into inputs and outputs
Transform the data: MinMaxScaler().fit_transform()
Set the print format
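What MinMaxScaler does to one column can be sketched as follows (a minimal illustration assuming the default target range [0, 1]; the function name min_max_scale is ours):

```python
def min_max_scale(values, feature_range=(0, 1)):
    """Linearly rescale values so min maps to range[0] and max to range[1]."""
    lo, hi = feature_range
    vmin, vmax = min(values), max(values)
    span = vmax - vmin
    return [lo + (v - vmin) * (hi - lo) / span for v in values]

scaled = min_max_scale([2, 4, 6, 10])  # → [0.0, 0.25, 0.5, 1.0]
```

Because the result depends on the column's min and max, the scaler must be fit on the training data and only applied to new data.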
Standardize the data
StandardScaler().fit(...).transform(...)
Always fit_transform(trainData) first, then transform(testData). Calling transform(testData) without a prior fit raises an error. Calling fit_transform(testData) after fit_transform(trainData), instead of transform(testData), still scales the data, but the two results are not on the same "scale" and differ noticeably — always avoid this.
fit_transform() = fit() followed by transform()
Normalize the data
Normalizer().fit(...).transform(...)
Rescales each row (sample) to unit length; suited to sparse data, and improves neural networks with weighted inputs and k-nearest neighbors
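What Normalizer does row by row — rescaling each sample vector to length 1 — can be sketched as (a minimal illustration; the function name normalize_row is ours, and Normalizer's default L2 norm is assumed):

```python
from math import sqrt

def normalize_row(row):
    """Scale one sample so its L2 (Euclidean) length becomes 1."""
    norm = sqrt(sum(v * v for v in row))
    return [v / norm for v in row]

unit = normalize_row([3.0, 4.0])  # → [0.6, 0.8], a vector of length 1
```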
Binarize the data
Binarizer(threshold=0.0).fit(X).transform(X)
Uses threshold to convert the data to 0s and 1s
9. Feature Selection
Feature selection
Reduces overfitting
Improves accuracy
Reduces training time
Univariate feature selection
Chi-squared test: the SelectKBest class with SelectKBest(score_func=chi2)
Recursive feature elimination
Example: recursive feature elimination (RFE) with logistic regression as the base model, selecting the 3 features with the greatest influence on the prediction: RFE(LogisticRegression(), n_features_to_select=3)
Principal component analysis
Dimensionality reduction includes: principal component analysis (PCA, uses linear algebra to transform and compress the data; unsupervised): PCA(n_components=3); and linear discriminant analysis (LDA, a classification model; supervised)
Feature importance
Bagged decision trees, random forests, extra trees, and similar ensembles can score each feature:
fit = ExtraTreesClassifier().fit(X, Y)
print(fit.feature_importances_)
Selecting a Model
10. Evaluating Algorithms
Ways to evaluate an algorithm
Split into training and validation sets, commonly 2:1
K-fold cross-validation
Leave-one-out cross-validation
Repeated random train/validation splits
Train/validation split
X_train:X_test = 0.67:0.33
train_test_split(X, Y, test_size=0.33, random_state=seed)
K-fold cross-validation (the main method)
Split the data into K groups (K is usually 3, 5, or 10); each group in turn is the validation set while the remaining K-1 groups train the model; the mean validation accuracy of the K models is the cross-validated performance measure.
kfold = KFold(n_splits=10, random_state=seed_K)
model = LogisticRegression()
result_K = cross_val_score(model, X, Y, cv=kfold)
Leave-one-out cross-validation
K-fold cross-validation with K = N, the number of samples: each sample in turn is its own validation set while the other N-1 samples train one of N models; the mean accuracy of the N models is the performance measure (cost grows with the number of samples).
model = LogisticRegression()
result_leaveoneout = cross_val_score(model, X, Y, cv=LeaveOneOut())
Repeated random train/validation splits
Repeat a random split n_splits times (cost grows with the number of repeats); scikit-learn's ShuffleSplit performs this.
model_K = LogisticRegression()
result_K = cross_val_score(model_K, X, Y, cv=ShuffleSplit(n_splits=10, test_size=0.33, random_state=seed_K))
11. Algorithm Performance Metrics
Algorithm performance metrics
Classification metrics
Regression metrics
Classification metrics
Classification accuracy
Correctly classified samples / all samples; unsuitable when a class is extremely rare (tsunamis, etc.). Example: with k-fold cross-validation
Log loss
The smaller the log loss, the better the model.
kfold = KFold(n_splits=10, random_state=seed_K)
cross_val_score(model, X, Y, cv=kfold, scoring='neg_log_loss')
AUC
The ROC curve plots sensitivity (true positive rate: correctly predicted positives / actual positives) against the false positive rate (negatives misclassified as positive / all negatives). AUC is the area under the ROC curve, between 0.5 and 1; larger is better.
auc = cross_val_score(model, X, Y, cv=kfold, scoring='roc_auc')
Confusion matrix
Compares predicted classes against actual values; columns are predicted classes, rows are actual classes.
model.fit(X_train, Y_train)
predicted = model.predict(X_test)
matrix = confusion_matrix(Y_test, predicted)
Classification report
Computes precision (correctly predicted positives / everything predicted positive), recall (= sensitivity = true positive rate: correctly predicted positives / actual positives), and the F1 score (the harmonic mean: 2 × precision × recall / (precision + recall)).
model.fit(X_train, Y_train)
predicted = model.predict(X_test)
report = classification_report(Y_test, predicted)
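The quantities in the classification report can be computed by hand from the prediction counts; a minimal sketch for one class (the function name precision_recall_f1 is ours — note the harmonic-mean form of F1):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Precision, recall and F1 for one class from paired label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp)                       # of everything predicted positive
    recall = tp / (tp + fn)                          # of everything actually positive
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

p, r, f = precision_recall_f1([1, 1, 0, 0], [1, 0, 1, 0])  # → (0.5, 0.5, 0.5)
```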
Regression metrics
Mean absolute error
The average of the absolute differences between the predicted and actual values.
cross_val_score(model, X, Y, cv=kfold, scoring='neg_mean_absolute_error')
Mean squared error
Measures the spread of the errors; the smaller the mean squared error, the better.
cross_val_score(model, X, Y, cv=kfold, scoring='neg_mean_squared_error')
Coefficient of determination (R²)
The proportion of the dependent variable's total variation that the regression explains through the independent variables, 0 < R² < 1.
cross_val_score(model, X, Y, cv=kfold, scoring='r2')
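The two error metrics above can be sketched directly (minimal illustrations with our own function names; scikit-learn reports them negated as 'neg_...' so that larger is always better):

```python
def mean_absolute_error(y_true, y_pred):
    """Average absolute difference between predictions and actual values."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mean_squared_error(y_true, y_pred):
    """Average squared difference; penalizes large errors more heavily."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

mae = mean_absolute_error([1, 2, 3], [2, 2, 5])  # → 1.0
mse = mean_squared_error([1, 2, 3], [2, 2, 5])   # → 5/3
```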
12. Spot-Checking Classification Algorithms
Advice for spot-checking algorithms
Try a variety of representative algorithms
Try a variety of machine learning algorithms
Try a variety of models
Algorithm overview
Linear classification algorithms: logistic regression and linear discriminant analysis
Nonlinear classification algorithms: k-nearest neighbors, naive Bayes, classification and regression trees, support vector machines
Linear algorithms
Logistic regression, LR (assumes Gaussian-distributed inputs)
A classification algorithm that uses known X to predict a discrete Y (a 0/1 class variable).
model = LogisticRegression()
Linear discriminant analysis, LDA (assumes Gaussian-distributed inputs)
Projects high-dimensional samples onto the optimal discriminant vector space, preserving class separability; also used for dimensionality reduction.
model = LinearDiscriminantAnalysis()
Nonlinear algorithms
K-nearest neighbors, KNN
If most of a sample's k most similar (nearest) neighbors belong to one class, the sample belongs to that class too.
KNeighborsClassifier()
Naive Bayes, NB (assumes Gaussian-distributed inputs)
A statistics-based classifier: from an object's prior probability, Bayes' theorem gives its posterior probability over all classes, and the class with the largest posterior is chosen.
GaussianNB()
Classification and regression trees, CART
A decision tree split on the Gini index.
DecisionTreeClassifier()
Support vector machines, SVM
Used for data analysis, pattern recognition, classification, and regression. Builds a model from the training vectors and assigns new data to one of two classes as a non-probabilistic binary linear classifier.
SVC()
13. Spot-Checking Regression Algorithms
Algorithm overview
Linear algorithms
Linear regression
Multiple linear regression: models the linear relationship between variables. LinearRegression()
Ridge regression
A biased-estimation regression method for collinear data. Ridge()
Lasso regression
Similar to ridge regression. Lasso()
Elastic net regression
A blend of lasso and ridge regression, training the model with both L1 and L2 regularization. ElasticNet()
Nonlinear algorithms
K-nearest neighbors, KNR
KNeighborsRegressor()
Classification and regression trees, CART
DecisionTreeRegressor()
Support vector machines, SVR
SVR()
14. Algorithm Comparison
Choosing the best machine learning algorithm
Examine the data from different angles to find its characteristics and select an algorithm and model
Visualize mean accuracy, variance, and other measures
Comparing machine learning algorithms
models = {}
models['LR'] = LogisticRegression()
models['LDA'] = LinearDiscriminantAnalysis()
models['KNN'] = KNeighborsClassifier()
models['CART'] = DecisionTreeClassifier()
models['SVM'] = SVC()
models['NB'] = GaussianNB()
results = []
for name in models:
    result = cross_val_score(models[name], X, Y, cv=kfold)
    results.append(result)
    msg = '%s: %.3f (%.3f)' % (name, result.mean(), result.std())
    print(msg)
# figures
fig = pyplot.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
pyplot.boxplot(results)
ax.set_xticklabels(models.keys())
pyplot.show()
15. Automating Workflows with Pipelines
Automated machine learning workflows
A Pipeline automates the workflow from data transformation through model evaluation
A Pipeline for data preparation and modeling
Within cross-validation, a Pipeline applies standardization (and similar steps) separately to each data subset, preventing data leakage between the training and validation sets.
steps = []
steps.append(('Standardize', StandardScaler()))
steps.append(('lda', LinearDiscriminantAnalysis()))
model_pipeline = Pipeline(steps)
result_pipeline = cross_val_score(model_pipeline, X, Y, cv=kfold)
A Pipeline for feature selection and modeling
Feature selection is also vulnerable to data leakage, so the data must be handled consistently.
# make FeatureUnion
features = []
features.append(('pca', PCA()))
features.append(('select_best', SelectKBest(k=6)))
# make Pipeline
steps = []
steps.append(('feature_union', FeatureUnion(features)))
steps.append(('logistic', LogisticRegression()))
model_pipeline = Pipeline(steps)
result_pipeline = cross_val_score(model_pipeline, X, Y, cv=kfold)
Feature selection with principal component analysis
Feature selection with statistical tests
Feature union
Generate the logistic regression model
Improving the Model
16. Ensemble Algorithms
Ensemble approaches
Combine multiple machine learning algorithms to improve accuracy
Bagging: split the training set into multiple subsets and train one model per subset
Boosting: train a sequence of models, each correcting the mistakes of the one before it
Voting: train multiple models and combine their predictions statistically to improve accuracy
Bagging (effective when the data has high variance)
Bagged decision trees (100 trees)
cart = DecisionTreeClassifier()
model_bdt = BaggingClassifier(base_estimator=cart, n_estimators=100, random_state=7)
Random forest (100 trees; m = 3, far fewer than the number of features)
A bagging model built from many decorrelated decision trees; max_features caps the number of features any single tree may use.
model_rf = RandomForestClassifier(n_estimators=100, random_state=7, max_features=m)
Extra trees (100 trees; m = 7 = the number of features)
Also built from many decision trees, but every tree uses all of the training samples and split features are chosen completely at random.
model_et = ExtraTreesClassifier(n_estimators=100, random_state=7, max_features=m)
Boosting
AdaBoost (at most 30 weak learners)
Reweights the training data so that successive weak classifiers focus on earlier mistakes, iterating to build a strong classifier.
AdaBoostClassifier(n_estimators=30, random_state=7)
Stochastic gradient boosting, GBM (at most 100 weak learners)
Builds models sequentially, each new learner fit along the gradient of the loss on the previous ensemble's errors.
GradientBoostingClassifier(n_estimators=100, random_state=7)
Voting
Wraps multiple algorithm models with a (weighted) voting scheme and combines the sub-models' predictions.
models_v = []
models_v.append(('logistic', LogisticRegression()))
models_v.append(('cart', DecisionTreeClassifier()))
models_v.append(('svm', SVC()))
model_ensemble = VotingClassifier(estimators=models_v)
17. Algorithm Tuning
Tuning machine learning algorithms (hyperparameter optimization)
Hyperparameters affect the model's accuracy on the training set and its resistance to overfitting
Grid search
Suited to cases with 3-4 parameters: exhaustively walk the parameter grid to find the best combination.
models_G = Ridge()
param_grid = {'alpha': [1, 0.1, 0.01, 0.001, 0]}
grid = GridSearchCV(estimator=models_G, param_grid=param_grid)
grid.fit(X, Y)
print('Grid search cv score result: %f(best: %s)' % (grid.best_score_, grid.best_estimator_.alpha))
Random search
Suited to many parameters: run a fixed number of iterations, sampling each parameter from a distribution.
models_rs = Ridge()
param_grid = {'alpha': scipy.stats.uniform()}  # uniform distribution
grid = RandomizedSearchCV(estimator=models_rs, param_distributions=param_grid, n_iter=100, random_state=7)
grid.fit(X, Y)
print('randomized search CV score result: %f(best: %s)' % (grid.best_score_, grid.best_estimator_.alpha))
Deploying Results
18. Saving and Loading Models
Serializing and deserializing models with pickle
Python's standard serialization: serialize the trained model to a file, then deserialize it later to predict on new data.
model = LogisticRegression()
model.fit(X_train, Y_train)
# save model
model_pickle_file = 'save_model.sav'
with open(model_pickle_file, 'wb') as model_f:
    pickle.dump(model, model_f)
# load model
with open(model_pickle_file, 'rb') as model_f:
    load_model = pickle.load(model_f)
    result_pickle = load_model.score(X_test, Y_test)
Serializing and deserializing models with joblib
Stores data in NumPy format; very efficient for k-nearest neighbors models.
model = LogisticRegression()
model.fit(X_train, Y_train)
# joblib save model
model_joblib_file = 'save_model.sav'
with open(model_joblib_file, 'wb') as model_f:
    joblib.dump(model, model_f)
# joblib load model
with open(model_joblib_file, 'rb') as model_f:
    load_model = joblib.load(model_f)
    result_joblib = load_model.score(X_test, Y_test)
    print('joblib result: %.3f' % (result_joblib))
Tips for producing models
The Python version and library versions (scipy, scikit-learn, etc.) must match between saving and loading
Manually serializing the algorithm's parameters makes the model reproducible on other platforms
Project Practice
19. A Predictive Modeling Project Template
Practicing machine learning on projects
A classification or regression project breaks into six steps: 1. Define the problem 2. Understand the data 3. Prepare the data 4. Evaluate algorithms 5. Improve the model 6. Deploy the results
A Python template for machine learning projects
The steps in detail
Step 1: Define the problem
a) Import the libraries b) Load the dataset
Step 2: Understand the data
a) Descriptive statistics b) Data visualization
Step 3: Prepare the data
a) Data cleaning b) Feature selection c) Data transforms
Step 4: Evaluate algorithms
a) Split out a validation set b) Define the evaluation metric c) Spot-check algorithms d) Compare algorithms
Step 5: Improve the model
a) Algorithm tuning b) Ensembles
Step 6: Deploy the results
a) Predict on the validation set b) Build the model on the entire dataset c) Serialize the model
Tips for using the template
Run through all the project steps quickly to strengthen your understanding of the project
Loop over steps 3-5 until you find an accurate model
Attempt even the steps that seem not to apply
Every step should improve model accuracy or feed another step
Adapt and modify the six steps as needed
20. A Regression Project Example
Define the problem
Define the features and the target measure
['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']  # the Boston House Price dataset; descriptions of the feature attributes
Load the data
Import the libraries and load the data
import numpy as np
from numpy import arange
from matplotlib import pyplot
from pandas import read_csv, set_option, plotting
from sklearn import preprocessing, model_selection, linear_model, tree, neighbors, svm, pipeline, ensemble, metrics

filename = 'housing.csv'
names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
data = read_csv(filename, names=names, header=0)
Understand the data
# understand data
print(data.shape)
print(data.dtypes)
set_option('display.width', 1200)
set_option('display.max_columns', None)  # display data, not hidden
print(data.head(20))
set_option('precision', 1)
print(data.describe())  # count, mean, std, min ...
set_option('precision', 2)
print(data.corr(method='pearson'))  # relationships
Visualize the data
Single-feature charts (the distribution of each feature)
# histogram: several features are exponentially or bimodally distributed
data.hist(sharex=False, sharey=False, xlabelsize=1, ylabelsize=1)
# density map: shows each feature's shape more smoothly than a histogram
data.plot(kind='density', subplots=True, layout=(4, 4), sharex=False, fontsize=1)
# box diagram: shows the skewness of each feature's distribution
data.plot(kind='box', subplots=True, layout=(4, 4), sharex=False, sharey=False, fontsize=8)
pyplot.show()
Multi-feature charts (relationships between features)
# scatter matrix: pairwise relationships between features
plotting.scatter_matrix(data)
# correlation matrix: pairwise feature correlations; strongly correlated pairs should later be pruned (improves accuracy)
fig = pyplot.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(data.corr(), vmin=-1, vmax=1, interpolation='none')
fig.colorbar(cax)
ticks = np.arange(0, 14, 1)
ax.set_xticks(ticks)
ax.set_yticks(ticks)
ax.set_xticklabels(names)
ax.set_yticklabels(names)
pyplot.show()
Summary of the approach
Use feature selection to remove most of the highly correlated features
Standardize the data to reduce the effect of differing measurement scales
Rescale toward normal distributions to reduce the effect of differing distribution shapes and improve accuracy
Split out a validation dataset (keep the training and validation data isolated to judge the model's accuracy)
array = data.values
X = array[:, 0:13]
Y = array[:, 13]
validation_size = 0.2
seed = 7
X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(X, Y, test_size=validation_size, random_state=seed)
Evaluate algorithms
Evaluate on the raw data (LR is best)
Because part of the data is linearly distributed, linear regression and elastic net regression may work well
Because the data is discretized, decision trees and support vector machines may produce accurate models
Use 10-fold cross-validation and compare algorithms by mean squared error; these scores form the baseline for later optimization.
# Algorithm evaluation
models = {}  # baseline
models['LR'] = linear_model.LinearRegression()
models['LASSO'] = linear_model.Lasso()
models['EN'] = linear_model.ElasticNet()
models['KNN'] = neighbors.KNeighborsRegressor()
models['CART'] = tree.DecisionTreeRegressor()
models['SVM'] = svm.SVR()
results = []
for key in models:
    kfold = model_selection.KFold(n_splits=10, random_state=7)
    cv_result = model_selection.cross_val_score(models[key], X_train, Y_train, cv=kfold, scoring='neg_mean_squared_error')
    results.append(cv_result)
    print('%s: %f (%f)' % (key, cv_result.mean(), cv_result.std()))
# "Algorithm Comparison" box diagram
fig = pyplot.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
pyplot.boxplot(results)
ax.set_xticklabels(models.keys())
pyplot.show()
The weak accuracy of k-nearest neighbors and support vector machines may come from the differing measurement scales (value ranges) of the features
Evaluate on standardized data (ScalerKNN is best)
Standardizing the data puts the features on a common scale, removing the effect of differing measurement units and improving the results.
# Algorithm evaluation -- standardized data
pipelines = {}
pipelines['ScalerLR'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('LR', linear_model.LinearRegression())])
pipelines['ScalerLASSO'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('LASSO', linear_model.Lasso())])
pipelines['ScalerEN'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('EN', linear_model.ElasticNet())])
pipelines['ScalerKNN'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('KNN', neighbors.KNeighborsRegressor())])
pipelines['ScalerCART'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('CART', tree.DecisionTreeRegressor())])
pipelines['ScalerSVM'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('SVM', svm.SVR())])
results = []
for key in pipelines:
    kfold = model_selection.KFold(n_splits=10, random_state=7)
    cv_result = model_selection.cross_val_score(pipelines[key], X_train, Y_train, cv=kfold, scoring='neg_mean_squared_error')
    results.append(cv_result)
    print('Algorithm evaluation -- standardized data %s: %f (%f)' % (key, cv_result.mean(), cv_result.std()))
# box diagram -- standardized data
fig = pyplot.figure()
fig.suptitle('Algorithm evaluation -- standardized data')
ax = fig.add_subplot(111)
pyplot.boxplot(results)
ax.set_xticklabels(pipelines.keys())
pyplot.show()
Improve the best algorithm (KNN) by tuning: search on both sides of the default value, and keep extending the search if the optimum lands on the edge of the range
KNN's default number of neighbors is n_neighbors=5; search both above and below 5, extending the range if the optimum lands on its edge:
# parameter adjustment to improve the algorithm
scaler = preprocessing.StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train)
param_grid = {'n_neighbors': [1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21]}
model = neighbors.KNeighborsRegressor()
kfold = model_selection.KFold(n_splits=10, random_state=7)
grid = model_selection.GridSearchCV(estimator=model, param_grid=param_grid, scoring='neg_mean_squared_error', cv=kfold)
grid_result = grid.fit(X=rescaledX, y=Y_train)
print('best: %s with: %s' % (grid_result.best_score_, grid_result.best_params_))
cv_results = zip(grid_result.cv_results_['mean_test_score'], grid_result.cv_results_['std_test_score'], grid_result.cv_results_['params'])
for mean, std, param in cv_results:
    print('%f (%f) with %r' % (mean, std, param))
n_neighbors=3 is best
Ensembles (building on the well-performing LR, KNN [n_neighbors=3], and CART)
# integrated algorithms
ensembles = {}
ensembles['ScaledAB'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('AB', ensemble.AdaBoostRegressor())])
ensembles['ScaledAB-KNN'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('ABKNN', ensemble.AdaBoostRegressor(base_estimator=neighbors.KNeighborsRegressor(n_neighbors=3)))])
ensembles['ScaledAB-LR'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('ABLR', ensemble.AdaBoostRegressor(linear_model.LinearRegression()))])
ensembles['ScaledRFR'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('RFR', ensemble.RandomForestRegressor())])
ensembles['ScaledETR'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('ETR', ensemble.ExtraTreesRegressor())])
ensembles['ScaledGBR'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('GBR', ensemble.GradientBoostingRegressor())])
results = []
for key in ensembles:
    kfold = model_selection.KFold(n_splits=num_folds, random_state=seed)
    cv_result = model_selection.cross_val_score(ensembles[key], X_train, Y_train, cv=kfold, scoring='neg_mean_squared_error')
    results.append(cv_result)
    print('Integrated algorithm: \n %s: %f(%f)' % (key, cv_result.mean(), cv_result.std()))
# integrated algorithm box diagram
fig = pyplot.figure()
fig.suptitle('Integrated algorithm')
ax = fig.add_subplot(111)
pyplot.boxplot(results)
ax.set_xticklabels(ensembles.keys())
pyplot.show()
ScaledETR and ScaledGBR are best
Tune the ensembles (the best two: ScaledETR and ScaledGBR). Lessons learned: when the optimum lands on the edge of the range, widen param_grid; results fluctuate between runs
Every ensemble here exposes n_estimators for tuning; keep searching if the optimum lands on the edge of the range:
# integrated algorithm GBM -- parameter adjustment
scaler = preprocessing.StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train)
param_grid = {'n_estimators': [10, 50, 100, 200, 300, 400, 500, 600, 700, 800, 900]}
model = ensemble.GradientBoostingRegressor()
kfold = model_selection.KFold(n_splits=num_folds, random_state=seed)
grid = model_selection.GridSearchCV(estimator=model, param_grid=param_grid, scoring='neg_mean_squared_error', cv=kfold)
grid_result = grid.fit(X=rescaledX, y=Y_train)
print('Integrated algorithm GBM best: %s with: %s' % (grid_result.best_score_, grid_result.best_params_))
# integrated algorithm ET -- parameter adjustment
scaler = preprocessing.StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train)
param_grid = {'n_estimators': [5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190]}
model = ensemble.ExtraTreesRegressor()
kfold = model_selection.KFold(n_splits=num_folds, random_state=seed)
grid = model_selection.GridSearchCV(estimator=model, param_grid=param_grid, scoring='neg_mean_squared_error', cv=kfold)
grid_result = grid.fit(X=rescaledX, y=Y_train)
print('Integrated algorithm ET best: %s with: %s' % (grid_result.best_score_, grid_result.best_params_))
ScaledETR is best at n_estimators=70 (-8.79); ScaledGBR is best at n_estimators=300 (-9.61). By best_score_, choose ScaledETR (n_estimators=70)
Finalize the model (the chosen ensemble: ScaledETR with n_estimators=70)
# train model
scaler = preprocessing.StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train)
etr = ensemble.ExtraTreesRegressor(n_estimators=70)
etr.fit(X=rescaledX, y=Y_train)
# test model
rescaledX_validation = scaler.transform(X_validation)
predictions = etr.predict(rescaledX_validation)
print('result: %f' % metrics.mean_squared_error(Y_validation, predictions))
result: 14.228761
21. A Binary Classification Example
Define the problem
208 rows × 61 columns: 60 sonar readings and the mineral classification (rock R, metal M)
Load the data
import numpy as np
from matplotlib import pyplot
from pandas import read_csv, set_option, plotting
from sklearn import preprocessing, model_selection, linear_model, tree, neighbors, svm, pipeline, ensemble, metrics, discriminant_analysis, naive_bayes

filename = 'sonar.all-data.csv'
dataset = read_csv(filename, header=None)
Analyze the data
Descriptive statistics
# analyze data
print(dataset.shape)
set_option('display.max_rows', 500)
print(dataset.dtypes)
# set_option('display.max_columns', None)  # display data, not hidden
print(dataset.head(20))
set_option('precision', 1)
print(dataset.describe())  # count, mean, std, min ...
set_option('precision', 2)
print(dataset.corr(method='pearson'))  # relationships
print(dataset.groupby(60).size())  # classification balance: M 111, R 97
Visualize the data
# histogram: most attributes follow a Gaussian or exponential distribution
dataset.hist(sharex=False, sharey=False, xlabelsize=1, ylabelsize=1)
# density map: most attributes are skewed; a Box-Cox transform can reshape non-normal variables toward a normal distribution (reducing error)
dataset.plot(kind='density', subplots=True, layout=(8, 8), sharex=False, legend=False, fontsize=1)
# correlation matrix: some clear negative correlation
fig = pyplot.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(dataset.corr(), vmin=-1, vmax=1, interpolation='none')
fig.colorbar(cax)
pyplot.show()
Split out a validation dataset
# train_test_split: 20% validation, 80% training; isolating the validation set from model training gives a fairer estimate of accuracy
array = dataset.values
X = array[:, 0:60].astype(float)
Y = array[:, 60]
validation_size = 0.2
seed = 7
X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(X, Y, test_size=validation_size, random_state=seed)
Evaluate algorithms
# Algorithm evaluation
num_folds = 10  # 10-fold evaluation
scoring = 'accuracy'  # cross_val_score(scoring)
# raw data
models = {}  # baseline
models['LR'] = linear_model.LogisticRegression()
models['LDA'] = discriminant_analysis.LinearDiscriminantAnalysis()
models['KNN'] = neighbors.KNeighborsClassifier()
models['CART'] = tree.DecisionTreeClassifier()
models['NB'] = naive_bayes.GaussianNB()
models['SVM'] = svm.SVC()
results = []
for key in models:
    kfold = model_selection.KFold(n_splits=num_folds, random_state=seed)
    cv_result = model_selection.cross_val_score(models[key], X_train, Y_train, cv=kfold, scoring=scoring)
    results.append(cv_result)
    print('Algorithm evaluation %s: %f (%f)' % (key, cv_result.mean(), cv_result.std()))
# box diagram: logistic regression (LR) and k-nearest neighbors (KNN) do best; KNN's results are the tightest
fig = pyplot.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
pyplot.boxplot(results)
ax.set_xticklabels(models.keys())
pyplot.show()
# The uneven distribution of the raw data may hurt SVM accuracy; standardize the data inside a Pipeline and re-evaluate
# Algorithm evaluation -- standardized data
pipelines = {}
pipelines['ScalerLR'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('LR', linear_model.LogisticRegression())])
pipelines['ScalerLDA'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('LDA', discriminant_analysis.LinearDiscriminantAnalysis())])
pipelines['ScalerKNN'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('KNN', neighbors.KNeighborsClassifier())])
pipelines['ScalerCART'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('CART', tree.DecisionTreeClassifier())])
pipelines['ScalerNB'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('NB', naive_bayes.GaussianNB())])
pipelines['ScalerSVM'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('SVM', svm.SVC())])
results = []
for key in pipelines:
    kfold = model_selection.KFold(n_splits=num_folds, random_state=seed)
    cv_result = model_selection.cross_val_score(pipelines[key], X_train, Y_train, cv=kfold, scoring=scoring)
    results.append(cv_result)
    print('Algorithm evaluation -- standardized data %s: %f (%f)' % (key, cv_result.mean(), cv_result.std()))
# box diagram -- standardized data
fig = pyplot.figure()
fig.suptitle('Algorithm evaluation -- standardized data')
ax = fig.add_subplot(111)
pyplot.boxplot(results)
ax.set_xticklabels(pipelines.keys())
# pyplot.show()
After standardization, KNN still does well and improves further, and SVM becomes the best
Algorithm tuning (the two best algorithms on standardized data: KNN and SVM)
KNN tuning (n_neighbors defaults to 5; search values up to 21)
# KNN parameter tuning; result: n_neighbors=1 is best at 0.85 -- prediction relies on the single closest sample
scaler = preprocessing.StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train)
param_grid = {'n_neighbors': [1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21]}
model = neighbors.KNeighborsClassifier()
kfold = model_selection.KFold(n_splits=num_folds, random_state=seed)
grid = model_selection.GridSearchCV(estimator=model, param_grid=param_grid, scoring=scoring, cv=kfold)
grid_result = grid.fit(X=rescaledX, y=Y_train)
print('KNN tuning best: %s with: %s' % (grid_result.best_score_, grid_result.best_params_))
cv_results = zip(grid_result.cv_results_['mean_test_score'], grid_result.cv_results_['std_test_score'], grid_result.cv_results_['params'])
for mean, std, param in cv_results:
    print('%f (%f) with %r' % (mean, std, param))
SVM tuning (the penalty C and the kernel default to 1.0 and 'rbf')
# SVC parameter tuning; result: {'C': 2.0, 'kernel': 'rbf'} is best at 0.872, better than tuned KNN
scaler = preprocessing.StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train).astype(float)
param_grid = {}
param_grid['C'] = [0.1, 0.3, 0.5, 0.7, 0.9, 1.0, 1.3, 1.5, 1.7, 1.9, 2.0]
param_grid['kernel'] = ['linear', 'poly', 'rbf', 'sigmoid']
# 'precomputed' is excluded: with kernel='precomputed' the input is not an ordinary data matrix but a matrix of pairwise sample similarities
model = svm.SVC()
kfold = model_selection.KFold(n_splits=num_folds, random_state=seed)
grid = model_selection.GridSearchCV(estimator=model, param_grid=param_grid, scoring=scoring, cv=kfold)
grid_result = grid.fit(X=rescaledX, y=Y_train)
print('SVC tuning best: %s with: %s' % (grid_result.best_score_, grid_result.best_params_))
cv_results = zip(grid_result.cv_results_['mean_test_score'], grid_result.cv_results_['std_test_score'], grid_result.cv_results_['params'])
for mean, std, param in cv_results:
    print('%f (%f) with %r' % (mean, std, param))
Ensembles (compared with the non-ensemble algorithms above, the ensemble scores fluctuate considerably; try bagging [random forest RF and extra trees ET] and boosting [AdaBoost AB and stochastic gradient boosting GBM])
# integrated algorithms; best ensemble results: ScaledET=0.848 and ScaledGBM=0.8481
ensembles = {}
ensembles['ScaledAB'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('AB', ensemble.AdaBoostClassifier())])
# ensembles['ScaledAB-SVM'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('AB-SVM', ensemble.AdaBoostClassifier(base_estimator=svm.SVC(C=2.0, kernel='rbf')))])  # not run: AdaBoostClassifier cannot boost this tuned SVC
# ensembles['ScaledAB-KNN'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('AB-KNN', ensemble.AdaBoostClassifier(base_estimator=neighbors.KNeighborsClassifier(n_neighbors=1)))])  # not run: AdaBoostClassifier cannot boost KNeighborsClassifier
ensembles['ScaledRF'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('RFR', ensemble.RandomForestClassifier())])
ensembles['ScaledET'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('ETR', ensemble.ExtraTreesClassifier())])
ensembles['ScaledGBM'] = pipeline.Pipeline([('Scaler', preprocessing.StandardScaler()), ('GBM', ensemble.GradientBoostingClassifier())])
results = []
for key in ensembles:
    kfold = model_selection.KFold(n_splits=num_folds, random_state=seed)
    cv_result = model_selection.cross_val_score(ensembles[key], X_train, Y_train, cv=kfold, scoring=scoring)
    results.append(cv_result)
    print('Integrated algorithm: %s: %f(%f)' % (key, cv_result.mean(), cv_result.std()))
# integrated algorithm box diagram
fig = pyplot.figure()
fig.suptitle('Integrated algorithm')
ax = fig.add_subplot(111)
pyplot.boxplot(results)
ax.set_xticklabels(ensembles.keys())
pyplot.show()
Tune GBM: best GBM[n_estimators=500]=0.872058823, slightly below tuned SVC {'C': 2.0, 'kernel': 'rbf'}=0.872426
# integrated algorithm GBM -- parameter adjustment
scaler = preprocessing.StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train)
param_grid = {'n_estimators': [10, 50, 100, 200, 300, 400, 500, 600, 700, 800, 900]}
model = ensemble.GradientBoostingClassifier()
kfold = model_selection.KFold(n_splits=num_folds, random_state=seed)
grid = model_selection.GridSearchCV(estimator=model, param_grid=param_grid, scoring=scoring, cv=kfold)
grid_result = grid.fit(X=rescaledX, y=Y_train)
print('Integrated algorithm GBM best: %s with: %s' % (grid_result.best_score_, grid_result.best_params_))
Tune ET: best ET[n_estimators=600]=0.890, better than tuned SVC {'C': 2.0, 'kernel': 'rbf'}=0.872426
# integrated algorithm ET -- parameter adjustment
scaler = preprocessing.StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train)
param_grid = {'n_estimators': [10, 50, 100, 200, 300, 400, 500, 600, 700, 800, 900]}
model = ensemble.ExtraTreesClassifier()
kfold = model_selection.KFold(n_splits=num_folds, random_state=seed)
grid = model_selection.GridSearchCV(estimator=model, param_grid=param_grid, scoring=scoring, cv=kfold)
grid_result = grid.fit(X=rescaledX, y=Y_train)
print('Integrated algorithm ET best: %s with: %s' % (grid_result.best_score_, grid_result.best_params_))
Finalize the model (choose the tuned SVC: svm.SVC at 0.872426 beat all the ensembles here; note that ensemble accuracy fluctuates considerably, and on some runs tuned ET beats SVC {'C': 2.0, 'kernel': 'rbf'})
# train model
scaler = preprocessing.StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train)
selected_model = svm.SVC(C=2.0, kernel='rbf')
selected_model.fit(X=rescaledX, y=Y_train)
# test model
rescaledX_validation = scaler.transform(X_validation)
predictions = selected_model.predict(rescaledX_validation)
print('result: %f' % metrics.accuracy_score(Y_validation, predictions))
print(metrics.classification_report(Y_validation, predictions))
# the chosen SVC scores 0.928571 on the validation set (tuned ET {'n_estimators': 600} also scores 0.928571)
22. A Text Classification Example
Define the problem
How to build a text classification model end to end; how to generate data features with text feature extraction; how to improve accuracy through tuning; how to improve accuracy with ensembles
Load the data (text classification differs from classification on numeric features: the text features must be extracted first, and their number far exceeds a typical numeric feature count, often running past tens of thousands)
from matplotlib import pyplot  # plotting
from sklearn.datasets import load_files  # load document data
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer  # word-count and TF-IDF (term frequency-inverse document frequency) feature extraction
from sklearn.linear_model import LogisticRegression  # logistic regression (LR)
from sklearn.naive_bayes import MultinomialNB  # multinomial naive Bayes (MNB)
from sklearn.neighbors import KNeighborsClassifier  # k-nearest neighbors (KNN)
from sklearn.svm import SVC  # support vector machines (SVM)
from sklearn.tree import DecisionTreeClassifier  # classification and regression trees (CART)
from sklearn.metrics import classification_report, accuracy_score  # final report (precision, recall, F1) and model accuracy
from sklearn.model_selection import cross_val_score, KFold, GridSearchCV  # algorithm scoring, k-fold cross-validation, grid-search tuning
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier  # ensembles: AdaBoost (AB), random forest (RF)

# import train data and test data
categories = ['alt.atheism', 'rec.sport.hockey', 'comp.graphics', 'sci.crypt', 'comp.os.ms-windows.misc', 'sci.electronics', 'comp.sys.ibm.pc.hardware', 'sci.med', 'comp.sys.mac.hardware', 'sci.space', 'comp.windows.x', 'soc.religion.christian', 'misc.forsale', 'talk.politics.guns', 'rec.autos', 'talk.politics.mideast', 'rec.motorcycles', 'talk.politics.misc', 'rec.sport.baseball', 'talk.religion.misc']
train_path = '20news-bydate-train'
dataset_train = load_files(container_path=train_path, categories=categories)
test_path = '20news-bydate-test'
dataset_test = load_files(container_path=test_path, categories=categories)
Text feature extraction (a step specific to text classification; Chinese text must be segmented into words first and then loaded into scikit-learn via sklearn.datasets.base.Bunch)
Text is unstructured data and must be converted into a document-term matrix whose entries are word counts or TF-IDF values (term frequency-inverse document frequency: a term that appears often in one article but rarely in others discriminates well between classes)
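The TF-IDF weighting described above can be sketched in plain Python (a simplified textbook formula with our own function name tf_idf; scikit-learn's TfidfVectorizer additionally smooths the IDF and L2-normalizes each row):

```python
from math import log

def tf_idf(term, doc, docs):
    """TF-IDF of a term in one document, given a corpus of token lists."""
    tf = doc.count(term) / len(doc)            # term frequency in this document
    df = sum(1 for d in docs if term in d)     # number of documents containing the term
    idf = log(len(docs) / df)                  # inverse document frequency
    return tf * idf

docs = [['cat', 'sat'], ['cat', 'cat', 'ran'], ['dog', 'sat']]
# 'cat' is frequent in doc 1 but common across the corpus; 'ran' is distinctive
w_cat = tf_idf('cat', docs[1], docs)
w_ran = tf_idf('ran', docs[1], docs)
```

Even though 'cat' occurs twice in the document and 'ran' only once, the IDF factor gives the rarer term the larger weight.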
# data preparation and understanding
# calculating word frequency
count_vect = CountVectorizer(stop_words='english', decode_error='ignore')
X_train_counts = count_vect.fit_transform(dataset_train.data)  # format data
print(X_train_counts.shape)  # data dimension
# calculating TF-IDF
tf_transformer = TfidfVectorizer(stop_words='english', decode_error='ignore')
X_train_counts_tf = tf_transformer.fit_transform(dataset_train.data)  # format data
print(X_train_counts_tf.shape)  # data dimension
The TF-IDF matrix has a very high dimensionality, so further exploratory analysis adds little; note the dimensions and move on to algorithm evaluation
Evaluate algorithms
Evaluate the models on the training data's TF-IDF values X_train_counts_tf and labels dataset_train.target:
# Algorithm evaluation
num_folds = 10  # 10-fold evaluation
seed = 7
scoring = 'accuracy'
models = {}
models['LR'] = LogisticRegression()
models['KNN'] = KNeighborsClassifier()
models['CART'] = DecisionTreeClassifier()
models['MNB'] = MultinomialNB()
models['SVM'] = SVC()
results = []
for key in models:
    kfold = KFold(n_splits=num_folds, random_state=seed)
    cv_result = cross_val_score(models[key], X_train_counts_tf, dataset_train.target, cv=kfold, scoring=scoring)
    results.append(cv_result)
    print('Algorithm evaluation %s: %f (%f)' % (key, cv_result.mean(), cv_result.std()))
# box diagram to compare algorithms
fig = pyplot.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
pyplot.boxplot(results)
ax.set_xticklabels(models.keys())
pyplot.show()
By cross_val_score: logistic regression (LR) is most accurate at 0.905; MNB=0.884 and KNN=0.800 are also worth pursuing
By the box diagram: MNB's scores are tightly grouped (low spread means the algorithm suits the data well), while LR shows larger skew
Algorithm tuning (tune the high-scoring LR and the low-spread MNB)
Logistic regression tuning
The logistic regression (LR) hyperparameter is the regularization constraint C; smaller C means stronger regularization.
# LR parameter tuning
param_grid = {}
param_grid['C'] = [1, 5, 13, 15]
model = LogisticRegression()
kfold = KFold(n_splits=num_folds, random_state=seed)
grid = GridSearchCV(estimator=model, param_grid=param_grid, scoring=scoring, cv=kfold)
grid_result = grid.fit(X=X_train_counts_tf, y=dataset_train.target)
print('LR tuning best: %s with: %s' % (grid_result.best_score_, grid_result.best_params_))
cv_results = zip(grid_result.cv_results_['mean_test_score'], grid_result.cv_results_['std_test_score'], grid_result.cv_results_['params'])
for mean, std, param in cv_results:
    print('%f (%f) with %r' % (mean, std, param))
C=13 is best: LR=0.925
Naive Bayes tuning
The multinomial naive Bayes (MNB) smoothing parameter alpha defaults to 1.0.
# MNB parameter tuning
param_grid = {}
param_grid['alpha'] = [0.001, 0.01, 0.1, 1.5]
model = MultinomialNB()
kfold = KFold(n_splits=num_folds, random_state=seed)
grid = GridSearchCV(estimator=model, param_grid=param_grid, scoring=scoring, cv=kfold)
grid_result = grid.fit(X=X_train_counts_tf, y=dataset_train.target)
print('MNB tuning best: %s with: %s' % (grid_result.best_score_, grid_result.best_params_))
cv_results = zip(grid_result.cv_results_['mean_test_score'], grid_result.cv_results_['std_test_score'], grid_result.cv_results_['params'])
for mean, std, param in cv_results:
    print('%f (%f) with %r' % (mean, std, param))
alpha=0.01 is best: MNB=0.917
Ensembles (compare the tuned results above against the ensemble algorithms RF and AB)
# integrated algorithms RF & AB
ensembles = {}
ensembles['RF'] = RandomForestClassifier()
ensembles['AB'] = AdaBoostClassifier()
results = []
for key in ensembles:
    kfold = KFold(n_splits=num_folds, random_state=seed)
    cv_result = cross_val_score(ensembles[key], X_train_counts_tf, dataset_train.target, cv=kfold, scoring=scoring)
    results.append(cv_result)
    print('Integrated algorithm: %s: %f(%f)' % (key, cv_result.mean(), cv_result.std()))
# integrated algorithm box diagram
fig = pyplot.figure()
fig.suptitle('Integrated algorithm')
ax = fig.add_subplot(111)
pyplot.boxplot(results)
ax.set_xticklabels(ensembles.keys())
pyplot.show()
Random forest (RF) scores are evenly distributed, so the algorithm adapts well: RF=0.73
Ensemble tuning (tune RF, the better of RF and AB)
# integrated algorithm RF -- parameter adjustment
param_grid = {}
param_grid['n_estimators'] = [10, 100, 150, 200]
model = RandomForestClassifier()
kfold = KFold(n_splits=num_folds, random_state=seed)
grid = GridSearchCV(estimator=model, param_grid=param_grid, scoring=scoring, cv=kfold)
grid_result = grid.fit(X=X_train_counts_tf, y=dataset_train.target)
print('Integrated algorithm RF best: %s with: %s' % (grid_result.best_score_, grid_result.best_params_))
n_estimators=150 gives the best result: RF=0.87
Finalize the model (overall, tuned logistic regression (LR) is most accurate at 0.925; verify the model's accuracy on the evaluation dataset)
# To keep the features consistent, extract features from new text with the tf_transformer fitted earlier: use its transform method on the evaluation dataset rather than refitting.
# generate the model
model = LogisticRegression(C=13)
model.fit(X_train_counts_tf, dataset_train.target)
X_test_counts = tf_transformer.transform(dataset_test.data)
predictions = model.predict(X_test_counts)
print(accuracy_score(dataset_test.target, predictions))
print(classification_report(dataset_test.target, predictions))