GBDT+LR

Strategy Sharing

(XiaoyuDu) #1

Contents

  • Introduction to the GBDT+LR model: theory and approach

  • Strategy modules and output

  • Strategy code walkthrough

Main text

1. Theory and approach of the GBDT+LR strategy

GBDT (Gradient Boosting Decision Tree), also known as MART (Multiple Additive Regression Tree), is an iterative decision-tree algorithm: the model consists of many decision trees, and the final answer is the sum of the outputs of all the trees. When it was first proposed it was regarded, together with SVM, as one of the algorithms with the strongest generalization ability. Each tree is fitted to the residuals left by the previous tree, so the overall GBDT model captures non-linear relationships.
A single decision tree takes an input sample and, following the split rules at its internal nodes, routes it to a leaf that produces the final prediction. Because GBDT is made up of several trees, one input sample yields one result per tree, and in GBDT+LR this set of per-tree results is used as a new set of explanatory variables.

LR (Logistic Regression) mainly addresses the problem of estimating class probabilities in classification, and is widely used today in spam filtering, recommender systems, disease prediction and similar scenarios. The class of an observation is decided from its estimated probability; as a regression model, however, LR is linear: the log-odds of the outcome are a linear function of the explanatory variables.
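
As a quick illustration of that last point, here is a minimal sketch (toy data, standard scikit-learn API only) showing that the fitted probability is just the sigmoid of a linear function of the input:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy one-feature data: the class switches from 0 to 1 as x grows
X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([0, 0, 0, 1, 1, 1])
clf = LogisticRegression(solver='lbfgs').fit(X, y)

p = clf.predict_proba([[2.5]])[0, 1]                   # P(y=1 | x=2.5)
log_odds = clf.intercept_[0] + clf.coef_[0, 0] * 2.5   # linear in the input
print(p, 1 / (1 + np.exp(-log_odds)))                  # the two values agree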

GBDT+LR therefore takes the new variables generated by the non-linear GBDT, estimates the parameters of a linear LR model on them, and then uses the fitted LR for prediction.

A concrete example:
Suppose the GBDT consists of two weak learners, one blue and one red, where the blue tree has 3 leaf nodes and the red tree has 2. For a given 0/1 sample, the prediction of the blue tree falls into its second leaf, and the prediction of the red tree also falls into its second leaf. We then write the blue tree's output as [0 1 0] and the red tree's output as [0 1]; taken together, the GBDT output for this sample is the concatenation [0 1 0 0 1], i.e. a sparse vector (array).
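
This encoding is easy to reproduce with scikit-learn. Below is a minimal sketch on synthetic data from make_classification, purely for illustration; the strategy code later in this post applies the same three classes (GradientBoostingClassifier, OneHotEncoder, LogisticRegression) to real stock features.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
# Keep separate subsets for the trees and for the LR, as the strategy does
X_gbdt, X_lr, y_gbdt, y_lr = train_test_split(X, y, test_size=0.5, random_state=0)

gbdt = GradientBoostingClassifier(n_estimators=5, random_state=0).fit(X_gbdt, y_gbdt)
# apply() gives, for each sample, the leaf index it falls into in every tree:
# shape (n_samples, n_estimators, 1) for a binary label
leaves_gbdt = gbdt.apply(X_gbdt)[:, :, 0]
leaves_lr = gbdt.apply(X_lr)[:, :, 0]

# One-hot encode the leaf indices (handle_unknown='ignore' guards against
# leaves that no sample of the encoder-fitting subset happened to reach)
enc = OneHotEncoder(categories='auto', handle_unknown='ignore').fit(leaves_gbdt)
lr = LogisticRegression(solver='lbfgs', max_iter=1000).fit(enc.transform(leaves_lr), y_lr)

# Probability of class 1 from the combined GBDT+LR model
proba = lr.predict_proba(enc.transform(leaves_lr))[:, 1]
print(proba[:5])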

2. Strategy modules and output

How can GBDT+LR be applied to stock price prediction? Stock models usually come with a large number of features. We first let the GBDT model screen and transform the features into dummy variables, and then use these dummy variables to predict whether a stock will go up or down.

1) Feature construction and data labeling

Constructing the features means choosing the explanatory variables; labeling the data means defining the explained variable. Here the explanatory variables are all listed in the m3 input-features module, and the labels are produced by the m2 auto-labeling module. The two parts are joined together and missing values are handled, which gives us the raw data set.
[Figure: training-data preparation modules]
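
In plain pandas terms, this join-and-clean step amounts to something like the sketch below. The two toy DataFrames only stand in for the m2 label output and the m16 feature output; in the strategy itself this is done by the platform's join (m7) and dropnan (m13) modules, keyed on date and instrument.

import numpy as np
import pandas as pd

# Toy stand-ins for the m2 labels and the derived features
labels = pd.DataFrame({
    'date': ['2014-06-03', '2014-06-04'],
    'instrument': ['000001.SZA', '000001.SZA'],
    'label': [1, 0],
})
features = pd.DataFrame({
    'date': ['2014-06-03', '2014-06-04'],
    'instrument': ['000001.SZA', '000001.SZA'],
    'return_5-1': [0.012, np.nan],   # one missing value, to be dropped
})

raw = pd.merge(features, labels, how='inner', on=['date', 'instrument'])
raw = raw.dropna()   # rows with missing feature values are discarded
print(raw)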

The training data is prepared as described above; the test data is prepared in the same way, except that there the features and the labels are not joined together per stock.

2) Model training, prediction, and evaluation

The three custom modules m4, m5 and m6 handle model training, prediction and evaluation.
As shown in the module graph, the training data on the left is fed into the m4 custom module, which trains the GBDT+LR model, i.e. the decision trees, the leaf encoding and the logistic-regression weights.
The trained model and the test-set feature data are fed into the left and right inputs of the m5 custom module, which runs the prediction.
The predictions are then compared against the test-set labels output by m10 to evaluate the model.


The evaluation prints the figures below; the overall accuracy is 0.4930.
[Figure: printed evaluation output]

We also plot the confusion matrix and the ROC curve to show the precision of the up/down predictions in more detail.

The evaluation above tells us that the model has little power to predict ups and downs, which is mainly determined by the quality of the input features.

3) Trading and backtesting

The returns and backtest results of trading on this algorithm are shown below. Even though the model's predictions are weak, the gain when a prediction is right is larger than the loss when it is wrong, so the total return is still positive.
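
A back-of-the-envelope calculation shows why this is possible. The numbers below are purely illustrative (they are not taken from the backtest): with a hit rate just under 50%, a larger average gain on winners than average loss on losers is enough for a positive expected return per trade.

p_win = 0.49        # assumed probability that a signalled trade is right
avg_gain = 0.03     # assumed average return when right
avg_loss = 0.02     # assumed average loss when wrong
expected_return = p_win * avg_gain - (1 - p_win) * avg_loss
print(expected_return)   # 0.0045 > 0 despite accuracy below 0.5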

3. Strategy code walkthrough

In this last part we go through the code, to make it easier to modify and optimize:

(Readers interested in the code and the intermediate results can read the following.) Model training part


def bigquant_run(input_1, input_2, input_3):

    # Load packages
    # (pd, DataSource and Outputs are provided by the BigQuant sandbox;
    #  pandas is imported explicitly so the snippet also reads on its own)
    import pandas as pd
    import numpy as np
    np.random.seed(10)

    from sklearn.linear_model import LogisticRegression      # logistic regression
    from sklearn.ensemble import GradientBoostingClassifier  # GBDT
    from sklearn.preprocessing import OneHotEncoder          # encoder for the leaf indices
    from sklearn.model_selection import train_test_split     # split into two training subsets
    from sklearn.pipeline import make_pipeline               # chains models together (not used below)

    ## Set parameters and load data
    Data = input_1.read_df()  # read the full training data

    # X holds the training features, y the labels (the explained variable), in the same row order
    X = Data[input_2.read_pickle()]
    y = pd.DataFrame(Data['label'])

    #print(y.columns)
    #print(X.columns)
    n_estimator = 5

    # No train/test split is needed here, because this module only does the training
    X_train = X
    y_train = y

    # The GBDT and the LR should be trained on different subsets of the training data
    # It is important to train the ensemble of trees on a different subset of the training data than the linear regression model to avoid overfitting, in particular if the total number of leaves is similar to the number of training samples
    X_train, X_train_lr, y_train, y_train_lr = train_test_split(X_train, y_train, test_size=0.7)

    # Supervised transformation based on gradient boosted trees
    # Create the three model objects: the GBDT, the one-hot encoder and the logistic regression
    grd = GradientBoostingClassifier(n_estimators=n_estimator)
    grd_enc = OneHotEncoder(categories='auto')
    grd_lm = LogisticRegression(solver='lbfgs', max_iter=1000)

    # Fit the models; grd.apply(X) has shape (n_samples, n_estimator, 1),
    # and [:, :, 0] keeps the leaf index each sample reaches in every tree
    grd.fit(X_train, y_train)
    grd_enc.fit(grd.apply(X_train)[:, :, 0])
    grd_lm.fit(grd_enc.transform(grd.apply(X_train_lr)[:, :, 0]), y_train_lr)

    # Pack the fitted models into a dict for the downstream modules
    Model = dict()
    Model['grd'] = grd
    Model['grd_enc'] = grd_enc
    Model['grd_lm'] = grd_lm

    T = DataSource.write_pickle(Model)
    return Outputs(data_1=T, data_2=None, data_3=None)

The m2 object created here holds the information about the stock data that needs to be fetched; reading the m2 output gives the result shown below.
[Figure: preview of the m2 output]

(Readers interested in the code and the intermediate results can read the following.) Model prediction part


def bigquant_run(input_1, input_2, input_3):

    # Unpack the three fitted models from the dict written by the training module
    Model = input_1.read_pickle()

    # The already-trained models
    grd = Model['grd']
    grd_enc = Model['grd_enc']
    grd_lm = Model['grd_lm']

    # Read the test-set features and predict the probability of class 1 (up)
    X_test = input_2.read_df()
    X_test1 = X_test[input_3.read_pickle()]
    y_pred_grd_lm = grd_lm.predict_proba(grd_enc.transform(grd.apply(X_test1)[:, :, 0]))[:, 1]

    # Keep date and instrument alongside the prediction so it can be joined later
    Y = pd.DataFrame(y_pred_grd_lm, columns=['prediction'])

    Y['date'] = X_test['date']
    Y['instrument'] = X_test['instrument']

    Y = DataSource.write_df(Y)
    return Outputs(data_1=Y, data_2=None, data_3=None)

(Readers interested in the code and the intermediate results can read the following.) Model evaluation part


def bigquant_run(input_1, input_2, input_3):

    # This module analyses how good the model's predictions are
    # (pd, np, DataSource and Outputs are provided by the BigQuant sandbox;
    #  the imports are repeated here so the snippet also reads on its own)
    import pandas as pd
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.metrics import confusion_matrix

    # Read the predictions and the true labels separately
    Data_pre = input_1.read_df()
    Data_real = input_2.read_df()

    # Join the two on date/instrument and binarize the predicted probability at 0.5
    Data = pd.merge(Data_pre, Data_real, how='inner', on=['date', 'instrument'])
    Pred = np.where(Data['prediction'] > 0.5, 1, 0)
    Real = np.array(Data['label'])

    cm = confusion_matrix(Real, Pred)

    # Row-normalized confusion matrix
    cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
    print(cm_normalized)

    import seaborn as sn

    # Plot the confusion matrix as a heatmap
    df_cm = pd.DataFrame(cm_normalized)
    plt.figure(figsize=(15, 10))
    sn.heatmap(df_cm, annot=True)

    print('Accuracy')
    c = (Real == Pred)
    print(len(c[c]) / len(c))
    print('Predicted up, actually up')
    P = Real[c]
    L11 = len(P[P == 1])                          # true positives
    print(L11)
    print('Predicted down, actually down')
    L00 = len(P[P == 0])                          # true negatives
    print(L00)
    print('Predicted up, actually down')
    L10 = len(Real[Real == 0]) - len(P[P == 0])   # false positives
    print(L10)
    print('Predicted down, actually up')
    L01 = len(Real[Real == 1]) - len(P[P == 1])   # false negatives
    print(L01)
    print('\n')
    print('Precision\n')
    print('Precision of the up predictions\n')
    print(L11 / (L11 + L10))
    print('Precision of the down predictions\n')
    print(L00 / (L01 + L00))
    print('\n')

    print('Recall\n')
    print('Recall on stocks that actually went up\n')
    print(L11 / (L11 + L01))
    print('Recall on stocks that actually went down\n')
    print(L00 / (L00 + L10))

    # ROC curve (note: Pred is already binarized, so this traces only a
    # two-segment curve; feeding Data['prediction'] would give the full curve)
    from sklearn.metrics import roc_curve
    fpr_grd_lm, tpr_grd_lm, _ = roc_curve(Real, Pred)

    plt.figure()
    plt.plot(fpr_grd_lm, tpr_grd_lm, label='GBT')
    plt.xlabel('False positive rate')
    plt.ylabel('True positive rate')
    plt.title('ROC curve')
    plt.legend(loc='best')
    plt.show()
    return Outputs(data_1=DataSource.write_pickle(Real), data_2=DataSource.write_pickle(Pred), data_3=None)
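
One detail worth flagging in the evaluation: roc_curve is fed the binarized Pred, so the plotted ROC can only have a single interior point. Passing the continuous prediction probabilities traces the full curve and also allows a proper AUC. The sketch below demonstrates the difference on made-up data (the arrays here are random stand-ins, not the strategy's output):

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

np.random.seed(0)
real = np.random.randint(0, 2, size=1000)                              # stand-in for the true labels
proba = np.clip(real * 0.1 + np.random.normal(0.5, 0.2, 1000), 0, 1)   # stand-in for 'prediction'
hard = (proba > 0.5).astype(int)

fpr_p, tpr_p, _ = roc_curve(real, proba)   # full curve from probabilities
fpr_h, tpr_h, _ = roc_curve(real, hard)    # only three points from 0/1 predictions
print(len(fpr_p), len(fpr_h), roc_auc_score(real, proba))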

The strategy is shared via the link below:

Clone strategy

Ideas for further improving the model:

1) Add predictions on the training set to judge whether the model is overfitting or underfitting (see the sketch after this list).
2) Add intermediate output and tuning steps, to make it easier to improve the model and to check that the program is running correctly.
3) Run a complete training.
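
For idea 1), the sketch below shows the kind of check meant, on synthetic data (the real version would reuse the fitted grd/grd_enc/grd_lm from m4 together with the m13 training data and the m14 test data): compare accuracy on the data the model was fitted to with accuracy on held-out data. A large gap suggests overfitting; two similarly poor scores suggest underfitting.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=3000, n_features=20, random_state=1)
X_fit, X_hold, y_fit, y_hold = train_test_split(X, y, test_size=0.3, random_state=1)

# For brevity the encoder and the LR are fitted on the same subset as the GBDT here
gbdt = GradientBoostingClassifier(n_estimators=5, random_state=1).fit(X_fit, y_fit)
enc = OneHotEncoder(categories='auto', handle_unknown='ignore').fit(gbdt.apply(X_fit)[:, :, 0])
lr = LogisticRegression(solver='lbfgs', max_iter=1000).fit(
    enc.transform(gbdt.apply(X_fit)[:, :, 0]), y_fit)

def gbdt_lr_accuracy(X_, y_):
    # Accuracy of the combined GBDT+LR model on an arbitrary data set
    pred = lr.predict(enc.transform(gbdt.apply(X_)[:, :, 0]))
    return accuracy_score(y_, pred)

print('accuracy on fitted data:', gbdt_lr_accuracy(X_fit, y_fit))
print('accuracy on held-out data:', gbdt_lr_accuracy(X_hold, y_hold))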

In [13]:
m13.data.read_df().head()
Out[13]:
date instrument rank_avg_amount_5 rank_avg_turn_5 rank_volatility_5_0 rank_swing_volatility_5_0 rank_avg_mf_net_amount_5 rank_beta_industry_5_0 rank_return_5 rank_return_2 ... std(close_0,50)/std(close_0,100)-1 shift(mf_net_amount_s_0,3) shift(mf_net_amount_m_0,3) shift(mf_net_amount_l_0,3) m:amount m:high m:low m:close m:open label
0 2014-06-03 000001.SZA 0.980061 0.300824 0.023407 0.097096 0.982661 0.395217 0.351105 0.594278 ... -0.336486 -5978800.0 7045200.0 3442100.0 322371200.0 682.544983 671.451416 672.035278 671.451416 1
1 2014-06-04 000001.SZA 0.982266 0.309256 0.106834 0.130190 0.172578 0.676790 0.422145 0.406574 ... -0.393031 2724300.0 -10362600.0 -35613000.0 367011936.0 672.619141 656.854614 661.525635 672.035278 1
2 2014-06-05 000001.SZA 0.982197 0.304820 0.142423 0.173252 0.963526 0.590335 0.307859 0.369518 ... -0.387902 9667800.0 6191200.0 -36925300.0 280049632.0 667.948181 659.190125 667.364319 659.773987 1
3 2014-06-06 000001.SZA 0.975684 0.269214 0.181937 0.160226 0.468085 0.478450 0.403821 0.317846 ... -0.390327 -4289000.0 -13013800.0 13580000.0 212129088.0 670.283691 659.190125 664.444946 667.948181 1
4 2014-06-09 000001.SZA 0.977836 0.269013 0.258583 0.142547 0.079096 0.523747 0.565841 0.696219 ... -0.405316 28573700.0 -14623100.0 13507700.0 359580576.0 673.786865 660.941711 670.283691 662.693359 1

5 rows × 71 columns

    {"Description":"实验创建于2017/8/26","Summary":"","Graph":{"EdgesInternal":[{"DestinationInputPortId":"287d2cb0-f53c-4101-bdf8-104b137c8601-15:instruments","SourceOutputPortId":"287d2cb0-f53c-4101-bdf8-104b137c8601-8:data"},{"DestinationInputPortId":"-106:instruments","SourceOutputPortId":"287d2cb0-f53c-4101-bdf8-104b137c8601-8:data"},{"DestinationInputPortId":"287d2cb0-f53c-4101-bdf8-104b137c8601-53:data1","SourceOutputPortId":"287d2cb0-f53c-4101-bdf8-104b137c8601-15:data"},{"DestinationInputPortId":"-106:features","SourceOutputPortId":"287d2cb0-f53c-4101-bdf8-104b137c8601-24:data"},{"DestinationInputPortId":"-113:features","SourceOutputPortId":"287d2cb0-f53c-4101-bdf8-104b137c8601-24:data"},{"DestinationInputPortId":"-122:features","SourceOutputPortId":"287d2cb0-f53c-4101-bdf8-104b137c8601-24:data"},{"DestinationInputPortId":"-129:features","SourceOutputPortId":"287d2cb0-f53c-4101-bdf8-104b137c8601-24:data"},{"DestinationInputPortId":"287d2cb0-f53c-4101-bdf8-104b137c8601-84:input_data","SourceOutputPortId":"287d2cb0-f53c-4101-bdf8-104b137c8601-53:data"},{"DestinationInputPortId":"-122:instruments","SourceOutputPortId":"287d2cb0-f53c-4101-bdf8-104b137c8601-62:data"},{"DestinationInputPortId":"-126:instruments","SourceOutputPortId":"287d2cb0-f53c-4101-bdf8-104b137c8601-62:data"},{"DestinationInputPortId":"-117:input_1","SourceOutputPortId":"287d2cb0-f53c-4101-bdf8-104b137c8601-84:data"},{"DestinationInputPortId":"-113:input_data","SourceOutputPortId":"-106:data"},{"DestinationInputPortId":"287d2cb0-f53c-4101-bdf8-104b137c8601-53:data2","SourceOutputPortId":"-113:data"},{"DestinationInputPortId":"-129:input_data","SourceOutputPortId":"-122:data"},{"DestinationInputPortId":"-86:input_data","SourceOutputPortId":"-129:data"},{"DestinationInputPortId":"-388:input_1","SourceOutputPortId":"-117:data_1"},{"DestinationInputPortId":"-3954:input_1","SourceOutputPortId":"-388:data_1"},{"DestinationInputPortId":"-141:options_data","SourceOutputPortId":"-388:data_1"},{"DestinationInputPortId":"-1689:input_data","SourceOutputPortId":"-126:data"},{"DestinationInputPortId":"-388:input_2","SourceOutputPortId":"-86:data"},{"DestinationInputPortId":"-3954:input_2","SourceOutputPortId":"-1689:data"},{"DestinationInputPortId":"-141:instruments","SourceOutputPortId":"-3472:data"},{"DestinationInputPortId":"-117:input_2","SourceOutputPortId":"-170:data_1"},{"DestinationInputPortId":"-388:input_3","SourceOutputPortId":"-170:data_1"}],"ModuleNodes":[{"Id":"287d2cb0-f53c-4101-bdf8-104b137c8601-8","ModuleId":"BigQuantSpace.instruments.instruments-v2","ModuleParameters":[{"Name":"start_date","Value":"2010-01-01","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"end_date","Value":"2016-01-01","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"market","Value":"CN_STOCK_A","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"instrument_list","Value":"","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"max_count","Value":"0","ValueType":"Literal","LinkedGlobalParameter":null}],"InputPortsInternal":[{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"rolling_conf","NodeId":"287d2cb0-f53c-4101-bdf8-104b137c8601-8"}],"OutputPortsInternal":[{"Name":"data","NodeId":"287d2cb0-f53c-4101-bdf8-104b137c8601-8","OutputType":null}],"UsePreviousResults":true,"moduleIdForCode":1,"Comment":"","CommentCollapsed":true},{"Id":"287d2cb0-f53c-4101-bdf8-104b137c8601-15","ModuleId":"BigQuantSpace.advanced_auto_labeler.advanced_auto_labeler-v2","ModuleParameters":[{"Name":"label_expr
","Value":"# #号开始的表示注释\n# 0. 每行一个,顺序执行,从第二个开始,可以使用label字段\n# 1. 可用数据字段见 https://bigquant.com/docs/data_history_data.html\n# 添加benchmark_前缀,可使用对应的benchmark数据\n# 2. 可用操作符和函数见 `表达式引擎 <https://bigquant.com/docs/big_expr.html>`_\n\n# 计算收益:5日收盘价(作为卖出价格)除以明日开盘价(作为买入价格), 五日收益率为正数\nwhere(shift(close, -5) / shift(open, -1)>1.001,1,0)\n\n# 极值处理:用1%和99%分位的值做clip\n#clip(label, all_quantile(label, 0.01), all_quantile(label, 0.99))\n\n# 将分数映射到分类,这里使用20个分类\n#all_wbins(label, 20)\n\n# 过滤掉一字涨跌停的情况 (设置label为NaN,在后续处理和训练中会忽略NaN的label)\nwhere(shift(high, -1) == shift(low, -1), NaN, label) # 一开盘就到了10%那里,既是high也是low\n","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"start_date","Value":"","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"end_date","Value":"","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"benchmark","Value":"000300.SHA","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"drop_na_label","Value":"True","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"cast_label_int","Value":"True","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"user_functions","Value":"","ValueType":"Literal","LinkedGlobalParameter":null}],"InputPortsInternal":[{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"instruments","NodeId":"287d2cb0-f53c-4101-bdf8-104b137c8601-15"}],"OutputPortsInternal":[{"Name":"data","NodeId":"287d2cb0-f53c-4101-bdf8-104b137c8601-15","OutputType":null}],"UsePreviousResults":true,"moduleIdForCode":2,"Comment":"","CommentCollapsed":true},{"Id":"287d2cb0-f53c-4101-bdf8-104b137c8601-24","ModuleId":"BigQuantSpace.input_features.input_features-v1","ModuleParameters":[{"Name":"features","Value":"# #号开始的表示注释\n# 多个特征,每行一个,可以包含基础特征和衍生特征\n\nreturn_5-1\nreturn_10-1\nreturn_20-1\navg_amount_0/avg_amount_5-1\navg_amount_5/avg_amount_20-1\nrank_avg_amount_0-rank_avg_amount_5\nrank_avg_amount_5-rank_avg_amount_10\nrank_return_0-rank_return_5\nrank_return_5-rank_return_10\nbeta_csi300_30_0/10\nbeta_csi300_60_0/10\nswing_volatility_5_0/swing_volatility_30_0-1\nswing_volatility_30_0/swing_volatility_60_0-1\nta_atr_14_0/ta_atr_28_0-1\nta_sma_5_0/ta_sma_20_0-1\nta_sma_10_0/ta_sma_20_0-1\nta_sma_20_0/ta_sma_30_0-1\nta_sma_30_0/ta_sma_60_0-1\nta_rsi_14_0/100\nta_rsi_28_0/100\nta_cci_14_0/500\nta_cci_28_0/500\nbeta_industry_30_0/10\nbeta_industry_60_0/10\nta_sma(amount_0, 10)/ta_sma(amount_0, 20)-1\nta_sma(amount_0, 20)/ta_sma(amount_0, 30)-1\nta_sma(amount_0, 30)/ta_sma(amount_0, 60)-1\nta_sma(amount_0, 50)/ta_sma(amount_0, 100)-1\nta_sma(turn_0, 10)/ta_sma(turn_0, 20)-1\nta_sma(turn_0, 20)/ta_sma(turn_0, 30)-1\nta_sma(turn_0, 30)/ta_sma(turn_0, 60)-1\nta_sma(turn_0, 50)/ta_sma(turn_0, 100)-1\nhigh_0/low_0-1\nclose_0/open_0-1\nshift(close_0,1)/close_0-1\nshift(close_0,2)/close_0-1\nshift(close_0,3)/close_0-1\nshift(close_0,4)/close_0-1\nshift(close_0,5)/close_0-1\nshift(close_0,10)/close_0-1\nshift(close_0,20)/close_0-1\nta_sma(high_0-low_0, 5)/ta_sma(high_0-low_0, 20)-1\nta_sma(high_0-low_0, 10)/ta_sma(high_0-low_0, 20)-1\nta_sma(high_0-low_0, 20)/ta_sma(high_0-low_0, 30)-1\nta_sma(high_0-low_0, 30)/ta_sma(high_0-low_0, 60)-1\nta_sma(high_0-low_0, 50)/ta_sma(high_0-low_0, 
100)-1\nrank_avg_amount_5\nrank_avg_turn_5\nrank_volatility_5_0\nrank_swing_volatility_5_0\nrank_avg_mf_net_amount_5\nrank_beta_industry_5_0\nrank_return_5\nrank_return_2\nstd(close_0,5)/std(close_0,20)-1\nstd(close_0,10)/std(close_0,20)-1\nstd(close_0,20)/std(close_0,30)-1\nstd(close_0,30)/std(close_0,60)-1\nstd(close_0,50)/std(close_0,100)-1\nmf_net_amount_1\nshift(mf_net_amount_s_0,3)\nshift(mf_net_amount_m_0,3)\nshift(mf_net_amount_l_0,3)","ValueType":"Literal","LinkedGlobalParameter":null}],"InputPortsInternal":[{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"features_ds","NodeId":"287d2cb0-f53c-4101-bdf8-104b137c8601-24"}],"OutputPortsInternal":[{"Name":"data","NodeId":"287d2cb0-f53c-4101-bdf8-104b137c8601-24","OutputType":null}],"UsePreviousResults":true,"moduleIdForCode":3,"Comment":"","CommentCollapsed":true},{"Id":"287d2cb0-f53c-4101-bdf8-104b137c8601-53","ModuleId":"BigQuantSpace.join.join-v3","ModuleParameters":[{"Name":"on","Value":"date,instrument","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"how","Value":"inner","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"sort","Value":"False","ValueType":"Literal","LinkedGlobalParameter":null}],"InputPortsInternal":[{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"data1","NodeId":"287d2cb0-f53c-4101-bdf8-104b137c8601-53"},{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"data2","NodeId":"287d2cb0-f53c-4101-bdf8-104b137c8601-53"}],"OutputPortsInternal":[{"Name":"data","NodeId":"287d2cb0-f53c-4101-bdf8-104b137c8601-53","OutputType":null}],"UsePreviousResults":true,"moduleIdForCode":7,"Comment":"","CommentCollapsed":true},{"Id":"287d2cb0-f53c-4101-bdf8-104b137c8601-62","ModuleId":"BigQuantSpace.instruments.instruments-v2","ModuleParameters":[{"Name":"start_date","Value":"2016-01-01","ValueType":"Literal","LinkedGlobalParameter":"交易日期"},{"Name":"end_date","Value":"2017-01-01","ValueType":"Literal","LinkedGlobalParameter":"交易日期"},{"Name":"market","Value":"CN_STOCK_A","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"instrument_list","Value":"","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"max_count","Value":"0","ValueType":"Literal","LinkedGlobalParameter":null}],"InputPortsInternal":[{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"rolling_conf","NodeId":"287d2cb0-f53c-4101-bdf8-104b137c8601-62"}],"OutputPortsInternal":[{"Name":"data","NodeId":"287d2cb0-f53c-4101-bdf8-104b137c8601-62","OutputType":null}],"UsePreviousResults":true,"moduleIdForCode":9,"Comment":"预测数据,用于回测和模拟","CommentCollapsed":true},{"Id":"287d2cb0-f53c-4101-bdf8-104b137c8601-84","ModuleId":"BigQuantSpace.dropnan.dropnan-v1","ModuleParameters":[],"InputPortsInternal":[{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"input_data","NodeId":"287d2cb0-f53c-4101-bdf8-104b137c8601-84"}],"OutputPortsInternal":[{"Name":"data","NodeId":"287d2cb0-f53c-4101-bdf8-104b137c8601-84","OutputType":null}],"UsePreviousResults":true,"moduleIdForCode":13,"Comment":"","CommentCollapsed":true},{"Id":"-106","ModuleId":"BigQuantSpace.general_feature_extractor.general_feature_extractor-v7","ModuleParameters":[{"Name":"start_date","Value":"","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"end_date","Value":"","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"before_start_days","Value":0,"ValueType":"Literal","LinkedGlobalParameter":null}],"InputPortsInternal":[{"DataSourceId":null,"TrainedMode
lId":null,"TransformModuleId":null,"Name":"instruments","NodeId":"-106"},{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"features","NodeId":"-106"}],"OutputPortsInternal":[{"Name":"data","NodeId":"-106","OutputType":null}],"UsePreviousResults":true,"moduleIdForCode":15,"Comment":"","CommentCollapsed":true},{"Id":"-113","ModuleId":"BigQuantSpace.derived_feature_extractor.derived_feature_extractor-v3","ModuleParameters":[{"Name":"date_col","Value":"date","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"instrument_col","Value":"instrument","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"drop_na","Value":"True","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"remove_extra_columns","Value":"True","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"user_functions","Value":"","ValueType":"Literal","LinkedGlobalParameter":null}],"InputPortsInternal":[{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"input_data","NodeId":"-113"},{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"features","NodeId":"-113"}],"OutputPortsInternal":[{"Name":"data","NodeId":"-113","OutputType":null}],"UsePreviousResults":true,"moduleIdForCode":16,"Comment":"","CommentCollapsed":true},{"Id":"-122","ModuleId":"BigQuantSpace.general_feature_extractor.general_feature_extractor-v7","ModuleParameters":[{"Name":"start_date","Value":"","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"end_date","Value":"","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"before_start_days","Value":0,"ValueType":"Literal","LinkedGlobalParameter":null}],"InputPortsInternal":[{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"instruments","NodeId":"-122"},{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"features","NodeId":"-122"}],"OutputPortsInternal":[{"Name":"data","NodeId":"-122","OutputType":null}],"UsePreviousResults":true,"moduleIdForCode":17,"Comment":"","CommentCollapsed":true},{"Id":"-129","ModuleId":"BigQuantSpace.derived_feature_extractor.derived_feature_extractor-v3","ModuleParameters":[{"Name":"date_col","Value":"date","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"instrument_col","Value":"instrument","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"drop_na","Value":"False","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"remove_extra_columns","Value":"False","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"user_functions","Value":"","ValueType":"Literal","LinkedGlobalParameter":null}],"InputPortsInternal":[{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"input_data","NodeId":"-129"},{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"features","NodeId":"-129"}],"OutputPortsInternal":[{"Name":"data","NodeId":"-129","OutputType":null}],"UsePreviousResults":true,"moduleIdForCode":18,"Comment":"","CommentCollapsed":true},{"Id":"-141","ModuleId":"BigQuantSpace.trade.trade-v4","ModuleParameters":[{"Name":"start_date","Value":"","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"end_date","Value":"","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"handle_data","Value":"# 回测引擎:每日数据处理函数,每天执行一次\ndef bigquant_run(context, data):\n # 按日期过滤得到今日的预测数据\n ranker_prediction = context.ranker_prediction[\n context.ranker_prediction.date == data.current_dt.strftime('%Y-%m-%d')]\n\n # 1. 
资金分配\n # 平均持仓时间是hold_days,每日都将买入股票,每日预期使用 1/hold_days 的资金\n # 实际操作中,会存在一定的买入误差,所以在前hold_days天,等量使用资金;之后,尽量使用剩余资金(这里设置最多用等量的1.5倍)\n is_staging = context.trading_day_index < context.options['hold_days'] # 是否在建仓期间(前 hold_days 天)\n cash_avg = context.portfolio.portfolio_value / context.options['hold_days']\n cash_for_buy = min(context.portfolio.cash, (1 if is_staging else 1.5) * cash_avg)\n cash_for_sell = cash_avg - (context.portfolio.cash - cash_for_buy)\n positions = {e.symbol: p.amount * p.last_sale_price\n for e, p in context.perf_tracker.position_tracker.positions.items()}\n\n # 2. 生成卖出订单:hold_days天之后才开始卖出;对持仓的股票,按机器学习算法预测的排序末位淘汰\n if not is_staging and cash_for_sell > 0:\n equities = {e.symbol: e for e, p in context.perf_tracker.position_tracker.positions.items()}\n instruments = [m for m in list(ranker_prediction[ranker_prediction.prediction<0.42].instrument) if m in equities]\n # print('rank order for sell %s' % instruments)\n for instrument in instruments:\n context.order_target(context.symbol(instrument), 0)\n cash_for_sell -= positions[instrument]\n if cash_for_sell <= 0:\n break\n\n # 3. 生成买入订单:按机器学习算法预测的排序,买入前面的stock_count只股票\n buy_instruments = list(ranker_prediction[ranker_prediction.prediction>0.66].instrument) #\n buy_cash_weights = [1/len(buy_instruments) for k in range(len(buy_instruments))] \n max_cash_per_instrument = context.portfolio.portfolio_value * context.max_cash_per_instrument\n for i, instrument in enumerate(buy_instruments):\n cash = cash_for_buy * buy_cash_weights[i]\n if cash > max_cash_per_instrument - positions.get(instrument, 0):\n # 确保股票持仓量不会超过每次股票最大的占用资金量\n cash = max_cash_per_instrument - positions.get(instrument, 0)\n if cash > 0:\n context.order_value(context.symbol(instrument), cash)\n","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"prepare","Value":"# 回测引擎:准备数据,只执行一次\ndef bigquant_run(context):\n pass\n","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"initialize","Value":"# 回测引擎:初始化函数,只执行一次\ndef bigquant_run(context):\n # 加载预测数据\n context.ranker_prediction = context.options['data'].read_df()\n\n # 系统已经设置了默认的交易手续费和滑点,要修改手续费可使用如下函数\n context.set_commission(PerOrder(buy_cost=0.0003, sell_cost=0.0013, min_cost=5))\n # 预测数据,通过options传入进来,使用 read_df 函数,加载到内存 (DataFrame)\n # 设置买入的股票数量,这里买入预测股票列表排名靠前的5只\n stock_count = 5\n # 每只的股票的权重,如下的权重分配会使得靠前的股票分配多一点的资金,[0.339160, 0.213986, 0.169580, ..]\n context.stock_weights = T.norm([1 / math.log(i + 2) for i in range(0, stock_count)])\n # 设置每只股票占用的最大资金比例\n context.max_cash_per_instrument = 0.2\n context.options['hold_days'] = 
5\n","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"before_trading_start","Value":"","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"volume_limit","Value":0.025,"ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"order_price_field_buy","Value":"open","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"order_price_field_sell","Value":"close","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"capital_base","Value":1000000,"ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"auto_cancel_non_tradable_orders","Value":"True","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"data_frequency","Value":"daily","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"price_type","Value":"后复权","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"product_type","Value":"股票","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"plot_charts","Value":"True","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"backtest_only","Value":"False","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"benchmark","Value":"000300.SHA","ValueType":"Literal","LinkedGlobalParameter":null}],"InputPortsInternal":[{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"instruments","NodeId":"-141"},{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"options_data","NodeId":"-141"},{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"history_ds","NodeId":"-141"},{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"benchmark_ds","NodeId":"-141"},{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"trading_calendar","NodeId":"-141"}],"OutputPortsInternal":[{"Name":"raw_perf","NodeId":"-141","OutputType":null}],"UsePreviousResults":false,"moduleIdForCode":19,"Comment":"","CommentCollapsed":true},{"Id":"-117","ModuleId":"BigQuantSpace.cached.cached-v3","ModuleParameters":[{"Name":"run","Value":"# 特征提取与转换\n# Feature selection and transformation\n\ndef bigquant_run(input_1, input_2, input_3):\n\n \n # 包的加载\n # Package loaded\n # 这里有很多包其实在这个方法里是用不着的,比如我们仅仅用了GBDT,我们暂时也不需要做训练集和测试集的划分\n \n import numpy as np\n np.random.seed(10)\n\n import matplotlib.pyplot as plt\n\n from sklearn.datasets import make_classification\n from sklearn.linear_model import LogisticRegression\n from sklearn.ensemble import (RandomTreesEmbedding, RandomForestClassifier,\n GradientBoostingClassifier)\n from sklearn.preprocessing import OneHotEncoder\n from sklearn.model_selection import train_test_split\n from sklearn.metrics import roc_curve\n from sklearn.pipeline import make_pipeline #做模型之间的管子链接\n\n ## 设置机器学习的参数,区分预测集和训练集\n ## Set parameters and load data\n Data = input_1.read_df() #获取全部数据\n X = Data[input_2.read_pickle()]\n y = pd.DataFrame(Data['label'])\n \n #print(y.columns)\n #print(X.columns)\n n_estimator = 5\n\n # 在这里我们实际上不需要做测试集和训练集的区分,因为本部分本来就是训练的部分\n X_train = X\n y_train = y\n\n # 需要将对 LR和GBDT的训练集给区分开来\n # It is important to train the ensemble of trees on a different subset of the training data than the linear regression model to avoid overfitting, in particular if the total number of leaves is similar to the number of training samples\n X_train, X_train_lr, y_train, y_train_lr = train_test_split(X_train, y_train, test_size=0.7)\n\n # Supervised transformation based on gradient boosted trees\n # 这里是训练好的模型,GBDT模型,编码模型和逻辑回归模型\n grd = GradientBoostingClassifier(n_estimators=n_estimator) \n grd_enc = 
OneHotEncoder(categories='auto')\n grd_lm = LogisticRegression(solver='lbfgs', max_iter=1000)\n grd.fit(X_train, y_train)\n grd_enc.fit(grd.apply(X_train)[:, :, 0])\n grd_lm.fit(grd_enc.transform(grd.apply(X_train_lr)[:, :, 0]), y_train_lr)\n \n Model = dict()\n Model['grd'] = grd\n Model['grd_enc'] = grd_enc\n Model['grd_lm'] = grd_lm\n \n T = DataSource.write_pickle(Model)\n return Outputs(data_1=T, data_2=None, data_3=None)\n","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"post_run","Value":"# 后处理函数,可选。输入是主函数的输出,可以在这里对数据做处理,或者返回更友好的outputs数据格式。此函数输出不会被缓存。\ndef bigquant_run(outputs):\n return outputs\n","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"input_ports","Value":"","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"params","Value":"{}","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"output_ports","Value":"","ValueType":"Literal","LinkedGlobalParameter":null}],"InputPortsInternal":[{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"input_1","NodeId":"-117"},{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"input_2","NodeId":"-117"},{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"input_3","NodeId":"-117"}],"OutputPortsInternal":[{"Name":"data_1","NodeId":"-117","OutputType":null},{"Name":"data_2","NodeId":"-117","OutputType":null},{"Name":"data_3","NodeId":"-117","OutputType":null}],"UsePreviousResults":true,"moduleIdForCode":4,"Comment":"","CommentCollapsed":true},{"Id":"-388","ModuleId":"BigQuantSpace.cached.cached-v3","ModuleParameters":[{"Name":"run","Value":"# 输入的是模型,输出的是预测部分\n\ndef bigquant_run(input_1, input_2, input_3):\n \n # 三个模型从这个字典结构里面拿出来\n\n Model = input_1.read_pickle()\n \n grd = Model['grd']\n grd_enc = Model['grd_enc'] \n grd_lm = Model['grd_lm']\n \n \n \n X_test = input_2.read_df()\n X_test1 = X_test[input_3.read_pickle()]\n y_pred_grd_lm = grd_lm.predict_proba(grd_enc.transform(grd.apply(X_test1)[:, :, 0]))[:, 1]\n \n \n Y = pd.DataFrame(y_pred_grd_lm,columns=['prediction'])\n \n Y['date'] = X_test['date']\n Y['instrument'] = X_test['instrument']\n \n Y = DataSource.write_df(Y)\n return Outputs(data_1=Y, data_2=None, data_3=None)\n","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"post_run","Value":"# 后处理函数,可选。输入是主函数的输出,可以在这里对数据做处理,或者返回更友好的outputs数据格式。此函数输出不会被缓存。\ndef bigquant_run(outputs):\n return outputs\n","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"input_ports","Value":"","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"params","Value":"{}","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"output_ports","Value":"","ValueType":"Literal","LinkedGlobalParameter":null}],"InputPortsInternal":[{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"input_1","NodeId":"-388"},{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"input_2","NodeId":"-388"},{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"input_3","NodeId":"-388"}],"OutputPortsInternal":[{"Name":"data_1","NodeId":"-388","OutputType":null},{"Name":"data_2","NodeId":"-388","OutputType":null},{"Name":"data_3","NodeId":"-388","OutputType":null}],"UsePreviousResults":true,"moduleIdForCode":5,"Comment":"","CommentCollapsed":true},{"Id":"-3954","ModuleId":"BigQuantSpace.cached.cached-v3","ModuleParameters":[{"Name":"run","Value":"# Python 代码入口函数,input_1/2/3 对应三个输入端,data_1/2/3 对应三个输出端\ndef bigquant_run(input_1, input_2, input_3):\n \n import matplotlib.pyplot as 
plt\n from sklearn.metrics import confusion_matrix\n # 本部分的主要功能是对模型的预测效果进行分析\n\n Data_pre = input_1.read_df()\n Data_real = input_2.read_df()\n \n Data = pd.merge(Data_pre,Data_real,how='inner',on = ['date','instrument'])\n Pred = np.where(Data['prediction']>0.5,1,0)\n Real = np.array(Data['label'])\n \n cm = confusion_matrix(Real, Pred)\n \n cm_normalized = cm.astype('float')/cm.sum(axis=1)[:, np.newaxis]\n print(cm_normalized)\n \n \n import seaborn as sn\n \n df_cm = pd.DataFrame(cm_normalized)\n plt.figure(figsize = (15,10))\n sn.heatmap(df_cm, annot=True)\n print('准确率')\n c = (Real == Pred)\n print(len(c[c])/len(c))\n print('预测涨结果涨')\n P = Real[c]\n L11 = len(P[P==1])\n print(L11)\n print('预测跌结果跌')\n L00 = len(P[P==0])\n print(L00)\n print('预测涨结果跌')\n L10 = len(Real[Real==0])-len(P[P==0])\n print(L10)\n print('预测跌结果涨')\n L01 = len(Real[Real==1])-len(P[P==1])\n print(L01)\n print('\\n')\n print('查准率\\n')\n print('预测涨的准确率\\n')\n print(L11/(L11+L10))\n print('预测跌的准确率\\n')\n print(L00/(L01+L00))\n print('\\n')\n \n print('查全率\\n')\n print('涨的股票中预测准确率\\n')\n print(L11/(L11+L01))\n print('跌的股票中预测准确率\\n')\n print(L00/(L00+L10))\n \n from sklearn.metrics import roc_curve\n fpr_grd_lm, tpr_grd_lm, _ = roc_curve(Real, Pred)\n \n plt.figure()\n plt.plot(fpr_grd_lm, tpr_grd_lm, label='GBT')\n plt.xlabel('False positive rate')\n plt.ylabel('True positive rate')\n plt.title('ROC curve')\n plt.legend(loc='best')\n plt.show()\n return Outputs(data_1=DataSource.write_pickle(Real), data_2=DataSource.write_pickle(Pred), data_3=None)\n","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"post_run","Value":"# 后处理函数,可选。输入是主函数的输出,可以在这里对数据做处理,或者返回更友好的outputs数据格式。此函数输出不会被缓存。\ndef bigquant_run(outputs):\n return outputs\n","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"input_ports","Value":"","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"params","Value":"{}","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"output_ports","Value":"","ValueType":"Literal","LinkedGlobalParameter":null}],"InputPortsInternal":[{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"input_1","NodeId":"-3954"},{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"input_2","NodeId":"-3954"},{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"input_3","NodeId":"-3954"}],"OutputPortsInternal":[{"Name":"data_1","NodeId":"-3954","OutputType":null},{"Name":"data_2","NodeId":"-3954","OutputType":null},{"Name":"data_3","NodeId":"-3954","OutputType":null}],"UsePreviousResults":false,"moduleIdForCode":6,"Comment":"","CommentCollapsed":true},{"Id":"-126","ModuleId":"BigQuantSpace.advanced_auto_labeler.advanced_auto_labeler-v2","ModuleParameters":[{"Name":"label_expr","Value":"# #号开始的表示注释\n# 0. 每行一个,顺序执行,从第二个开始,可以使用label字段\n# 1. 可用数据字段见 https://bigquant.com/docs/data_history_data.html\n# 添加benchmark_前缀,可使用对应的benchmark数据\n# 2. 
可用操作符和函数见 `表达式引擎 <https://bigquant.com/docs/big_expr.html>`_\n\n# 计算收益:5日收盘价(作为卖出价格)除以明日开盘价(作为买入价格)\nwhere(shift(close, -5) / shift(open, -1)>1,1,0)\n\n# 极值处理:用1%和99%分位的值做clip\n#clip(label, all_quantile(label, 0.01), all_quantile(label, 0.99))\n\n# 将分数映射到分类,这里使用20个分类\n#all_wbins(label, 20)\n\n# 过滤掉一字涨停的情况 (设置label为NaN,在后续处理和训练中会忽略NaN的label)\nwhere(shift(high, -1) == shift(low, -1), NaN, label)\n","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"start_date","Value":"","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"end_date","Value":"","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"benchmark","Value":"000300.SHA","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"drop_na_label","Value":"True","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"cast_label_int","Value":"True","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"user_functions","Value":"","ValueType":"Literal","LinkedGlobalParameter":null}],"InputPortsInternal":[{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"instruments","NodeId":"-126"}],"OutputPortsInternal":[{"Name":"data","NodeId":"-126","OutputType":null}],"UsePreviousResults":true,"moduleIdForCode":8,"Comment":"","CommentCollapsed":true},{"Id":"-86","ModuleId":"BigQuantSpace.dropnan.dropnan-v1","ModuleParameters":[],"InputPortsInternal":[{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"input_data","NodeId":"-86"}],"OutputPortsInternal":[{"Name":"data","NodeId":"-86","OutputType":null}],"UsePreviousResults":true,"moduleIdForCode":14,"Comment":"","CommentCollapsed":true},{"Id":"-1689","ModuleId":"BigQuantSpace.dropnan.dropnan-v1","ModuleParameters":[],"InputPortsInternal":[{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"input_data","NodeId":"-1689"}],"OutputPortsInternal":[{"Name":"data","NodeId":"-1689","OutputType":null}],"UsePreviousResults":true,"moduleIdForCode":10,"Comment":"","CommentCollapsed":true},{"Id":"-3472","ModuleId":"BigQuantSpace.instruments.instruments-v2","ModuleParameters":[{"Name":"start_date","Value":"2016-01-01","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"end_date","Value":"2017-01-01","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"market","Value":"CN_STOCK_A","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"instrument_list","Value":"","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"max_count","Value":0,"ValueType":"Literal","LinkedGlobalParameter":null}],"InputPortsInternal":[{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"rolling_conf","NodeId":"-3472"}],"OutputPortsInternal":[{"Name":"data","NodeId":"-3472","OutputType":null}],"UsePreviousResults":true,"moduleIdForCode":11,"Comment":"","CommentCollapsed":true},{"Id":"-170","ModuleId":"BigQuantSpace.cached.cached-v3","ModuleParameters":[{"Name":"run","Value":"# Python 代码入口函数,input_1/2/3 对应三个输入端,data_1/2/3 对应三个输出端\ndef bigquant_run(input_1, input_2, input_3):\n \n # 输入我们需要的特征名\n # Input the features we need\n \n Columns = ['rank_avg_amount_5', 'rank_avg_turn_5',\n 'rank_volatility_5_0', 'rank_swing_volatility_5_0',\n 'rank_avg_mf_net_amount_5', 'rank_beta_industry_5_0', 'rank_return_5',\n 'rank_return_2', 'mf_net_amount_1', 'return_5-1', 'return_10-1',\n 'return_20-1', 'avg_amount_0/avg_amount_5-1',\n 'avg_amount_5/avg_amount_20-1', 'rank_avg_amount_0-rank_avg_amount_5',\n 'rank_avg_amount_5-rank_avg_amount_10', 'rank_return_0-rank_return_5',\n 
'rank_return_5-rank_return_10', 'beta_csi300_30_0/10',\n 'beta_csi300_60_0/10', 'swing_volatility_5_0/swing_volatility_30_0-1',\n 'swing_volatility_30_0/swing_volatility_60_0-1',\n 'ta_atr_14_0/ta_atr_28_0-1', 'ta_sma_5_0/ta_sma_20_0-1',\n 'ta_sma_10_0/ta_sma_20_0-1', 'ta_sma_20_0/ta_sma_30_0-1',\n 'ta_sma_30_0/ta_sma_60_0-1', 'ta_rsi_14_0/100', 'ta_rsi_28_0/100',\n 'ta_cci_14_0/500', 'ta_cci_28_0/500', 'beta_industry_30_0/10',\n 'beta_industry_60_0/10', 'ta_sma(amount_0, 10)/ta_sma(amount_0, 20)-1',\n 'ta_sma(amount_0, 20)/ta_sma(amount_0, 30)-1',\n 'ta_sma(amount_0, 30)/ta_sma(amount_0, 60)-1',\n 'ta_sma(amount_0, 50)/ta_sma(amount_0, 100)-1',\n 'ta_sma(turn_0, 10)/ta_sma(turn_0, 20)-1',\n 'ta_sma(turn_0, 20)/ta_sma(turn_0, 30)-1',\n 'ta_sma(turn_0, 30)/ta_sma(turn_0, 60)-1',\n 'ta_sma(turn_0, 50)/ta_sma(turn_0, 100)-1', 'high_0/low_0-1',\n 'close_0/open_0-1', 'shift(close_0,1)/close_0-1',\n 'shift(close_0,2)/close_0-1', 'shift(close_0,3)/close_0-1',\n 'shift(close_0,4)/close_0-1', 'shift(close_0,5)/close_0-1',\n 'shift(close_0,10)/close_0-1', 'shift(close_0,20)/close_0-1',\n 'ta_sma(high_0-low_0, 5)/ta_sma(high_0-low_0, 20)-1',\n 'ta_sma(high_0-low_0, 10)/ta_sma(high_0-low_0, 20)-1',\n 'ta_sma(high_0-low_0, 20)/ta_sma(high_0-low_0, 30)-1',\n 'ta_sma(high_0-low_0, 30)/ta_sma(high_0-low_0, 60)-1',\n 'ta_sma(high_0-low_0, 50)/ta_sma(high_0-low_0, 100)-1',\n 'std(close_0,5)/std(close_0,20)-1', 'std(close_0,10)/std(close_0,20)-1',\n 'std(close_0,20)/std(close_0,30)-1',\n 'std(close_0,30)/std(close_0,60)-1',\n 'std(close_0,50)/std(close_0,100)-1', 'shift(mf_net_amount_s_0,3)',\n 'shift(mf_net_amount_m_0,3)', 'shift(mf_net_amount_l_0,3)']\n \n C = DataSource.write_pickle(Columns)\n \n return Outputs(data_1=C, data_2=None, data_3=None)\n","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"post_run","Value":"# 后处理函数,可选。输入是主函数的输出,可以在这里对数据做处理,或者返回更友好的outputs数据格式。此函数输出不会被缓存。\ndef bigquant_run(outputs):\n return outputs\n","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"input_ports","Value":"","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"params","Value":"{}","ValueType":"Literal","LinkedGlobalParameter":null},{"Name":"output_ports","Value":"","ValueType":"Literal","LinkedGlobalParameter":null}],"InputPortsInternal":[{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"input_1","NodeId":"-170"},{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"input_2","NodeId":"-170"},{"DataSourceId":null,"TrainedModelId":null,"TransformModuleId":null,"Name":"input_3","NodeId":"-170"}],"OutputPortsInternal":[{"Name":"data_1","NodeId":"-170","OutputType":null},{"Name":"data_2","NodeId":"-170","OutputType":null},{"Name":"data_3","NodeId":"-170","OutputType":null}],"UsePreviousResults":true,"moduleIdForCode":21,"Comment":"","CommentCollapsed":true}],"SerializedClientData":"<?xml version='1.0' encoding='utf-16'?><DataV1 xmlns:xsd='http://www.w3.org/2001/XMLSchema' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'><Meta /><NodePositions><NodePosition Node='287d2cb0-f53c-4101-bdf8-104b137c8601-8' Position='-77.36975479125977,-509.498348236084,200,200'/><NodePosition Node='287d2cb0-f53c-4101-bdf8-104b137c8601-15' Position='-119.75482940673828,-257.4370346069336,200,200'/><NodePosition Node='287d2cb0-f53c-4101-bdf8-104b137c8601-24' Position='551.20849609375,-489.6822052001953,200,200'/><NodePosition Node='287d2cb0-f53c-4101-bdf8-104b137c8601-53' Position='-40.264495849609375,-57.605634689331055,200,200'/><NodePosition 
Node='287d2cb0-f53c-4101-bdf8-104b137c8601-62' Position='1579,-210,200,200'/><NodePosition Node='287d2cb0-f53c-4101-bdf8-104b137c8601-84' Position='25.62823486328125,76.8867416381836,200,200'/><NodePosition Node='-106' Position='124.0726318359375,-345.6668701171875,200,200'/><NodePosition Node='-113' Position='254.18389892578125,-196.6975326538086,200,200'/><NodePosition Node='-122' Position='1230,-50,200,200'/><NodePosition Node='-129' Position='1126,171,200,200'/><NodePosition Node='-141' Position='1719,939,200,200'/><NodePosition Node='-117' Position='333.0050354003906,426.8447570800781,200,200'/><NodePosition Node='-388' Position='438.78753662109375,677,200,200'/><NodePosition Node='-3954' Position='412,978,200,200'/><NodePosition Node='-126' Position='1485,186,200,200'/><NodePosition Node='-86' Position='871,347,200,200'/><NodePosition Node='-1689' Position='1234,383,200,200'/><NodePosition Node='-3472' Position='1765.5711669921875,51,200,200'/><NodePosition Node='-170' Position='545.5289916992188,-263.16461181640625,200,200'/></NodePositions><NodeGroups /></DataV1>"},"IsDraft":true,"ParentExperimentId":null,"WebService":{"IsWebServiceExperiment":false,"Inputs":[],"Outputs":[],"Parameters":[{"Name":"交易日期","Value":"","ParameterDefinition":{"Name":"交易日期","FriendlyName":"交易日期","DefaultValue":"","ParameterType":"String","HasDefaultValue":true,"IsOptional":true,"ParameterRules":[],"HasRules":false,"MarkupType":0,"CredentialDescriptor":null}}],"WebServiceGroupId":null,"SerializedClientData":"<?xml version='1.0' encoding='utf-16'?><DataV1 xmlns:xsd='http://www.w3.org/2001/XMLSchema' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'><Meta /><NodePositions></NodePositions><NodeGroups /></DataV1>"},"DisableNodesUpdate":false,"Category":"user","Tags":[],"IsPartialRun":true}
    In [13]:
    # This code was generated automatically by the visual strategy environment, 2019-02-13 20:42
    # This code cell can only be edited in visualization mode. You can also copy the code into a new code cell or strategy and modify it there.
    
    
    # Python 代码入口函数,input_1/2/3 对应三个输入端,data_1/2/3 对应三个输出端
    def m21_run_bigquant_run(input_1, input_2, input_3):
        
        # 输入我们需要的特征名
        # Input the features we need
        
        Columns = ['rank_avg_amount_5', 'rank_avg_turn_5',
           'rank_volatility_5_0', 'rank_swing_volatility_5_0',
           'rank_avg_mf_net_amount_5', 'rank_beta_industry_5_0', 'rank_return_5',
           'rank_return_2', 'mf_net_amount_1', 'return_5-1', 'return_10-1',
           'return_20-1', 'avg_amount_0/avg_amount_5-1',
           'avg_amount_5/avg_amount_20-1', 'rank_avg_amount_0-rank_avg_amount_5',
           'rank_avg_amount_5-rank_avg_amount_10', 'rank_return_0-rank_return_5',
           'rank_return_5-rank_return_10', 'beta_csi300_30_0/10',
           'beta_csi300_60_0/10', 'swing_volatility_5_0/swing_volatility_30_0-1',
           'swing_volatility_30_0/swing_volatility_60_0-1',
           'ta_atr_14_0/ta_atr_28_0-1', 'ta_sma_5_0/ta_sma_20_0-1',
           'ta_sma_10_0/ta_sma_20_0-1', 'ta_sma_20_0/ta_sma_30_0-1',
           'ta_sma_30_0/ta_sma_60_0-1', 'ta_rsi_14_0/100', 'ta_rsi_28_0/100',
           'ta_cci_14_0/500', 'ta_cci_28_0/500', 'beta_industry_30_0/10',
           'beta_industry_60_0/10', 'ta_sma(amount_0, 10)/ta_sma(amount_0, 20)-1',
           'ta_sma(amount_0, 20)/ta_sma(amount_0, 30)-1',
           'ta_sma(amount_0, 30)/ta_sma(amount_0, 60)-1',
           'ta_sma(amount_0, 50)/ta_sma(amount_0, 100)-1',
           'ta_sma(turn_0, 10)/ta_sma(turn_0, 20)-1',
           'ta_sma(turn_0, 20)/ta_sma(turn_0, 30)-1',
           'ta_sma(turn_0, 30)/ta_sma(turn_0, 60)-1',
           'ta_sma(turn_0, 50)/ta_sma(turn_0, 100)-1', 'high_0/low_0-1',
           'close_0/open_0-1', 'shift(close_0,1)/close_0-1',
           'shift(close_0,2)/close_0-1', 'shift(close_0,3)/close_0-1',
           'shift(close_0,4)/close_0-1', 'shift(close_0,5)/close_0-1',
           'shift(close_0,10)/close_0-1', 'shift(close_0,20)/close_0-1',
           'ta_sma(high_0-low_0, 5)/ta_sma(high_0-low_0, 20)-1',
           'ta_sma(high_0-low_0, 10)/ta_sma(high_0-low_0, 20)-1',
           'ta_sma(high_0-low_0, 20)/ta_sma(high_0-low_0, 30)-1',
           'ta_sma(high_0-low_0, 30)/ta_sma(high_0-low_0, 60)-1',
           'ta_sma(high_0-low_0, 50)/ta_sma(high_0-low_0, 100)-1',
           'std(close_0,5)/std(close_0,20)-1', 'std(close_0,10)/std(close_0,20)-1',
           'std(close_0,20)/std(close_0,30)-1',
           'std(close_0,30)/std(close_0,60)-1',
           'std(close_0,50)/std(close_0,100)-1', 'shift(mf_net_amount_s_0,3)',
           'shift(mf_net_amount_m_0,3)', 'shift(mf_net_amount_l_0,3)']
        
        C = DataSource.write_pickle(Columns)
        
        return Outputs(data_1=C, data_2=None, data_3=None)
    
    # 后处理函数,可选。输入是主函数的输出,可以在这里对数据做处理,或者返回更友好的outputs数据格式。此函数输出不会被缓存。
    def m21_post_run_bigquant_run(outputs):
        return outputs
    
    # 特征提取与转换
    # Feature selection and transformation
    
    def m4_run_bigquant_run(input_1, input_2, input_3):
    
        
        # 包的加载
        # Package loaded
        # 这里有很多包其实在这个方法里是用不着的,比如我们仅仅用了GBDT,我们暂时也不需要做训练集和测试集的划分
        
        import numpy as np
        np.random.seed(10)
    
        import matplotlib.pyplot as plt
    
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.ensemble import (RandomTreesEmbedding, RandomForestClassifier,
                                  GradientBoostingClassifier)
        from sklearn.preprocessing import OneHotEncoder
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_curve
        from sklearn.pipeline import make_pipeline   #做模型之间的管子链接
    
        ## 设置机器学习的参数,区分预测集和训练集
        ## Set parameters and load data
        Data = input_1.read_df()  #获取全部数据
        X = Data[input_2.read_pickle()]
        y = pd.DataFrame(Data['label'])
        
        #print(y.columns)
        #print(X.columns)
        n_estimator = 5
    
        # 在这里我们实际上不需要做测试集和训练集的区分,因为本部分本来就是训练的部分
        X_train = X
        y_train = y
    
        # 需要将对 LR和GBDT的训练集给区分开来
        # It is important to train the ensemble of trees on a different subset of the training data than the linear regression model to avoid overfitting, in particular if the total number of leaves is similar to the number of training samples
        X_train, X_train_lr, y_train, y_train_lr = train_test_split(X_train, y_train, test_size=0.7)
    
        # Supervised transformation based on gradient boosted trees
        # 这里是训练好的模型,GBDT模型,编码模型和逻辑回归模型
        grd = GradientBoostingClassifier(n_estimators=n_estimator)   
        grd_enc = OneHotEncoder(categories='auto')
        grd_lm = LogisticRegression(solver='lbfgs', max_iter=1000)
        grd.fit(X_train, y_train)
        grd_enc.fit(grd.apply(X_train)[:, :, 0])
        grd_lm.fit(grd_enc.transform(grd.apply(X_train_lr)[:, :, 0]), y_train_lr)
        
        Model = dict()
        Model['grd'] = grd
        Model['grd_enc'] = grd_enc
        Model['grd_lm'] = grd_lm
        
        T = DataSource.write_pickle(Model)
        return Outputs(data_1=T, data_2=None, data_3=None)
    
    # 后处理函数,可选。输入是主函数的输出,可以在这里对数据做处理,或者返回更友好的outputs数据格式。此函数输出不会被缓存。
    def m4_post_run_bigquant_run(outputs):
        return outputs
    
    # 输入的是模型,输出的是预测部分
    
    def m5_run_bigquant_run(input_1, input_2, input_3):
        
        # 三个模型从这个字典结构里面拿出来
    
        Model = input_1.read_pickle()
        
        grd = Model['grd']
        grd_enc = Model['grd_enc'] 
        grd_lm = Model['grd_lm']
        
        
        
        X_test = input_2.read_df()
        X_test1 = X_test[input_3.read_pickle()]
        y_pred_grd_lm = grd_lm.predict_proba(grd_enc.transform(grd.apply(X_test1)[:, :, 0]))[:, 1]
        
        
        Y = pd.DataFrame(y_pred_grd_lm,columns=['prediction'])
        
        Y['date'] = X_test['date']
        Y['instrument'] = X_test['instrument']
        
        Y = DataSource.write_df(Y)
        return Outputs(data_1=Y, data_2=None, data_3=None)
    
    # 后处理函数,可选。输入是主函数的输出,可以在这里对数据做处理,或者返回更友好的outputs数据格式。此函数输出不会被缓存。
    def m5_post_run_bigquant_run(outputs):
        return outputs
    
    # Python 代码入口函数,input_1/2/3 对应三个输入端,data_1/2/3 对应三个输出端
    def m6_run_bigquant_run(input_1, input_2, input_3):
        
        import matplotlib.pyplot as plt
        from sklearn.metrics import confusion_matrix
        # 本部分的主要功能是对模型的预测效果进行分析
    
        Data_pre = input_1.read_df()
        Data_real = input_2.read_df()
        
        Data = pd.merge(Data_pre,Data_real,how='inner',on = ['date','instrument'])
        Pred = np.where(Data['prediction']>0.5,1,0)
        Real = np.array(Data['label'])
        
        cm = confusion_matrix(Real, Pred)
        
        cm_normalized = cm.astype('float')/cm.sum(axis=1)[:, np.newaxis]
        print(cm_normalized)
        
        
        import seaborn as sn
        
        df_cm = pd.DataFrame(cm_normalized)
        plt.figure(figsize = (15,10))
        sn.heatmap(df_cm, annot=True)
        print('准确率')
        c = (Real == Pred)
        print(len(c[c])/len(c))
        print('预测涨结果涨')
        P = Real[c]
        L11 = len(P[P==1])
        print(L11)
        print('预测跌结果跌')
        L00 = len(P[P==0])
        print(L00)
        print('预测涨结果跌')
        L10 = len(Real[Real==0])-len(P[P==0])
        print(L10)
        print('预测跌结果涨')
        L01 = len(Real[Real==1])-len(P[P==1])
        print(L01)
        print('\n')
        print('查准率\n')
        print('预测涨的准确率\n')
        print(L11/(L11+L10))
        print('预测跌的准确率\n')
        print(L00/(L01+L00))
        print('\n')
        
        print('查全率\n')
        print('涨的股票中预测准确率\n')
        print(L11/(L11+L01))
        print('跌的股票中预测准确率\n')
        print(L00/(L00+L10))
        
        from sklearn.metrics import roc_curve
        fpr_grd_lm, tpr_grd_lm, _ = roc_curve(Real, Pred)
        
        plt.figure()
        plt.plot(fpr_grd_lm, tpr_grd_lm, label='GBT')
        plt.xlabel('False positive rate')
        plt.ylabel('True positive rate')
        plt.title('ROC curve')
        plt.legend(loc='best')
        plt.show()
        return Outputs(data_1=DataSource.write_pickle(Real), data_2=DataSource.write_pickle(Pred), data_3=None)
    
    # 后处理函数,可选。输入是主函数的输出,可以在这里对数据做处理,或者返回更友好的outputs数据格式。此函数输出不会被缓存。
    def m6_post_run_bigquant_run(outputs):
        return outputs
    
    # Backtest engine: daily data handler, executed once per trading day
    def m19_handle_data_bigquant_run(context, data):
        # Filter the prediction data down to today's date
        ranker_prediction = context.ranker_prediction[
            context.ranker_prediction.date == data.current_dt.strftime('%Y-%m-%d')]

        # 1. Cash allocation
        # The average holding period is hold_days and stocks are bought every day, so roughly 1/hold_days of the capital is expected to be used per day
        # In practice there is some buying slippage, so equal amounts are used during the first hold_days days; after that, remaining cash is used as far as possible (capped here at 1.5x the equal amount)
        is_staging = context.trading_day_index < context.options['hold_days'] # still in the build-up phase (first hold_days days)?
        cash_avg = context.portfolio.portfolio_value / context.options['hold_days']
        cash_for_buy = min(context.portfolio.cash, (1 if is_staging else 1.5) * cash_avg)
        cash_for_sell = cash_avg - (context.portfolio.cash - cash_for_buy)
        positions = {e.symbol: p.amount * p.last_sale_price
                     for e, p in context.perf_tracker.position_tracker.positions.items()}

        # 2. Sell orders: selling only starts after hold_days; held stocks whose predicted score drops below 0.42 are sold
        if not is_staging and cash_for_sell > 0:
            equities = {e.symbol: e for e, p in context.perf_tracker.position_tracker.positions.items()}
            instruments = [m for m in list(ranker_prediction[ranker_prediction.prediction<0.42].instrument) if m in equities]
            # print('rank order for sell %s' % instruments)
            for instrument in instruments:
                context.order_target(context.symbol(instrument), 0)
                cash_for_sell -= positions[instrument]
                if cash_for_sell <= 0:
                    break

        # 3. Buy orders: buy every stock whose predicted score exceeds 0.66, with equal cash weights
        buy_instruments = list(ranker_prediction[ranker_prediction.prediction>0.66].instrument)
        buy_cash_weights = [1/len(buy_instruments) for k in range(len(buy_instruments))]
        max_cash_per_instrument = context.portfolio.portfolio_value * context.max_cash_per_instrument
        for i, instrument in enumerate(buy_instruments):
            cash = cash_for_buy * buy_cash_weights[i]
            if cash > max_cash_per_instrument - positions.get(instrument, 0):
                # Make sure a single position never exceeds the per-stock cash cap
                cash = max_cash_per_instrument - positions.get(instrument, 0)
            if cash > 0:
                context.order_value(context.symbol(instrument), cash)

    # Backtest engine: data preparation, executed once
    def m19_prepare_bigquant_run(context):
        pass

    # Backtest engine: initialization, executed once
    def m19_initialize_bigquant_run(context):
        # Load the prediction data (passed in through options) into memory as a DataFrame
        context.ranker_prediction = context.options['data'].read_df()

        # The engine sets default commissions and slippage; they can be overridden like this
        context.set_commission(PerOrder(buy_cost=0.0003, sell_cost=0.0013, min_cost=5))
        # stock_count and stock_weights come from the original ranking template;
        # the threshold-based handle_data above does not actually use stock_weights
        stock_count = 5
        # Weights like [0.339160, 0.213986, 0.169580, ...] would give higher-ranked stocks more cash
        context.stock_weights = T.norm([1 / math.log(i + 2) for i in range(0, stock_count)])
        # Maximum fraction of the portfolio allocated to a single stock
        context.max_cash_per_instrument = 0.2
        context.options['hold_days'] = 5
    
    
    m1 = M.instruments.v2(
        start_date='2010-01-01',
        end_date='2016-01-01',
        market='CN_STOCK_A',
        instrument_list='',
        max_count=0
    )
    
    m2 = M.advanced_auto_labeler.v2(
        instruments=m1.data,
        label_expr="""# #号开始的表示注释
    # 0. 每行一个,顺序执行,从第二个开始,可以使用label字段
    # 1. 可用数据字段见 https://bigquant.com/docs/data_history_data.html
    #   添加benchmark_前缀,可使用对应的benchmark数据
    # 2. 可用操作符和函数见 `表达式引擎 <https://bigquant.com/docs/big_expr.html>`_
    
    # 计算收益:5日收盘价(作为卖出价格)除以明日开盘价(作为买入价格), 五日收益率为正数
    where(shift(close, -5) / shift(open, -1)>1.001,1,0)
    
    # 极值处理:用1%和99%分位的值做clip
    #clip(label, all_quantile(label, 0.01), all_quantile(label, 0.99))
    
    # 将分数映射到分类,这里使用20个分类
    #all_wbins(label, 20)
    
    # 过滤掉一字涨跌停的情况 (设置label为NaN,在后续处理和训练中会忽略NaN的label)
    where(shift(high, -1) == shift(low, -1), NaN, label)   # 一开盘就到了10%那里,既是high也是low
    """,
        start_date='',
        end_date='',
        benchmark='000300.SHA',
        drop_na_label=True,
        cast_label_int=True
    )
    
    m3 = M.input_features.v1(
        features="""# #号开始的表示注释
    # 多个特征,每行一个,可以包含基础特征和衍生特征
    
    return_5-1
    return_10-1
    return_20-1
    avg_amount_0/avg_amount_5-1
    avg_amount_5/avg_amount_20-1
    rank_avg_amount_0-rank_avg_amount_5
    rank_avg_amount_5-rank_avg_amount_10
    rank_return_0-rank_return_5
    rank_return_5-rank_return_10
    beta_csi300_30_0/10
    beta_csi300_60_0/10
    swing_volatility_5_0/swing_volatility_30_0-1
    swing_volatility_30_0/swing_volatility_60_0-1
    ta_atr_14_0/ta_atr_28_0-1
    ta_sma_5_0/ta_sma_20_0-1
    ta_sma_10_0/ta_sma_20_0-1
    ta_sma_20_0/ta_sma_30_0-1
    ta_sma_30_0/ta_sma_60_0-1
    ta_rsi_14_0/100
    ta_rsi_28_0/100
    ta_cci_14_0/500
    ta_cci_28_0/500
    beta_industry_30_0/10
    beta_industry_60_0/10
    ta_sma(amount_0, 10)/ta_sma(amount_0, 20)-1
    ta_sma(amount_0, 20)/ta_sma(amount_0, 30)-1
    ta_sma(amount_0, 30)/ta_sma(amount_0, 60)-1
    ta_sma(amount_0, 50)/ta_sma(amount_0, 100)-1
    ta_sma(turn_0, 10)/ta_sma(turn_0, 20)-1
    ta_sma(turn_0, 20)/ta_sma(turn_0, 30)-1
    ta_sma(turn_0, 30)/ta_sma(turn_0, 60)-1
    ta_sma(turn_0, 50)/ta_sma(turn_0, 100)-1
    high_0/low_0-1
    close_0/open_0-1
    shift(close_0,1)/close_0-1
    shift(close_0,2)/close_0-1
    shift(close_0,3)/close_0-1
    shift(close_0,4)/close_0-1
    shift(close_0,5)/close_0-1
    shift(close_0,10)/close_0-1
    shift(close_0,20)/close_0-1
    ta_sma(high_0-low_0, 5)/ta_sma(high_0-low_0, 20)-1
    ta_sma(high_0-low_0, 10)/ta_sma(high_0-low_0, 20)-1
    ta_sma(high_0-low_0, 20)/ta_sma(high_0-low_0, 30)-1
    ta_sma(high_0-low_0, 30)/ta_sma(high_0-low_0, 60)-1
    ta_sma(high_0-low_0, 50)/ta_sma(high_0-low_0, 100)-1
    rank_avg_amount_5
    rank_avg_turn_5
    rank_volatility_5_0
    rank_swing_volatility_5_0
    rank_avg_mf_net_amount_5
    rank_beta_industry_5_0
    rank_return_5
    rank_return_2
    std(close_0,5)/std(close_0,20)-1
    std(close_0,10)/std(close_0,20)-1
    std(close_0,20)/std(close_0,30)-1
    std(close_0,30)/std(close_0,60)-1
    std(close_0,50)/std(close_0,100)-1
    mf_net_amount_1
    shift(mf_net_amount_s_0,3)
    shift(mf_net_amount_m_0,3)
    shift(mf_net_amount_l_0,3)"""
    )
    
    m15 = M.general_feature_extractor.v7(
        instruments=m1.data,
        features=m3.data,
        start_date='',
        end_date='',
        before_start_days=0
    )
    
    m16 = M.derived_feature_extractor.v3(
        input_data=m15.data,
        features=m3.data,
        date_col='date',
        instrument_col='instrument',
        drop_na=True,
        remove_extra_columns=True
    )
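
    # --- Illustration only (not part of the strategy graph) ----------------------
    # A hand-rolled sketch of one derived feature from m3, assuming `close` is a
    # per-instrument pandas Series sorted by date; m16 evaluates all such
    # expressions automatically from the base fields extracted by m15.
    def _sma_ratio_sketch(close):
        # ta_sma_5_0/ta_sma_20_0-1: 5-day SMA relative to the 20-day SMA
        return close.rolling(5).mean() / close.rolling(20).mean() - 1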
    
    m7 = M.join.v3(
        data1=m2.data,
        data2=m16.data,
        on='date,instrument',
        how='inner',
        sort=False
    )
    
    m13 = M.dropnan.v1(
        input_data=m7.data
    )
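
    # --- Illustration only (not part of the strategy graph) ----------------------
    # What m7 + m13 do conceptually on the training side, assuming `labels_df`
    # (from m2) and `features_df` (from m16) are plain DataFrames that both carry
    # date and instrument columns.
    def _join_sketch(labels_df, features_df):
        merged = labels_df.merge(features_df, on=['date', 'instrument'], how='inner')
        return merged.dropna()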
    
    m9 = M.instruments.v2(
        start_date=T.live_run_param('trading_date', '2016-01-01'),
        end_date=T.live_run_param('trading_date', '2017-01-01'),
        market='CN_STOCK_A',
        instrument_list='',
        max_count=0
    )
    
    m17 = M.general_feature_extractor.v7(
        instruments=m9.data,
        features=m3.data,
        start_date='',
        end_date='',
        before_start_days=0
    )
    
    m18 = M.derived_feature_extractor.v3(
        input_data=m17.data,
        features=m3.data,
        date_col='date',
        instrument_col='instrument',
        drop_na=False,
        remove_extra_columns=False
    )
    
    m14 = M.dropnan.v1(
        input_data=m18.data
    )
    
    m8 = M.advanced_auto_labeler.v2(
        instruments=m9.data,
        label_expr="""# #号开始的表示注释
    # 0. 每行一个,顺序执行,从第二个开始,可以使用label字段
    # 1. 可用数据字段见 https://bigquant.com/docs/data_history_data.html
    #   添加benchmark_前缀,可使用对应的benchmark数据
    # 2. 可用操作符和函数见 `表达式引擎 <https://bigquant.com/docs/big_expr.html>`_
    
    # 计算收益:5日收盘价(作为卖出价格)除以明日开盘价(作为买入价格)
    where(shift(close, -5) / shift(open, -1)>1,1,0)
    
    # 极值处理:用1%和99%分位的值做clip
    #clip(label, all_quantile(label, 0.01), all_quantile(label, 0.99))
    
    # 将分数映射到分类,这里使用20个分类
    #all_wbins(label, 20)
    
    # 过滤掉一字涨停的情况 (设置label为NaN,在后续处理和训练中会忽略NaN的label)
    where(shift(high, -1) == shift(low, -1), NaN, label)
    """,
        start_date='',
        end_date='',
        benchmark='000300.SHA',
        drop_na_label=True,
        cast_label_int=True
    )
    
    m10 = M.dropnan.v1(
        input_data=m8.data
    )
    
    m11 = M.instruments.v2(
        start_date='2016-01-01',
        end_date='2017-01-01',
        market='CN_STOCK_A',
        instrument_list='',
        max_count=0
    )
    
    m21 = M.cached.v3(
        run=m21_run_bigquant_run,
        post_run=m21_post_run_bigquant_run,
        input_ports='',
        params='{}',
        output_ports=''
    )
    
    m4 = M.cached.v3(
        input_1=m13.data,
        input_2=m21.data_1,
        run=m4_run_bigquant_run,
        post_run=m4_post_run_bigquant_run,
        input_ports='',
        params='{}',
        output_ports=''
    )
    
    m5 = M.cached.v3(
        input_1=m4.data_1,
        input_2=m14.data,
        input_3=m21.data_1,
        run=m5_run_bigquant_run,
        post_run=m5_post_run_bigquant_run,
        input_ports='',
        params='{}',
        output_ports=''
    )
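
    # --- Illustration only (not part of the strategy graph) ----------------------
    # A minimal sketch of what the GBDT+LR prediction step wired into m5 has to do,
    # assuming `model` is the dict saved by the training module (keys 'grd',
    # 'grd_enc', 'grd_lm') and `X_test` holds the same feature columns:
    #
    #   leaves = model['grd'].apply(X_test)[:, :, 0]              # leaf index per tree
    #   proba = model['grd_lm'].predict_proba(
    #       model['grd_enc'].transform(leaves))[:, 1]             # P(label == 1)
    #
    # The actual m5_run_bigquant_run implementation may differ in its details.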
    
    m6 = M.cached.v3(
        input_1=m5.data_1,
        input_2=m10.data,
        run=m6_run_bigquant_run,
        post_run=m6_post_run_bigquant_run,
        input_ports='',
        params='{}',
        output_ports='',
        m_cached=False
    )
    
    m19 = M.trade.v4(
        instruments=m11.data,
        options_data=m5.data_1,
        start_date='',
        end_date='',
        handle_data=m19_handle_data_bigquant_run,
        prepare=m19_prepare_bigquant_run,
        initialize=m19_initialize_bigquant_run,
        volume_limit=0.025,
        order_price_field_buy='open',
        order_price_field_sell='close',
        capital_base=1000000,
        auto_cancel_non_tradable_orders=True,
        data_frequency='daily',
        price_type='后复权',
        product_type='股票',
        plot_charts=True,
        backtest_only=False,
        benchmark='000300.SHA'
    )
    
    [2019-02-13 20:41:47.543928] INFO: bigquant: instruments.v2 开始运行..
    [2019-02-13 20:41:47.577504] INFO: bigquant: 命中缓存
    [2019-02-13 20:41:47.578650] INFO: bigquant: instruments.v2 运行完成[0.037267s].
    [2019-02-13 20:41:47.628413] INFO: bigquant: advanced_auto_labeler.v2 开始运行..
    [2019-02-13 20:41:47.635561] INFO: bigquant: 命中缓存
    [2019-02-13 20:41:47.636858] INFO: bigquant: advanced_auto_labeler.v2 运行完成[0.008471s].
    [2019-02-13 20:41:47.644969] INFO: bigquant: input_features.v1 开始运行..
    [2019-02-13 20:41:47.651169] INFO: bigquant: 命中缓存
    [2019-02-13 20:41:47.652076] INFO: bigquant: input_features.v1 运行完成[0.007085s].
    [2019-02-13 20:41:47.729900] INFO: bigquant: general_feature_extractor.v7 开始运行..
    [2019-02-13 20:41:47.739535] INFO: bigquant: 命中缓存
    [2019-02-13 20:41:47.740634] INFO: bigquant: general_feature_extractor.v7 运行完成[0.01077s].
    [2019-02-13 20:41:47.766689] INFO: bigquant: derived_feature_extractor.v3 开始运行..
    [2019-02-13 20:41:47.773053] INFO: bigquant: 命中缓存
    [2019-02-13 20:41:47.773947] INFO: bigquant: derived_feature_extractor.v3 运行完成[0.007289s].
    [2019-02-13 20:41:47.790962] INFO: bigquant: join.v3 开始运行..
    [2019-02-13 20:41:47.798270] INFO: bigquant: 命中缓存
    [2019-02-13 20:41:47.799239] INFO: bigquant: join.v3 运行完成[0.008285s].
    [2019-02-13 20:41:47.805784] INFO: bigquant: dropnan.v1 开始运行..
    [2019-02-13 20:41:47.811272] INFO: bigquant: 命中缓存
    [2019-02-13 20:41:47.812077] INFO: bigquant: dropnan.v1 运行完成[0.0063s].
    [2019-02-13 20:41:47.814385] INFO: bigquant: instruments.v2 开始运行..
    [2019-02-13 20:41:47.839293] INFO: bigquant: 命中缓存
    [2019-02-13 20:41:47.840330] INFO: bigquant: instruments.v2 运行完成[0.025929s].
    [2019-02-13 20:41:47.849597] INFO: bigquant: general_feature_extractor.v7 开始运行..
    [2019-02-13 20:41:47.853792] INFO: bigquant: 命中缓存
    [2019-02-13 20:41:47.854524] INFO: bigquant: general_feature_extractor.v7 运行完成[0.004943s].
    [2019-02-13 20:41:47.857230] INFO: bigquant: derived_feature_extractor.v3 开始运行..
    [2019-02-13 20:41:47.862620] INFO: bigquant: 命中缓存
    [2019-02-13 20:41:47.863430] INFO: bigquant: derived_feature_extractor.v3 运行完成[0.006192s].
    [2019-02-13 20:41:47.866313] INFO: bigquant: dropnan.v1 开始运行..
    [2019-02-13 20:41:47.871364] INFO: bigquant: 命中缓存
    [2019-02-13 20:41:47.872733] INFO: bigquant: dropnan.v1 运行完成[0.006412s].
    [2019-02-13 20:41:47.876678] INFO: bigquant: advanced_auto_labeler.v2 开始运行..
    [2019-02-13 20:41:47.882081] INFO: bigquant: 命中缓存
    [2019-02-13 20:41:47.883264] INFO: bigquant: advanced_auto_labeler.v2 运行完成[0.006583s].
    [2019-02-13 20:41:47.886686] INFO: bigquant: dropnan.v1 开始运行..
    [2019-02-13 20:41:47.890932] INFO: bigquant: 命中缓存
    [2019-02-13 20:41:47.891870] INFO: bigquant: dropnan.v1 运行完成[0.005199s].
    [2019-02-13 20:41:47.958704] INFO: bigquant: cached.v3 开始运行..
    [2019-02-13 20:41:47.964891] INFO: bigquant: 命中缓存
    [2019-02-13 20:41:47.965918] INFO: bigquant: cached.v3 运行完成[0.007245s].
    [2019-02-13 20:41:47.973313] INFO: bigquant: cached.v3 开始运行..
    [2019-02-13 20:41:47.979353] INFO: bigquant: 命中缓存
    [2019-02-13 20:41:47.980276] INFO: bigquant: cached.v3 运行完成[0.006976s].
    [2019-02-13 20:41:47.983717] INFO: bigquant: cached.v3 开始运行..
    [2019-02-13 20:41:47.988952] INFO: bigquant: 命中缓存
    [2019-02-13 20:41:47.989799] INFO: bigquant: cached.v3 运行完成[0.006084s].
    [2019-02-13 20:41:47.994179] INFO: bigquant: cached.v3 开始运行..
    [[0.27836177 0.72163823]
     [0.2823822  0.7176178 ]]
    Accuracy: 0.49305548309002306
    Predicted up, actually up: 67225
    Predicted down, actually down: 27275
    Predicted up, actually down: 70709
    Predicted down, actually up: 26453
    
    Precision
    Predicted up, precision: 0.48737077152841213
    Predicted down, precision: 0.5076496426444312
    
    Recall
    Among stocks that rose, fraction predicted up: 0.7176177971348663
    Among stocks that fell, fraction predicted down: 0.27836177335075113
    
    [2019-02-13 20:41:56.761774] INFO: bigquant: cached.v3 运行完成[8.76753s].
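
The metrics printed above can be reproduced directly from the four prediction counts; the 2x2 array at the top appears to be the confusion matrix normalized by true class (rows: actual down/up, columns: predicted down/up), so its diagonal holds the two recall figures. A minimal sketch in plain Python, using the counts from this run:

    # Recompute the printed evaluation metrics from the four counts
    tp = 67225  # predicted up, actually up
    tn = 27275  # predicted down, actually down
    fp = 70709  # predicted up, actually down
    fn = 26453  # predicted down, actually up
    
    accuracy = (tp + tn) / (tp + tn + fp + fn)      # ~0.4931
    precision_up = tp / (tp + fp)                   # ~0.4874
    precision_down = tn / (tn + fn)                 # ~0.5076
    recall_up = tp / (tp + fn)                      # ~0.7176
    recall_down = tn / (tn + fp)                    # ~0.2784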