
Q: There are so many algorithms on the platform — which one should I use to build a model so that the resulting strategy performs best?

A: There is no single best, only better. The answer depends on many factors, such as market conditions, the quality of the data, and the effectiveness of the feature engineering. Let's look at the strengths and weaknesses of these algorithms (a minimal comparison sketch follows the list below).

  1. Neural networks: suited to complex nonlinear problems; they can effectively capture the market's nonlinear characteristics and complex relationships.

  2. Decision trees: suited to smaller datasets with few feature dimensions; the model's decision process is easy to interpret.

  3. Random forests: suited to high-dimensional, complex datasets, with good robustness and accuracy.

  4. Support vector machines: suited to smaller datasets with high feature dimensionality; they handle both linearly separable and nonlinear problems effectively.
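
To make the comparison concrete, here is a minimal, self-contained scikit-learn sketch (my own illustration, not platform code): it fits a decision tree, a random forest, an SVM and a small neural network on the same synthetic regression task. The 13 synthetic features merely stand in for factor data.

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.svm import SVR
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import r2_score

    # Synthetic stand-in data: 2000 samples, 13 features (mirroring the 13 factors used later)
    X, y = make_regression(n_samples=2000, n_features=13, noise=10.0, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        'DecisionTree': DecisionTreeRegressor(max_depth=6, random_state=0),
        'RandomForest': RandomForestRegressor(n_estimators=100, random_state=0),
        'SVR': SVR(kernel='rbf', C=1.0),
        'NeuralNet': MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        print(name, 'test R^2:', round(r2_score(y_test, model.predict(X_test)), 4))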

Therefore, under normal circumstances, when working with a modest amount of stock price/volume data, the StockRanker ranking algorithm already performs very well, so when drafting an initial strategy it is worth starting with StockRanker.

That said, in general, deep learning algorithms may achieve better returns and results than traditional machine learning algorithms, for the following reasons:

  1. Deep learning algorithms fit nonlinear models better, and in stock-selection strategies a nonlinear model is closer to reality.

  2. Deep learning algorithms can handle more complex data structures. For stock data, they can better mine and process time series, natural language and image data, among other data types.

  3. Deep learning algorithms support end-to-end learning: they can learn directly from raw data without manual feature engineering, which helps uncover latent information in the data (see the sketch right after this list).
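
As an illustration of point 3, here is a toy Keras sketch (entirely synthetic data and a made-up target, my own assumption, not the strategy below): the network consumes a raw window of recent returns and learns its own representation rather than hand-engineered factors.

    import numpy as np
    from tensorflow.keras import layers, models

    window = 20  # raw input: the last 20 daily returns, no hand-made factors
    X = np.random.normal(0, 0.02, size=(5000, window)).astype('float32')
    # Toy target: a nonlinear function of the window plus noise (a real label would be future return)
    y = np.tanh(50 * X.mean(axis=1)) + np.random.normal(0, 0.05, size=5000)

    model = models.Sequential([
        layers.Input(shape=(window,)),
        layers.Dense(64, activation='relu'),
        layers.Dropout(0.1),
        layers.Dense(32, activation='relu'),
        layers.Dense(1, activation='linear'),
    ])
    model.compile(optimizer='adam', loss='mse')
    model.fit(X, y, batch_size=256, epochs=3, verbose=0)
    print('toy training MSE:', model.evaluate(X, y, verbose=0))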

P.S. In practice, however, for relatively simple stock-selection conditions or feature factors, a neural network model that has not gone through extensive fine-tuning may fall short of expectations (its backtest and even live performance can be worse than an ordinary machine learning model's, its contribution to returns is hard to interpret, and it is difficult to validate).

Let's look at the return performance of several mainstream machine learning algorithms trained on the same feature factors.

Training set: 2014-01-01 to 2018-01-14; test set: 2018-01-15 to 2019-01-10; daily rebalancing, rotating one stock per day at half position. The returns of the various algorithms are shown in the chart: [return chart]

In the StockRanker backtest chart below, the StockRanker algorithm's backtest net value reaches about 1.3, higher than the SVM algorithm's net value of about 1.2.

At the same time, we can also see that on the same factors and training data, a DNN model without deep tuning performs poorly: its backtest return is negative.

Judging from the comparison, deep learning in theory has a high ceiling but also a low floor. For stability and practicality, we more often choose an algorithm like StockRanker to build stock-selection strategies.

In summary, deep learning algorithms may achieve better returns and results. Note, however, that training them usually takes longer and requires more computing resources, so in practice you need to weigh complexity against feasibility and choose the algorithm that suits your situation.

    {"description":"实验创建于2017/8/26","graph":{"edges":[{"to_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-15:instruments","from_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-8:data"},{"to_node_id":"-215:instruments","from_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-8:data"},{"to_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-53:data1","from_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-15:data"},{"to_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-43:features","from_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-24:data"},{"to_node_id":"-215:features","from_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-24:data"},{"to_node_id":"-222:features","from_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-24:data"},{"to_node_id":"-231:features","from_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-24:data"},{"to_node_id":"-238:features","from_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-24:data"},{"to_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-60:model","from_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-43:model"},{"to_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-84:input_data","from_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-53:data"},{"to_node_id":"-4129:options_data","from_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-60:predictions"},{"to_node_id":"-231:instruments","from_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-62:data"},{"to_node_id":"-4129:instruments","from_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-62:data"},{"to_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-43:training_ds","from_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-84:data"},{"to_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-60:data","from_node_id":"-86:data"},{"to_node_id":"-222:input_data","from_node_id":"-215:data"},{"to_node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-53:data2","from_node_id":"-222:data"},{"to_node_id":"-238:input_data","from_node_id":"-231:data"},{"to_node_id":"-86:input_data","from_node_id":"-238:data"}],"nodes":[{"node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-8","module_id":"BigQuantSpace.instruments.instruments-v2","parameters":[{"name":"start_date","value":"2014-01-01","type":"Literal","bound_global_parameter":null},{"name":"end_date","value":"2018-01-14","type":"Literal","bound_global_parameter":null},{"name":"market","value":"CN_STOCK_A","type":"Literal","bound_global_parameter":null},{"name":"instrument_list","value":"","type":"Literal","bound_global_parameter":null},{"name":"max_count","value":"0","type":"Literal","bound_global_parameter":null}],"input_ports":[{"name":"rolling_conf","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-8"}],"output_ports":[{"name":"data","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-8"}],"cacheable":true,"seq_num":1,"comment":"","comment_collapsed":true},{"node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-15","module_id":"BigQuantSpace.advanced_auto_labeler.advanced_auto_labeler-v2","parameters":[{"name":"label_expr","value":"# #号开始的表示注释\n# 0. 每行一个,顺序执行,从第二个开始,可以使用label字段\n# 1. 可用数据字段见 https://bigquant.com/docs/develop/datasource/deprecated/history_data.html\n# 添加benchmark_前缀,可使用对应的benchmark数据\n# 2. 
可用操作符和函数见 `表达式引擎 <https://bigquant.com/docs/develop/bigexpr/usage.html>`_\n\n# 计算收益:5日收盘价(作为卖出价格)除以明日开盘价(作为买入价格)\nshift(close, -2) / shift(open, -1)\n\n# 极值处理:用1%和99%分位的值做clip\nclip(label, all_quantile(label, 0.01), all_quantile(label, 0.99))\n\n# 将分数映射到分类,这里使用20个分类\nall_wbins(label, 20)\n\n# 过滤掉一字涨停的情况 (设置label为NaN,在后续处理和训练中会忽略NaN的label)\nwhere(shift(high, -1) == shift(low, -1), NaN, label)\n","type":"Literal","bound_global_parameter":null},{"name":"start_date","value":"","type":"Literal","bound_global_parameter":null},{"name":"end_date","value":"","type":"Literal","bound_global_parameter":null},{"name":"benchmark","value":"000300.HIX","type":"Literal","bound_global_parameter":null},{"name":"drop_na_label","value":"True","type":"Literal","bound_global_parameter":null},{"name":"cast_label_int","value":"True","type":"Literal","bound_global_parameter":null},{"name":"user_functions","value":"","type":"Literal","bound_global_parameter":null}],"input_ports":[{"name":"instruments","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-15"}],"output_ports":[{"name":"data","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-15"}],"cacheable":true,"seq_num":2,"comment":"","comment_collapsed":true},{"node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-24","module_id":"BigQuantSpace.input_features.input_features-v1","parameters":[{"name":"features","value":"# #号开始的表示注释\n# 多个特征,每行一个,可以包含基础特征和衍生特征\nreturn_5\nreturn_10\nreturn_20\navg_amount_0/avg_amount_5\navg_amount_5/avg_amount_20\nrank_avg_amount_0/rank_avg_amount_5\nrank_avg_amount_5/rank_avg_amount_10\nrank_return_0\nrank_return_5\nrank_return_10\nrank_return_0/rank_return_5\nrank_return_5/rank_return_10\npe_ttm_0\n#主力净流入净额\n#mf_net_amount_main_0\n","type":"Literal","bound_global_parameter":null}],"input_ports":[{"name":"features_ds","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-24"}],"output_ports":[{"name":"data","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-24"}],"cacheable":true,"seq_num":3,"comment":"","comment_collapsed":true},{"node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-43","module_id":"BigQuantSpace.stock_ranker_train.stock_ranker_train-v6","parameters":[{"name":"learning_algorithm","value":"排序","type":"Literal","bound_global_parameter":null},{"name":"number_of_leaves","value":30,"type":"Literal","bound_global_parameter":null},{"name":"minimum_docs_per_leaf","value":1000,"type":"Literal","bound_global_parameter":null},{"name":"number_of_trees","value":20,"type":"Literal","bound_global_parameter":null},{"name":"learning_rate","value":0.1,"type":"Literal","bound_global_parameter":null},{"name":"max_bins","value":1023,"type":"Literal","bound_global_parameter":null},{"name":"feature_fraction","value":1,"type":"Literal","bound_global_parameter":null},{"name":"data_row_fraction","value":1,"type":"Literal","bound_global_parameter":null},{"name":"plot_charts","value":"True","type":"Literal","bound_global_parameter":null},{"name":"ndcg_discount_base","value":1,"type":"Literal","bound_global_parameter":null},{"name":"m_lazy_run","value":"False","type":"Literal","bound_global_parameter":null}],"input_ports":[{"name":"training_ds","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-43"},{"name":"features","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-43"},{"name":"test_ds","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-43"},{"name":"base_model","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-43"}],"output_ports":[{"name":"model","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-43"},{"name":"feature_gains","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-43"},{"na
me":"m_lazy_run","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-43"}],"cacheable":true,"seq_num":6,"comment":"","comment_collapsed":true},{"node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-53","module_id":"BigQuantSpace.join.join-v3","parameters":[{"name":"on","value":"date,instrument","type":"Literal","bound_global_parameter":null},{"name":"how","value":"inner","type":"Literal","bound_global_parameter":null},{"name":"sort","value":"False","type":"Literal","bound_global_parameter":null}],"input_ports":[{"name":"data1","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-53"},{"name":"data2","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-53"}],"output_ports":[{"name":"data","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-53"}],"cacheable":true,"seq_num":7,"comment":"","comment_collapsed":true},{"node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-60","module_id":"BigQuantSpace.stock_ranker_predict.stock_ranker_predict-v5","parameters":[{"name":"m_lazy_run","value":"False","type":"Literal","bound_global_parameter":null}],"input_ports":[{"name":"model","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-60"},{"name":"data","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-60"}],"output_ports":[{"name":"predictions","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-60"},{"name":"m_lazy_run","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-60"}],"cacheable":true,"seq_num":8,"comment":"","comment_collapsed":true},{"node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-62","module_id":"BigQuantSpace.instruments.instruments-v2","parameters":[{"name":"start_date","value":"2018-01-15","type":"Literal","bound_global_parameter":"交易日期"},{"name":"end_date","value":"2019-01-10","type":"Literal","bound_global_parameter":"交易日期"},{"name":"market","value":"CN_STOCK_A","type":"Literal","bound_global_parameter":null},{"name":"instrument_list","value":"","type":"Literal","bound_global_parameter":null},{"name":"max_count","value":"0","type":"Literal","bound_global_parameter":null}],"input_ports":[{"name":"rolling_conf","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-62"}],"output_ports":[{"name":"data","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-62"}],"cacheable":true,"seq_num":9,"comment":"预测数据,用于回测和模拟","comment_collapsed":false},{"node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-84","module_id":"BigQuantSpace.dropnan.dropnan-v1","parameters":[],"input_ports":[{"name":"input_data","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-84"}],"output_ports":[{"name":"data","node_id":"287d2cb0-f53c-4101-bdf8-104b137c8601-84"}],"cacheable":true,"seq_num":13,"comment":"","comment_collapsed":true},{"node_id":"-86","module_id":"BigQuantSpace.dropnan.dropnan-v1","parameters":[],"input_ports":[{"name":"input_data","node_id":"-86"}],"output_ports":[{"name":"data","node_id":"-86"}],"cacheable":true,"seq_num":14,"comment":"","comment_collapsed":true},{"node_id":"-215","module_id":"BigQuantSpace.general_feature_extractor.general_feature_extractor-v7","parameters":[{"name":"start_date","value":"","type":"Literal","bound_global_parameter":null},{"name":"end_date","value":"","type":"Literal","bound_global_parameter":null},{"name":"before_start_days","value":90,"type":"Literal","bound_global_parameter":null}],"input_ports":[{"name":"instruments","node_id":"-215"},{"name":"features","node_id":"-215"}],"output_ports":[{"name":"data","node_id":"-215"}],"cacheable":true,"seq_num":15,"comment":"","comment_collapsed":true},{"node_id":"-222","module_id":"BigQuantSpace.derived_feature_extractor.derived_feature_extractor-v3","parameters":[{"name":"date_col","value":"d
ate","type":"Literal","bound_global_parameter":null},{"name":"instrument_col","value":"instrument","type":"Literal","bound_global_parameter":null},{"name":"drop_na","value":"False","type":"Literal","bound_global_parameter":null},{"name":"remove_extra_columns","value":"False","type":"Literal","bound_global_parameter":null},{"name":"user_functions","value":"","type":"Literal","bound_global_parameter":null}],"input_ports":[{"name":"input_data","node_id":"-222"},{"name":"features","node_id":"-222"}],"output_ports":[{"name":"data","node_id":"-222"}],"cacheable":true,"seq_num":16,"comment":"","comment_collapsed":true},{"node_id":"-231","module_id":"BigQuantSpace.general_feature_extractor.general_feature_extractor-v7","parameters":[{"name":"start_date","value":"","type":"Literal","bound_global_parameter":null},{"name":"end_date","value":"","type":"Literal","bound_global_parameter":null},{"name":"before_start_days","value":90,"type":"Literal","bound_global_parameter":null}],"input_ports":[{"name":"instruments","node_id":"-231"},{"name":"features","node_id":"-231"}],"output_ports":[{"name":"data","node_id":"-231"}],"cacheable":true,"seq_num":17,"comment":"","comment_collapsed":true},{"node_id":"-238","module_id":"BigQuantSpace.derived_feature_extractor.derived_feature_extractor-v3","parameters":[{"name":"date_col","value":"date","type":"Literal","bound_global_parameter":null},{"name":"instrument_col","value":"instrument","type":"Literal","bound_global_parameter":null},{"name":"drop_na","value":"False","type":"Literal","bound_global_parameter":null},{"name":"remove_extra_columns","value":"False","type":"Literal","bound_global_parameter":null},{"name":"user_functions","value":"","type":"Literal","bound_global_parameter":null}],"input_ports":[{"name":"input_data","node_id":"-238"},{"name":"features","node_id":"-238"}],"output_ports":[{"name":"data","node_id":"-238"}],"cacheable":true,"seq_num":18,"comment":"","comment_collapsed":true},{"node_id":"-827","module_id":"BigQuantSpace.random_forest_regressor.random_forest_regressor-v1","parameters":[{"name":"iterations","value":10,"type":"Literal","bound_global_parameter":null},{"name":"feature_fraction","value":1,"type":"Literal","bound_global_parameter":null},{"name":"max_depth","value":30,"type":"Literal","bound_global_parameter":null},{"name":"min_samples_per_leaf","value":200,"type":"Literal","bound_global_parameter":null},{"name":"key_cols","value":"date,instrument","type":"Literal","bound_global_parameter":null},{"name":"workers","value":1,"type":"Literal","bound_global_parameter":null},{"name":"random_state","value":0,"type":"Literal","bound_global_parameter":null},{"name":"other_train_parameters","value":"{}","type":"Literal","bound_global_parameter":null}],"input_ports":[{"name":"training_ds","node_id":"-827"},{"name":"features","node_id":"-827"},{"name":"model","node_id":"-827"},{"name":"predict_ds","node_id":"-827"}],"output_ports":[{"name":"output_model","node_id":"-827"},{"name":"predictions","node_id":"-827"}],"cacheable":true,"seq_num":4,"comment":"","comment_collapsed":true},{"node_id":"-4129","module_id":"BigQuantSpace.trade.trade-v4","parameters":[{"name":"start_date","value":"","type":"Literal","bound_global_parameter":null},{"name":"end_date","value":"","type":"Literal","bound_global_parameter":null},{"name":"initialize","value":"# 回测引擎:初始化函数,只执行一次\ndef bigquant_run(context):\n # 加载预测数据\n context.ranker_prediction = context.options['data'].read_df()\n\n # 系统已经设置了默认的交易手续费和滑点,要修改手续费可使用如下函数\n 
context.set_commission(PerOrder(buy_cost=0.0003, sell_cost=0.0013, min_cost=5))\n # 预测数据,通过options传入进来,使用 read_df 函数,加载到内存 (DataFrame)\n # 设置买入的股票数量,这里买入预测股票列表排名靠前的5只\n stock_count = 1\n # 每只的股票的权重,如下的权重分配会使得靠前的股票分配多一点的资金,[0.339160, 0.213986, 0.169580, ..]\n context.stock_weights = [1]\n # 设置每只股票占用的最大资金比例\n context.max_cash_per_instrument = 1\n context.options['hold_days'] = 1\n","type":"Literal","bound_global_parameter":null},{"name":"handle_data","value":"# 回测引擎:每日数据处理函数,每天执行一次\ndef bigquant_run(context, data):\n # 按日期过滤得到今日的预测数据\n ranker_prediction = context.ranker_prediction[\n context.ranker_prediction.date == data.current_dt.strftime('%Y-%m-%d')]\n cash_for_buy = min(context.portfolio.portfolio_value/1,context.portfolio.cash)\n #cash_for_buy = context.portfolio.portfolio_value\n #print(ranker_prediction)\n #cash_for_buy = context.portfolio.portfolio_value\n #cash_for_buy = context.portfolio.cash\n buy_instruments = list(ranker_prediction.instrument)\n sell_instruments = [instrument.symbol for instrument in context.portfolio.positions.keys()]\n to_buy = set(buy_instruments[:1]) - set(sell_instruments) \n to_sell = set(sell_instruments) - set(buy_instruments[:1])\n \n \n for instrument in to_sell:\n context.order_target(context.symbol(instrument), 0)\n for instrument in to_buy:\n context.order_value(context.symbol(instrument), cash_for_buy)\n","type":"Literal","bound_global_parameter":null},{"name":"prepare","value":"def bigquant_run(context):\n\n\n # 获取st状态和涨跌停状态\n \n context.status_df = D.features(instruments =context.instruments,start_date = context.start_date, end_date = context.end_date, \n fields=['st_status_0','price_limit_status_0','price_limit_status_1'])","type":"Literal","bound_global_parameter":null},{"name":"before_trading_start","value":"def bigquant_run(context, data):\n pass \n # 获取涨跌停状态数据\n# df_price_limit_status=context.status_df.set_index('date')\n# today=data.current_dt.strftime('%Y-%m-%d')\n# # 得到当前未完成订单\n# for orders in get_open_orders().values():\n# # 循环,撤销订单\n# for _order in orders:\n# ins=str(_order.sid.symbol)\n# try:\n# #判断一下如果当日涨停,则取消卖单\n# if df_price_limit_status[df_price_limit_status.instrument==ins].price_limit_status_0.loc[today]>2 and _order.amount<0:\n# cancel_order(_order)\n# print(today,'尾盘涨停取消卖单',ins) \n# except:\n# continue\n 
","type":"Literal","bound_global_parameter":null},{"name":"volume_limit","value":"0","type":"Literal","bound_global_parameter":null},{"name":"order_price_field_buy","value":"open","type":"Literal","bound_global_parameter":null},{"name":"order_price_field_sell","value":"close","type":"Literal","bound_global_parameter":null},{"name":"capital_base","value":"100000","type":"Literal","bound_global_parameter":null},{"name":"auto_cancel_non_tradable_orders","value":"True","type":"Literal","bound_global_parameter":null},{"name":"data_frequency","value":"daily","type":"Literal","bound_global_parameter":null},{"name":"price_type","value":"真实价格","type":"Literal","bound_global_parameter":null},{"name":"product_type","value":"股票","type":"Literal","bound_global_parameter":null},{"name":"plot_charts","value":"True","type":"Literal","bound_global_parameter":null},{"name":"backtest_only","value":"False","type":"Literal","bound_global_parameter":null},{"name":"benchmark","value":"000300.SHA","type":"Literal","bound_global_parameter":null}],"input_ports":[{"name":"instruments","node_id":"-4129"},{"name":"options_data","node_id":"-4129"},{"name":"history_ds","node_id":"-4129"},{"name":"benchmark_ds","node_id":"-4129"},{"name":"trading_calendar","node_id":"-4129"}],"output_ports":[{"name":"raw_perf","node_id":"-4129"}],"cacheable":false,"seq_num":5,"comment":"降序","comment_collapsed":true}],"node_layout":"<node_postions><node_position Node='287d2cb0-f53c-4101-bdf8-104b137c8601-8' Position='211,64,200,200'/><node_position Node='287d2cb0-f53c-4101-bdf8-104b137c8601-15' Position='2,199,200,200'/><node_position Node='287d2cb0-f53c-4101-bdf8-104b137c8601-24' Position='765,21,200,200'/><node_position Node='287d2cb0-f53c-4101-bdf8-104b137c8601-43' Position='485,615,200,200'/><node_position Node='287d2cb0-f53c-4101-bdf8-104b137c8601-53' Position='249,375,200,200'/><node_position Node='287d2cb0-f53c-4101-bdf8-104b137c8601-60' Position='1033,641,200,200'/><node_position Node='287d2cb0-f53c-4101-bdf8-104b137c8601-62' Position='1066,83,200,200'/><node_position Node='287d2cb0-f53c-4101-bdf8-104b137c8601-84' Position='376,467,200,200'/><node_position Node='-86' Position='1078,418,200,200'/><node_position Node='-215' Position='381,188,200,200'/><node_position Node='-222' Position='385,280,200,200'/><node_position Node='-231' Position='1078,236,200,200'/><node_position Node='-238' Position='1081,327,200,200'/><node_position Node='-827' Position='484,680,200,200'/><node_position Node='-4129' Position='933,869,200,200'/></node_postions>"},"nodes_readonly":false,"studio_version":"v2"}
    In [2]:
    # 本代码由可视化策略环境自动生成 2023年4月7日 13:29
    # 本代码单元只能在可视化模式下编辑。您也可以拷贝代码,粘贴到新建的代码单元或者策略,然后修改。
    
    
    # 回测引擎:初始化函数,只执行一次
    def m5_initialize_bigquant_run(context):
        # 加载预测数据
        context.ranker_prediction = context.options['data'].read_df()
    
        # 系统已经设置了默认的交易手续费和滑点,要修改手续费可使用如下函数
        context.set_commission(PerOrder(buy_cost=0.0003, sell_cost=0.0013, min_cost=5))
        # 预测数据,通过options传入进来,使用 read_df 函数,加载到内存 (DataFrame)
        # 设置买入的股票数量,这里买入预测股票列表排名靠前的5只
        stock_count = 1
        # 每只的股票的权重,如下的权重分配会使得靠前的股票分配多一点的资金,[0.339160, 0.213986, 0.169580, ..]
        context.stock_weights = [1]
        # 设置每只股票占用的最大资金比例
        context.max_cash_per_instrument = 1
        context.options['hold_days'] = 1
    
    # 回测引擎:每日数据处理函数,每天执行一次
    def m5_handle_data_bigquant_run(context, data):
        # 按日期过滤得到今日的预测数据
        ranker_prediction = context.ranker_prediction[
            context.ranker_prediction.date == data.current_dt.strftime('%Y-%m-%d')]
        cash_for_buy = min(context.portfolio.portfolio_value/1,context.portfolio.cash)
        #cash_for_buy = context.portfolio.portfolio_value
        #print(ranker_prediction)
        #cash_for_buy = context.portfolio.portfolio_value
        #cash_for_buy = context.portfolio.cash
        buy_instruments = list(ranker_prediction.instrument)
        sell_instruments = [instrument.symbol for instrument in context.portfolio.positions.keys()]
        to_buy = set(buy_instruments[:1]) - set(sell_instruments) 
        to_sell = set(sell_instruments) -  set(buy_instruments[:1])
       
        
        for instrument in to_sell:
            context.order_target(context.symbol(instrument), 0)
        for instrument in to_buy:
            context.order_value(context.symbol(instrument), cash_for_buy)
    
    def m5_prepare_bigquant_run(context):
    
    
         # 获取st状态和涨跌停状态
        
        context.status_df = D.features(instruments =context.instruments,start_date = context.start_date, end_date = context.end_date, 
                               fields=['st_status_0','price_limit_status_0','price_limit_status_1'])
    def m5_before_trading_start_bigquant_run(context, data):
        pass     
        # 获取涨跌停状态数据
    #     df_price_limit_status=context.status_df.set_index('date')
    #     today=data.current_dt.strftime('%Y-%m-%d')
    #     # 得到当前未完成订单
    #     for orders in get_open_orders().values():
    #         # 循环,撤销订单
    #         for _order in orders:
    #             ins=str(_order.sid.symbol)
    #             try:
    #                 #判断一下如果当日涨停,则取消卖单
    #                 if  df_price_limit_status[df_price_limit_status.instrument==ins].price_limit_status_0.loc[today]>2 and _order.amount<0:
    #                     cancel_order(_order)
    #                     print(today,'尾盘涨停取消卖单',ins) 
    #             except:
    #                 continue
      
    
    m1 = M.instruments.v2(
        start_date='2014-01-01',
        end_date='2018-01-14',
        market='CN_STOCK_A',
        instrument_list='',
        max_count=0
    )
    
    m2 = M.advanced_auto_labeler.v2(
        instruments=m1.data,
        label_expr="""# #号开始的表示注释
    # 0. 每行一个,顺序执行,从第二个开始,可以使用label字段
    # 1. 可用数据字段见 https://bigquant.com/docs/develop/datasource/deprecated/history_data.html
    #   添加benchmark_前缀,可使用对应的benchmark数据
    # 2. 可用操作符和函数见 `表达式引擎 <https://bigquant.com/docs/develop/bigexpr/usage.html>`_
    
    # 计算收益:5日收盘价(作为卖出价格)除以明日开盘价(作为买入价格)
    shift(close, -2) / shift(open, -1)
    
    # 极值处理:用1%和99%分位的值做clip
    clip(label, all_quantile(label, 0.01), all_quantile(label, 0.99))
    
    # 将分数映射到分类,这里使用20个分类
    all_wbins(label, 20)
    
    # 过滤掉一字涨停的情况 (设置label为NaN,在后续处理和训练中会忽略NaN的label)
    where(shift(high, -1) == shift(low, -1), NaN, label)
    """,
        start_date='',
        end_date='',
        benchmark='000300.HIX',
        drop_na_label=True,
        cast_label_int=True
    )
    
    m3 = M.input_features.v1(
        features="""# #号开始的表示注释
    # 多个特征,每行一个,可以包含基础特征和衍生特征
    return_5
    return_10
    return_20
    avg_amount_0/avg_amount_5
    avg_amount_5/avg_amount_20
    rank_avg_amount_0/rank_avg_amount_5
    rank_avg_amount_5/rank_avg_amount_10
    rank_return_0
    rank_return_5
    rank_return_10
    rank_return_0/rank_return_5
    rank_return_5/rank_return_10
    pe_ttm_0
    #主力净流入净额
    #mf_net_amount_main_0
    """
    )
    
    m15 = M.general_feature_extractor.v7(
        instruments=m1.data,
        features=m3.data,
        start_date='',
        end_date='',
        before_start_days=90
    )
    
    m16 = M.derived_feature_extractor.v3(
        input_data=m15.data,
        features=m3.data,
        date_col='date',
        instrument_col='instrument',
        drop_na=False,
        remove_extra_columns=False
    )
    
    m7 = M.join.v3(
        data1=m2.data,
        data2=m16.data,
        on='date,instrument',
        how='inner',
        sort=False
    )
    
    m13 = M.dropnan.v1(
        input_data=m7.data
    )
    
    m6 = M.stock_ranker_train.v6(
        training_ds=m13.data,
        features=m3.data,
        learning_algorithm='排序',
        number_of_leaves=30,
        minimum_docs_per_leaf=1000,
        number_of_trees=20,
        learning_rate=0.1,
        max_bins=1023,
        feature_fraction=1,
        data_row_fraction=1,
        plot_charts=True,
        ndcg_discount_base=1,
        m_lazy_run=False
    )
    
    m9 = M.instruments.v2(
        start_date=T.live_run_param('trading_date', '2018-01-15'),
        end_date=T.live_run_param('trading_date', '2019-01-10'),
        market='CN_STOCK_A',
        instrument_list='',
        max_count=0
    )
    
    m17 = M.general_feature_extractor.v7(
        instruments=m9.data,
        features=m3.data,
        start_date='',
        end_date='',
        before_start_days=90
    )
    
    m18 = M.derived_feature_extractor.v3(
        input_data=m17.data,
        features=m3.data,
        date_col='date',
        instrument_col='instrument',
        drop_na=False,
        remove_extra_columns=False
    )
    
    m14 = M.dropnan.v1(
        input_data=m18.data
    )
    
    m8 = M.stock_ranker_predict.v5(
        model=m6.model,
        data=m14.data,
        m_lazy_run=False
    )
    
    m5 = M.trade.v4(
        instruments=m9.data,
        options_data=m8.predictions,
        start_date='',
        end_date='',
        initialize=m5_initialize_bigquant_run,
        handle_data=m5_handle_data_bigquant_run,
        prepare=m5_prepare_bigquant_run,
        before_trading_start=m5_before_trading_start_bigquant_run,
        volume_limit=0,
        order_price_field_buy='open',
        order_price_field_sell='close',
        capital_base=100000,
        auto_cancel_non_tradable_orders=True,
        data_frequency='daily',
        price_type='真实价格',
        product_type='股票',
        plot_charts=True,
        backtest_only=False,
        benchmark='000300.SHA'
    )
    
    m4 = M.random_forest_regressor.v1(
        iterations=10,
        feature_fraction=1,
        max_depth=30,
        min_samples_per_leaf=200,
        key_cols='date,instrument',
        workers=1,
        random_state=0,
        other_train_parameters={}
    )
    
    Set an evaluation (test) dataset to view the training curve
    [Video tutorial] StockRanker training curve
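    A hypothetical sketch of how that might be wired up (my own addition, not from the original notebook): the stock_ranker_train module's input ports include test_ds, so a held-out, labeled dataset built with the same labeling/feature/join/dropnan pipeline as m13 but over the 2018-01-15 to 2019-01-10 period (named eval_ds here, hypothetically) could be passed in; the exact keyword should be verified against the module documentation.

    # Hypothetical sketch: pass a held-out *labeled* dataset into StockRanker training
    # so the module can plot an evaluation curve alongside the training curve.
    # eval_ds is assumed to be built like m13 (label + features, joined, NaNs dropped),
    # but over the 2018-01-15 ~ 2019-01-10 period.
    m6 = M.stock_ranker_train.v6(
        training_ds=m13.data,   # training data, 2014-01-01 ~ 2018-01-14
        test_ds=eval_ds.data,   # hypothetical held-out evaluation data
        features=m3.data,
        learning_algorithm='排序',
        number_of_leaves=30,
        minimum_docs_per_leaf=1000,
        number_of_trees=20,
        learning_rate=0.1,
        max_bins=1023,
        plot_charts=True
    )
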
    • Return: 31.26%
    • Annualized return: 32.9%
    • Benchmark return: -27.27%
    • Alpha: 0.95
    • Beta: 0.79
    • Sharpe ratio: 0.74
    • Win rate: 0.52
    • Profit/loss ratio: 1.14
    • Return volatility: 53.67%
    • Information ratio: 0.09
    • Max drawdown: 30.33%

    From the backtest above we can see that the StockRanker algorithm is quite powerful: it fully captures the stocks' alpha, and its return ranks first among most of the machine learning algorithms.

    Next let's see how the DNN performs. (To reduce noise, I also backtested a multi-stock version at the same time, to check whether the predict scores produced by the DNN can actually differentiate stock returns; a minimal scoring check is sketched below.)
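
    One simple way to quantify that discriminating power (my own sketch, not part of the original strategy) is the daily rank IC: the Spearman correlation, computed per trading day, between the DNN's pred_label score and the realized label. Column names follow the m24 output and the labeled dataset above; the usage lines are assumptions and should be adapted to the actual DataFrames.

    import pandas as pd

    def daily_rank_ic(pred_df: pd.DataFrame, label_df: pd.DataFrame) -> pd.Series:
        """pred_df: columns [date, instrument, pred_label]; label_df: columns [date, instrument, label]."""
        merged = pred_df.merge(label_df, on=['date', 'instrument'], how='inner')
        # Spearman correlation between predicted score and realized label, per trading day
        return merged.groupby('date').apply(
            lambda g: g['pred_label'].corr(g['label'], method='spearman'))

    # Assumed usage (adapt to the real data sources):
    # ic = daily_rank_ic(m24.data_1.read_df(), m2.data.read_df()[['date', 'instrument', 'label']])
    # print('mean IC:', ic.mean(), 'IC > 0 ratio:', (ic > 0).mean())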

    In [5]:
    # Python 代码入口函数,input_1/2/3 对应三个输入端,data_1/2/3 对应三个输出端
    def m10_run_bigquant_run(input_1, input_2, input_3):
        # 示例代码如下。在这里编写您的代码
        from sklearn.model_selection import train_test_split
        data = input_1.read()
        x_train, x_val, y_train, y_val = train_test_split(data["x"], data['y'], random_state=5)
        data_1 = DataSource.write_pickle({'x': x_train, 'y': y_train})
        data_2 = DataSource.write_pickle({'x': x_val, 'y': y_val})
        return Outputs(data_1=data_1, data_2=data_2, data_3=None)
    
    # 后处理函数,可选。输入是主函数的输出,可以在这里对数据做处理,或者返回更友好的outputs数据格式。此函数输出不会被缓存。
    def m10_post_run_bigquant_run(outputs):
        return outputs
    
    from tensorflow.keras.callbacks import EarlyStopping
    m5_earlystop_bigquant_run=EarlyStopping(monitor='val_mse', min_delta=0.0001, patience=5)
    # 用户的自定义层需要写到字典中,比如
    # {
    #   "MyLayer": MyLayer
    # }
    m5_custom_objects_bigquant_run = {
     
    }
    
    # Python 代码入口函数,input_1/2/3 对应三个输入端,data_1/2/3 对应三个输出端
    def m24_run_bigquant_run(input_1, input_2, input_3):
        # 示例代码如下。在这里编写您的代码
        pred_label = input_1.read_pickle()
        df = input_2.read_df()
        df = pd.DataFrame({'pred_label':pred_label[:,0], 'instrument':df.instrument, 'date':df.date})
        df.sort_values(['date','pred_label'],inplace=True, ascending=[True,False])
        return Outputs(data_1=DataSource.write_df(df), data_2=None, data_3=None)
    
    # 后处理函数,可选。输入是主函数的输出,可以在这里对数据做处理,或者返回更友好的outputs数据格式。此函数输出不会被缓存。
    def m24_post_run_bigquant_run(outputs):
        return outputs
    
    # 回测引擎:初始化函数,只执行一次
    def m19_initialize_bigquant_run(context):
        # 加载预测数据
        context.ranker_prediction = context.options['data'].read_df()
    
        # 系统已经设置了默认的交易手续费和滑点,要修改手续费可使用如下函数
        context.set_commission(PerOrder(buy_cost=0.0003, sell_cost=0.0013, min_cost=5))
        # 预测数据,通过options传入进来,使用 read_df 函数,加载到内存 (DataFrame)
        # 设置买入的股票数量,这里买入预测股票列表排名靠前的5只
        stock_count = 20
        # 每只的股票的权重,如下的权重分配会使得靠前的股票分配多一点的资金,[0.339160, 0.213986, 0.169580, ..]
        context.stock_weights = T.norm([1 / math.log(i + 2) for i in range(0, stock_count)])
        # 设置每只股票占用的最大资金比例
        context.max_cash_per_instrument = 0.2
        context.options['hold_days'] = 5
    
    # 回测引擎:每日数据处理函数,每天执行一次
    def m19_handle_data_bigquant_run(context, data):
        # 按日期过滤得到今日的预测数据
        ranker_prediction = context.ranker_prediction[
            context.ranker_prediction.date == data.current_dt.strftime('%Y-%m-%d')]
    
        # 1. 资金分配
        # 平均持仓时间是hold_days,每日都将买入股票,每日预期使用 1/hold_days 的资金
        # 实际操作中,会存在一定的买入误差,所以在前hold_days天,等量使用资金;之后,尽量使用剩余资金(这里设置最多用等量的1.5倍)
        is_staging = context.trading_day_index < context.options['hold_days'] # 是否在建仓期间(前 hold_days 天)
        cash_avg = context.portfolio.portfolio_value / context.options['hold_days']
        cash_for_buy = min(context.portfolio.cash, (1 if is_staging else 1.5) * cash_avg)
        cash_for_sell = cash_avg - (context.portfolio.cash - cash_for_buy)
        positions = {e.symbol: p.amount * p.last_sale_price
                     for e, p in context.perf_tracker.position_tracker.positions.items()}
    
        # 2. 生成卖出订单:hold_days天之后才开始卖出;对持仓的股票,按机器学习算法预测的排序末位淘汰
        if not is_staging and cash_for_sell > 0:
            equities = {e.symbol: e for e, p in context.perf_tracker.position_tracker.positions.items()}
            instruments = list(reversed(list(ranker_prediction.instrument[ranker_prediction.instrument.apply(
                    lambda x: x in equities and not context.has_unfinished_sell_order(equities[x]))])))
            # print('rank order for sell %s' % instruments)
            for instrument in instruments:
                context.order_target(context.symbol(instrument), 0)
                cash_for_sell -= positions[instrument]
                if cash_for_sell <= 0:
                    break
    
        # 3. 生成买入订单:按机器学习算法预测的排序,买入前面的stock_count只股票
        buy_cash_weights = context.stock_weights
        buy_instruments = list(ranker_prediction.instrument[:len(buy_cash_weights)])
        max_cash_per_instrument = context.portfolio.portfolio_value * context.max_cash_per_instrument
        for i, instrument in enumerate(buy_instruments):
            cash = cash_for_buy * buy_cash_weights[i]
            if cash > max_cash_per_instrument - positions.get(instrument, 0):
                # 确保股票持仓量不会超过每次股票最大的占用资金量
                cash = max_cash_per_instrument - positions.get(instrument, 0)
            if cash > 0:
                context.order_value(context.symbol(instrument), cash)
    
    # 回测引擎:准备数据,只执行一次
    def m19_prepare_bigquant_run(context):
        pass
    
    # 回测引擎:初始化函数,只执行一次
    def m12_initialize_bigquant_run(context):
        # 加载预测数据
        context.ranker_prediction = context.options['data'].read_df()
    
        # 系统已经设置了默认的交易手续费和滑点,要修改手续费可使用如下函数
        context.set_commission(PerOrder(buy_cost=0.0003, sell_cost=0.0013, min_cost=5))
        # 预测数据,通过options传入进来,使用 read_df 函数,加载到内存 (DataFrame)
        # 设置买入的股票数量,这里买入预测股票列表排名靠前的5只
        stock_count = 1
        # 每只的股票的权重,如下的权重分配会使得靠前的股票分配多一点的资金,[0.339160, 0.213986, 0.169580, ..]
        context.stock_weights = [1]
        # 设置每只股票占用的最大资金比例
        context.max_cash_per_instrument = 1
        context.options['hold_days'] = 1
    
    # 回测引擎:每日数据处理函数,每天执行一次
    def m12_handle_data_bigquant_run(context, data):
        # 按日期过滤得到今日的预测数据
        ranker_prediction = context.ranker_prediction[
            context.ranker_prediction.date == data.current_dt.strftime('%Y-%m-%d')]
        cash_for_buy = min(context.portfolio.portfolio_value/1,context.portfolio.cash)
        #cash_for_buy = context.portfolio.portfolio_value
        #print(ranker_prediction)
        #cash_for_buy = context.portfolio.portfolio_value
        #cash_for_buy = context.portfolio.cash
        buy_instruments = list(ranker_prediction.instrument)
        sell_instruments = [instrument.symbol for instrument in context.portfolio.positions.keys()]
        to_buy = set(buy_instruments[:1]) - set(sell_instruments) 
        to_sell = set(sell_instruments) -  set(buy_instruments[:1])
       
        
        for instrument in to_sell:
            context.order_target(context.symbol(instrument), 0)
        for instrument in to_buy:
            context.order_value(context.symbol(instrument), cash_for_buy)
    
    def m12_prepare_bigquant_run(context):
    
    
         # 获取st状态和涨跌停状态
        
        context.status_df = D.features(instruments =context.instruments,start_date = context.start_date, end_date = context.end_date, 
                               fields=['st_status_0','price_limit_status_0','price_limit_status_1'])
    def m12_before_trading_start_bigquant_run(context, data):
        pass     
        # 获取涨跌停状态数据
    #     df_price_limit_status=context.status_df.set_index('date')
    #     today=data.current_dt.strftime('%Y-%m-%d')
    #     # 得到当前未完成订单
    #     for orders in get_open_orders().values():
    #         # 循环,撤销订单
    #         for _order in orders:
    #             ins=str(_order.sid.symbol)
    #             try:
    #                 #判断一下如果当日涨停,则取消卖单
    #                 if  df_price_limit_status[df_price_limit_status.instrument==ins].price_limit_status_0.loc[today]>2 and _order.amount<0:
    #                     cancel_order(_order)
    #                     print(today,'尾盘涨停取消卖单',ins) 
    #             except:
    #                 continue
      
    
    m1 = M.instruments.v2(
        start_date='2014-01-01',
        end_date='2018-01-14',
        market='CN_STOCK_A',
        instrument_list='',
        max_count=0
    )
    
    m2 = M.advanced_auto_labeler.v2(
        instruments=m1.data,
        label_expr="""# #号开始的表示注释
    # 0. 每行一个,顺序执行,从第二个开始,可以使用label字段
    # 1. 可用数据字段见 https://bigquant.com/docs/data_history_data.html
    #   添加benchmark_前缀,可使用对应的benchmark数据
    # 2. 可用操作符和函数见 `表达式引擎 <https://bigquant.com/docs/big_expr.html>`_
    
    # 计算收益:5日收盘价(作为卖出价格)除以明日开盘价(作为买入价格)
    shift(close, -2) / shift(open, -1)-1
    
    # 极值处理:用1%和99%分位的值做clip
    clip(label, all_quantile(label, 0.01), all_quantile(label, 0.99))
    
    # 过滤掉一字涨停的情况 (设置label为NaN,在后续处理和训练中会忽略NaN的label)
    where(shift(high, -1) == shift(low, -1), NaN, label)
    """,
        start_date='',
        end_date='',
        benchmark='000300.SHA',
        drop_na_label=True,
        cast_label_int=False
    )
    
    m29 = M.standardlize.v8(
        input_1=m2.data,
        columns_input='label'
    )
    
    m3 = M.input_features.v1(
        features="""return_5
    return_10
    return_20
    avg_amount_0/avg_amount_5
    avg_amount_5/avg_amount_20
    rank_avg_amount_0/rank_avg_amount_5
    rank_avg_amount_5/rank_avg_amount_10
    rank_return_0
    rank_return_5
    rank_return_10
    rank_return_0/rank_return_5
    rank_return_5/rank_return_10
    pe_ttm_0"""
    )
    
    m15 = M.general_feature_extractor.v7(
        instruments=m1.data,
        features=m3.data,
        start_date='',
        end_date='',
        before_start_days=0
    )
    
    m16 = M.derived_feature_extractor.v3(
        input_data=m15.data,
        features=m3.data,
        date_col='date',
        instrument_col='instrument',
        drop_na=True,
        remove_extra_columns=False
    )
    
    m28 = M.standardlize.v8(
        input_1=m16.data,
        input_2=m3.data,
        columns_input='[]'
    )
    
    m13 = M.fillnan.v1(
        input_data=m28.data,
        features=m3.data,
        fill_value='0.0'
    )
    
    m7 = M.join.v3(
        data1=m29.data,
        data2=m13.data,
        on='date,instrument',
        how='inner',
        sort=False
    )
    
    m26 = M.dl_convert_to_bin.v2(
        input_data=m7.data,
        features=m3.data,
        window_size=2,
        feature_clip=3,
        flatten=True,
        window_along_col='instrument'
    )
    
    m10 = M.cached.v3(
        input_1=m26.data,
        run=m10_run_bigquant_run,
        post_run=m10_post_run_bigquant_run,
        input_ports='',
        params='{}',
        output_ports=''
    )
    
    m9 = M.instruments.v2(
        start_date=T.live_run_param('trading_date', '2018-01-15'),
        end_date=T.live_run_param('trading_date', '2019-01-10'),
        market='CN_STOCK_A',
        instrument_list='',
        max_count=0
    )
    
    m17 = M.general_feature_extractor.v7(
        instruments=m9.data,
        features=m3.data,
        start_date='',
        end_date='',
        before_start_days=0
    )
    
    m18 = M.derived_feature_extractor.v3(
        input_data=m17.data,
        features=m3.data,
        date_col='date',
        instrument_col='instrument',
        drop_na=True,
        remove_extra_columns=False
    )
    
    m25 = M.standardlize.v8(
        input_1=m18.data,
        input_2=m3.data,
        columns_input='[]'
    )
    
    m14 = M.fillnan.v1(
        input_data=m25.data,
        features=m3.data,
        fill_value='0.0'
    )
    
    m27 = M.dl_convert_to_bin.v2(
        input_data=m14.data,
        features=m3.data,
        window_size=2,
        feature_clip=3,
        flatten=True,
        window_along_col='instrument'
    )
    
    m6 = M.dl_layer_input.v1(
        shape='26',
        batch_shape='',
        dtype='float32',
        sparse=False,
        name=''
    )
    
    m8 = M.dl_layer_dense.v1(
        inputs=m6.data,
        units=256,
        activation='relu',
        use_bias=True,
        kernel_initializer='glorot_uniform',
        bias_initializer='Zeros',
        kernel_regularizer='None',
        kernel_regularizer_l1=0,
        kernel_regularizer_l2=0,
        bias_regularizer='None',
        bias_regularizer_l1=0,
        bias_regularizer_l2=0,
        activity_regularizer='None',
        activity_regularizer_l1=0,
        activity_regularizer_l2=0,
        kernel_constraint='None',
        bias_constraint='None',
        name=''
    )
    
    m21 = M.dl_layer_dropout.v1(
        inputs=m8.data,
        rate=0.1,
        noise_shape='',
        name=''
    )
    
    m20 = M.dl_layer_dense.v1(
        inputs=m21.data,
        units=128,
        activation='relu',
        use_bias=True,
        kernel_initializer='glorot_uniform',
        bias_initializer='Zeros',
        kernel_regularizer='None',
        kernel_regularizer_l1=0,
        kernel_regularizer_l2=0,
        bias_regularizer='None',
        bias_regularizer_l1=0,
        bias_regularizer_l2=0,
        activity_regularizer='None',
        activity_regularizer_l1=0,
        activity_regularizer_l2=0,
        kernel_constraint='None',
        bias_constraint='None',
        name=''
    )
    
    m22 = M.dl_layer_dropout.v1(
        inputs=m20.data,
        rate=0.1,
        noise_shape='',
        name=''
    )
    
    m23 = M.dl_layer_dense.v1(
        inputs=m22.data,
        units=1,
        activation='linear',
        use_bias=True,
        kernel_initializer='glorot_uniform',
        bias_initializer='Zeros',
        kernel_regularizer='None',
        kernel_regularizer_l1=0,
        kernel_regularizer_l2=0,
        bias_regularizer='None',
        bias_regularizer_l1=0,
        bias_regularizer_l2=0,
        activity_regularizer='None',
        activity_regularizer_l1=0,
        activity_regularizer_l2=0,
        kernel_constraint='None',
        bias_constraint='None',
        name=''
    )
    
    m4 = M.dl_model_init.v1(
        inputs=m6.data,
        outputs=m23.data
    )
    
    m5 = M.dl_model_train.v1(
        input_model=m4.data,
        training_data=m10.data_1,
        validation_data=m10.data_2,
        optimizer='Adam',
        loss='mean_squared_error',
        metrics='mse',
        batch_size=1024,
        epochs=30,
        earlystop=m5_earlystop_bigquant_run,
        custom_objects=m5_custom_objects_bigquant_run,
        n_gpus=0,
        verbose='2:每个epoch输出一行记录',
        m_cached=False
    )
    
    m11 = M.dl_model_predict.v1(
        trained_model=m5.data,
        input_data=m27.data,
        batch_size=1024,
        n_gpus=0,
        verbose='2:每个epoch输出一行记录'
    )
    
    m24 = M.cached.v3(
        input_1=m11.data,
        input_2=m18.data,
        run=m24_run_bigquant_run,
        post_run=m24_post_run_bigquant_run,
        input_ports='',
        params='{}',
        output_ports=''
    )
    
    m19 = M.trade.v4(
        instruments=m9.data,
        options_data=m24.data_1,
        start_date='',
        end_date='',
        initialize=m19_initialize_bigquant_run,
        handle_data=m19_handle_data_bigquant_run,
        prepare=m19_prepare_bigquant_run,
        volume_limit=0.025,
        order_price_field_buy='open',
        order_price_field_sell='close',
        capital_base=1000000,
        auto_cancel_non_tradable_orders=True,
        data_frequency='daily',
        price_type='后复权',
        product_type='股票',
        plot_charts=True,
        backtest_only=False,
        benchmark='000300.SHA'
    )
    
    m12 = M.trade.v4(
        instruments=m9.data,
        options_data=m24.data_1,
        start_date='',
        end_date='',
        initialize=m12_initialize_bigquant_run,
        handle_data=m12_handle_data_bigquant_run,
        prepare=m12_prepare_bigquant_run,
        before_trading_start=m12_before_trading_start_bigquant_run,
        volume_limit=0,
        order_price_field_buy='open',
        order_price_field_sell='close',
        capital_base=100000,
        auto_cancel_non_tradable_orders=True,
        data_frequency='daily',
        price_type='真实价格',
        product_type='股票',
        plot_charts=True,
        backtest_only=False,
        benchmark='000300.SHA'
    )
    
    Epoch 1/30
    1842/1842 - 9s - loss: 0.9840 - mse: 0.9840 - val_loss: 0.9856 - val_mse: 0.9856
    Epoch 2/30
    1842/1842 - 8s - loss: 0.9816 - mse: 0.9816 - val_loss: 0.9849 - val_mse: 0.9849
    Epoch 3/30
    1842/1842 - 8s - loss: 0.9809 - mse: 0.9809 - val_loss: 0.9853 - val_mse: 0.9853
    Epoch 4/30
    1842/1842 - 8s - loss: 0.9804 - mse: 0.9804 - val_loss: 0.9847 - val_mse: 0.9847
    Epoch 5/30
    1842/1842 - 8s - loss: 0.9800 - mse: 0.9800 - val_loss: 0.9843 - val_mse: 0.9843
    Epoch 6/30
    1842/1842 - 8s - loss: 0.9795 - mse: 0.9795 - val_loss: 0.9838 - val_mse: 0.9838
    Epoch 7/30
    1842/1842 - 8s - loss: 0.9790 - mse: 0.9790 - val_loss: 0.9834 - val_mse: 0.9834
    Epoch 8/30
    1842/1842 - 8s - loss: 0.9786 - mse: 0.9786 - val_loss: 0.9842 - val_mse: 0.9842
    Epoch 9/30
    1842/1842 - 8s - loss: 0.9782 - mse: 0.9782 - val_loss: 0.9837 - val_mse: 0.9837
    Epoch 10/30
    1842/1842 - 8s - loss: 0.9779 - mse: 0.9779 - val_loss: 0.9828 - val_mse: 0.9828
    Epoch 11/30
    1842/1842 - 8s - loss: 0.9775 - mse: 0.9775 - val_loss: 0.9828 - val_mse: 0.9828
    Epoch 12/30
    1842/1842 - 8s - loss: 0.9772 - mse: 0.9772 - val_loss: 0.9823 - val_mse: 0.9823
    Epoch 13/30
    1842/1842 - 8s - loss: 0.9769 - mse: 0.9769 - val_loss: 0.9827 - val_mse: 0.9827
    Epoch 14/30
    1842/1842 - 8s - loss: 0.9766 - mse: 0.9766 - val_loss: 0.9827 - val_mse: 0.9827
    Epoch 15/30
    1842/1842 - 8s - loss: 0.9761 - mse: 0.9761 - val_loss: 0.9827 - val_mse: 0.9827
    Epoch 16/30
    1842/1842 - 8s - loss: 0.9760 - mse: 0.9760 - val_loss: 0.9823 - val_mse: 0.9823
    Epoch 17/30
    1842/1842 - 8s - loss: 0.9758 - mse: 0.9758 - val_loss: 0.9825 - val_mse: 0.9825
    
    792/792 - 1s
    DataSource(b4154c7b6aed4779a36c8c5867a488f1T)
    
    • Return: -50.62%
    • Annualized return: -52.19%
    • Benchmark return: -27.27%
    • Alpha: -0.41
    • Beta: 0.64
    • Sharpe ratio: -3.2
    • Win rate: 0.5
    • Profit/loss ratio: 0.68
    • Return volatility: 23.13%
    • Information ratio: -0.13
    • Max drawdown: 52.76%
    • Return: -87.28%
    • Annualized return: -88.43%
    • Benchmark return: -27.27%
    • Alpha: -0.85
    • Beta: 0.51
    • Sharpe ratio: -5.15
    • Win rate: 0.47
    • Profit/loss ratio: 0.38
    • Return volatility: 40.66%
    • Information ratio: -0.27
    • Max drawdown: 89.54%
    In [ ]:
    # Multi-factor models fall into two categories, regression and ranking: regression emphasizes explanation, while ranking aims at stock-selection returns.
    startdate = '20140101'
    enddate = '20190123'
    
    In [17]:
    data=m13.data.read()
    data
    
    Out[17]:
    date instrument mf_net_amount_main_0 m:open m:high m:close m:amount m:low label
    0 2018-01-02 600519.SHA -8.600280e+07 4977.603516 5049.849609 5004.979980 3.482408e+09 4905.712402 16
    0 2019-01-02 600519.SHA 1.030323e+07 4399.037109 4413.604980 4319.708008 3.754388e+09 4291.077148 9
    0 2020-01-02 600519.SHA -1.810346e+09 8255.352539 8380.207031 8269.990234 1.669684e+10 8167.529785 3
    1 2020-01-03 600519.SHA -1.490982e+09 8174.848145 8174.848145 7893.522461 1.426638e+10 7881.373535 12
    1 2018-01-03 600519.SHA 1.461945e+08 4988.269531 5129.775879 5090.381836 3.713524e+09 4975.754395 12
    ... ... ... ... ... ... ... ... ... ...
    240 2020-12-29 600519.SHA -1.662676e+08 13867.764648 13969.940430 13823.340820 4.275995e+09 13734.492188 19
    241 2019-12-27 600519.SHA 4.049262e+08 8416.361328 8577.370117 8511.502930 5.486453e+09 8416.361328 10
    241 2018-12-28 600519.SHA 2.671233e+08 4062.391846 4301.101562 4255.018066 3.705150e+09 4038.592773 3
    242 2019-12-30 600519.SHA 2.916858e+08 8564.196289 8749.356445 8678.366211 4.827682e+09 8564.196289 1
    243 2019-12-31 600519.SHA -6.143291e+07 8657.874023 8694.466797 8657.874023 2.666705e+09 8610.376953 1

    727 rows × 9 columns

    In [3]:
    # import pandas as pd
    # from sklearn.model_selection import train_test_split, StratifiedKFold, cross_val_score
    # from sklearn.linear_model import LogisticRegression
    # from sklearn.tree import DecisionTreeClassifier
    # from sklearn.neighbors import KNeighborsClassifier
    
    # # 读取数据
    
    # data = m13.data.read()
    
    # # 独热编码,将“instrument”列转换为数值类型的特征
    # instrument_col = data['instrument']
    # instrument_df = pd.get_dummies(instrument_col, drop_first=True)
    
    # # 合并独热编码后的 DataFrame 和原始 DataFrame
    # data = pd.concat([data.drop('instrument', axis=1), instrument_df], axis=1)
    
    # # 分割数据集
    # X = data.drop('label', axis=1)
    # y = data['label']
    # train_X, test_X, train_y, test_y = train_test_split(X, y, test_size=0.3, random_state=1)
    
    # # 定义模型和模型参数
    # models = [('LR', LogisticRegression(solver='liblinear', class_weight='balanced', random_state=1)),
    # ('KNN', KNeighborsClassifier()),
    # ('CART', DecisionTreeClassifier())]
    
    
    
    # from sklearn.preprocessing import LabelEncoder
    
    # le = LabelEncoder()
    # train_y = le.fit_transform(train_y)
    
    
    # import numpy as np
    # from sklearn.preprocessing import OneHotEncoder, StandardScaler
    
    # # 注意:discrete_features / numerical_features 需要先根据 train_X 的列自行定义
    # # 对离散特征进行one-hot编码
    # encoder = OneHotEncoder(categories='auto', sparse=False)
    # train_X_encoded = encoder.fit_transform(train_X[discrete_features])
    
    # # 对数值特征进行标准化
    # scaler = StandardScaler()
    # train_X_scaled = scaler.fit_transform(train_X[numerical_features])
    
    # # 将编码后的特征和标准化后的特征合并起来
    # train_X = np.hstack((train_X_scaled, train_X_encoded))
    
    # # 交叉验证
    # for name, model in models:
    # # 模型训练
    #     print(name, model)
    #     model.fit(train_X, train_y)
    #     # 评估指标
    #     kfold = StratifiedKFold(n_splits=10, random_state=1, shuffle=True)
    #     scores = cross_val_score(model, train_X, train_y, scoring='accuracy', cv=kfold)
    #     print(f'{name}: {scores.mean()}, {scores.std()}')