Version v1.0
### Trading Rules of the Deep Learning Strategy
### Strategy Construction Steps
### Strategy Implementation
Drag the following modules, in order, from the module list on the left of the canvas: an Input layer, a Reshape layer, a Conv2D layer, another Reshape layer, an LSTM layer, a Dropout layer, and two Dense (fully connected) layers. Together they form the deep learning network architecture; finally, assemble the layers with the "Build (Deep Learning)" module (dl_model_init in the generated code). Note the following:

- The Input layer's shape parameter is rolling-window size × number of factors; in this example, 50 rows × 5 factors.
- The first Reshape layer's parameter is rolling-window size × number of factors × 1; here, 50 × 5 × 1.
- In the Conv2D layer, the kernel_size parameter is the size of the sliding window. This example uses a 3-row × 5-column window with a stride of 1 row × 1 column and 32 convolution kernels. This window setting determines the parameters of the Reshape layer that follows.
- The second Reshape layer's target_shape parameter is determined by the rolling-window size × number of factors together with the window size and stride set in the Conv2D layer. In this example, sliding a 3 × 5 window over the 50-row × 5-factor input, moving 1 row at a time, yields 48 positions (the 3 × 5 window slides 48 times to cover the full data), so target_shape = 48 × 32 (the number of kernels).
- The LSTM layer's output dimension is set to the kernel count, 32, and an activation function is specified.
- The Dropout layer randomly drops a fraction of units to guard against overfitting; here the rate is set to 0.8 (80% of units are dropped during training).
- There are two Dense layers. The first's output dimension matches the LSTM output at 32; the second maps that 32-dimensional output to a single value, the predicted label. In this example it is a continuous value between 0 and 1 that can be read as the probability of a price rise.
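The layer-by-layer shapes described above can be checked with a few lines of arithmetic. This is an illustrative sketch in plain Python, not platform code; the `conv_steps` helper is ours:

```python
def conv_steps(length, kernel, stride=1):
    """Number of valid positions a kernel of size `kernel` can take
    over an axis of size `length` (no padding)."""
    return (length - kernel) // stride + 1

rows, factors = 50, 5          # rolling window: 50 rows x 5 factors
k_rows, k_cols = 3, 5          # Conv2D kernel_size
kernels = 32                   # Conv2D filters

# Conv2D with 'valid' padding and stride (1, 1):
out_rows = conv_steps(rows, k_rows)      # 48
out_cols = conv_steps(factors, k_cols)   # 1
print((out_rows, out_cols, kernels))     # (48, 1, 32), reshaped to (48, 32)
```

This is exactly why the second Reshape layer's target_shape is 48 × 32.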
If the predicted probability of a rise for the day is greater than 0.5, hold the existing position or buy; if it is less than 0.5, sell the stock or stay in cash.
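The two rules condense into a tiny signal function. `signal_action` is a hypothetical helper for illustration only; at exactly 0.5 it does nothing, matching the behavior of the handle function below:

```python
def signal_action(prob, holding):
    """Map predicted rise probability and current holding to an action."""
    if prob > 0.5:
        return 'hold' if holding else 'buy'
    if prob < 0.5:
        return 'sell' if holding else 'stay_flat'
    return 'hold' if holding else 'stay_flat'  # exactly 0.5: do nothing

print(signal_action(0.7, holding=False))  # buy
print(signal_action(0.3, holding=True))   # sell
```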
The initialization function of the trade module defines the commission and slippage, and context.prediction supplies each day's predicted rise probability; the trade module's main function (the handle function) reads each day's trading signal and executes the corresponding buy/sell orders according to the rules above.
The visualized strategy implementation is as follows:
# This code was generated automatically by the visual strategy environment on 2019-04-03 15:45
# This code cell can only be edited in visual mode. You can also copy the code into a new code cell or strategy and then modify it.
# Python entry function: input_1/2/3 correspond to the three input ports, data_1/2/3 to the three output ports
def m23_run_bigquant_run(input_1, input_2, input_3):
    fields = ['open', 'high', 'low', 'close', 'volume']
    input_1_df = input_1.read_pickle()
    ins = input_1_df['instruments']
    start_date = input_1_df['start_date']
    end_date = input_1_df['end_date']
    df = D.history_data(ins, start_date, end_date, fields)
    data_1 = DataSource.write_df(df)
    return Outputs(data_1=data_1, data_2=None, data_3=None)
# Optional post-processing function. Its input is the main function's output; you can transform the data here or return a friendlier outputs format. Its output is not cached.
def m23_post_run_bigquant_run(outputs):
    return outputs
# Python entry function: input_1/2/3 correspond to the three input ports, data_1/2/3 to the three output ports
def m16_run_bigquant_run(input_1, input_2, input_3):
    input_ds = input_1
    df = input_ds.read_df()
    # Label: 1 if the close 10 days ahead is higher than today's close, else 0
    df['return'] = (df.close.shift(-10) / df.close - 1)
    df['label'] = np.where(df['return'] > 0, 1, 0)
    ds = DataSource.write_df(df[['date', 'instrument', 'label']])
    return Outputs(data_1=ds)
# Optional post-processing function. Its input is the main function's output; you can transform the data here or return a friendlier outputs format. Its output is not cached.
def m16_post_run_bigquant_run(outputs):
    return outputs
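The labeling rule in m16 marks a sample 1 when the close 10 days ahead exceeds today's close. A standalone toy sketch of the same construction (a 2-day horizon keeps the series short; the data is made up):

```python
import numpy as np
import pandas as pd

close = pd.Series([10.0, 10.5, 9.8, 11.0, 12.0])
fwd_return = close.shift(-2) / close - 1   # forward return over a 2-day horizon
label = np.where(fwd_return > 0, 1, 0)     # NaN at the tail compares False -> 0
print(label.tolist())  # [0, 1, 1, 0, 0]
```

Note the last window_size-equivalent rows get NaN forward returns and thus label 0; the dropnan module downstream handles missing values in the joined table.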
# Python entry function: input_1/2/3 correspond to the three input ports, data_1/2/3 to the three output ports
def m2_run_bigquant_run(input_1, input_2, input_3):
    input_series = input_1
    input_df = input_2
    test_data = input_df.read_pickle()
    pred_label = input_series.read_pickle()
    pred_result = pred_label.reshape(pred_label.shape[0])
    dt = input_3.read_df()['date'][-1 * len(pred_result):]
    pred_df = pd.Series(pred_result, index=dt)
    ds = DataSource.write_df(pred_df)
    pred_label = np.where(pred_result > 0.5, 1, 0)
    labels = test_data['y'].reshape(-1)  # flatten so shapes align for element-wise comparison
    print('Accuracy: %s' % (np.mean(pred_label == labels)))
    return Outputs(data_1=ds)
# Optional post-processing function. Its input is the main function's output; you can transform the data here or return a friendlier outputs format. Its output is not cached.
def m2_post_run_bigquant_run(outputs):
    return outputs
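The evaluation in m2 thresholds the sigmoid output at 0.5 and measures accuracy. A minimal NumPy sketch of that step with toy arrays, not platform data:

```python
import numpy as np

probs = np.array([[0.91], [0.42], [0.58], [0.07]])   # model outputs, shape (N, 1)
labels = np.array([1, 0, 1, 1])                      # true labels, shape (N,)

pred = np.where(probs.reshape(-1) > 0.5, 1, 0)       # flatten first so shapes align
accuracy = np.mean(pred == labels)
print(accuracy)  # 0.75
```

Flattening before the comparison avoids the (N, 1) vs (N,) broadcast, which would silently compare every prediction against every label and give a meaningless mean.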
# Backtest engine: daily data-handling function, executed once per day
def m1_handle_data_bigquant_run(context, data):
    # Filter by date to get today's prediction
    try:
        prediction = context.prediction[data.current_dt.strftime('%Y-%m-%d')]
    except KeyError:
        return
    instrument = context.instruments[0]
    sid = context.symbol(instrument)
    cur_position = context.portfolio.positions[sid].amount
    # Trading logic
    if prediction > 0.5 and cur_position == 0:
        context.order_target_percent(sid, 1)
        print(data.current_dt, 'Buy!')
    elif prediction < 0.5 and cur_position > 0:
        context.order_target_percent(sid, 0)
        print(data.current_dt, 'Sell!')
# Backtest engine: prepare data, executed only once
def m1_prepare_bigquant_run(context):
    pass
# Backtest engine: initialization function, executed only once
def m1_initialize_bigquant_run(context):
    # Load the prediction data
    context.prediction = context.options['data'].read_df()
    # The system sets default commission and slippage; to change the commission, use the function below
    context.set_commission(PerOrder(buy_cost=0.0003, sell_cost=0.0013, min_cost=5))
# Backtest engine: called once before each unit of time, i.e. before each day's open
def m1_before_trading_start_bigquant_run(context, data):
    pass
m3 = M.dl_layer_input.v1(
    shape='50,5',
    batch_shape='',
    dtype='float32',
    sparse=False,
    name=''
)
m13 = M.dl_layer_reshape.v1(
    inputs=m3.data,
    target_shape='50,5,1',
    name=''
)
m14 = M.dl_layer_conv2d.v1(
    inputs=m13.data,
    filters=32,
    kernel_size='3,5',
    strides='1,1',
    padding='valid',
    data_format='channels_last',
    dilation_rate='1,1',
    activation='relu',
    use_bias=True,
    kernel_initializer='glorot_uniform',
    bias_initializer='Zeros',
    kernel_regularizer='None',
    kernel_regularizer_l1=0,
    kernel_regularizer_l2=0,
    bias_regularizer='None',
    bias_regularizer_l1=0,
    bias_regularizer_l2=0,
    activity_regularizer='None',
    activity_regularizer_l1=0,
    activity_regularizer_l2=0,
    kernel_constraint='None',
    bias_constraint='None',
    name=''
)
m15 = M.dl_layer_reshape.v1(
    inputs=m14.data,
    target_shape='48,32',
    name=''
)
m4 = M.dl_layer_lstm.v1(
    inputs=m15.data,
    units=32,
    activation='tanh',
    recurrent_activation='hard_sigmoid',
    use_bias=True,
    kernel_initializer='glorot_uniform',
    recurrent_initializer='Orthogonal',
    bias_initializer='Ones',
    unit_forget_bias=True,
    kernel_regularizer='None',
    kernel_regularizer_l1=0,
    kernel_regularizer_l2=0,
    recurrent_regularizer='None',
    recurrent_regularizer_l1=0,
    recurrent_regularizer_l2=0,
    bias_regularizer='None',
    bias_regularizer_l1=0,
    bias_regularizer_l2=0,
    activity_regularizer='None',
    activity_regularizer_l1=0,
    activity_regularizer_l2=0,
    kernel_constraint='None',
    recurrent_constraint='None',
    bias_constraint='None',
    dropout=0,
    recurrent_dropout=0,
    return_sequences=False,
    implementation='0',
    name=''
)
m11 = M.dl_layer_dropout.v1(
    inputs=m4.data,
    rate=0.8,
    noise_shape='',
    name=''
)
m10 = M.dl_layer_dense.v1(
    inputs=m11.data,
    units=32,
    activation='tanh',
    use_bias=True,
    kernel_initializer='glorot_uniform',
    bias_initializer='Zeros',
    kernel_regularizer='None',
    kernel_regularizer_l1=0,
    kernel_regularizer_l2=0,
    bias_regularizer='None',
    bias_regularizer_l1=0,
    bias_regularizer_l2=0,
    activity_regularizer='None',
    activity_regularizer_l1=0,
    activity_regularizer_l2=0,
    kernel_constraint='None',
    bias_constraint='None',
    name=''
)
m12 = M.dl_layer_dropout.v1(
    inputs=m10.data,
    rate=0.8,
    noise_shape='',
    name=''
)
m9 = M.dl_layer_dense.v1(
    inputs=m12.data,
    units=1,
    activation='sigmoid',
    use_bias=True,
    kernel_initializer='glorot_uniform',
    bias_initializer='Zeros',
    kernel_regularizer='None',
    kernel_regularizer_l1=0,
    kernel_regularizer_l2=0,
    bias_regularizer='None',
    bias_regularizer_l1=0,
    bias_regularizer_l2=0,
    activity_regularizer='None',
    activity_regularizer_l1=0,
    activity_regularizer_l2=0,
    kernel_constraint='None',
    bias_constraint='None',
    name=''
)
m5 = M.dl_model_init.v1(
    inputs=m3.data,
    outputs=m9.data
)
m8 = M.input_features.v1(
    features="""(close/shift(close,1)-1)*10
(high/shift(high,1)-1)*10
(low/shift(low,1)-1)*10
(open/shift(open,1)-1)*10
(volume/shift(volume,1)-1)*10"""
)
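The five expressions in m8 are one-day rates of change scaled by 10. Their pandas equivalent on a made-up close series (the platform's shift(x, 1) corresponds to Series.shift(1)):

```python
import pandas as pd

close = pd.Series([10.0, 10.2, 9.9])
feature = (close / close.shift(1) - 1) * 10   # (close/shift(close,1)-1)*10
print(feature.round(4).tolist())  # [nan, 0.2, -0.2941]
```

The first row has no previous value, so the feature is NaN; the dropnan module later removes such rows.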
m24 = M.instruments.v2(
    start_date='2015-01-01',
    end_date='2018-10-30',
    market='CN_STOCK_A',
    instrument_list='600009.SHA',
    max_count=0
)
m23 = M.cached.v3(
    input_1=m24.data,
    run=m23_run_bigquant_run,
    post_run=m23_post_run_bigquant_run,
    input_ports='',
    params='{}',
    output_ports=''
)
m16 = M.cached.v3(
    input_1=m23.data_1,
    run=m16_run_bigquant_run,
    post_run=m16_post_run_bigquant_run,
    input_ports='',
    params='{}',
    output_ports=''
)
m26 = M.derived_feature_extractor.v3(
    input_data=m23.data_1,
    features=m8.data,
    date_col='date',
    instrument_col='instrument',
    drop_na=False,
    remove_extra_columns=False,
    user_functions={}
)
m17 = M.join.v3(
    data1=m16.data_1,
    data2=m26.data,
    on='date,instrument',
    how='inner',
    sort=True
)
m18 = M.dropnan.v1(
    input_data=m17.data
)
m19 = M.filter.v3(
    input_data=m18.data,
    expr='date<\'2017-03-01\'',
    output_left_data=False
)
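The two filter modules split the merged dataset by date: rows before 2017-03-01 train the model, rows after it feed prediction. A pandas sketch of the same split on toy data (note that 2017-03-01 itself lands in neither set, since one filter uses < and the other >):

```python
import pandas as pd

df = pd.DataFrame({
    'date': pd.to_datetime(['2017-02-27', '2017-02-28', '2017-03-01', '2017-03-02']),
    'close': [1.0, 1.1, 1.2, 1.3],
})
train = df[df['date'] < '2017-03-01']   # mirrors expr="date<'2017-03-01'"
test = df[df['date'] > '2017-03-01']    # mirrors expr="date>'2017-03-01'"
print(len(train), len(test))  # 2 1
```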
m25 = M.dl_convert_to_bin.v2(
    input_data=m19.data,
    features=m8.data,
    window_size=50,
    feature_clip=5,
    flatten=False,
    window_along_col='instrument'
)
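dl_convert_to_bin turns the flat feature table into rolling windows of 50 rows × 5 features, the shape the Input layer expects. A NumPy sketch of that windowing on toy sizes (window_size=3 instead of 50, 6 rows instead of a full history):

```python
import numpy as np

features = np.arange(6 * 5, dtype=float).reshape(6, 5)   # 6 rows x 5 factors
window_size = 3

windows = np.stack([features[i:i + window_size]
                    for i in range(len(features) - window_size + 1)])
print(windows.shape)  # (4, 3, 5): num_windows x window_size x factors
```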
m6 = M.dl_model_train.v1(
    input_model=m5.data,
    training_data=m25.data,
    optimizer='Adam',
    loss='binary_crossentropy',
    metrics='accuracy',
    batch_size=2048,
    epochs=10,
    n_gpus=1,
    verbose='1:输出进度条记录'  # platform enum: "1: show a progress bar"
)
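m6 trains with the binary_crossentropy loss, which for a batch averages −[y·ln(p) + (1−y)·ln(1−p)] over the samples. A NumPy sketch with our own helper, not the platform's implementation:

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    p = np.clip(y_pred, eps, 1 - eps)   # avoid log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.8, 0.1, 0.6])
print(round(binary_crossentropy(y_true, y_pred), 4))  # 0.2798
```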
m20 = M.filter.v3(
    input_data=m18.data,
    expr='date>\'2017-03-01\'',
    output_left_data=False
)
m27 = M.dl_convert_to_bin.v2(
    input_data=m20.data,
    features=m8.data,
    window_size=50,
    feature_clip=5,
    flatten=False,
    window_along_col='instrument'
)
m7 = M.dl_model_predict.v1(
    trained_model=m6.data,
    input_data=m27.data,
    batch_size=10240,
    n_gpus=2,
    verbose='2:每个epoch输出一行记录'  # platform enum: "2: one line per epoch"
)
m2 = M.cached.v3(
    input_1=m7.data,
    input_2=m27.data,
    input_3=m20.data,
    run=m2_run_bigquant_run,
    post_run=m2_post_run_bigquant_run,
    input_ports='',
    params='{}',
    output_ports=''
)
m1 = M.trade.v4(
    instruments=m24.data,
    options_data=m2.data_1,
    start_date='2017-04-01',
    end_date='',
    handle_data=m1_handle_data_bigquant_run,
    prepare=m1_prepare_bigquant_run,
    initialize=m1_initialize_bigquant_run,
    before_trading_start=m1_before_trading_start_bigquant_run,
    volume_limit=0.025,
    order_price_field_buy='open',
    order_price_field_sell='close',
    capital_base=1000000,
    auto_cancel_non_tradable_orders=True,
    data_frequency='daily',
    price_type='真实价格',  # platform enum: "real price"
    product_type='股票',    # platform enum: "stock"
    plot_charts=True,
    backtest_only=False,
    benchmark=''
)
[2019-04-03 15:44:14.407894] INFO: bigquant: cached.v3 开始运行..
[2019-04-03 15:44:14.417255] INFO: bigquant: 命中缓存
[2019-04-03 15:44:14.418980] INFO: bigquant: cached.v3 运行完成[0.01109s].
[2019-04-03 15:44:14.422109] INFO: bigquant: input_features.v1 开始运行..
[2019-04-03 15:44:14.426917] INFO: bigquant: 命中缓存
[2019-04-03 15:44:14.428335] INFO: bigquant: input_features.v1 运行完成[0.006221s].
[2019-04-03 15:44:14.431304] INFO: bigquant: instruments.v2 开始运行..
[2019-04-03 15:44:14.435869] INFO: bigquant: 命中缓存
[2019-04-03 15:44:14.437343] INFO: bigquant: instruments.v2 运行完成[0.006021s].
[2019-04-03 15:44:14.441735] INFO: bigquant: cached.v3 开始运行..
[2019-04-03 15:44:14.447139] INFO: bigquant: 命中缓存
[2019-04-03 15:44:14.448712] INFO: bigquant: cached.v3 运行完成[0.006973s].
[2019-04-03 15:44:14.453391] INFO: bigquant: cached.v3 开始运行..
[2019-04-03 15:44:14.458779] INFO: bigquant: 命中缓存
[2019-04-03 15:44:14.460327] INFO: bigquant: cached.v3 运行完成[0.006925s].
[2019-04-03 15:44:14.463434] INFO: bigquant: derived_feature_extractor.v3 开始运行..
[2019-04-03 15:44:14.468879] INFO: bigquant: 命中缓存
[2019-04-03 15:44:14.470505] INFO: bigquant: derived_feature_extractor.v3 运行完成[0.007068s].
[2019-04-03 15:44:14.473430] INFO: bigquant: join.v3 开始运行..
[2019-04-03 15:44:14.478229] INFO: bigquant: 命中缓存
[2019-04-03 15:44:14.479756] INFO: bigquant: join.v3 运行完成[0.006323s].
[2019-04-03 15:44:14.482677] INFO: bigquant: dropnan.v1 开始运行..
[2019-04-03 15:44:14.487429] INFO: bigquant: 命中缓存
[2019-04-03 15:44:14.488713] INFO: bigquant: dropnan.v1 运行完成[0.006032s].
[2019-04-03 15:44:14.491138] INFO: bigquant: filter.v3 开始运行..
[2019-04-03 15:44:14.495289] INFO: bigquant: 命中缓存
[2019-04-03 15:44:14.496613] INFO: bigquant: filter.v3 运行完成[0.00547s].
[2019-04-03 15:44:14.501947] INFO: bigquant: dl_convert_to_bin.v2 开始运行..
[2019-04-03 15:44:14.724117] INFO: bigquant: dl_convert_to_bin.v2 运行完成[0.222167s].
[2019-04-03 15:44:14.727410] INFO: bigquant: dl_model_train.v1 开始运行..
[2019-04-03 15:44:15.485686] INFO: dl_model_train: 准备训练,训练样本个数:523,迭代次数:10
[2019-04-03 15:44:22.498110] INFO: dl_model_train: 训练结束,耗时:7.01s
[2019-04-03 15:44:22.591624] INFO: bigquant: dl_model_train.v1 运行完成[7.864199s].
[2019-04-03 15:44:22.595198] INFO: bigquant: filter.v3 开始运行..
[2019-04-03 15:44:22.863877] INFO: filter: 使用表达式 date>'2017-03-01' 过滤
[2019-04-03 15:44:23.161430] INFO: filter: 过滤 /data, 407/0/931
[2019-04-03 15:44:23.170852] INFO: bigquant: filter.v3 运行完成[0.575637s].
[2019-04-03 15:44:23.177018] INFO: bigquant: dl_convert_to_bin.v2 开始运行..
[2019-04-03 15:44:23.355143] INFO: bigquant: dl_convert_to_bin.v2 运行完成[0.178113s].
[2019-04-03 15:44:23.360747] INFO: bigquant: dl_model_predict.v1 开始运行..
[2019-04-03 15:44:24.966724] INFO: bigquant: dl_model_predict.v1 运行完成[1.605968s].
[2019-04-03 15:44:24.973690] INFO: bigquant: cached.v3 开始运行..
[2019-04-03 15:44:25.272001] INFO: bigquant: cached.v3 运行完成[0.298296s].
[2019-04-03 15:44:25.306608] INFO: bigquant: backtest.v8 开始运行..
[2019-04-03 15:44:25.310282] INFO: bigquant: biglearning backtest:V8.1.11
[2019-04-03 15:44:25.311997] INFO: bigquant: product_type:stock by specified
[2019-04-03 15:44:30.802091] INFO: bigquant: 读取股票行情完成:628
[2019-04-03 15:44:30.829380] INFO: algo: TradingAlgorithm V1.4.10
[2019-04-03 15:44:30.950969] INFO: algo: trading transform...
[2019-04-03 15:44:31.204706] INFO: algo: handle_splits get splits [dt:2017-08-24 00:00:00+00:00] [asset:Equity(0 [600009.SHA]), ratio:0.9882099126332605]
[2019-04-03 15:44:31.206468] INFO: Position: position stock handle split[sid:0, orig_amount:33000, new_amount:33393.0, orig_cost:30.28002702066339, new_cost:29.92, ratio:0.9882099126332605, last_sale_price:36.87999363789555]
[2019-04-03 15:44:31.207960] INFO: Position: after split: asset: Equity(0 [600009.SHA]), amount: 33393.0, cost_basis: 29.92, last_sale_price: 37.31999969482422
[2019-04-03 15:44:31.209479] INFO: Position: returning cash: 26.36
[2019-04-03 15:44:31.846564] INFO: algo: handle_splits get splits [dt:2018-08-23 00:00:00+00:00] [asset:Equity(0 [600009.SHA]), ratio:0.9900703412737303]
[2019-04-03 15:44:31.848456] INFO: Position: position stock handle split[sid:0, orig_amount:33393.0, new_amount:33727.0, orig_cost:29.92, new_cost:29.62, ratio:0.9900703412737303, last_sale_price:57.83000848272584]
[2019-04-03 15:44:31.850185] INFO: Position: after split: asset: Equity(0 [600009.SHA]), amount: 33727.0, cost_basis: 29.62, last_sale_price: 58.40999984741211
[2019-04-03 15:44:31.851676] INFO: Position: returning cash: 52.43
[2019-04-03 15:44:31.963208] INFO: Performance: Simulated 385 trading days out of 385.
[2019-04-03 15:44:31.965006] INFO: Performance: first open: 2017-04-05 09:30:00+00:00
[2019-04-03 15:44:31.966448] INFO: Performance: last close: 2018-10-30 15:00:00+00:00
[2019-04-03 15:44:33.462868] INFO: bigquant: backtest.v8 运行完成[8.156248s].