Money-flow strategy with 69.55% annualized return

Strategy Sharing

(kuiiu) #1
In [4]:
# Basic parameter configuration
class conf:
    start_date = '2010-01-01'
    end_date = '2017-01-01'
    # Data before split_date is used for training; data after it is used for evaluation
    split_date = '2015-01-01'
    # D.instruments: https://bigquant.com/docs/data_instruments.html
    instruments = D.instruments(start_date, end_date)

    # Labeling function for the machine-learning target
    # The label expression below is equivalent to min(max(holding-period return * 100, -20), 20) + 20
    # (M.fast_auto_labeler later rounds the result to an integer)
    # Note: max/min clamp the score to the range [-20, 20]; the +20 shifts it to be
    # non-negative (StockRanker requires labels to be non-negative integers)
    label_expr = ['return * 100', 'where(label > {0}, {0}, where(label < -{0}, -{0}, label)) + {0}'.format(20)]
    # Holding period in days, used to compute the return value in label_expr
    hold_days = 5

    # Features: https://bigquant.com/docs/data_features.html; you can build any feature from expressions
    features = [
        'avg_turn_5',  # 5-day average turnover rate
        '(high_0-low_0+high_1-low_1+high_2-low_2+high_3-low_3+high_4-low_4)/5',  # 5-day average amplitude
        #'pe_lyr_0',  # P/E ratio (LYR)
        'mf_net_amount_5',  # 5-day net active buy amount
        'mf_net_amount_10',  # 10-day net active buy amount
        'mf_net_amount_20',  # 20-day net active buy amount
    ]

# Label the data: score each row (sample); higher scores generally mean better samples
m1 = M.fast_auto_labeler.v5(
    instruments=conf.instruments, start_date=conf.start_date, end_date=conf.end_date,
    label_expr=conf.label_expr, hold_days=conf.hold_days,
    benchmark='000300.SHA', sell_at='open', buy_at='open')
# Compute feature data
m2 = M.general_feature_extractor.v5(
    instruments=conf.instruments, start_date=conf.start_date, end_date=conf.end_date,
    features=conf.features)
# Data preprocessing: handle missing data and normalize;
# T.get_stock_ranker_default_transforms preprocesses data for the StockRanker model
m3 = M.transform.v2(
    data=m2.data, transforms=T.get_stock_ranker_default_transforms(),
    drop_null=True, astype='int32', except_columns=['date', 'instrument'],
    clip_lower=0, clip_upper=200000000)
# Join label data with feature data
m4 = M.join.v2(data1=m1.data, data2=m3.data, on=['date', 'instrument'], sort=True)

# Training dataset
m5_training = M.filter.v2(data=m4.data, expr='date < "%s"' % conf.split_date)
# Evaluation dataset
m5_evaluation = M.filter.v2(data=m4.data, expr='"%s" <= date' % conf.split_date)
# Train the StockRanker model
m6 = M.stock_ranker_train.v2(training_ds=m5_training.data, features=conf.features)
# Predict on the evaluation set
m7 = M.stock_ranker_predict.v2(model_id=m6.model_id, data=m5_evaluation.data)


## Backtest: https://bigquant.com/docs/strategy_backtest.html
# Backtest engine: initialization function, executed only once
def initialize(context):
    # Default commission and slippage are already set; to change the commission, use:
    context.set_commission(PerOrder(buy_cost=0.0003, sell_cost=0.0013, min_cost=5))
    # Prediction data is passed in via options; read_df loads it into memory as a DataFrame
    context.ranker_prediction = context.options['ranker_prediction'].read_df()
    # Number of stocks to buy: the top 5 in the predicted ranking
    stock_count = 5
    # Per-stock weights; this scheme allocates more capital to higher-ranked stocks,
    # e.g. [0.339160, 0.213986, 0.169580, ...]
    context.stock_weights = T.norm([1 / math.log(i + 2) for i in range(0, stock_count)])
    # Maximum fraction of capital any single stock may occupy
    context.max_cash_per_instrument = 0.2

# Backtest engine: daily data-handling function, executed once per day
def handle_data(context, data):
    # Filter by date to get today's prediction data
    ranker_prediction = context.ranker_prediction[context.ranker_prediction.date == data.current_dt.strftime('%Y-%m-%d')]

    # 1. Capital allocation
    # The average holding period is hold_days; we buy stocks every day, so each day
    # we expect to deploy 1/hold_days of the capital.
    # In practice buys carry some error, so during the first hold_days days we use an
    # equal slice of capital; after that, we deploy as much of the remaining cash as
    # possible (capped here at 1.5x the equal slice).
    is_staging = context.trading_day_index < context.options['hold_days']  # in the ramp-up period (first hold_days days)?
    cash_avg = context.portfolio.portfolio_value / context.options['hold_days']
    cash_for_buy = min(context.portfolio.cash, (1 if is_staging else 1.5) * cash_avg)
    cash_for_sell = cash_avg - (context.portfolio.cash - cash_for_buy)
    positions = {e.symbol: p.amount * p.last_sale_price
                 for e, p in context.perf_tracker.position_tracker.positions.items()}

    # 2. Generate sell orders: selling only starts after the first hold_days days;
    # among held stocks, drop the lowest-ranked ones per the StockRanker prediction
    if not is_staging and cash_for_sell > 0:
        equities = {e.symbol: e for e, p in context.perf_tracker.position_tracker.positions.items()}
        instruments = list(reversed(list(ranker_prediction.instrument[ranker_prediction.instrument.apply(
                lambda x: x in equities and not context.has_unfinished_sell_order(equities[x]))])))
        # print('rank order for sell %s' % instruments)
        for instrument in instruments:
            context.order_target(context.symbol(instrument), 0)
            cash_for_sell -= positions[instrument]
            if cash_for_sell <= 0:
                break

    # 3. Generate buy orders: buy the top stock_count stocks in the StockRanker ranking
    buy_cash_weights = context.stock_weights
    buy_instruments = list(ranker_prediction.instrument[:len(buy_cash_weights)])
    max_cash_per_instrument = context.portfolio.portfolio_value * context.max_cash_per_instrument
    for i, instrument in enumerate(buy_instruments):
        cash = cash_for_buy * buy_cash_weights[i]
        if cash > max_cash_per_instrument - positions.get(instrument, 0):
            # Cap the cash so no position exceeds the per-stock maximum capital
            cash = max_cash_per_instrument - positions.get(instrument, 0)
        if cash > 0:
            context.order_value(context.symbol(instrument), cash)

# Run the backtest engine
m8 = M.backtest.v5(
    instruments=m7.instruments,
    start_date=m7.start_date,
    end_date=m7.end_date,
    initialize=initialize,
    handle_data=handle_data,
    order_price_field_buy='open',       # buy at the open
    order_price_field_sell='close',     # sell at the close
    capital_base=1000000,               # initial capital
    benchmark='000300.SHA',             # benchmark for comparison; does not affect backtest results
    # pass prediction data and parameters to the backtest engine via options
    options={'ranker_prediction': m7.predictions, 'hold_days': conf.hold_days}
)
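The label expression in `conf.label_expr` can be checked with a plain-Python equivalent. The sketch below assumes the semantics stated in the comments (clamp the percent return to [-20, 20], shift by +20, round to an integer); `label_score` is a hypothetical helper for illustration, not part of the BigQuant API:

```python
def label_score(holding_return, bound=20):
    """Sketch of conf.label_expr: min(max(return * 100, -bound), bound) + bound,
    rounded to a non-negative integer as StockRanker requires."""
    raw = holding_return * 100               # holding-period return, in percent
    clamped = min(max(raw, -bound), bound)   # clamp to [-bound, bound]
    return int(round(clamped + bound))       # shift so scores are non-negative

print(label_score(0.05))   # 5% gain -> 25
print(label_score(-0.30))  # -30% loss, clamped to -20 -> 0
print(label_score(0.50))   # 50% gain, clamped to +20 -> 40
```

A flat return of 0 maps to the midpoint score 20, so scores below 20 mark losing samples and scores above 20 mark winning ones.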
[2017-05-12 22:44:29.903266] INFO: bigquant: fast_auto_labeler.v5 start ..
[2017-05-12 22:44:29.906687] INFO: bigquant: hit cache
[2017-05-12 22:44:29.916410] INFO: bigquant: fast_auto_labeler.v5 end [0.013157s].
[2017-05-12 22:44:29.927875] INFO: bigquant: general_feature_extractor.v5 start ..
[2017-05-12 22:45:03.500160] INFO: general_feature_extractor: year 2010, featurerows=431567
[2017-05-12 22:45:41.096754] INFO: general_feature_extractor: year 2011, featurerows=511455
[2017-05-12 22:45:58.624818] INFO: general_feature_extractor: year 2012, featurerows=565675
[2017-05-12 22:46:15.034549] INFO: general_feature_extractor: year 2013, featurerows=564168
[2017-05-12 22:46:44.593715] INFO: general_feature_extractor: year 2014, featurerows=569948
[2017-05-12 22:47:11.034142] INFO: general_feature_extractor: year 2015, featurerows=569698
[2017-05-12 22:47:43.772980] INFO: general_feature_extractor: year 2016, featurerows=641546
[2017-05-12 22:47:50.474838] INFO: general_feature_extractor: year 2017, featurerows=0
[2017-05-12 22:47:50.924059] INFO: general_feature_extractor: total feature rows: 3854057
[2017-05-12 22:47:50.927785] INFO: bigquant: general_feature_extractor.v5 end [200.99993s].
[2017-05-12 22:47:50.938198] INFO: bigquant: transform.v2 start ..
[2017-05-12 22:47:54.585781] INFO: transform: transformed /y_2010, 429839/431567
[2017-05-12 22:47:58.398435] INFO: transform: transformed /y_2011, 510034/511455
[2017-05-12 22:48:02.449314] INFO: transform: transformed /y_2012, 564884/565675
[2017-05-12 22:48:06.826487] INFO: transform: transformed /y_2013, 564158/564168
[2017-05-12 22:48:11.247977] INFO: transform: transformed /y_2014, 569356/569948
[2017-05-12 22:48:15.394477] INFO: transform: transformed /y_2015, 568581/569698
[2017-05-12 22:48:19.971010] INFO: transform: transformed /y_2016, 640408/641546
[2017-05-12 22:48:20.149124] INFO: transform: transformed /y_2017, 0/0
[2017-05-12 22:48:20.572705] INFO: transform: transformed rows: 3847260/3854057
[2017-05-12 22:48:20.576854] INFO: bigquant: transform.v2 end [29.63866s].
[2017-05-12 22:48:20.584753] INFO: bigquant: join.v2 start ..
[2017-05-12 22:48:33.914885] INFO: filter: /y_2010, rows=429304/429839, timetaken=10.209864s
[2017-05-12 22:48:43.835972] INFO: filter: /y_2011, rows=509502/510034, timetaken=9.824819s
[2017-05-12 22:48:54.157218] INFO: filter: /y_2012, rows=563788/564884, timetaken=10.236114s
[2017-05-12 22:49:05.139704] INFO: filter: /y_2013, rows=563123/564158, timetaken=10.827156s
[2017-05-12 22:49:17.850829] INFO: filter: /y_2014, rows=567664/569356, timetaken=12.595105s
[2017-05-12 22:49:29.071704] INFO: filter: /y_2015, rows=560341/568581, timetaken=11.093662s
[2017-05-12 22:49:43.144609] INFO: filter: /y_2016, rows=619535/640408, timetaken=14.040108s
[2017-05-12 22:49:44.417781] INFO: filter: total result rows: 3813257
[2017-05-12 22:49:44.423244] INFO: bigquant: join.v2 end [83.838474s].
[2017-05-12 22:49:44.434571] INFO: bigquant: filter.v2 start ..
[2017-05-12 22:49:44.440385] INFO: filter: filter with expr date < "2015-01-01"
[2017-05-12 22:49:45.911875] INFO: filter: filter /y_2010, 429304/429304
[2017-05-12 22:49:47.573325] INFO: filter: filter /y_2011, 509502/509502
[2017-05-12 22:49:49.253844] INFO: filter: filter /y_2012, 563788/563788
[2017-05-12 22:49:51.001825] INFO: filter: filter /y_2013, 563123/563123
[2017-05-12 22:49:52.976296] INFO: filter: filter /y_2014, 567664/567664
[2017-05-12 22:49:53.299295] INFO: filter: filter /y_2015, 0/560341
[2017-05-12 22:49:53.666310] INFO: filter: filter /y_2016, 0/619535
[2017-05-12 22:49:53.825817] INFO: bigquant: filter.v2 end [9.391196s].
[2017-05-12 22:49:53.834018] INFO: bigquant: filter.v2 start ..
[2017-05-12 22:49:53.840618] INFO: filter: filter with expr "2015-01-01" <= date
[2017-05-12 22:49:54.327991] INFO: filter: filter /y_2010, 0/429304
[2017-05-12 22:49:54.707064] INFO: filter: filter /y_2011, 0/509502
[2017-05-12 22:49:55.002113] INFO: filter: filter /y_2012, 0/563788
[2017-05-12 22:49:55.294498] INFO: filter: filter /y_2013, 0/563123
[2017-05-12 22:49:55.632113] INFO: filter: filter /y_2014, 0/567664
[2017-05-12 22:49:57.247220] INFO: filter: filter /y_2015, 560341/560341
[2017-05-12 22:49:59.234607] INFO: filter: filter /y_2016, 619535/619535
[2017-05-12 22:49:59.435722] INFO: bigquant: filter.v2 end [5.601616s].
[2017-05-12 22:49:59.446450] INFO: bigquant: stock_ranker_train.v2 start ..
[2017-05-12 22:50:04.856016] INFO: df2bin: prepare data: training ..
[2017-05-12 22:51:24.253311] INFO: stock_ranker_train: training: 2633381 rows
[2017-05-12 22:53:59.602452] INFO: bigquant: stock_ranker_train.v2 end [240.155948s].
[2017-05-12 22:53:59.612003] INFO: bigquant: stock_ranker_predict.v2 start ..
[2017-05-12 22:54:01.020911] INFO: df2bin: prepare data: prediction ..
[2017-05-12 22:54:37.741067] INFO: stock_ranker_predict: prediction: 1179876 rows
[2017-05-12 22:54:54.776345] INFO: bigquant: stock_ranker_predict.v2 end [55.164211s].
[2017-05-12 22:54:54.798179] INFO: bigquant: backtest.v5 start ..
[2017-05-12 22:57:00.123965] INFO: Performance: Simulated 482 trading days out of 482.
[2017-05-12 22:57:00.126490] INFO: Performance: first open: 2015-01-05 14:30:00+00:00
[2017-05-12 22:57:00.127798] INFO: Performance: last close: 2016-12-22 20:00:00+00:00
[Note] 1 sell order took multiple days to complete. This happens when the day's sell order exceeds 2.5% of that stock's daily trading volume.
  • Total return: 174.51%
  • Annualized return: 69.55%
  • Benchmark return: -5.6%
  • Alpha: 0.71
  • Beta: 0.8
  • Sharpe ratio: 2.06
  • Return volatility: 32.15%
  • Information ratio: 3.55
  • Max drawdown: 45.68%
[2017-05-12 22:57:03.811199] INFO: bigquant: backtest.v5 end [129.012974s].
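The position weights set in `initialize` can be reproduced outside the engine. The sketch below assumes `T.norm` simply divides each element by the list's sum (an assumption; the actual BigQuant `T.norm` implementation may differ), and it matches the example weights quoted in the strategy's comment:

```python
import math

# Sketch of context.stock_weights, assuming T.norm normalizes by the sum
# (assumption for illustration; not the actual T.norm source).
stock_count = 5
raw = [1 / math.log(i + 2) for i in range(stock_count)]  # 1/ln(2), 1/ln(3), ...
weights = [w / sum(raw) for w in raw]

print([round(w, 6) for w in weights])
# approximately [0.339160, 0.213986, 0.169580, 0.146068, 0.131206]
```

The 1/log(i + 2) shape decays slowly, so the top-ranked stock gets roughly a third of the daily buy budget while the fifth still gets about 13%, rather than concentrating everything in rank 1.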

(htly) #2

Because it trades every day and holds positions only briefly, this strategy shines in a bull market. Your backtest covers the 2015 bull run; try November 2016 through May 2017 instead. The overall market was weak then, so the strategy may not perform as well.


(kuiiu) #3

A question: I changed the period to 2016-11-01 through 2017-05-10 and the backtest failed. What could cause that? Also, short holding periods and daily trading are exactly how this strategy copes with a bear market.


(kuiiu) #4

The annualized return drops to 33.26%.


(iQuant) #5

What error did you get? Could you post a screenshot and describe what you changed?


(kuiiu) #6

I'm not sure where the problem was; the backtest runs fine again now. I'll post a screenshot next time it happens.