Cloned Strategy
  • This notebook follows the post 《MultiFactors Alpha Model - 基于因子IC的多因子合成》 by the Uqer (优矿) user "call". It is re-implemented on the BigQuant platform using factors provided by BigQuant; apart from minor differences in how the data is organized, it is essentially the same as the original. First, load the required packages (a few of the imports below are not strictly necessary, but they are commonly used). When estimating the covariance matrix of the factor ICs, a shrinkage estimator is used. The basic idea is to take a target estimator with low variance but high bias and blend it with the sample covariance matrix, trading some bias for a more robust estimate. This is done with the LedoitWolf class from sklearn.covariance.
In [9]:
from datetime import datetime, timedelta
import datetime as DT
import time
from math import sqrt

import numpy as np
import pandas as pd
import scipy.stats as st
import statsmodels.api as sm
from sklearn import linear_model
from sklearn.covariance import LedoitWolf  # shrinkage covariance estimator used below

import matplotlib.pyplot as plt
from matplotlib import dates, rc
rc('mathtext', default='regular')
import seaborn as sns
sns.set_style('white')

When estimating the covariance matrix of the factor ICs, extreme values appear, so winsorization is needed here. The IC is computed with the Pearson correlation coefficient.

In [10]:
# Winsorization: clip a series at its 2.5% / 97.5% quantiles
def winsorize_series(se):
    q = se.quantile([0.025, 0.975])
    if isinstance(q, pd.Series) and len(q) == 2:
        se = se.clip(lower=q.iloc[0], upper=q.iloc[1])
    return se

# Pearson correlation coefficient (used as the IC)
def multiply(a, b):
    # sum of elementwise products: sum_i a_i * b_i
    sum_ab = 0.0
    for i in range(len(a)):
        sum_ab += a[i] * b[i]
    return sum_ab

def cal_pearson(x, y):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    sum_x = sum(x)
    sum_y = sum(y)
    sum_xy = multiply(x, y)
    sum_x2 = sum([pow(i, 2) for i in x])
    sum_y2 = sum([pow(j, 2) for j in y])
    numerator = sum_xy - float(sum_x) * float(sum_y) / n
    denominator = sqrt((sum_x2 - float(sum_x ** 2) / n) * (sum_y2 - float(sum_y ** 2) / n))
    return numerator / denominator
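As a sanity check (not part of the original notebook), cal_pearson should agree with NumPy's built-in correlation. A minimal, self-contained version of the check, with cal_pearson restated compactly so it runs on its own:

```python
import numpy as np
from math import sqrt

# Restated compactly from the cell above so this snippet is standalone
def cal_pearson(x, y):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    num = (x * y).sum() - x.sum() * y.sum() / n
    den = sqrt((np.sum(x**2) - x.sum()**2 / n) * (np.sum(y**2) - y.sum()**2 / n))
    return num / den

rng = np.random.RandomState(0)
x = rng.randn(500)
y = 0.3 * x + rng.randn(500)  # two correlated series of synthetic data
print(abs(cal_pearson(x, y) - np.corrcoef(x, y)[0, 1]))  # difference is ~0
```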
In [4]:
# start date
start_date = '2010-01-01'
# end date
end_date = '2017-09-29'
rebalance_period = 5
instruments = D.instruments()   # get the full list of stocks
factors = ['return_5','avg_amount_20','pb_lf_0','ps_ttm_0','pe_ttm_0','volatility_10_0','avg_turn_20']
features_data = D.features(instruments=instruments, start_date=start_date, end_date=end_date, fields=factors)
In [5]:
mydata = features_data.set_index(['date','instrument'])[factors].unstack()
In [6]:
for col in mydata['return_5'].columns:
    mydata['return_5'][col] = mydata['return_5'][col].shift(-rebalance_period)
In [7]:
data_ret = mydata['return_5']
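The shift(-rebalance_period) above aligns each date's factor values with the return realized over the following rebalance_period days. A toy illustration with hypothetical numbers (shortened window, not platform data):

```python
import pandas as pd

rebalance_period = 2  # shortened for the illustration
ret = pd.Series([0.01, 0.02, 0.03, 0.04, 0.05],
                index=pd.date_range('2010-01-04', periods=5))
# after the shift, row t holds the return observed at t + rebalance_period
fwd = ret.shift(-rebalance_period)
print(fwd.iloc[0])   # 0.03, the value originally two rows ahead
print(fwd.iloc[-1])  # NaN: the last rebalance_period rows have no forward return
```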
In [11]:
group_fac = []
for fac in factors[1:]:
    data_tmp = mydata[fac]
    for col in data_ret.T.columns[:-rebalance_period]:
        tmp_df = pd.DataFrame(index=data_ret.T.index)
        tmp_df['fac'] = data_tmp.T[col]
        tmp_df['ret'] = data_ret.T[col]
        tmp_df = tmp_df.dropna()
        coff = cal_pearson(tmp_df['fac'],tmp_df['ret'])
        dict_temp = {'factor':fac,'date':col,'coff':coff}
        group_fac.append(dict_temp)

_ic = pd.DataFrame(group_fac,columns=['factor','date','coff'])
In [12]:
ic_temp = _ic.set_index(['date','factor']).unstack()
In [13]:
ic_df = ic_temp['coff']
ic_df.head()
Out[13]:
factor avg_amount_20 avg_turn_20 pb_lf_0 pe_ttm_0 ps_ttm_0 volatility_10_0
date
2010-01-04 -0.095868 0.037070 -0.009930 -0.028243 -0.016798 0.037300
2010-01-05 -0.084027 0.019482 -0.014350 -0.032316 -0.018839 0.074761
2010-01-06 -0.196026 0.013950 -0.013925 -0.036536 -0.020597 -0.026074
2010-01-07 -0.225805 0.096165 -0.018311 -0.038708 -0.009619 -0.044134
2010-01-08 -0.256634 0.077277 -0.017676 -0.008691 -0.018676 0.104339

First, plot ic_df:

In [18]:
ic_df.plot(figsize=(12,6), title='Factor IC time series')
Out[18]:
<matplotlib.axes._subplots.AxesSubplot at 0x7f4a88b48e10>
The plot shows that using the raw factor ICs directly for prediction would be quite noisy. Below, the covariance matrix of the factor ICs is estimated in two ways, giving two different sets of factor weights:

ic_weight_df: estimated with the sample covariance matrix
ic_weight_shrink_df: estimated with the Ledoit-Wolf shrinkage covariance matrix
The rolling window in the calculation below is 120 days; that is, the factor weights on each day are computed from the previous six months of the IC time series, which supplies the IC mean vector and the IC covariance matrix.
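Both methods solve for weights proportional to Σ⁻¹μ (inverse IC covariance times IC mean), then normalize them to sum to one. A standalone sketch on synthetic IC data (random numbers, purely illustrative) comparing the sample and Ledoit-Wolf estimates:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.RandomState(42)
ic_window = rng.randn(120, 6) * 0.05 + 0.02  # synthetic 120-day IC history, 6 factors
mu = ic_window.mean(axis=0)                  # IC mean vector

# sample covariance vs Ledoit-Wolf shrunk covariance
cov_sample = np.cov(ic_window.T)
cov_shrunk = LedoitWolf().fit(ic_window).covariance_

# factor weights: w ∝ Σ⁻¹ μ, normalized to sum to 1
w_sample = np.linalg.inv(cov_sample).dot(mu)
w_shrunk = np.linalg.inv(cov_shrunk).dot(mu)
w_sample /= w_sample.sum()
w_shrunk /= w_shrunk.sum()
print(np.round(w_shrunk, 3))
```

Shrinkage matters here because inverting a noisy 120-observation sample covariance amplifies estimation error; the shrunk matrix is better conditioned and gives more stable weights.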
In [14]:
# ---------------------------------------------------------------------------
# unshrunk (sample) covariance

n = 120  # rolling window: roughly six months of trading days
ic_weight_df = pd.DataFrame(index=ic_df.index, columns=ic_df.columns)
for dt in ic_df.index:
    ic_dt = ic_df[ic_df.index < dt].tail(n)
    if len(ic_dt) < n:
        continue

    ic_cov_mat = np.cov(ic_dt.values.T).astype(float)
    inv_ic_cov_mat = np.linalg.inv(ic_cov_mat)
    weight = inv_ic_cov_mat.dot(ic_dt.mean().values)
    ic_weight_df.loc[dt] = weight / np.sum(weight)
    
# ---------------------------------------------------------------------------
# Ledoit-Wolf shrunk covariance

n = 120
ic_weight_shrink_df = pd.DataFrame(index=ic_df.index, columns=ic_df.columns)
lw = LedoitWolf()
for dt in ic_df.index:
    ic_dt = ic_df[ic_df.index < dt].tail(n)
    if len(ic_dt) < n:
        continue
    ic_cov_mat = lw.fit(ic_dt.values).covariance_
    inv_ic_cov_mat = np.linalg.inv(ic_cov_mat)
    weight = inv_ic_cov_mat.dot(ic_dt.mean().values)
    ic_weight_shrink_df.loc[dt] = weight / np.sum(weight)
In [16]:
ic_weight_df = ic_weight_df.apply(winsorize_series)
ic_weight_shrink_df = ic_weight_shrink_df.apply(winsorize_series)
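A quick standalone illustration of what the winsorization step does (winsorize_series restated with clip so the snippet runs on its own; the input series is made up):

```python
import pandas as pd

# clip-based restatement of winsorize_series from earlier in the notebook
def winsorize_series(se):
    q = se.quantile([0.025, 0.975])
    return se.clip(lower=q.iloc[0], upper=q.iloc[1])

se = pd.Series(list(range(100)) + [10_000])  # one extreme outlier
w = winsorize_series(se)
print(se.max(), '->', w.max())  # the outlier is pulled back to the 97.5% quantile
```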
In [20]:
ic_weight_shrink_df.tail()
Out[20]:
factor avg_amount_20 avg_turn_20 pb_lf_0 pe_ttm_0 ps_ttm_0 volatility_10_0
date
2017-09-18 0.078866 0.078488 -0.109825 0.449737 0.547471 -0.044737
2017-09-19 0.086138 0.072087 -0.118045 0.456017 0.541968 -0.038166
2017-09-20 0.089628 0.068889 -0.122014 0.462565 0.534255 -0.033324
2017-09-21 0.090283 0.068133 -0.123906 0.471896 0.527999 -0.034405
2017-09-22 0.092277 0.067538 -0.133570 0.487229 0.519462 -0.032936
In [17]:
# plot the factor weights

ic_weight_df.plot(figsize=(12,6), title='Factor weight using sample covariance')
ic_weight_shrink_df.plot(figsize=(12,6), title='Factor weight using shrink covariance')
Out[17]:
<matplotlib.axes._subplots.AxesSubplot at 0x7f4a8ac69eb8>

Next steps: the following section will take the factors and weight-calculation method above, rebalance at 5-, 10-, and 20-day periods, and run a 2010-2017 backtest to examine the Alpha predictive power of these factors.