BigQuant User Documentation

DAI SQL FAQ


How to implement a custom macro function with a window

The syntax for creating macro functions can be found under create macro.

The rolling-window functions provided by DAI, m_aggregate_func (e.g. m_avg, m_median, etc.), follow this pattern:

create macro m_aggregate_func(args, win_sz, pb:=instrument, ob:=date) as
    aggregate_func(args) over (partition by pb order by ob rows win_sz-1 preceding)

Note that if args consists of multiple arguments, they must be declared separately when defining the macro. The same applies to the default parameters pb and ob if there are several of them. In other words, each identifier can carry only one argument.
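
Following this pattern, here is a minimal sketch of a user-defined rolling macro with two data arguments (the name m_my_corr is hypothetical; corr is the underlying aggregate). Note that the two data inputs are declared as separate parameters x and y:

-- Rolling correlation of two columns over a win_sz-row window,
-- computed per instrument in date order.
create macro m_my_corr(x, y, win_sz, pb:=instrument, ob:=date) as
    corr(x, y) over (partition by pb order by ob rows win_sz-1 preceding);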

Similarly, the cross-sectional (per-date) window functions c_rank and c_avg can be implemented as:

create macro c_rank_asc(arg, pb:=date) as
    rank() over (partition by pb order by arg);

create macro c_rank_desc(arg, pb:=date) as
    rank() over (partition by pb order by arg desc);

-- Keywords cannot be passed as macro arguments; asc/desc are keywords
-- create macro c_rank(arg, pb:=date, order_type:=asc) as
--     rank() over (partition by pb order by arg order_type);

-- workaround
create macro c_rank(arg, pb:=date, ascending:=true) as
    if (ascending, rank() over (partition by pb order by arg), 
                   rank() over (partition by pb order by arg desc));

create macro c_avg(arg, pb:=date) as
    avg(arg) over (partition by pb);
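
A minimal usage sketch of the macros above on cn_stock_bar1d (assuming the close column used elsewhere on this page):

select
    instrument,
    date,
    c_rank(close, ascending:=false) as close_rank_desc,  -- cross-sectional rank per date, descending
    c_avg(close) as close_cs_avg                          -- cross-sectional mean per date
from cn_stock_bar1d
where date >= '2023-01-01' and date < '2023-02-01'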

Passing multiple partition by / order by items to window-function macros

Sometimes we need to partition by several columns, for example computing the average closing price of all stocks in the same industry for each day:

select
    instrument,
    date,
    close,
    c_avg(close, pb:='date, industry_level1_code') as avg1,
    -- avg1 is equivalent to avg2
    avg(close) over (partition by date, industry_level1_code) as avg2,
    -- some factor: f(close, avg)
from cn_stock_bar1d
join cn_stock_industry_component  -- industry_level1_code
using(date, instrument)
where date >= '2023-01-01' and date < '2024-01-01' and industry = 'sw2021'

You can pass a comma-separated string (enclosed in single quotes) to the pb parameter, e.g. pb:='A, B, C, D'; it is automatically parsed into partition by A, B, C, D. If a column name contains spaces, e.g. pb:='date, industry level name', it is equivalent to partition by date, "industry level name", i.e. only the whitespace around each item is stripped. order by accepts a similar string, but since ordering can be ascending or descending, a leading - marks a descending item; otherwise it is ascending. For example, ob:='A,-B,C' means order by A asc, B desc, C asc (asc can be omitted, since ascending is the default).

import numpy as np
import pandas as pd

df = pd.DataFrame({'A': ['a', 'b', 'c', 'a', 'b', 'c', 'a', 'b', 'c'],
                   'B': [1, 1, 1, np.nan, 2, 2, 3, 3, 3],
                   'C': [1, 2, 3, 4, 5, 6, 7, 8, 9],
                   'D': ['X', 'X', 'X', 'Y', 'Y', 'Y', 'Y', 'X', 'X']})

df2 = pd.DataFrame({'A': ['a', 'b', 'c'],
                    'E': ['apple', 'banana', 'coconut']})

select *,
    rank('A, -B') as x, -- equivalent to rAB
    rank() over (order by A, B desc) as rAB,
    rank_by(D, '-A, C') as y, -- equivalent to rDAC
    rank() over (partition by D order by A desc, C) as rDAC
from df
join df2 using (A)

-- output
   A    B  C  D        E    x  rAB    y  rDAC
0  c  1.0  3  X  coconut  8.0  8.0  1.0   1.0
1  c  3.0  9  X  coconut  6.0  6.0  2.0   2.0
2  b  1.0  2  X   banana  5.0  5.0  3.0   3.0
3  b  3.0  8  X   banana  3.0  3.0  4.0   4.0
4  a  1.0  1  X    apple  2.0  2.0  5.0   5.0
5  c  2.0  6  Y  coconut  7.0  7.0  1.0   1.0
6  b  2.0  5  Y   banana  4.0  4.0  2.0   2.0
7  a  NaN  4  Y    apple  NaN  NaN  3.0   3.0
8  a  3.0  7  Y    apple  1.0  1.0  4.0   4.0

When there is ambiguity (for example when the query contains a join), you can prefix the column name with the table name to disambiguate. For instance, if df2 also had a column C, the example above could be written as rank_by(D, '-A, df.C').
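
For example, a sketch assuming df2 also defined a column C:

select
    A, df.C, D,
    -- df.C inside the string picks df's column C rather than df2's
    rank_by(D, '-A, df.C') as y
from df
join df2 using (A)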

select *,
    m_max(B, 2, pb:=D, ob:='C') as max_asc,
    m_max(B, 2, pb:=D, ob:='-C') as max_desc,
from df

-- output
   A    B  C  D  max_asc  max_desc
0  c  3.0  9  X      3.0       NaN
1  b  3.0  8  X      3.0       3.0
2  c  1.0  3  X      1.0       3.0
3  b  1.0  2  X      1.0       1.0
4  a  1.0  1  X      NaN       1.0
5  a  3.0  7  Y      3.0       NaN
6  c  2.0  6  Y      2.0       3.0
7  b  2.0  5  Y      NaN       2.0
8  a  NaN  4  Y      NaN       NaN

Using c_neutralize for industry and market-cap neutralization

  1. Why are the results almost all NaN?

    Answer: In c_neutralize(y, industry_level1_code, market_cap), the factor values y form the right-hand-side vector; the dummy matrix built from industry_level1_code, the log of market cap, and a constant column of ones (for the intercept) together form X, and a linear regression is fitted. If any element of y or X is NaN, the entire solution of the linear system comes back as NaN. So NaN values must be filtered out or filled before the computation; y in particular is likely to contain NaN, e.g. y = m_avg(close, 5). See the SQL sketch after this list.

  2. The algorithm used by c_neutralize:

    import time

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def process_factor(factor):
        # MAD winsorization (3 * 1.4826 * MAD around the median), then z-score
        median = np.median(factor)
        mad = np.median(abs(factor - median))
        mad *= 3*1.4826
        clipped = factor.clip(median - mad, median + mad)
        # pandas std() has ddof=1, while numpy std() has ddof=0
        return (clipped - clipped.mean()) / clipped.std(ddof=1)

    def sm_resid(data):
        # X = industry dummies + log market cap (+ intercept); y = processed factor
        X = pd.get_dummies(data['industry_level1_code']).astype('float64')
        # X = X.reindex(columns=dummy_cols)
        X['log_marketcap'] = np.log(data['total_market_cap'])
        y = process_factor(data['pb'])
        X = sm.add_constant(X)  # intercept term
        model = sm.OLS(y, X)
        results = model.fit(method='pinv')
        return pd.DataFrame(results.resid, index=data.index)

    def np_resid(data):
        # Same regression solved with numpy least squares
        X = pd.get_dummies(data['industry_level1_code']).astype('float64')
        # X = X.reindex(columns=dummy_cols)
        X['log_marketcap'] = np.log(data['total_market_cap'])
        y = process_factor(data['pb'])
        beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
        y_pred = X.dot(beta)
        return pd.DataFrame(y - y_pred, index=data.index)

    t1 = time.time()
    df['sm_resid'] = df.groupby('date').apply(sm_resid).reset_index(level=0, drop=True)
    print('sm_resid elapsed:', time.time() - t1)

    For a usage example and comparison, see: https://bigquant.com/codeshare/5739e696-fd64-480c-95ec-793ea9ff889c

  3. If you want to preprocess the factor yourself and then only compute the residuals, use the c_neutralize_resid(y, industry_level1_code, log_marketcap) function (with y = processed_factor), as in the sketch below.
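
    A minimal SQL sketch tying points 1 and 3 together. The table name stock_data is hypothetical and stands in for whatever join provides close, industry_level1_code and total_market_cap; ln() is assumed here for the natural log. Depending on how missing values are represented (NULL vs. NaN), the filter may need isnan() instead of is not null.

    with t as (
        select date, instrument, industry_level1_code, total_market_cap,
            m_avg(close, 5) as y    -- y is missing for the first rows of each instrument
        from stock_data             -- hypothetical joined table with all required columns
    )
    select date, instrument,
        -- point 1: drop rows with missing inputs first, otherwise the whole
        -- cross-section comes back as NaN
        c_neutralize(y, industry_level1_code, total_market_cap) as y_neutralized,
        -- point 3: residual-only variant; y here stands for an already-processed factor
        c_neutralize_resid(y, industry_level1_code, ln(total_market_cap)) as y_resid
    from t
    where y is not null
        and industry_level1_code is not null
        and total_market_cap is not null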

Filling/replacing missing values

When the partitioned data contains NA values, fill each one with the previous non-NA value:

select 
    instrument, date, 
    last(columns(* exclude (date, instrument)) ignore nulls) 
        over (partition by instrument order by date rows between 
              unbounded preceding and current row) as 'columns(*)' 
from cn_stock_bar1d
where date >= '2023-01-01' 
order by instrument, date

columns(* exclude (date, instrument)) expands, one column at a time, into every column other than date and instrument; it is best to exclude data that does not need processing. as 'columns(*)' stores the processed data under the original column names, while as 'columns(*)_suffix_name' stores the processed data under the original column names with the suffix _suffix_name appended.

(Screenshots: data before filling vs. after filling.)
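
If you want to keep the original values and store the filled results in new columns instead, use the suffix form of the alias; a minimal sketch (the suffix _ffill is arbitrary):

select 
    *,
    last(columns(* exclude (date, instrument)) ignore nulls) 
        over (partition by instrument order by date rows between 
              unbounded preceding and current row) as 'columns(*)_ffill' 
from cn_stock_bar1d
where date >= '2023-01-01' 
order by instrument, date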

Similarly, if you want to replace NA values in bulk with some other value, e.g. 0, swap the last function for ifnull:

select 
    instrument, date, 
    ifnull(columns(* exclude (date, instrument, name)), 0) as 'columns(*)' 
from cn_stock_bar1d
where date >= '2023-01-01' 
order by instrument, date

Fill null open and close values with the median across all stocks on that day:

select 
    instrument, date, 
    ifnull(columns(['close', 'open']), 
           c_median(columns(['close', 'open']))) as 'columns(*)' 
from cn_stock_bar1d
where date >= '2023-01-01'
order by instrument, date
