Drawbacks of the Traditional Moving Average (MA)

The moving average (MA) is one of the most widely used trend-following indicators in technical analysis; to some extent it captures the direction in which a stock price or index is moving. The more days used in the MA calculation, the smoother the line, but the more severe the lag it introduces. As a result, MA-based trend following is prone to "tracking loosely" or even "losing the trend" altogether. Smoothness and lag are an unavoidable trade-off in the MA indicator, which motivates the search for tools and methods that resolve this conflict.

Construction of the Low-lag Trendline (LLT)

The EMA is a moving-average indicator similar to the MA; in essence it assigns larger weights to prices closer to the calculation date. In signal-processing terms, the EMA calculation corresponds exactly to a first-order low-pass filter, which effectively attenuates the high-frequency components of a signal. Our analysis suggests that better filtering requires a filter of suitable order: the first-order filter above performs relatively poorly, with too long a transition band between passband and stopband; the higher the order, the faster the transfer function decays near the cutoff frequency, but the passband also becomes less flat, i.e. signals close to the cutoff frequency are slightly amplified. As a compromise we choose a second-order filter, on which the LLT low-lag trendline is based; it preserves the low-frequency part of the signal well while lagging far less than the MA and EMA.
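For reference (a standard identity, not reproduced from the original report), the EMA recursion and the first-order low-pass transfer function it corresponds to are

EMA_t = α·price_t + (1 − α)·EMA_{t−1},    H(z) = α / (1 − (1 − α)·z⁻¹)

The single pole at z = 1 − α is what produces both the smoothing and the lag; the LLT constructed below replaces this first-order structure with a second-order one.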

The LLT Trendline Enables Trading-Style Trend Timing

We apply LLT trend timing to daily data of market indices such as the CSI 300, the SSE Composite Index and the SZSE Component Index, using the tangent method to judge direction, and obtain a favourable risk-return profile. Compared with MA-based timing, the LLT model trades on a shorter cycle and is more stable. The tangent method does have one issue: near trend turning points the tangent slope oscillates around zero, producing many timing calls with a reduced hit rate. This effectively embeds a stop-loss mechanism in the timing model, which is why we call this class of methods trading-style timing. For the LLT indicator, once a trend is established a position can stay profitable for a relatively long time, whereas the frequent whipsaw trades near turning points are mostly held very briefly. So although the hit rate of trading-style timing is relatively low, the proportion of time spent in correct positions is usually high, and it is this part that contributes most of the profit.

LLT Trend Timing Can Be Applied to Trading Stocks, ETFs, Futures and Other Instruments

Given the effectiveness of LLT for trend following, we believe LLT trend timing can be applied to trading stocks, ETFs, futures and other financial instruments. In this report we empirically evaluate LLT in ETF trend trading and obtain a favourable risk-return profile.

1. The Traditional Moving-Average System

Following the market trend is a simple and effective way to invest. When the market is in an uptrend, investors can buy and hold; when it turns into a downtrend, they can go short or stay in cash.

The simplest way to follow a trend is the moving average (MA) line, computed as:

MA_T(n) = (price_T + price_{T−1} + … + price_{T−n+1}) / n

where price is usually the closing price and MA(n) is the n-day moving average on day T. The larger n is, the smoother the trend line. The MA indicator describes the trend of an index or stock price well, but its biggest problem is lag.

The chart below shows the index's daily bars together with the MA system, where the moving-average lines are the 5-day, 10-day, 30-day and 60-day averages. As the window length n (the denominator in the formula above) increases, the local fluctuation of the MA shrinks markedly (i.e. the line becomes smoother), but the lag with which it follows the trend grows correspondingly.

In [1]:
import dai
# The new AIStudio development environment no longer provides this import by default;
# import the required module manually by adding the following line in this or an earlier cell:
from bigdatasource.api import DataSource


df=dai.query("""
    SELECT *
    FROM cn_stock_index_bar1d
    WHERE date > '2020-01-01';
""").df()

# df1=df[df['name']=='科创50指数']
df=df[df['instrument']=='000300.SH']
print(df)
#df.instrument = df.instrument.str.replace("SHI", "SH")
#ds = DataSource.write_df(df)
In [2]:
import bigcharts
from bigcharts import opts
data=df
bigcharts.Chart(
    data=data[-100:],
    type_="kline"
).render()

Therefore, when the MA indicator is used for trend following, it tends to "track loosely" or even "lose the trend" altogether. The low-lag trendline (LLT, Low-lag Trendline) indicator presented in this report uses filtering methods from signal-processing theory to overcome these drawbacks of the MA and achieve low-lag trend following.

Construction of the Low-lag Trendline LLT

In [3]:
import matplotlib.pyplot as plt
import pandas as pd
# Matplotlib font configuration
plt.rcParams['font.sans-serif'] = ['SimHei']  # render Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False  # render the minus sign correctly
price_df=df
for i in [5,10,30,60]:
    price_df['MA_{}'.format(i)] = price_df['close'].rolling(i).mean()
price_df[['close','MA_5','MA_10','MA_30',"MA_60"]].plot(figsize=(18,8),title='Traditional MA indicators')
plt.show()
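The recursion that cal_LLT below implements (a second-order low-pass filter of the price series, initialised with the first two raw prices) is

LLT_t = (α − α²/4)·price_t + (α²/2)·price_{t−1} − (α − 3α²/4)·price_{t−2} + 2(1 − α)·LLT_{t−1} − (1 − α)²·LLT_{t−2},  with LLT_0 = price_0 and LLT_1 = price_1.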
In [4]:
import pandas as pd
def cal_LLT(price: pd.Series, alpha: float):
    """Low-lag trendline (LLT): a second-order low-pass filter of a price series with smoothing parameter alpha."""
    LLT = []
    price_value = price.values
    # the first two values are initialised with the raw prices
    LLT.append(price_value[0])
    LLT.append(price_value[1])

    # apply the second-order recursion (see the formula above) from the third bar onwards
    for i, e in enumerate(price_value):
        if i > 1:
            v = (alpha - alpha**2 / 4) * e + (alpha**2 / 2) * price_value[i - 1] - (
                alpha - 3 * (alpha**2) / 4) * price_value[i - 2] + 2 * (
                    1 - alpha) * LLT[i - 1] - (1 - alpha)**2 * LLT[i - 2]
            LLT.append(v)

    return LLT
In [5]:
# Compute the EMA (alpha = 0.05, the same smoothing parameter as the LLT below)
price_df['EMA'] = price_df['close'].ewm(alpha=0.05,adjust=False).mean()
# Plot the comparison of trend lines
plt.rcParams['font.family']='serif'
price_df['MA30'] = price_df['close'].rolling(30).mean()
price_df['LLT'] = cal_LLT(price_df['close'],0.05)
price_df[['close','MA30','LLT','EMA']].plot(figsize=(18,8),title='Comparison of various trend lines')
plt.show()

Comparing the traditional MA indicator, the EMA indicator and the low-lag trendline LLT in the chart above, LLT shows noticeably sharper turning points and lower lag than the other trend lines.

For the low-lag trendline LLT, whatever the value of 𝛼, the delay near zero frequency is close to zero, and as the frequency rises towards the cutoff frequency its delay remains below that of the MA and EMA indicators. LLT still has the property that "the smaller 𝛼 is, the larger the delay and the better the smoothness". Below, LLT is computed for 𝛼 = 0.01, 0.02, 0.03, …, 0.10.

In [6]:
price_df=df[-300:]
import numpy as np
data=pd.DataFrame()
data['close']=price_df['close']

data['EMA']=price_df['EMA']
for i in np.linspace(0.01,0.1,10):
    data['alpha_{}'.format(i)]=cal_LLT(data['close'],i)
data.plot(figsize=(18,8))
plt.show()
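The low-lag claim can also be checked directly from the filter's transfer function. The sketch below is not part of the original notebook and assumes scipy is available; it reads the numerator and denominator coefficients off the LLT recursion given earlier and prints the group delay near zero frequency.

import numpy as np
from scipy import signal

def llt_coefficients(alpha: float):
    # numerator (b) and denominator (a) of H(z), read off the LLT difference equation above
    b = [alpha - alpha**2 / 4, alpha**2 / 2, -(alpha - 3 * alpha**2 / 4)]
    a = [1.0, -2 * (1 - alpha), (1 - alpha)**2]
    return b, a

for alpha in (0.03, 0.05, 0.10):
    b, a = llt_coefficients(alpha)
    w, gd = signal.group_delay((b, a), w=512)  # group delay in samples (trading days) per frequency
    print('alpha=%.2f: group delay near zero frequency = %.2f days' % (alpha, gd[1]))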
In [7]:
import bigcharts
from bigcharts import opts
data=df
bigcharts.Chart(
    data=data,
    type_="kline"
).render()

Trading-Style Timing Based on the LLT Trendline

Since 𝛼 is the only parameter of the LLT trendline, it deserves careful study. The 𝛼 parameter can be expressed through the number of trading days d of historical prices entering an EMA-style calculation, via 𝛼 = 2/(d + 1); for example, d = 20 gives 𝛼 = 2/21 ≈ 0.095, the column labelled LLT(0.095) below. We therefore take d from 20 to 90 in steps of 10 trading days and compute the cumulative return of long/short two-way trading for each value (transaction costs not considered); the backtest instrument (the CSI 300 index, 000300.SH, loaded above) and the backtest period are unchanged.

In [12]:
# Map the look-back window d (in trading days) to the smoothing parameter alpha = 2/(d+1)
alpha_all = [2/(d+1) for d in range(20,91,10)]

for a in alpha_all:
    price_df['LLT(%0.5s)'%a] = cal_LLT(price_df['close'],a)
print(price_df)

Tangent method: by differencing the trendline we obtain, at the end of each trading day, the slope k of the LLT trendline at that point. When k > 0 we are bullish on the market; when k < 0 we are bearish; when k = 0 we keep the previous call.

In [16]:
# First difference of each LLT trendline; shift(1) so that today's position uses
# yesterday's slope (avoiding look-ahead). The differencing window also affects returns.
diff_llt = price_df[['LLT(%0.5s)'%(2/(d+1)) for d in range(20,91,10)]].diff().shift(1)

# 1 = long, -1 = short (days with zero slope are treated as flat here)
cond = ((diff_llt>0)*1+(diff_llt<0)*-1)
# daily index returns
ret_series = price_df['close'].pct_change()
ret_shape = np.broadcast_to(np.expand_dims(ret_series.values,1),diff_llt.shape)
# strategy returns
strategy_ret = cond*ret_shape
# cumulative net value
strategy_cum = (1+strategy_ret).cumprod()
print(strategy_ret)
In [17]:
# Open and close positions with the LLT tangent method
strategy_cum.plot(figsize=(18,8),title='LLT adopts tangent method to open and close positions')
plt.show()
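As a quick, illustrative check of the "trading-style timing" behaviour described in the summary (many short whipsaw trades near turning points, longer profitable holds once a trend is established), the daily hit rate per parameter can be computed from the strategy returns above. This sketch is not part of the original notebook and simply reuses strategy_ret from the cells above:

# Directional hit rate: share of non-zero-return days on which the position made money
sr = strategy_ret.dropna()
hit_rate = (sr > 0).sum() / (sr != 0).sum()
print(hit_rate)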
In [22]:
# Install third-party packages (quantstats is imported in the next cell)
!pip install -i https://pypi.tuna.tsinghua.edu.cn/simple quantstats akshare
In [23]:
# Performance report for the LLT timing strategy (d=20, i.e. alpha = 2/21 ≈ 0.095)
import quantstats as qs
strategy_ret['date']=pd.to_datetime(price_df['date'])
strategy_ret.index=strategy_ret['date']
qs.reports.full(strategy_ret['LLT(0.095)'])

Performance Metrics


Worst 5 Drawdowns

     Start        Valley       End          Days   Max Drawdown (%)   99% Max Drawdown (%)
1    2023-01-31   2023-07-27   2023-08-04   186    -16.120343         -15.845856
2    2022-08-04   2022-09-15   2022-10-28    86     -8.627736          -7.760090
3    2022-11-16   2022-11-29   2023-01-13    59     -6.412622          -6.300414
4    2022-05-19   2022-05-30   2022-06-14    27     -5.550519          -4.891654
5    2022-11-01   2022-11-03   2022-11-14    14     -5.508348          -4.733627

Strategy Visualization

(The plotting part of qs.reports.full aborts at this point: the installed seaborn version still sets a pandas option that newer pandas releases have removed, so the KDE overlay in the returns histogram raises the error below; the tables above are produced before the failure.)
OptionError: "No such keys(s): 'mode.use_inf_as_null'"
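If only the tabular output is needed, one possible workaround is to call the metrics-only entry point, which skips the matplotlib/seaborn plotting entirely (a sketch, assuming the installed quantstats version exposes reports.metrics):

# tables only, no plots, so the seaborn/pandas incompatibility above is not triggered
qs.reports.metrics(strategy_ret['LLT(0.095)'], mode='full')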
In [ ]:
# Analysis of the index itself (buy and hold, no timing)
price_df.index=pd.to_datetime(price_df['date'])
qs.reports.full(price_df['close'].pct_change())

Performance Metrics


Worst 5 Drawdowns

     Start        Valley       End          Days   Max Drawdown (%)   99% Max Drawdown (%)
1    2022-07-05   2022-10-31   2023-08-03   395    -21.959970         -19.236647
2    2022-05-23   2022-05-24   2022-05-30     8     -2.904790          -2.315612
3    2022-06-29   2022-06-29   2022-07-01     3     -1.540126          -0.529977
4    2022-06-21   2022-06-22   2022-06-22     2     -1.381158          -0.112393
5    2022-06-13   2022-06-13   2022-06-14     2     -1.171054          -0.393542

Strategy Visualization

(The plotting step aborts with the same OptionError: "No such keys(s): 'mode.use_inf_as_null'" as in the previous report.)
In [24]:
plt.figure(figsize=(10,6))
plt.title('Cumulative return of LLT trend timing (tangent method) for different d parameters')
plt.bar(x=['d=%s'%x for x in range(20,91,10)],height=(strategy_cum.iloc[-1]-1).values)
plt.show()
In [26]:
# "Slope" proxy: ratio of the window's later mean to its earlier mean
# (a value above 1 means the LLT trendline is rising over the window)
def cal_slope(arr):
    return np.mean(arr[1:])/np.mean(arr[:-1])

# Apply the slope proxy over a rolling 22-day window of each LLT trendline to open/close positions
# (note: unlike the tangent-method cell above, this signal is not shifted by one day)
slope_df = price_df[['LLT(%0.5s)' % (2 / (d + 1)) for d in range(20, 91, 10)
                    ]].rolling(22).apply(
                        cal_slope, raw=True)

cond = (slope_df > 1) * 1 + (slope_df < 1) * -1
strategy_ret_a = cond * ret_shape
strategy_cum_a = (1 + strategy_ret_a).cumprod()
In [27]:
strategy_cum_a.plot(figsize=(18,8),title='LLT slope-based entries and exits')
plt.show()
In [28]:
plt.figure(figsize=(10,6))
plt.title('Cumulative return of LLT slope-based timing for different d parameters')
plt.bar(x=['d=%s'%x for x in range(20,91,10)],height=(strategy_cum_a.iloc[-1]-1).values)
plt.show()