
Multi-Factor Time Series Prediction with a Convolutional Neural Network

The strategy below implements the model from the linked article: a convolutional neural network is used to analyse the relationship between the five fields ['close', 'open', 'high', 'low', 'volume'] and the next day's price movement.

In [208]:
import numpy as np   # used throughout for array math
import pandas as pd
import matplotlib.pylab as plt

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.recurrent import LSTM, GRU
from keras.layers import Convolution1D, MaxPooling1D, AtrousConvolution1D, RepeatVector
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau, CSVLogger
from keras.layers.wrappers import Bidirectional
from keras import regularizers
from keras.layers.normalization import BatchNormalization
from keras.layers.advanced_activations import *
from keras.optimizers import RMSprop, Adam, SGD, Nadam
from keras.initializers import *

from sklearn.model_selection import train_test_split

import seaborn as sns
sns.despine()
<matplotlib.figure.Figure at 0x7f2817193f98>

When analysing time-series data, we must not split the training and test sets by random shuffling.

Although most machine-learning algorithms are trained on a randomly split training set, that kind of split is only valid when every sample is drawn from one fixed probability distribution, and financial time series clearly do not have that property. Moreover, each of our samples is built from a rolling window of historical data, so under a random split much of the test-set information is already hidden inside the training set. The original article used a random split and reported roughly 60% accuracy, yet the strategy performed far worse in an actual backtest. This point deserves attention.

If you are curious, comment out the function below and try it yourself: with a random split the test accuracy can even reach 70%, but that figure is not genuinely usable.
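The leakage is easy to quantify. A minimal sketch (an illustration added here, not from the article): build one 60-day window per start date, split the start dates randomly, and measure how much of each test window already appears in some training window.

import numpy as np
from sklearn.model_selection import train_test_split

n_days, WINDOW = 1000, 60
starts = np.arange(n_days - WINDOW)   # one window per start day
train_s, test_s = train_test_split(starts, test_size=0.1, random_state=0)

# every day index touched by any training window
train_days = set()
for s in train_s:
    train_days.update(range(s, s + WINDOW))

# average fraction of each test window already seen during training
overlap = np.mean([len(train_days & set(range(s, s + WINDOW))) / WINDOW
                   for s in test_s])
print(overlap)   # close to 1.0: nearly every test day also sits in a training window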

In [313]:
# split the training and test sets chronologically
def create_Xt_Yt(X, Y, ratio=0.9):
    p = int(len(X) * ratio)
    X_train = X[0:p]
    X_test = X[p:]
    Y_train = Y[0:p]
    Y_test = Y[p:]
    return X_train, X_test, Y_train, Y_test
In [320]:
# training period (start and end dates)
start_date='2005-01-01'
end_date='2016-02-01'
#inst = D.instruments(start_date, end_date, market='CN_STOCK_A')
#print(inst)
instruments = ['000300.SHA']
features = ['close', 'open', 'high', 'low', 'volume']
hist = D.history_data(instruments, start_date, end_date, fields=features)

print(hist.head())

plt.plot(hist['date'], hist['close'])
plt.show()

# convert the close column to a plain time-series list
closep = hist['close'].tolist()

'''
data = []
for feature in features:
    data.append(hist[feature].tolist())
'''

WINDOW = 60                # days of history per sample
EMB_SIZE = len(features)   # number of input channels (the 5 price/volume fields)
STEP = 1                   # stride between consecutive windows
FORECAST = 1               # forecast horizon in days

# Straightforward way for creating time windows
X, Y = [], []
for i in range(0, len(hist), STEP): 
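    # the loop deliberately runs past the end of the data; the IndexError caught
    # below terminates window construction (hence the "list index out of range"
    # message in this cell's output)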
    try:
        o = hist['open'].tolist()[i:i+WINDOW]
        h = hist['high'].tolist()[i:i+WINDOW]
        l = hist['low'].tolist()[i:i+WINDOW]
        c = hist['close'].tolist()[i:i+WINDOW]
        v = hist['volume'].tolist()[i:i+WINDOW]

        o = (np.array(o) - np.mean(o)) / np.std(o)
        h = (np.array(h) - np.mean(h)) / np.std(h)
        l = (np.array(l) - np.mean(l)) / np.std(l)
        c = (np.array(c) - np.mean(c)) / np.std(c)
        v = (np.array(v) - np.mean(v)) / np.std(v)
        
        '''
        x_i = []
        # normalization for one time window
        for arr in data:
            o = arr[i:i+WINDOW]
            o = (np.array(o) - np.mean(o)) / np.std(o)
            x_i.append(o)
            print(x_i)
        x_i = np.array(x_i)
        print(x_i.shape)
        

        temp_i = closep[i:i+WINDOW]
        y_i = closep[i+WINDOW+FORECAST]  

        last_close = temp_i[-1]
        next_close = y_i
        '''

        x_i = closep[i:i+WINDOW]
        y_i = closep[i+WINDOW+FORECAST]  
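        # note: the window covers indices i..i+WINDOW-1, so with FORECAST=1 this
        # label is actually two trading days after the window's last close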

        last_close = x_i[-1]
        next_close = y_i
        
        # a 0% band: any rise labels the sample [1, 0], any fall [0, 1]
        if last_close * (1 + 0.00) < next_close:
            y_i = [1, 0]
        elif last_close * (1 - 0.00) > next_close:
            y_i = [0, 1]
        else:
            y_i = [0, 0]   # an unchanged close gets a dead label
            
        x_i = np.column_stack((o, h, l, c, v))


    except Exception as e:
        print(e)
        break

    X.append(x_i)
    Y.append(y_i)


X, Y = np.array(X), np.array(Y)
print(X.shape)



X_train, X_test, Y_train, Y_test = create_Xt_Yt(X, Y) # chronological train/test split
#X_train, X_test, Y_train, Y_test = train_test_split(X, Y) # random split (see the warning above)

#print(X_test)

#X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[2], EMB_SIZE))
#X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[2], EMB_SIZE))

X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], EMB_SIZE))
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], EMB_SIZE))

print(X_train.shape)
      volume       date        open       close        high         low  \
0  741286894 2005-01-04  994.768982  982.794006  994.768982  980.658020   
1  711910898 2005-01-05  981.577026  992.564026  997.322998  979.877014   
2  628802905 2005-01-06  993.330994  983.174011  993.788025  980.330017   
3  729869409 2005-01-07  983.044983  983.958008  995.710999  979.812012   
4  579169799 2005-01-10  983.760010  993.879028  993.958984  979.789001   

   instrument  
0  000300.SHA  
1  000300.SHA  
2  000300.SHA  
3  000300.SHA  
4  000300.SHA  
list index out of range
(2631, 60, 5)
(1973, 60, 5)
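Note that each window is z-scored independently, using only the mean and standard deviation of the 60 days inside that window, so no future information leaks into a sample. A toy check of the idea (added here, not in the original):

import numpy as np

w = np.array([10.0, 11.0, 12.0, 13.0, 14.0])   # one toy "window"
z = (w - np.mean(w)) / np.std(w)
print(np.mean(z), np.std(z))                    # ~0.0 and 1.0 within the window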
In [321]:
# set up model
model = Sequential()
model.add(Convolution1D(input_shape = (WINDOW, EMB_SIZE),
                        nb_filter=16,
                        filter_length=4,
                        border_mode='same'))
model.add(BatchNormalization())
model.add(LeakyReLU())
model.add(Dropout(0.5))

model.add(Convolution1D(nb_filter=8,
                        filter_length=4,
                        border_mode='same'))
model.add(BatchNormalization())
model.add(LeakyReLU())
model.add(Dropout(0.5))

model.add(Flatten())

model.add(Dense(64))
model.add(BatchNormalization())
model.add(LeakyReLU())


model.add(Dense(2))
model.add(Activation('softmax'))
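A quick way to verify the architecture is the standard Keras summary call (added here as a sanity check, not in the original):

model.summary()   # prints each layer's output shape and parameter count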
In [322]:
# prepare and train
opt = Nadam(lr=0.002)

reduce_lr = ReduceLROnPlateau(monitor='val_acc', factor=0.9, patience=30, min_lr=0.000001, verbose=1)
checkpointer = ModelCheckpoint(filepath="lolkek.hdf5", verbose=1, save_best_only=True)
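# note: ReduceLROnPlateau monitors val_acc, while ModelCheckpoint keeps its
# default monitor (val_loss), so the two callbacks track different metrics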

model.compile(optimizer=opt, 
              loss='categorical_crossentropy',
              metrics=['accuracy'])

history = model.fit(X_train, Y_train, 
          nb_epoch = 100, 
          batch_size = 128, 
          verbose=1, 
          validation_data=(X_test, Y_test),
          callbacks=[reduce_lr, checkpointer],
          shuffle=True)
Train on 1973 samples, validate on 658 samples
Epoch 1/100
1920/1973 [============================>.] - ETA: 0s - loss: 0.8641 - acc: 0.5333Epoch 00000: val_loss improved from inf to 0.69909, saving model to lolkek.hdf5
1973/1973 [==============================] - 4s - loss: 0.8622 - acc: 0.5317 - val_loss: 0.6991 - val_acc: 0.4666
Epoch 2/100
1792/1973 [==========================>...] - ETA: 0s - loss: 0.7680 - acc: 0.5352Epoch 00001: val_loss improved from 0.69909 to 0.69868, saving model to lolkek.hdf5
1973/1973 [==============================] - 0s - loss: 0.7642 - acc: 0.5383 - val_loss: 0.6987 - val_acc: 0.5243
Epoch 3/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.7511 - acc: 0.5397Epoch 00002: val_loss improved from 0.69868 to 0.69679, saving model to lolkek.hdf5
1973/1973 [==============================] - 0s - loss: 0.7480 - acc: 0.5388 - val_loss: 0.6968 - val_acc: 0.5152
Epoch 4/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.7181 - acc: 0.5511Epoch 00003: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.7223 - acc: 0.5428 - val_loss: 0.6970 - val_acc: 0.5289
Epoch 5/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.7077 - acc: 0.5511Epoch 00004: val_loss improved from 0.69679 to 0.69055, saving model to lolkek.hdf5
1973/1973 [==============================] - 0s - loss: 0.7080 - acc: 0.5509 - val_loss: 0.6905 - val_acc: 0.5456
Epoch 6/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6917 - acc: 0.5703Epoch 00005: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6895 - acc: 0.5682 - val_loss: 0.6934 - val_acc: 0.5547
Epoch 7/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6842 - acc: 0.5883Epoch 00006: val_loss improved from 0.69055 to 0.68867, saving model to lolkek.hdf5
1973/1973 [==============================] - 0s - loss: 0.6878 - acc: 0.5849 - val_loss: 0.6887 - val_acc: 0.5274
Epoch 8/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6802 - acc: 0.5685Epoch 00007: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6812 - acc: 0.5641 - val_loss: 0.6887 - val_acc: 0.5395
Epoch 9/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6873 - acc: 0.5733Epoch 00008: val_loss improved from 0.68867 to 0.68789, saving model to lolkek.hdf5
1973/1973 [==============================] - 0s - loss: 0.6853 - acc: 0.5783 - val_loss: 0.6879 - val_acc: 0.5334
Epoch 10/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6826 - acc: 0.5775Epoch 00009: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6783 - acc: 0.5824 - val_loss: 0.6882 - val_acc: 0.5517
Epoch 11/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6719 - acc: 0.5829Epoch 00010: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6748 - acc: 0.5808 - val_loss: 0.6888 - val_acc: 0.5562
Epoch 12/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6694 - acc: 0.5901Epoch 00011: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6682 - acc: 0.5895 - val_loss: 0.6992 - val_acc: 0.5319
Epoch 13/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6641 - acc: 0.6088Epoch 00012: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6673 - acc: 0.6057 - val_loss: 0.6905 - val_acc: 0.5426
Epoch 14/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6713 - acc: 0.5944Epoch 00013: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6717 - acc: 0.5945 - val_loss: 0.6915 - val_acc: 0.5426
Epoch 15/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6639 - acc: 0.6136Epoch 00014: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6599 - acc: 0.6153 - val_loss: 0.6989 - val_acc: 0.5426
Epoch 16/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6669 - acc: 0.5931Epoch 00015: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6668 - acc: 0.5966 - val_loss: 0.6970 - val_acc: 0.5517
Epoch 17/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6631 - acc: 0.6028Epoch 00016: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6694 - acc: 0.5945 - val_loss: 0.6926 - val_acc: 0.5395
Epoch 18/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6623 - acc: 0.5986Epoch 00017: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6605 - acc: 0.6036 - val_loss: 0.7033 - val_acc: 0.5441
Epoch 19/100
1792/1973 [==========================>...] - ETA: 0s - loss: 0.6622 - acc: 0.6016Epoch 00018: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6608 - acc: 0.6042 - val_loss: 0.6989 - val_acc: 0.5456
Epoch 20/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6560 - acc: 0.6118Epoch 00019: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6527 - acc: 0.6189 - val_loss: 0.7016 - val_acc: 0.5502
Epoch 21/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6545 - acc: 0.6208Epoch 00020: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6573 - acc: 0.6123 - val_loss: 0.6958 - val_acc: 0.5410
Epoch 22/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6549 - acc: 0.6124Epoch 00021: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6565 - acc: 0.6077 - val_loss: 0.6988 - val_acc: 0.5228
Epoch 23/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6587 - acc: 0.6052Epoch 00022: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6571 - acc: 0.6067 - val_loss: 0.6974 - val_acc: 0.5426
Epoch 24/100
1920/1973 [============================>.] - ETA: 0s - loss: 0.6464 - acc: 0.6208Epoch 00023: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6460 - acc: 0.6214 - val_loss: 0.7006 - val_acc: 0.5365
Epoch 25/100
1792/1973 [==========================>...] - ETA: 0s - loss: 0.6528 - acc: 0.6021Epoch 00024: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6541 - acc: 0.6036 - val_loss: 0.7003 - val_acc: 0.5198
Epoch 26/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6456 - acc: 0.6142Epoch 00025: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6458 - acc: 0.6102 - val_loss: 0.6996 - val_acc: 0.5502
Epoch 27/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6526 - acc: 0.6226Epoch 00026: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6520 - acc: 0.6229 - val_loss: 0.7028 - val_acc: 0.5547
Epoch 28/100
1792/1973 [==========================>...] - ETA: 0s - loss: 0.6533 - acc: 0.6122Epoch 00027: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6519 - acc: 0.6173 - val_loss: 0.7001 - val_acc: 0.5274
Epoch 29/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6435 - acc: 0.6358Epoch 00028: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6426 - acc: 0.6356 - val_loss: 0.7012 - val_acc: 0.5365
Epoch 30/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6513 - acc: 0.6172Epoch 00029: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6478 - acc: 0.6214 - val_loss: 0.7078 - val_acc: 0.5608
Epoch 31/100
1792/1973 [==========================>...] - ETA: 0s - loss: 0.6491 - acc: 0.6077Epoch 00030: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6465 - acc: 0.6113 - val_loss: 0.7038 - val_acc: 0.5486
Epoch 32/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6372 - acc: 0.6400Epoch 00031: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6398 - acc: 0.6366 - val_loss: 0.7082 - val_acc: 0.5456
Epoch 33/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6469 - acc: 0.6232Epoch 00032: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6467 - acc: 0.6249 - val_loss: 0.7070 - val_acc: 0.5578
Epoch 34/100
1792/1973 [==========================>...] - ETA: 0s - loss: 0.6436 - acc: 0.6150Epoch 00033: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6437 - acc: 0.6153 - val_loss: 0.7093 - val_acc: 0.5410
Epoch 35/100
1920/1973 [============================>.] - ETA: 0s - loss: 0.6423 - acc: 0.6281Epoch 00034: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6419 - acc: 0.6285 - val_loss: 0.7112 - val_acc: 0.5319
Epoch 36/100
1792/1973 [==========================>...] - ETA: 0s - loss: 0.6453 - acc: 0.6133Epoch 00035: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6430 - acc: 0.6158 - val_loss: 0.7145 - val_acc: 0.5486
Epoch 37/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6339 - acc: 0.6424Epoch 00036: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6386 - acc: 0.6371 - val_loss: 0.7118 - val_acc: 0.5274
Epoch 38/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6261 - acc: 0.6587Epoch 00037: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6285 - acc: 0.6513 - val_loss: 0.7195 - val_acc: 0.5410
Epoch 39/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6394 - acc: 0.6310Epoch 00038: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6336 - acc: 0.6386 - val_loss: 0.7318 - val_acc: 0.5441
Epoch 40/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6420 - acc: 0.6226Epoch 00039: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6424 - acc: 0.6194 - val_loss: 0.7200 - val_acc: 0.5365
Epoch 41/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6241 - acc: 0.6520Epoch 00040: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6284 - acc: 0.6498 - val_loss: 0.7107 - val_acc: 0.5228
Epoch 42/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6301 - acc: 0.6496Epoch 00041: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6318 - acc: 0.6462 - val_loss: 0.7127 - val_acc: 0.5532
Epoch 43/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6364 - acc: 0.6484Epoch 00042: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6361 - acc: 0.6432 - val_loss: 0.7142 - val_acc: 0.5456
Epoch 44/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6290 - acc: 0.6496Epoch 00043: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6268 - acc: 0.6543 - val_loss: 0.7132 - val_acc: 0.5486
Epoch 45/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6207 - acc: 0.6605Epoch 00044: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6229 - acc: 0.6594 - val_loss: 0.7321 - val_acc: 0.5578
Epoch 46/100
1792/1973 [==========================>...] - ETA: 0s - loss: 0.6222 - acc: 0.6490Epoch 00045: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6239 - acc: 0.6477 - val_loss: 0.7170 - val_acc: 0.5608
Epoch 47/100
1792/1973 [==========================>...] - ETA: 0s - loss: 0.6295 - acc: 0.6557Epoch 00046: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6296 - acc: 0.6548 - val_loss: 0.7245 - val_acc: 0.5578
Epoch 48/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6209 - acc: 0.6478Epoch 00047: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6260 - acc: 0.6381 - val_loss: 0.7170 - val_acc: 0.5395
Epoch 49/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6277 - acc: 0.6400Epoch 00048: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6334 - acc: 0.6285 - val_loss: 0.7197 - val_acc: 0.5578
Epoch 50/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6169 - acc: 0.6538Epoch 00049: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6167 - acc: 0.6543 - val_loss: 0.7173 - val_acc: 0.5365
Epoch 51/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6107 - acc: 0.6701Epoch 00050: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6119 - acc: 0.6675 - val_loss: 0.7264 - val_acc: 0.5562
Epoch 52/100
1920/1973 [============================>.] - ETA: 0s - loss: 0.6138 - acc: 0.6646Epoch 00051: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6126 - acc: 0.6660 - val_loss: 0.7178 - val_acc: 0.5471
Epoch 53/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6159 - acc: 0.6490Epoch 00052: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6149 - acc: 0.6498 - val_loss: 0.7200 - val_acc: 0.5395
Epoch 54/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6249 - acc: 0.6544Epoch 00053: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6202 - acc: 0.6619 - val_loss: 0.7465 - val_acc: 0.5790
Epoch 55/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6144 - acc: 0.6635Epoch 00054: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6152 - acc: 0.6650 - val_loss: 0.7267 - val_acc: 0.5578
Epoch 56/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6201 - acc: 0.6526Epoch 00055: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6199 - acc: 0.6523 - val_loss: 0.7302 - val_acc: 0.5699
Epoch 57/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6001 - acc: 0.6827Epoch 00056: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6012 - acc: 0.6858 - val_loss: 0.7248 - val_acc: 0.5395
Epoch 58/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6120 - acc: 0.6629Epoch 00057: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6152 - acc: 0.6619 - val_loss: 0.7255 - val_acc: 0.5486
Epoch 59/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6098 - acc: 0.6659Epoch 00058: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6143 - acc: 0.6660 - val_loss: 0.7240 - val_acc: 0.5471
Epoch 60/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6034 - acc: 0.6737Epoch 00059: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6072 - acc: 0.6670 - val_loss: 0.7232 - val_acc: 0.5486
Epoch 61/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6150 - acc: 0.6701Epoch 00060: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6175 - acc: 0.6675 - val_loss: 0.7227 - val_acc: 0.5593
Epoch 62/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5965 - acc: 0.6827Epoch 00061: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5976 - acc: 0.6832 - val_loss: 0.7275 - val_acc: 0.5562
Epoch 63/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6108 - acc: 0.6695Epoch 00062: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6068 - acc: 0.6751 - val_loss: 0.7298 - val_acc: 0.5623
Epoch 64/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6088 - acc: 0.6575Epoch 00063: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6058 - acc: 0.6594 - val_loss: 0.7373 - val_acc: 0.5562
Epoch 65/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6075 - acc: 0.6749Epoch 00064: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6110 - acc: 0.6680 - val_loss: 0.7198 - val_acc: 0.5517
Epoch 66/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6026 - acc: 0.6695Epoch 00065: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6027 - acc: 0.6716 - val_loss: 0.7264 - val_acc: 0.5714
Epoch 67/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6049 - acc: 0.6647Epoch 00066: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5970 - acc: 0.6721 - val_loss: 0.7267 - val_acc: 0.5562
Epoch 68/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5902 - acc: 0.6755Epoch 00067: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5930 - acc: 0.6761 - val_loss: 0.7357 - val_acc: 0.5350
Epoch 69/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6030 - acc: 0.6821Epoch 00068: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.6004 - acc: 0.6832 - val_loss: 0.7314 - val_acc: 0.5304
Epoch 70/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5878 - acc: 0.6845Epoch 00069: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5916 - acc: 0.6756 - val_loss: 0.7503 - val_acc: 0.5486
Epoch 71/100
1792/1973 [==========================>...] - ETA: 0s - loss: 0.5965 - acc: 0.6791Epoch 00070: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5927 - acc: 0.6863 - val_loss: 0.7297 - val_acc: 0.5532
Epoch 72/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.6013 - acc: 0.6779Epoch 00071: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5936 - acc: 0.6853 - val_loss: 0.7322 - val_acc: 0.5471
Epoch 73/100
1792/1973 [==========================>...] - ETA: 0s - loss: 0.5864 - acc: 0.6864Epoch 00072: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5854 - acc: 0.6878 - val_loss: 0.7413 - val_acc: 0.5714
Epoch 74/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5778 - acc: 0.6905Epoch 00073: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5776 - acc: 0.6913 - val_loss: 0.7364 - val_acc: 0.5653
Epoch 75/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5815 - acc: 0.7031Epoch 00074: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5820 - acc: 0.6984 - val_loss: 0.7571 - val_acc: 0.5684
Epoch 76/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5955 - acc: 0.6851Epoch 00075: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5908 - acc: 0.6908 - val_loss: 0.7489 - val_acc: 0.5562
Epoch 77/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5888 - acc: 0.6875Epoch 00076: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5829 - acc: 0.6934 - val_loss: 0.7419 - val_acc: 0.5623
Epoch 78/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5712 - acc: 0.7061Epoch 00077: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5719 - acc: 0.7035 - val_loss: 0.7446 - val_acc: 0.5486
Epoch 79/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5790 - acc: 0.6983Epoch 00078: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5792 - acc: 0.6949 - val_loss: 0.7481 - val_acc: 0.5456
Epoch 80/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5704 - acc: 0.6893Epoch 00079: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5710 - acc: 0.6868 - val_loss: 0.7492 - val_acc: 0.5547
Epoch 81/100
1792/1973 [==========================>...] - ETA: 0s - loss: 0.5765 - acc: 0.7065Epoch 00080: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5779 - acc: 0.7050 - val_loss: 0.7327 - val_acc: 0.5441
Epoch 82/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5643 - acc: 0.7061Epoch 00081: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5642 - acc: 0.7101 - val_loss: 0.7347 - val_acc: 0.5471
Epoch 83/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5699 - acc: 0.7091Epoch 00082: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5703 - acc: 0.7065 - val_loss: 0.7515 - val_acc: 0.5608
Epoch 84/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5679 - acc: 0.6947Epoch 00083: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5673 - acc: 0.6999 - val_loss: 0.7465 - val_acc: 0.5517
Epoch 85/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5570 - acc: 0.7091
Epoch 00084: reducing learning rate to 0.0018000000854954123.
Epoch 00084: val_loss did not improve
1973/1973 [==============================] - 1s - loss: 0.5579 - acc: 0.7101 - val_loss: 0.7404 - val_acc: 0.5456
Epoch 86/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5573 - acc: 0.7121Epoch 00085: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5593 - acc: 0.7101 - val_loss: 0.7420 - val_acc: 0.5441
Epoch 87/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5662 - acc: 0.6977Epoch 00086: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5670 - acc: 0.7005 - val_loss: 0.7447 - val_acc: 0.5517
Epoch 88/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5483 - acc: 0.7242Epoch 00087: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5470 - acc: 0.7223 - val_loss: 0.7504 - val_acc: 0.5486
Epoch 89/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5490 - acc: 0.7212Epoch 00088: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5499 - acc: 0.7187 - val_loss: 0.7638 - val_acc: 0.5562
Epoch 90/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5538 - acc: 0.7194Epoch 00089: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5545 - acc: 0.7223 - val_loss: 0.7521 - val_acc: 0.5653
Epoch 91/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5448 - acc: 0.7194Epoch 00090: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5561 - acc: 0.7141 - val_loss: 0.7514 - val_acc: 0.5623
Epoch 92/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5418 - acc: 0.7272Epoch 00091: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5450 - acc: 0.7258 - val_loss: 0.7436 - val_acc: 0.5669
Epoch 93/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5507 - acc: 0.7326Epoch 00092: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5480 - acc: 0.7334 - val_loss: 0.7405 - val_acc: 0.5653
Epoch 94/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5416 - acc: 0.7157Epoch 00093: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5433 - acc: 0.7167 - val_loss: 0.7649 - val_acc: 0.5562
Epoch 95/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5319 - acc: 0.7374Epoch 00094: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5361 - acc: 0.7329 - val_loss: 0.7786 - val_acc: 0.5471
Epoch 96/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5520 - acc: 0.7163Epoch 00095: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5573 - acc: 0.7111 - val_loss: 0.7599 - val_acc: 0.5684
Epoch 97/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5427 - acc: 0.7308Epoch 00096: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5407 - acc: 0.7309 - val_loss: 0.7529 - val_acc: 0.5638
Epoch 98/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5332 - acc: 0.7302Epoch 00097: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5397 - acc: 0.7238 - val_loss: 0.7492 - val_acc: 0.5669
Epoch 99/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5449 - acc: 0.7332Epoch 00098: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5459 - acc: 0.7283 - val_loss: 0.7507 - val_acc: 0.5638
Epoch 100/100
1664/1973 [========================>.....] - ETA: 0s - loss: 0.5328 - acc: 0.7200Epoch 00099: val_loss did not improve
1973/1973 [==============================] - 0s - loss: 0.5304 - acc: 0.7278 - val_loss: 0.7326 - val_acc: 0.5517
In [323]:
model.load_weights("lolkek.hdf5")
pred = model.predict(np.array(X_test))
#for prediction in pred:
#    print(np.argmax(prediction))
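With the best checkpoint reloaded, out-of-sample accuracy follows directly from the softmax outputs (a small check added here, not part of the original notebook):

y_true = np.argmax(Y_test, axis=1)
y_hat = np.argmax(pred, axis=1)
print((y_true == y_hat).mean())   # fraction of test windows classified correctly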

Loss versus number of training epochs

In [324]:
# loss plot
plt.figure()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()

Prediction accuracy versus number of epochs

If training accuracy keeps rising while test accuracy does not, the model is probably overfitting and its generalization is likely poor.

In [325]:
# accuracy plot
plt.figure()
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()
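Given the widening gap between the two curves above, a standard remedy is to stop training once val_loss stagnates. Keras ships an EarlyStopping callback for this; a sketch of how it could be added to the fit call above (not used in the original run):

from keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=10, verbose=1)
# then pass callbacks=[reduce_lr, checkpointer, early_stop] to model.fit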
In [306]:
# 1. Basic strategy parameters

# backtest start date
start_date_2 = '2016-02-06'
# backtest end date
end_date_2 = '2017-06-20'
# benchmark for the strategy, here the CSI 300 index
benchmark = '000300.INDX'
# security universe (a single-stock example, commented out)
# instruments = ['000030.SZA']
# starting capital
capital_base = 100000
In [307]:
# 2. Generate a prediction for every trading day in the backtest window
hist = D.history_data(instruments, '2015-11-4', end_date_2, fields=features)
#print(hist)

# Straightforward way for creating time windows
prediction = {}
for i in range(0, len(hist) - WINDOW, STEP): 
    try:
        o = hist['open'].tolist()[i:i+WINDOW]
        h = hist['high'].tolist()[i:i+WINDOW]
        l = hist['low'].tolist()[i:i+WINDOW]
        c = hist['close'].tolist()[i:i+WINDOW]
        v = hist['volume'].tolist()[i:i+WINDOW]
        date = hist['date'][i+WINDOW].date()

        o = (np.array(o) - np.mean(o)) / np.std(o)
        h = (np.array(h) - np.mean(h)) / np.std(h)
        l = (np.array(l) - np.mean(l)) / np.std(l)
        c = (np.array(c) - np.mean(c)) / np.std(c)
        v = (np.array(v) - np.mean(v)) / np.std(v)

        x_i = np.column_stack((o, h, l, c, v))


    except Exception as e:
        print(e)
        break

    X = np.array([x_i])
    pred = model.predict(X)
    prediction[date] = pred

#print(prediction)
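Note that the history above starts on 2015-11-04, well before the backtest start, so the first backtest day already has a full 60-day window behind it. Each key of prediction is the trading date immediately after its window (hist['date'][i+WINDOW]), which is exactly the date that handle_data below uses for its lookup.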
In [308]:
# 3. Strategy logic
# initialize the virtual account state; runs only on the first trading day
def initialize(context):
    # set the commission scheme
    context.set_commission(PerOrder(buy_cost=0.000, sell_cost=0.000, min_cost=5))

# trading logic; runs once per trading day
def handle_data(context, data):
    global prediction
    
    date = data.current_dt.date()

    # strategy code goes here
    for instrument in instruments:
        # convert the ticker string into the symbol object the BigQuant engine expects
        instrument = context.symbol(instrument)
        if not data.can_trade(instrument):
            break
        try:
            pred = prediction[date]
            print(pred)
        except Exception as e:
            # no stored prediction for this date; skip the day
            continue

        # reserved for filtering low-confidence signals; unused here (see the sketch after the backtest output)
        threshold = 0
        
        if np.argmax(pred[0]) == 0:
            # class 0 ([1, 0]): the model predicts a rise, go fully long
            order_target_percent(instrument, 1)

        elif np.argmax(pred[0]) == 1:
            # class 1 ([0, 1]): the model predicts a fall, exit the position
            order_target_percent(instrument, 0)

# 4. Run the backtest

# backtest API: https://bigquant.com/docs/module_trade.html
m = M.trade.v1(
    instruments=instruments,
    start_date=start_date_2,
    end_date=end_date_2,
    initialize=initialize,
    handle_data=handle_data,
    # buy orders fill at the open price
    order_price_field_buy='open',
    # sell orders fill at the open price
    order_price_field_sell='open',
    capital_base=capital_base,
    benchmark=benchmark,
)
[2017-07-03 10:54:29.229577] INFO: bigquant: backtest.v6 start ..
[[ 0.56592745  0.43407252]]
[[ 0.56148899  0.43851098]]
[[ 0.56317806  0.436822  ]]
[[ 0.56634545  0.43365455]]
[[ 0.54618859  0.45381138]]
[[ 0.53374219  0.46625781]]
[[ 0.5302189   0.46978113]]
[[ 0.53641802  0.46358195]]
[[ 0.53420204  0.46579796]]
[[ 0.5442  0.4558]]
[[ 0.56579137  0.43420872]]
[[ 0.57336253  0.42663744]]
[[ 0.57956457  0.4204354 ]]
[[ 0.58749729  0.41250277]]
[[ 0.59582806  0.40417194]]
[[ 0.58698922  0.41301084]]
[[ 0.57044774  0.42955226]]
[[ 0.55662608  0.44337392]]
[[ 0.56336105  0.43663898]]
[[ 0.57094824  0.42905173]]
[[ 0.56163669  0.43836331]]
[[ 0.57116896  0.42883107]]
[[ 0.57396477  0.4260352 ]]
[[ 0.58523047  0.4147695 ]]
[[ 0.57641476  0.42358521]]
[[ 0.57428861  0.42571148]]
[[ 0.57184738  0.42815262]]
[[ 0.55689156  0.44310847]]
[[ 0.53665394  0.46334606]]
[[ 0.51971495  0.48028508]]
[[ 0.51312441  0.48687562]]
[[ 0.5137763   0.48622367]]
[[ 0.50628132  0.49371868]]
[[ 0.50081128  0.49918872]]
[[ 0.49581081  0.50418919]]
[[ 0.5231573  0.4768427]]
[[ 0.52384126  0.47615874]]
[[ 0.53007966  0.46992037]]
[[ 0.52922976  0.47077021]]
[[ 0.55247045  0.44752955]]
[[ 0.55732876  0.44267124]]
[[ 0.53334159  0.46665844]]
[[ 0.52347177  0.4765282 ]]
[[ 0.48287478  0.51712519]]
[[ 0.46441311  0.53558695]]
[[ 0.46238399  0.53761595]]
[[ 0.44875979  0.55124027]]
[[ 0.44953418  0.55046582]]
[[ 0.47649685  0.52350318]]
[[ 0.49267989  0.50732017]]
[[ 0.50909758  0.49090248]]
[[ 0.53611714  0.46388289]]
[[ 0.53560191  0.46439803]]
[[ 0.51993251  0.48006752]]
[[ 0.5057736   0.49422637]]
[[ 0.4838745  0.5161255]]
[[ 0.48428541  0.51571465]]
[[ 0.48855284  0.51144725]]
[[ 0.49415523  0.50584477]]
[[ 0.51328158  0.48671839]]
[[ 0.54467458  0.45532545]]
[[ 0.56528443  0.43471563]]
[[ 0.58077538  0.41922459]]
[[ 0.55938017  0.4406198 ]]
[[ 0.5144521  0.4855479]]
[[ 0.50435221  0.49564785]]
[[ 0.51950115  0.48049885]]
[[ 0.54138744  0.45861259]]
[[ 0.546821    0.45317894]]
[[ 0.56328636  0.43671361]]
[[ 0.57651609  0.42348385]]
[[ 0.58814496  0.41185507]]
[[ 0.58692443  0.41307554]]
[[ 0.56797385  0.43202612]]
[[ 0.55066127  0.44933873]]
[[ 0.55623215  0.44376782]]
[[ 0.5698331   0.43016693]]
[[ 0.5636785   0.43632147]]
[[ 0.56366998  0.43633002]]
[[ 0.54910874  0.45089123]]
[[ 0.54096019  0.45903981]]
[[ 0.55038291  0.44961715]]
[[ 0.55810583  0.44189417]]
[[ 0.56043017  0.43956992]]
[[ 0.56858891  0.43141112]]
[[ 0.57570845  0.42429155]]
[[ 0.58200037  0.41799957]]
[[ 0.58037466  0.41962534]]
[[ 0.58291042  0.41708955]]
[[ 0.59514785  0.40485209]]
[[ 0.62303978  0.37696022]]
[[ 0.62919992  0.37080011]]
[[ 0.60583246  0.39416757]]
[[ 0.5557723   0.44422773]]
[[ 0.51980901  0.48019102]]
[[ 0.49868697  0.50131309]]
[[ 0.48563623  0.51436377]]
[[ 0.51076859  0.48923141]]
[[ 0.54956454  0.45043546]]
[[ 0.55402851  0.44597149]]
[[ 0.56150681  0.43849319]]
[[ 0.55073512  0.44926488]]
[[ 0.55852705  0.44147301]]
[[ 0.56489062  0.43510935]]
[[ 0.55112559  0.44887444]]
[[ 0.5126487  0.4873513]]
[[ 0.49891177  0.50108826]]
[[ 0.49901748  0.50098246]]
[[ 0.50604039  0.49395958]]
[[ 0.50606841  0.49393162]]
[[ 0.53165811  0.46834186]]
[[ 0.53986919  0.4601309 ]]
[[ 0.52810085  0.47189921]]
[[ 0.50173289  0.49826705]]
[[ 0.48397174  0.51602817]]
[[ 0.45124611  0.54875392]]
[[ 0.42843032  0.57156974]]
[[ 0.42819038  0.57180959]]
[[ 0.41703704  0.58296299]]
[[ 0.41814974  0.58185029]]
[[ 0.4365229  0.5634771]]
[[ 0.46119905  0.53880095]]
[[ 0.47929376  0.52070618]]
[[ 0.48499587  0.51500416]]
[[ 0.50109088  0.49890912]]
[[ 0.50307667  0.49692333]]
[[ 0.52173293  0.47826707]]
[[ 0.53664887  0.46335113]]
[[ 0.55714154  0.44285846]]
[[ 0.56763631  0.43236372]]
[[ 0.58433676  0.41566324]]
[[ 0.56731331  0.43268666]]
[[ 0.57094216  0.42905787]]
[[ 0.56391466  0.4360854 ]]
[[ 0.54644102  0.45355901]]
[[ 0.51701146  0.48298854]]
[[ 0.50449109  0.49550891]]
[[ 0.49857986  0.50142014]]
[[ 0.49489829  0.50510174]]
[[ 0.49740753  0.5025925 ]]
[[ 0.48850647  0.51149356]]
[[ 0.47010377  0.5298962 ]]
[[ 0.45354176  0.5464583 ]]
[[ 0.43839967  0.56160033]]
[[ 0.436988    0.56301194]]
[[ 0.44229817  0.55770183]]
[[ 0.45005187  0.54994816]]
[[ 0.45860013  0.5413999 ]]
[[ 0.46062675  0.53937328]]
[[ 0.48362425  0.51637578]]
[[ 0.51137799  0.48862204]]
[[ 0.52101839  0.47898158]]
[[ 0.52421558  0.47578445]]
[[ 0.53710783  0.46289214]]
[[ 0.54731691  0.45268312]]
[[ 0.54752219  0.45247778]]
[[ 0.56407148  0.43592849]]
[[ 0.5805797   0.41942027]]
[[ 0.56780678  0.43219328]]
[[ 0.55921894  0.44078106]]
[[ 0.56373507  0.4362649 ]]
[[ 0.55836529  0.44163471]]
[[ 0.56374472  0.43625525]]
[[ 0.54783922  0.45216087]]
[[ 0.53318244  0.46681762]]
[[ 0.52220416  0.47779587]]
[[ 0.52861524  0.47138479]]
[[ 0.52102423  0.4789758 ]]
[[ 0.52750403  0.47249597]]
[[ 0.53761816  0.46238187]]
[[ 0.55148119  0.44851881]]
[[ 0.5541538  0.4458462]]
[[ 0.54962242  0.45037752]]
[[ 0.54613262  0.45386741]]
[[ 0.5468781   0.45312193]]
[[ 0.53982186  0.46017814]]
[[ 0.55930883  0.44069117]]
[[ 0.55946505  0.44053498]]
[[ 0.55858344  0.44141656]]
[[ 0.56090385  0.43909612]]
[[ 0.54570335  0.45429668]]
[[ 0.51989269  0.48010728]]
[[ 0.50840157  0.4915984 ]]
[[ 0.51122326  0.48877674]]
[[ 0.53975362  0.46024632]]
[[ 0.55640489  0.44359511]]
[[ 0.55518442  0.44481561]]
[[ 0.52984822  0.47015175]]
[[ 0.52350485  0.47649518]]
[[ 0.51172704  0.48827302]]
[[ 0.51114517  0.48885489]]
[[ 0.50762159  0.49237838]]
[[ 0.50223732  0.49776265]]
[[ 0.51262558  0.48737445]]
[[ 0.52021897  0.47978109]]
[[ 0.53772897  0.46227112]]
[[ 0.55407691  0.44592312]]
[[ 0.55006731  0.44993263]]
[[ 0.52827668  0.47172338]]
[[ 0.51564711  0.48435292]]
[[ 0.48998937  0.51001066]]
[[ 0.4658789  0.5341211]]
[[ 0.45256034  0.54743969]]
[[ 0.45843604  0.54156399]]
[[ 0.45862406  0.54137594]]
[[ 0.47378331  0.52621675]]
[[ 0.46148804  0.53851199]]
[[ 0.45410854  0.54589146]]
[[ 0.44247985  0.55752009]]
[[ 0.44305104  0.55694896]]
[[ 0.4501619   0.54983807]]
[[ 0.45560133  0.54439867]]
[[ 0.45834711  0.54165292]]
[[ 0.46458033  0.53541964]]
[[ 0.46920735  0.53079265]]
[[ 0.48760143  0.5123986 ]]
[[ 0.49258384  0.50741625]]
[[ 0.5075143   0.49248573]]
[[ 0.53213018  0.46786982]]
[[ 0.53686267  0.46313727]]
[[ 0.54628205  0.45371795]]
[[ 0.57191217  0.42808783]]
[[ 0.58207738  0.41792262]]
[[ 0.58021981  0.41978022]]
[[ 0.58632439  0.41367564]]
[[ 0.5925191   0.40748093]]
[[ 0.56848091  0.43151912]]
[[ 0.54483765  0.45516241]]
[[ 0.52073067  0.47926936]]
[[ 0.51278162  0.48721838]]
[[ 0.51985919  0.48014081]]
[[ 0.53416008  0.46583989]]
[[ 0.54381853  0.45618147]]
[[ 0.54285228  0.45714781]]
[[ 0.51690531  0.48309463]]
[[ 0.4952623   0.50473773]]
[[ 0.48107827  0.51892173]]
[[ 0.49162304  0.50837702]]
[[ 0.51922441  0.48077562]]
[[ 0.54892153  0.4510785 ]]
[[ 0.56353998  0.43646005]]
[[ 0.58294964  0.41705036]]
[[ 0.59180421  0.40819576]]
[[ 0.59171546  0.40828457]]
[[ 0.59183598  0.40816402]]
[[ 0.58735651  0.41264349]]
[[ 0.58222508  0.41777492]]
[[ 0.57825768  0.42174235]]
[[ 0.58167833  0.41832161]]
[[ 0.57512116  0.42487884]]
[[ 0.55643845  0.44356152]]
[[ 0.55295712  0.44704291]]
[[ 0.542377    0.45762303]]
[[ 0.52679026  0.4732098 ]]
[[ 0.53443128  0.46556863]]
[[ 0.53443569  0.46556428]]
[[ 0.54751402  0.45248598]]
[[ 0.54784316  0.4521569 ]]
[[ 0.53918815  0.46081185]]
[[ 0.52947652  0.47052348]]
[[ 0.51546597  0.48453397]]
[[ 0.4963057   0.50369436]]
[[ 0.48079079  0.51920915]]
[[ 0.46170476  0.53829527]]
[[ 0.43884435  0.56115562]]
[[ 0.43795288  0.56204718]]
[[ 0.45958734  0.5404126 ]]
[[ 0.47735152  0.52264845]]
[[ 0.50342238  0.49657765]]
[[ 0.52511048  0.47488955]]
[[ 0.52499145  0.47500855]]
[[ 0.53890675  0.46109325]]
[[ 0.54361188  0.45638803]]
[[ 0.5415383  0.4584617]]
[[ 0.55421668  0.44578335]]
[[ 0.56943202  0.43056798]]
[[ 0.58335489  0.41664508]]
[[ 0.58996016  0.41003981]]
[[ 0.57842302  0.42157695]]
[[ 0.57263815  0.42736191]]
[[ 0.56652975  0.43347031]]
[[ 0.53546017  0.4645398 ]]
[[ 0.51163393  0.48836601]]
[[ 0.50543505  0.49456495]]
[[ 0.49478868  0.50521129]]
[[ 0.47249344  0.52750653]]
[[ 0.47617137  0.52382869]]
[[ 0.4638806  0.5361194]]
[[ 0.45107278  0.54892719]]
[[ 0.45796275  0.54203719]]
[[ 0.46767455  0.53232539]]
[[ 0.48404405  0.51595604]]
[[ 0.48889577  0.51110417]]
[[ 0.4850761   0.51492393]]
[[ 0.4727942   0.52720582]]
[[ 0.46356457  0.53643543]]
[[ 0.47536808  0.52463192]]
[[ 0.46349743  0.53650254]]
[[ 0.48697469  0.51302534]]
[[ 0.5032559  0.4967441]]
[[ 0.48544934  0.51455063]]
[[ 0.49238533  0.50761461]]
[[ 0.5043568   0.49564311]]
[[ 0.50440574  0.49559426]]
[[ 0.51292425  0.48707581]]
[[ 0.52763766  0.47236231]]
[[ 0.53260213  0.46739793]]
[[ 0.55366111  0.44633886]]
[[ 0.57922333  0.4207767 ]]
[[ 0.58166075  0.41833919]]
[[ 0.58306414  0.4169358 ]]
[[ 0.57232028  0.42767972]]
[[ 0.54892403  0.45107597]]
[[ 0.55123025  0.44876978]]
[[ 0.5605709   0.43942916]]
[[ 0.55154115  0.44845885]]
[[ 0.52555883  0.4744412 ]]
[[ 0.52996457  0.47003546]]
[[ 0.51360053  0.48639944]]
[[ 0.51894522  0.48105475]]
[[ 0.52655971  0.47344026]]
[[ 0.50190544  0.4980945 ]]
[[ 0.49010623  0.50989383]]
[[ 0.49165419  0.50834584]]
[[ 0.50740391  0.49259618]]
[[ 0.50389183  0.49610814]]
[[ 0.52099872  0.47900128]]
[[ 0.53929436  0.46070564]]
[[ 0.54357934  0.4564206 ]]
[[ 0.57037991  0.42962012]]
[2017-07-03 10:54:32.180829] INFO: Performance: Simulated 330 trading days out of 330.
[2017-07-03 10:54:32.181738] INFO: Performance: first open: 2016-02-15 14:30:00+00:00
[2017-07-03 10:54:32.182830] INFO: Performance: last close: 2017-06-20 19:00:00+00:00
  • Return: 35.84%
  • Annualized return: 26.36%
  • Benchmark return: 19.66%
  • Alpha: 0.14
  • Beta: 0.81
  • Sharpe ratio: 1.67
  • Return volatility: 13.13%
  • Information ratio: 1.87
  • Max drawdown: 7.76%
[2017-07-03 10:54:35.909209] INFO: bigquant: backtest.v6 end [6.679619s].
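handle_data above always trades on the argmax, and its threshold variable is left at 0. A natural variant, sketched here as a hypothetical drop-in for that decision block (it assumes nothing beyond the order_target_percent call already used), is to act only when the predicted edge over 50/50 exceeds the threshold:

threshold = 0.05          # hypothetical: required edge before acting
p_up = pred[0][0]         # model's probability of a rise
if p_up > 0.5 + threshold:
    order_target_percent(instrument, 1)   # confident rise: fully long
elif p_up < 0.5 - threshold:
    order_target_percent(instrument, 0)   # confident fall: flat
# otherwise hold the current position unchanged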
In [311]:
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
# load the checkpointed weights (saved above as "lolkek.hdf5")
model.load_weights("lolkek.hdf5")
pred = model.predict(np.array(X_test))
C = confusion_matrix([np.argmax(y) for y in Y_test], [np.argmax(y) for y in pred])
# row-normalize so each row shows per-class hit rates
print(C / C.sum(axis=1, keepdims=True))
In [ ]: