[Template Case] - Model Saving and Loading


(iQuant) #1

July 16 Meetup template case

2. Model saving and loading

In [ ]:
6. Saving and restoring
a. Saving weights
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')])

model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# TensorFlow checkpoint format (writes several files under ./weights/)
model.save_weights('./weights/model')
model.load_weights('./weights/model')
# HDF5 format (a single file; requires h5py)
model.save_weights('./model.h5')
model.load_weights('./model.h5')
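A weights file carries no architecture, so loading it requires a model whose layers match the one that was saved. As a minimal stdlib sketch of the round-trip idea (no TensorFlow needed; the layer names and values below are hypothetical placeholders):

```python
import io
import pickle

# Weights-only saving persists a mapping of layer -> parameter values and
# nothing about the architecture. Keras also stores shapes/dtypes; this
# sketch only shows the round-trip idea with placeholder values.
weights = {
    "dense_1": [[0.1, 0.2], [0.3, 0.4]],
    "dense_2": [[0.5], [0.6]],
}

buf = io.BytesIO()
pickle.dump(weights, buf)    # analogous to model.save_weights(...)
buf.seek(0)
restored = pickle.load(buf)  # analogous to model.load_weights(...)

assert restored == weights   # parameters round-trip exactly
```

This is why load_weights fails on a model with a different layer layout: there is nothing in the file to rebuild the network from.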

b. Saving the network architecture
# Serialize to JSON
import json
import pprint
json_str = model.to_json()
pprint.pprint(json.loads(json_str))
fresh_model = tf.keras.models.model_from_json(json_str)

# Serialize to YAML (requires pyyaml installed beforehand; note that
# to_yaml/model_from_yaml were removed in TensorFlow 2.6+, where JSON
# should be used instead)
yaml_str = model.to_yaml()
print(yaml_str)
fresh_model = tf.keras.models.model_from_yaml(yaml_str)
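Both to_json and to_yaml serialize only the architecture: layer types and their hyperparameters, not the trained weights. A stdlib sketch of that round-trip (the field names below are a simplified illustration in the spirit of Keras's config format, not the exact schema):

```python
import json

# Illustrative architecture config; simplified, not the exact Keras schema.
config = {
    "class_name": "Sequential",
    "config": {"layers": [
        {"class_name": "Dense", "config": {"units": 64, "activation": "relu"}},
        {"class_name": "Dense", "config": {"units": 10, "activation": "softmax"}},
    ]},
}

json_str = json.dumps(config)   # like model.to_json(): plain, human-readable text
rebuilt = json.loads(json_str)  # model_from_json() parses this kind of structure

assert rebuilt == config        # the architecture round-trips; weights do NOT travel with it
```

A model rebuilt this way starts with freshly initialized weights, so it still needs load_weights (or retraining) before it can predict.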

c. Saving the whole model
model = tf.keras.Sequential([
  layers.Dense(10, activation='softmax', input_shape=(72,)),
  layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# train_x / train_y are assumed to have been prepared earlier in the notebook
model.fit(train_x, train_y, batch_size=32, epochs=5)
# Saves architecture + weights + optimizer state in a single HDF5 file
model.save('all_model.h5')
model = tf.keras.models.load_model('all_model.h5')
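What makes model.save different from save_weights is that the file bundles three things at once, so training can resume exactly where it stopped. A stdlib sketch of that bundle (all keys and values below are hypothetical placeholders, not the real HDF5 layout):

```python
import io
import pickle

# A full-model save conceptually bundles architecture, weights, AND the
# optimizer state; weights-only saving would keep just the middle entry.
full_model = {
    "architecture": {"layers": ["Dense(10)", "Dense(10)"]},
    "weights": [[0.1], [0.2]],
    "optimizer_state": {"iterations": 5, "learning_rate": 0.001},
}

buf = io.BytesIO()
pickle.dump(full_model, buf)   # model.save(...) persists all three parts
buf.seek(0)
restored = pickle.load(buf)    # load_model(...) rebuilds model and optimizer

# Optimizer state survives, which is what allows training to resume
assert restored["optimizer_state"]["iterations"] == 5
```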
In [ ]:
import pandas as pd
# Load the previously saved model artifacts (the file is in pickle format
# despite the .csv extension) and re-register them as a platform DataSource
# (DataSource is provided by the BigQuant environment)
data = pd.read_pickle('/home/bigquant/work/userlib/model.csv')
model_ds = DataSource.write_pickle(data.iloc[0].to_dict())
In [ ]:
# Restore the trained model from the DataSource: rebuild the architecture
# from its YAML description, then load the weight arrays into it
model_dict = trained_model.read_pickle()
model = tf.keras.models.model_from_yaml(model_dict['model_graph'])
model.set_weights(model_dict['model_weights'])
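The strategy above passes the model around as a plain dict with two keys. This sketch shows that layout with placeholder values (the real entries would come from model.to_yaml() and model.get_weights()):

```python
import io
import pickle

# The dict layout the strategy persists via the platform's pickle I/O;
# the values here are placeholders, not a real serialized network.
model_dict = {
    "model_graph": "yaml string describing the network",  # model.to_yaml()
    "model_weights": [[0.1, 0.2], [0.3]],                 # model.get_weights()
}

buf = io.BytesIO()
pickle.dump(model_dict, buf)   # what DataSource.write_pickle stores
buf.seek(0)
restored = pickle.load(buf)    # what trained_model.read_pickle() returns

assert restored["model_graph"] == model_dict["model_graph"]
assert restored["model_weights"] == model_dict["model_weights"]
```

Splitting the model into a graph string plus weight arrays keeps the payload pickleable without depending on Keras's own file formats.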

BigQuant AI Quant Expert Meetup (July 30 session replay and case templates)