I have a simulation that can run for an arbitrarily long time. To store the simulation's output, I naively created a resizable HDF5 file that I keep appending data to, as in this toy example:
import contextlib
import os
import time
import numpy as np
import h5py

num_timepoints = 18000
num_vertices = 16
num_info = 38
output_size = 10

t0 = "A:\\t0.hdf5"
with contextlib.suppress(FileNotFoundError):
    os.remove(t0)

st = time.time()
# Create an empty dataset that can grow along the time axis.
with h5py.File(t0, "a") as f:
    dset = f.create_dataset("test", (0, num_vertices, num_info), maxshape=(None, num_vertices, num_info))

# Append one chunk of output_size timepoints per iteration,
# reopening the file and resizing the dataset each time.
for n in np.arange(num_timepoints / output_size):
    chunk = np.random.rand(output_size, num_vertices, num_info)
    with h5py.File(t0, "a") as f:
        dset = f["test"]
        orig_index = dset.shape[0]
        dset.resize(dset.shape[0] + chunk.shape[0], axis=0)
        dset[orig_index:, :, :] = chunk
et = time.time()

print("test0: time taken: {} s, size: {} kB".format(np.round(et - st, 2), int(os.path.getsize(t0)) / 1000))
Note that, on average, the test data is about the same size as the data I get from the simulation (in the worst case, I may have 2 to 3 times as many timepoints as in the test).
The output of this test is:
test0: time taken: 2.02 s, size: 46332.856 kB
Compare this with a test where the size of the data is given up front:
t1 = "A:\\t1.hdf5"
with contextlib.suppress(FileNotFoundError):
    os.remove(t1)

st = time.time()
data = np.random.rand(num_timepoints, num_vertices, num_info)
with h5py.File(t1, "a") as f:
    dset = f.create_dataset("test", data.shape)
    dset = data
et = time.time()

print("test1: time taken: {} s, size: {} kB".format(np.round(et - st, 2), int(os.path.getsize(t1)) / 1000))
which gives as output:
test1: time taken: 0.09 s, size: 1.4 kB
If I set output_size (which reflects how large each chunk of data I get from the simulation at once is) to 1, then test0 takes about 40 s and creates a file of roughly 700 MB!
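For concreteness, here is a minimal sketch of that worst case, assuming the only change relative to test0 is output_size = 1, so each of the 18000 timepoints triggers its own open/resize/write cycle. It reuses the imports from above; the t2 path and the "test2" label are just placeholders for this illustration:

num_timepoints = 18000
num_vertices = 16
num_info = 38
output_size = 1  # one timepoint appended per iteration (worst case)

t2 = "A:\\t2.hdf5"  # placeholder output path for this sketch
with contextlib.suppress(FileNotFoundError):
    os.remove(t2)

st = time.time()
with h5py.File(t2, "a") as f:
    f.create_dataset("test", (0, num_vertices, num_info), maxshape=(None, num_vertices, num_info))

# 18000 separate open/resize/write cycles.
for n in np.arange(num_timepoints / output_size):
    chunk = np.random.rand(output_size, num_vertices, num_info)
    with h5py.File(t2, "a") as f:
        dset = f["test"]
        orig_index = dset.shape[0]
        dset.resize(dset.shape[0] + chunk.shape[0], axis=0)
        dset[orig_index:, :, :] = chunk
et = time.time()

print("test2: time taken: {} s, size: {} kB".format(np.round(et - st, 2), int(os.path.getsize(t2)) / 1000))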