I am (still) trying to implement a simple U-Net with Keras on the TensorFlow 2.0 backend.
My input images and masks are 1536x1536 RGB images (the masks are black and white). According to this article, the amount of memory required can be estimated.
My model crashes with a memory-allocation error on the tensor [1, 16, 1536, 1536]. Using the formula from the article above, I calculated the memory this tensor needs: 1 * 16 * 1536 * 1536 * 4 bytes = 144 MB. I have a GTX 1080 Ti, with roughly 9 GB of it available to TensorFlow. Where is this going wrong? Am I missing something?
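As a sanity check on the arithmetic above, here is a minimal sketch (the helper name `tensor_bytes` is mine, not from the question) that reproduces the per-tensor estimate: batch * channels * height * width * 4 bytes for float32. Note that this counts a single activation tensor only; during training, the framework also holds activations for every layer, gradients, and workspace buffers at the same time, which is why an allocator can still fail with 9 GB free.

```python
# Estimate the raw size of one dense float32 tensor
# (shape and 4-bytes-per-element figure taken from the question).
def tensor_bytes(shape, bytes_per_element=4):
    """Return the size in bytes of a dense tensor with the given shape."""
    n = 1
    for dim in shape:
        n *= dim
    return n * bytes_per_element

size = tensor_bytes((1, 16, 1536, 1536))
print(size / 2**20)  # 144.0 MiB, matching the figure in the question
```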
Here is the (nearly complete) traceback:

2020-03-02 15:59:13.841967: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
2020-03-02 15:59:16.083234: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2020-03-02 15:59:16.087240: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2020-03-02 15:59:16.210856: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.607

I am trying to convert the sniff time to seconds. I have already looked at "Convert a timedelta64[ns] column to seconds in a Python Pandas DataFrame", but that solution does not work here. I suspect the pandas line may be at fault.

print(sniffTime)
print(type(sniffTime))

Output:

821693000 nanoseconds
<class 'numpy.timedelta64'>

Error:

AttributeError: 'numpy.timedelta64' object has no attribute 'total_seconds'

on the line:

df['PerSec'] = df['PerSec'].div(sniffTime.total_seconds())
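The timedelta error above arises because `total_seconds()` is a method of pandas `Timedelta` (and of the stdlib `datetime.timedelta`), not of `numpy.timedelta64`. A minimal sketch of one way to get the seconds, using the value from the question's output (dividing one timedelta64 by another yields a plain float):

```python
import numpy as np

# numpy.timedelta64 has no total_seconds(); dividing by a one-second
# timedelta64 converts the duration to a float number of seconds.
sniffTime = np.timedelta64(821693000, 'ns')  # value from the question's output

seconds = sniffTime / np.timedelta64(1, 's')
print(seconds)  # 0.821693
```

With that, the failing line becomes `df['PerSec'] = df['PerSec'].div(seconds)`. Wrapping the value as `pd.Timedelta(sniffTime).total_seconds()` is an equivalent alternative.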