I'd like to understand why subprocesses keep so many files open. I have an example in which some files seem to stay open forever (after the subprocess finishes, and even after the program crashes).
Consider the following code:
import asyncio
import aiofiles
import tempfile

async def main():
    return [await fds_test(i) for i in range(2000)]

async def fds_test(index):
    print(f"Writing {index}")
    # mkstemp returns an OS-level handle plus the path of the new file
    handle, temp_filename = tempfile.mkstemp(suffix='.dat', text=True)
    async with aiofiles.open(temp_filename, mode='w') as fp:
        await fp.write('stuff')
        await fp.write('other stuff')
        await fp.write('EOF\n')

    print(f"Reading {index}")
    # read the file back with a subprocess, discarding its output
    bash_cmd = 'cat {}'.format(temp_filename)
    process = await asyncio.create_subprocess_exec(*bash_cmd.split(), stdout=asyncio.subprocess.DEVNULL, close_fds=True)
    await process.wait()
    print(f"Process terminated {index}")

if __name__ == "__main__":
    asyncio.run(main())
This spawns the processes one after another (sequentially), so I expected the number of files open at any given time to be one as well. But that is not the case: at some point I get the following error:
/Users/cglacet/.pyenv/versions/3.8.0/lib/python3.8/subprocess.py in _execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, restore_signals, start_new_session)
1410 # Data format: "exception name:hex errno:description"
1411 # Pickle is not used; it is complex and involves memory allocation.
-> 1412 errpipe_read, errpipe_write = os.pipe()
1413 # errpipe_write must not be in the standard io 0, 1, or 2 fd range.
1414 low_fds_to_close = []
OSError: [Errno 24] Too many open files
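To see where the descriptors go, it helps to count the process's open file descriptors on every iteration. Here is a minimal probe, assuming a POSIX system where /dev/fd mirrors the calling process's descriptor table (psutil's num_fds() would do equally well):

import os

def open_fd_count():
    # /dev/fd lists the current process's open descriptors on Linux
    # and macOS; its length approximates the number of open fds
    # (listing the directory itself briefly uses one extra fd).
    return len(os.listdir('/dev/fd'))

Printing open_fd_count() at the start of fds_test shows whether the count stays flat, as I expected, or grows by a constant amount per iteration.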
I tried running the same code without the stdout=asyncio.subprocess.DEVNULL option, but it still crashes. This answer suggests that this could be the source of the problem, and the error also points at the line errpipe_read, errpipe_write = os.pipe(). But that does not seem to be the issue, since running without that option produces the same error.
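One detail that may matter, independent of the subprocess options: tempfile.mkstemp returns an already-open OS-level descriptor as its first element, and the code above stores it in handle without ever closing it, while aiofiles.open then opens the same path a second time. If that is the real cause, each iteration leaks one descriptor in the parent regardless of what the child process does. A sketch of that hypothesis, closing the handle right after creation:

import os
import tempfile

handle, temp_filename = tempfile.mkstemp(suffix='.dat', text=True)
# mkstemp already opened this descriptor; close it, since the file
# is reopened by path (via aiofiles.open) for the actual writes.
os.close(handle)

Is something along these lines what is going on, or is asyncio's subprocess machinery itself holding descriptors open?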