TrioInternalError while using a PipeReceiveStream
I'm trying to convert a `multiprocessing.Pipe` into a `PipeSendStream`/`PipeReceiveStream` pair while building my own multiprocessing wrapper. This is on Windows 10, Python 3.8.5, Trio 0.17.
```python
import trio
from trio._windows_pipes import PipeReceiveStream, PipeSendStream
from multiprocessing import Process, Pipe, Event


def echo(recv_pipe, send_pipe, event):
    data = recv_pipe.recv_bytes()
    print('received', flush=True)
    event.set()
    send_pipe.send_bytes(data)
    print('sent', flush=True)


async def main():
    recv_pipe, send_pipe = Pipe()
    event = Event()
    proc = Process(target=echo, args=(recv_pipe, send_pipe, event), daemon=True)
    proc.start()
    send_stream = PipeSendStream(send_pipe.fileno())
    recv_stream = PipeReceiveStream(recv_pipe.fileno())
    await send_stream.send_all(b"Hello World")
    event.wait()
    response = await recv_stream.receive_some()
    print(response, flush=True)
    print("done", flush=True)


if __name__ == "__main__":
    trio.run(main)
```
Trio then crashes when trying to receive the data. The event (or some other work/sleep that delays the main process) is crucial to triggering the bug:
```
received
sent
Traceback (most recent call last):
  File "C:\redacted\site-packages\trio\_core\_run.py", line 2068, in unrolled_run
    runner.io_manager.process_events(events)
  File "C:\redacted\site-packages\trio\_core\_io_windows.py", line 516, in process_events
    waiter = self._overlapped_waiters.pop(entry.lpOverlapped)
KeyError: <cdata 'OVERLAPPED *' 0x000001EDFA33D220>

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\redacted\site-packages\trio\_core\_run.py", line 1236, in guest_tick
    timeout = self.unrolled_run_next_send.send(self.unrolled_run_gen)
  File "C:\redacted\site-packages\outcome\_sync.py", line 91, in send
    return gen.send(self.value)
  File "C:\redacted\site-packages\trio\_core\_run.py", line 2226, in unrolled_run
    raise TrioInternalError("internal error in Trio - please file a bug!") from exc
trio.TrioInternalError: internal error in Trio - please file a bug!
Tk shutdown. Outcome: Value(None)

Process finished with exit code -1
```
Apparently IOCP successfully received the completion, but waking up the task fails. My debugger shows that at `trio\_core\_io_windows.py`, line 516, in `process_events`, `self._overlapped_waiters` contains a single entry `<cdata 'OVERLAPPED *' owning 32 bytes>`, whereas `entry.lpOverlapped` is `<cdata 'OVERLAPPED *' 0x000002302343F9A0>`; the two do not compare equal, which leads to the `KeyError`.

It's hard for me to tell whether I am doing something wrong while passing these handles around, whether I am relying on unsupported behavior, or whether it is really a Trio bug!
Huh, that is weird.
I spent a few minutes digging through multiprocessing and Trio's windows code, and nothing obvious jumped out at me.
What do you get in your debugger if you print `hex(ffi.cast("uintptr_t", list(self._overlapped_waiters)[0]))`? Does it match the pointer value you get from `entry.lpOverlapped`?
I had to use `hex(int(...))`, oddly, but no, the pointer values do not match in the failing case.
If you don't demand that the pipes have duplex behavior, the code works as expected:
```python
import trio
from trio._windows_pipes import PipeReceiveStream, PipeSendStream
from multiprocessing import Process, Pipe


def echo(recv_pipe, send_pipe):
    data = recv_pipe.recv_bytes()
    print('received', flush=True)
    send_pipe.send_bytes(data)
    print('sent', flush=True)


async def main():
    child_recv_pipe, parent_send_pipe = Pipe(duplex=False)
    parent_recv_pipe, child_send_pipe = Pipe(duplex=False)
    proc = Process(target=echo, args=(child_recv_pipe, child_send_pipe), daemon=True)
    proc.start()
    parent_send_stream = PipeSendStream(parent_send_pipe.fileno())
    parent_recv_stream = PipeReceiveStream(parent_recv_pipe.fileno())
    await parent_send_stream.send_all(b"Hello World")
    response = await parent_recv_stream.receive_some()
    print(response, flush=True)
    print("done", flush=True)


if __name__ == "__main__":
    trio.run(main)
```
However, there is some spam from `_HandleHolder.__del__` in the output:
```
received
sent
bytearray(b'Hello World')
done
Exception ignored in: <function _HandleHolder.__del__ at 0x0000026B9B57A9D0>
Traceback (most recent call last):
  File "C:\redacted\site-packages\trio\_windows_pipes.py", line 41, in __del__
    self._close()
  File "C:\redacted\site-packages\trio\_windows_pipes.py", line 34, in _close
    raise_winerror()
  File "C:\redacted\site-packages\trio\_core\_windows_cffi.py", line 323, in raise_winerror
    raise OSError(0, msg, filename, winerror, filename2)
OSError: [WinError 6] The handle is invalid
Exception ignored in: <function _HandleHolder.__del__ at 0x0000026B9B57A9D0>
Traceback (most recent call last):
  File "C:\redacted\site-packages\trio\_windows_pipes.py", line 41, in __del__
    self._close()
  File "C:\redacted\site-packages\trio\_windows_pipes.py", line 34, in _close
    raise_winerror()
  File "C:\redacted\site-packages\trio\_core\_windows_cffi.py", line 323, in raise_winerror
    raise OSError(0, msg, filename, winerror, filename2)
OSError: [WinError 6] The handle is invalid

Process finished with exit code 0
```
I suppose this has to do with the fact that the other end of the pipe is closed when the child process exits? I propose that suppressing errors from `_close` in `__del__` would be harmless. Would you be interested in a PR to that effect?
> If you don't demand that the pipes have duplex behavior, the code works as expected:
Huh, that's very odd! I guess there's something we don't understand about how Windows named pipes work. It would be nice to figure it out.
> I suppose this has to do with the fact that the other end of the pipe is closed when the child process exits? I propose that suppressing errors from `_close` in `__del__` would be harmless. Would you be interested in a PR to that effect?
I think it's because multiprocessing and trio are both closing the handle in the parent process.
And unfortunately, suppressing this kind of error on handle close isn't harmless – if you close a handle twice, then there's a chance that in between the handle could have been reused for some other object, and you end up closing an unrelated object by accident.
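To make the hazard concrete, here is a small illustration of my own (not from the thread) using POSIX-style file descriptors via `os.pipe`; the same reuse principle applies to Windows HANDLEs:

```python
import os

# After a descriptor is closed, the OS is free to hand the same number to a
# brand-new object, so a second close() of the old number destroys the
# unrelated new object.
r, w = os.pipe()
os.close(r)          # first close: correct

r2, w2 = os.pipe()   # the lowest free descriptor number gets reused...
assert r2 == r       # ...so the new pipe ends up with the old number

os.close(r)          # "harmless" second close actually closes r2!
try:
    os.read(r2, 1)   # the new, unrelated pipe is now broken
except OSError as e:
    print("accidentally closed an unrelated pipe:", e)

os.close(w)
os.close(w2)
```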
I guess what you really want is something like `socket.detach()`, which tells an object that it shouldn't close the handle, without actually closing it. But it doesn't make much sense to add a method like that until after we make Windows named pipe support an actual public API (#824). In the meantime, while you're hacking together a proof-of-concept with internal APIs, you can accomplish the same thing by doing this at the end, once you're done with the pipes:
```python
parent_*_stream._handle_holder.handle = -1
```
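For comparison, a `socket.detach()`-style ownership transfer could look roughly like the minimal sketch below. The class and its names are hypothetical, not Trio's actual `_HandleHolder`:

```python
# Hypothetical sketch: a holder that can give up ownership of its OS handle
# without closing it, so its __del__ becomes a no-op and only the other
# owner of the handle (e.g. multiprocessing) closes it.
class HandleHolder:
    INVALID_HANDLE = -1

    def __init__(self, handle: int) -> None:
        self.handle = handle

    def detach(self) -> int:
        """Return the raw handle and stop tracking it."""
        handle, self.handle = self.handle, self.INVALID_HANDLE
        return handle

    def close(self) -> None:
        if self.handle != self.INVALID_HANDLE:
            # Real code would call CloseHandle() here; omitted in this sketch.
            self.handle = self.INVALID_HANDLE

    def __del__(self) -> None:
        self.close()


holder = HandleHolder(1234)
raw = holder.detach()
assert raw == 1234
holder.close()  # no-op: ownership was transferred, nothing is double-closed
```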
> ... if you close a handle twice, then there's a chance that in between the handle could have been reused for some other object, and you end up closing an unrelated object by accident.
Big oof
> ... Windows named pipe support ...
Thanks for linking that other issue here; I think I had seen it, but I never really got my head around what to actually do with the information. FYI, setting those handles to -1 at the end of `main()` does indeed silence the stderr garbage.