socket_vmnet gets stuck randomly

Open sheelchand opened this issue 2 years ago • 6 comments

We have 7 QEMU VMs running, each with 3 virtual ethernet interfaces.

socket_vmnet works most of the time, but it randomly stops working and communication between the VMs stops.

The debug logs show the process getting stuck on a writev() call:

DEBUG| [Socket-to-Socket i=1815762] Sending from socket 8 to socket 5: 4 + 95 bytes

There is no log after the above line. After the VM is rebooted, the logs show the writev() call returning -1.

I suspect this is due to a race condition when multiple threads access the socket to send and receive data. I don't have the exact explanation yet, but the behavior points to a race condition.

sheelchand avatar Oct 17 '23 14:10 sheelchand

More information on the issue:

  1. I mapped each VM to a socket id and saw that a thread was stuck writing to socket 11. I stopped the VM corresponding to socket 11 and socket_vmnet crashed out; the process had been stuck sending to socket 11:

DEBUG| [Socket-to-Socket i=166220] Sending from socket 7 to socket 11: 4 + 46 bytes

Logs after the VM corresponding to socket 11 is stopped:

DEBUG| [Socket-to-Socket i=166220] Sent from socket 7 to socket 11: -1 bytes (including uint32be header)
writev: Broken pipe
DEBUG| [Socket-to-VMNET i=166221] Receiving from the socket 7
DEBUG| [Socket-to-VMNET i=166221] Received from the socket 7: 46 bytes
DEBUG| [Socket-to-VMNET i=166221] Sending to VMNET: 46 bytes
on_accept(): vmnet_return_t VMNET_INVALID_ARGUMENT
vmnet_write: Broken pipe
DEBUG| [Socket-to-Socket i=9903992] Sent from socket 5 to socket 7: -1 bytes (including uint32be header)
writev: Bad file descriptor
DEBUG| [Socket-to-VMNET i=9903993] Receiving from the socket 5
DEBUG| [Socket-to-Socket i=53677] Sent from socket 10 to socket 7: -1 bytes (including uint32be header)
writev: Bad file descriptor
DEBUG| [Socket-to-VMNET i=53678] Receiving from the socket 10
DEBUG| [Socket-to-Socket i=4171484] Sent from socket 9 to socket 7: -1 bytes (including uint32be header)
writev: Bad file descriptor
DEBUG| [Socket-to-VMNET i=4171485] Receiving from the socket 9
DEBUG| [Socket-to-VMNET i=9903993] Received from the socket 5: 1514 bytes
DEBUG| [Socket-to-VMNET i=9903993] Sending to VMNET: 1514 bytes
DEBUG| [Socket-to-Socket i=5600510] Sent from socket 11 to socket 7: -1 bytes (including uint32be header)
writev: Bad file descriptor
DEBUG| [Socket-to-VMNET i=5600511] Receiving from the socket 11
DEBUG| [Socket-to-Socket i=93534327] Sent from socket 6 to socket 7: -1 bytes (including uint32be header)
writev: Bad file descriptor
DEBUG| [Socket-to-VMNET i=93534328] Receiving from the socket 6
on_accept(): vmnet_return_t VMNET_INVALID_ARGUMENT
vmnet_write: Bad file descriptor
DEBUG| [Socket-to-Socket i=68470895] Sent from socket 8 to socket 7: -1 bytes (including uint32be header)
writev: Bad file descriptor
DEBUG| [Socket-to-VMNET i=68470896] Receiving from the socket 8
DEBUG| [Handler i=0] Sent to the socket: -1 bytes (including uint32be header)
writev: Bad file descriptor
DEBUG| [Socket-to-VMNET i=53678] Received from the socket 10: 46 bytes
DEBUG| [Socket-to-VMNET i=53678] Sending to VMNET: 46 bytes
DEBUG| [Socket-to-VMNET i=5600511] Received from the socket 11: 78 bytes
on_accept(): vmnet_return_t VMNET_INVALID_ARGUMENT
vmnet_write: Bad file descriptor
DEBUG| [Socket-to-VMNET i=93534328] Received from the socket 6: 70 bytes
DEBUG| [Socket-to-VMNET i=93534328] Sending to VMNET: 70 bytes
DEBUG| shutting down with rc=1
DEBUG| [Socket-to-VMNET i=4171485] Received from the socket 9: 124 bytes
DEBUG| [Socket-to-VMNET i=4171485] Sending to VMNET: 124 bytes
DEBUG| [Socket-to-VMNET i=5600511] Sending to VMNET: 78 bytes

sheelchand avatar Oct 18 '23 17:10 sheelchand

On further analysis, it looks like a deadlock problem: socket 7's buffer is full, so other sockets (including socket 11) are blocked sending to it. Socket 7 in turn is blocked sending to socket 11 because socket 11's buffer is full - sockets 7 and 11 are in a deadlock.

sheelchand avatar Oct 18 '23 18:10 sheelchand

This code has multiple threads writing to a socket at the same time. This can cause problems, potentially corrupting packets. We need to remove the flooding by mapping MAC addresses to socket ids, so that two threads don't write to a given socket at the same time.
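To make the idea concrete, here is a rough sketch in Go (socket_vmnet itself is written in C, and every name below - vmConn, macTable, learn, forward - is made up for illustration): learn the source MAC of each frame, map each known destination MAC to the one connection that owns it, and take a per-connection lock so two forwarding threads never interleave writes on the same socket.

package sketch

import (
	"net"
	"sync"
)

// vmConn is one guest connection with its own write lock, so concurrent
// forwarders cannot interleave frames on the same socket.
type vmConn struct {
	writeMu sync.Mutex
	sock    net.Conn
}

// macTable maps a MAC address to the connection it was learned from.
type macTable struct {
	mu    sync.Mutex
	conns map[[6]byte]*vmConn
}

// learn records the source MAC of a frame received on c.
func (t *macTable) learn(src [6]byte, c *vmConn) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.conns[src] = c
}

// forward sends a frame only to the connection that owns the destination MAC.
// It returns false for unknown or broadcast destinations, and the caller
// falls back to flooding.
func (t *macTable) forward(dst [6]byte, frame []byte) bool {
	t.mu.Lock()
	c, ok := t.conns[dst]
	t.mu.Unlock()
	if !ok {
		return false
	}
	c.writeMu.Lock()
	defer c.writeMu.Unlock()
	_, err := c.sock.Write(frame)
	return err == nil
}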

sheelchand avatar Oct 20 '23 16:10 sheelchand

@sheelchand Thanks for the analysis, would you be interested in submitting a PR?

AkihiroSuda avatar Oct 24 '23 16:10 AkihiroSuda

I tried, but the MACs in the packets don't match the VMs' MACs. I tried putting a semaphore before sending to a socket, but that did not help either. I will continue to look at it.

sheelchand avatar Oct 24 '23 22:10 sheelchand

@sheelchand Removing flooding is important for performance, but it will not solve the issue of different threads writing to the same socket at the same time.

Example flow when we send each packet only to its destination:

thread1                              thread2          thread3
write packet from conn1 to conn2     -                write packet from conn3 to conn2

The writev(2) man page does not mention anything about thread safety or the message size that can be written atomically, so we should assume this is unsafe.

send(2) seems safe:

 ...The length of the message is given by length.  If the message is too long
 to pass atomically through the underlying protocol, the error EMSGSIZE is
 returned, and the message is not transmitted.

But with send we would have to copy the packet in order to do one syscall, so writev seems the better way - especially if we find a way to send multiple packets per syscall instead of one, without sending all packets to all the guests.
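For reference, the forwarding write under discussion is a single writev(2) with two iovecs - the uint32be length header and the frame (the "4 + N bytes" in the logs). A minimal sketch in Go, using golang.org/x/sys/unix purely for illustration (socket_vmnet does this in C, and writeFrame is a hypothetical helper):

package sketch

import (
	"encoding/binary"

	"golang.org/x/sys/unix"
)

// writeFrame sends one frame prefixed by its uint32be length in a single
// writev(2), so the header and the payload do not have to be copied into
// one contiguous buffer first.
func writeFrame(fd int, frame []byte) error {
	var header [4]byte
	binary.BigEndian.PutUint32(header[:], uint32(len(frame)))
	// Whether this call is atomic with respect to other threads writing to
	// the same fd is exactly the open question above.
	_, err := unix.Writev(fd, [][]byte{header[:], frame})
	return err
}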

nirs avatar Sep 15 '24 19:09 nirs

I can reproduce it reliably now - the key is running socket_vmnet in the shell and not via launchd. When we run with launchd, we don't specify ProcessType, so the system applies resource limits, which makes socket_vmnet almost twice as slow and makes this bug harder to reproduce.

The backtrace I see is:

(lldb) bt all
* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
  * frame #0: 0x0000000193c856b4 libsystem_kernel.dylib`__accept + 8
    frame #1: 0x00000001044a5ad8 socket_vmnet`main + 2292
    frame #2: 0x0000000193933154 dyld`start + 2476
  thread #2, queue = 'io.github.lima-vm.socket_vmnet.accept'
    frame #0: 0x0000000193c80c4c libsystem_kernel.dylib`writev + 8
    frame #1: 0x00000001044a5ed0 socket_vmnet`__main_block_invoke + 748
    frame #2: 0x0000000193b0b750 libdispatch.dylib`_dispatch_call_block_and_release + 32
    frame #3: 0x0000000193b0d3e8 libdispatch.dylib`_dispatch_client_callout + 20
    frame #4: 0x0000000193b108ec libdispatch.dylib`_dispatch_continuation_pop + 600
    frame #5: 0x0000000193b0ff08 libdispatch.dylib`_dispatch_async_redirect_invoke + 584
    frame #6: 0x0000000193b1eea8 libdispatch.dylib`_dispatch_root_queue_drain + 392
    frame #7: 0x0000000193b1f6b8 libdispatch.dylib`_dispatch_worker_thread2 + 156
    frame #8: 0x0000000193cb9fd0 libsystem_pthread.dylib`_pthread_wqthread + 228
  thread #3, queue = 'io.github.lima-vm.socket_vmnet.accept'
    frame #0: 0x0000000193c7d994 libsystem_kernel.dylib`read + 8
    frame #1: 0x00000001044a5d98 socket_vmnet`__main_block_invoke + 436
    frame #2: 0x0000000193b0b750 libdispatch.dylib`_dispatch_call_block_and_release + 32
    frame #3: 0x0000000193b0d3e8 libdispatch.dylib`_dispatch_client_callout + 20
    frame #4: 0x0000000193b108ec libdispatch.dylib`_dispatch_continuation_pop + 600
    frame #5: 0x0000000193b0ff08 libdispatch.dylib`_dispatch_async_redirect_invoke + 584
    frame #6: 0x0000000193b1eea8 libdispatch.dylib`_dispatch_root_queue_drain + 392
    frame #7: 0x0000000193b1f6b8 libdispatch.dylib`_dispatch_worker_thread2 + 156
    frame #8: 0x0000000193cb9fd0 libsystem_pthread.dylib`_pthread_wqthread + 228
  thread #4, queue = 'io.github.lima-vm.socket_vmnet.events'
    frame #0: 0x0000000193c80c4c libsystem_kernel.dylib`writev + 8
    frame #1: 0x00000001044a66b8 socket_vmnet`_on_vmnet_packets_available + 740
    frame #2: 0x00000001044a622c socket_vmnet`__start_block_invoke.31 + 176
    frame #3: 0x000000022a5018c8 vmnet`__vmnet_interface_set_event_callback_block_invoke_3 + 132
    frame #4: 0x0000000193b0d3e8 libdispatch.dylib`_dispatch_client_callout + 20
    frame #5: 0x0000000193b108ec libdispatch.dylib`_dispatch_continuation_pop + 600
    frame #6: 0x0000000193b247f0 libdispatch.dylib`_dispatch_source_latch_and_call + 420
    frame #7: 0x0000000193b233b4 libdispatch.dylib`_dispatch_source_invoke + 832
    frame #8: 0x0000000193b10764 libdispatch.dylib`_dispatch_continuation_pop + 208
    frame #9: 0x0000000193b0ff08 libdispatch.dylib`_dispatch_async_redirect_invoke + 584
    frame #10: 0x0000000193b2000c libdispatch.dylib`_dispatch_root_queue_drain_deferred_item + 336
    frame #11: 0x0000000193b1f8f4 libdispatch.dylib`_dispatch_kevent_worker_thread + 500
    frame #12: 0x0000000193cba044 libsystem_pthread.dylib`_pthread_wqthread + 344
  thread #5
    frame #0: 0x0000000193c7ea84 libsystem_kernel.dylib`__workq_kernreturn + 8

This is not really a deadlock in socket_vmnet - it is trying to write to the VM's unix socket, and if the buffers are full, the write blocks. We can avoid the hang by using non-blocking I/O and dropping packets when the socket is not writable.
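A minimal sketch of that non-blocking write, again in Go with golang.org/x/sys/unix for illustration only (socket_vmnet is C, and sendOrDrop is a made-up helper):

package sketch

import (
	"errors"

	"golang.org/x/sys/unix"
)

// sendOrDrop writes a frame to a non-blocking socket and drops it instead of
// blocking when the peer's buffer is full. In a real implementation the fd
// would be made non-blocking once, right after accept(); it is done inline
// here only to keep the sketch short.
func sendOrDrop(fd int, frame []byte) (dropped bool, err error) {
	if err := unix.SetNonblock(fd, true); err != nil {
		return false, err
	}
	_, err = unix.Write(fd, frame)
	if errors.Is(err, unix.EAGAIN) {
		return true, nil // buffer full: drop the packet rather than hang
	}
	return false, err
}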

But why can't we write to the socket in the first place? I think the issue is in the lima hostagent - if it fails to copy a packet from the unix socket to the datagram socket, it stops reading without logging any error!

The issue comes from tcpproxy:

// HandleConn implements the Target interface.
func (dp *DialProxy) HandleConn(src net.Conn) {
	ctx := context.Background()
	var cancel context.CancelFunc
	if dp.DialTimeout >= 0 {
		ctx, cancel = context.WithTimeout(ctx, dp.dialTimeout())
	}
	dst, err := dp.dialContext()(ctx, "tcp", dp.Addr)
	if cancel != nil {
		cancel()
	}
	if err != nil {
		dp.onDialError()(src, err)
		return
	}
	defer goCloseConn(dst)

	if err = dp.sendProxyHeader(dst, src); err != nil {
		dp.onDialError()(src, err)
		return
	}
	defer goCloseConn(src)

	if ka := dp.keepAlivePeriod(); ka > 0 {
		for _, c := range []net.Conn{src, dst} {
			if c, ok := tcpConn(c); ok {
				c.SetKeepAlive(true)
				c.SetKeepAlivePeriod(ka)
			}
		}
	}

	errc := make(chan error, 2)
	go proxyCopy(errc, src, dst)
	go proxyCopy(errc, dst, src)
	<-errc
	<-errc
}

Inside proxyCopy we get the error from io.Copy and return it via the channel:

	_, err := io.Copy(dst, src)
	errc <- err

So it waits until both goroutines finish, but drops the error silently.

The most likely error is ENOBUFS - it is impossible to avoid this error on macOS when writing fast to a unix datagram socket. The write should be retried until it succeeds, or the packet should be dropped. But tcpproxy just fails the entire operation silently.

@balajiv113 What do you think?

nirs avatar Sep 30 '24 21:09 nirs

@nirs We recently handled ENOBUFS in gvisor-tap-vsock as well:

https://github.com/containers/gvisor-tap-vsock/pull/370/files

This usually happens with datagram sockets; maybe we can simply create a custom connection that overrides Write alone and retries on ENOBUFS errors.
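Something along these lines, perhaps - just a sketch of the idea, not the actual lima change; retryConn is a hypothetical wrapper that retries only Write when the kernel reports ENOBUFS, so the copy loop in tcpproxy never sees the transient error:

package sketch

import (
	"errors"
	"net"
	"syscall"
	"time"
)

// retryConn wraps a connection and retries writes that fail with ENOBUFS,
// which macOS returns when a unix datagram socket buffer is momentarily full.
type retryConn struct {
	net.Conn
}

func (c retryConn) Write(p []byte) (int, error) {
	for {
		n, err := c.Conn.Write(p)
		if !errors.Is(err, syscall.ENOBUFS) {
			return n, err
		}
		// Back off briefly and try again instead of failing the whole proxy.
		time.Sleep(100 * time.Microsecond)
	}
}

The hostagent could hand something like this to tcpproxy instead of the raw datagram connection, so io.Copy never sees the transient error.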

balajiv113 avatar Oct 02 '24 05:10 balajiv113

@balajiv113 thanks, I'll send a lima fix.

nirs avatar Oct 02 '24 10:10 nirs

The ENOBUFS errors are easily reproducible and recoverable with https://github.com/lima-vm/lima/pull/2680.

@sheelchand can you test with the lima fix?

nirs avatar Oct 03 '24 19:10 nirs

The lima issue is fixed now. We can close this when a lima version including the fix is released.

nirs avatar Oct 11 '24 13:10 nirs