Message-ID: <loom.20121023T223837-223@post.gmane.org>
Date:	Tue, 23 Oct 2012 20:56:30 +0000 (UTC)
From:	Cong Xu <davidxu06@...il.com>
To:	netdev@...r.kernel.org
Subject: Synchronization between process context and softirq context on SMP machine

I ran into some problems while doing research on improving the TCP/UDP 
performance of virtual machines (VMs). If anybody can offer help or a 
suggestion for handling my problem, I would greatly appreciate it.


On a virtual machine platform, the virtual CPUs (vCPUs) of a VM cannot always 
be online when several vCPUs share one physical CPU (pCPU); here we can simply 
assume the vCPU scheduling is round-robin. As a result, the high latency of 
TCP receive processing in the VM hurts TCP throughput significantly. To handle 
this problem, I assign each VM a virtual co-processor (co-vCPU) that is almost 
always online, and pin the VM's NIC IRQ to this co-vCPU. (If you are not 
familiar with VMs, you can simply assume that in a common OS the user-level 
application (e.g. iperf) runs on a CPU that goes offline every 30 ms, while the 
bottom half or softirq context runs on another CPU that is always online.)
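(For reference, the IRQ pinning I describe uses the standard Linux SMP IRQ 
affinity interface. The IRQ number and CPU mask below are made-up examples, 
not my actual setup:

# Suppose the VM's NIC uses IRQ 42 and the co-vCPU is CPU 1;
# mask 2 (binary 10) steers that IRQ to CPU 1 only.
echo 2 > /proc/irq/42/smp_affinity

)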

In my experiments, this method works well for UDP but does not work for TCP. I 
suspect this is due to the synchronization between process context and softirq 
context. Reading some of the TCP-layer source code in Linux, I found that both 
the softirq context (e.g. tcp_v4_rcv() in net/ipv4/tcp_ipv4.c) and the process 
context (e.g. tcp_recvmsg() in net/ipv4/tcp.c) call lock_sock()/release_sock() 
when they access the receive buffers in the kernel (receive_queue, 
backlog_queue, or prequeue). Therefore, the softirq context sometimes cannot 
access the receive buffers because they are locked by another vCPU, the one 
running the user-level receiving process (the iperf server), and that vCPU has 
been descheduled by the virtual machine monitor (VMM, or hypervisor) while 
holding the spinlock.
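
(To make the suspected interaction concrete, here is simplified C-style 
pseudocode mirroring the pattern in net/ipv4/tcp_ipv4.c and net/ipv4/tcp.c; 
the identifiers are the real kernel ones, but error handling, the exact 
argument lists, and most details are omitted. The key point is that both 
paths serialize on the spinlock sk->sk_lock.slock:

	/* Softirq context: net/ipv4/tcp_ipv4.c (simplified sketch) */
	int tcp_v4_rcv(struct sk_buff *skb)
	{
		struct sock *sk = /* ... socket lookup ... */;

		bh_lock_sock_nested(sk);	/* spins on sk->sk_lock.slock */
		if (!sock_owned_by_user(sk)) {
			/* no process owns the socket: process the segment
			 * now, possibly via the prequeue */
			tcp_v4_do_rcv(sk, skb);
		} else {
			/* a process context owns the socket: defer the
			 * segment to the backlog queue, which is drained
			 * later by release_sock() */
			sk_add_backlog(sk, skb);
		}
		bh_unlock_sock(sk);
		return 0;
	}

	/* Process context: net/ipv4/tcp.c (simplified sketch) */
	int tcp_recvmsg(/* ... */)
	{
		lock_sock(sk);		/* briefly takes sk->sk_lock.slock
					 * to set sk->sk_lock.owned */
		/* ... copy data from sk_receive_queue to user space ... */
		release_sock(sk);	/* takes slock again and drains
					 * the backlog queue */
		return err;
	}

If the vCPU running tcp_recvmsg() is descheduled by the hypervisor during one 
of the short windows in which it holds slock, the co-vCPU's softirq spins in 
bh_lock_sock_nested() until the lock holder runs again: the classic 
lock-holder preemption problem.)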


I am not sure I have described my problem clearly. In any case, any 
suggestions are welcome.

