Date:	Mon, 20 Feb 2012 13:46:03 -0600
From:	Anthony Liguori <aliguori@...ibm.com>
To:	"Michael S. Tsirkin" <mst@...hat.com>
CC:	Tom Lendacky <toml@...ibm.com>, netdev@...r.kernel.org,
	Cristian Viana <vianac@...ibm.com>
Subject: Re: [PATCH 1/2] vhost: allow multiple workers threads

On 02/20/2012 01:27 PM, Michael S. Tsirkin wrote:
> On Mon, Feb 20, 2012 at 09:50:37AM -0600, Tom Lendacky wrote:
>> "Michael S. Tsirkin" <mst@...hat.com> wrote on 02/19/2012 08:41:45 AM:
>>
>>> From: "Michael S. Tsirkin" <mst@...hat.com>
>>> To: Anthony Liguori/Austin/IBM@...US
>>> Cc: netdev@...r.kernel.org, Tom Lendacky/Austin/IBM@...US, Cristian
>>> Viana <vianac@...ibm.com>
>>> Date: 02/19/2012 08:42 AM
>>> Subject: Re: [PATCH 1/2] vhost: allow multiple workers threads
>>>
>>> On Fri, Feb 17, 2012 at 05:02:05PM -0600, Anthony Liguori wrote:
>>>> This patch allows vhost to have multiple worker threads for devices
>>>> such as virtio-net which may have multiple virtqueues.
>>>>
>>>> Since virtqueues are a lockless ring queue, in an ideal world data is
>>>> being produced by the producer as fast as data is being consumed by
>>>> the consumer.  These loops will continue to consume data until none
>>>> is left.
>>>>
>>>> vhost currently multiplexes the consumer side of the queue on a
>>>> single thread by attempting to read from the queue until everything
>>>> is read or it cannot process anymore.  This means that activity on
>>>> one queue may stall another queue.
>>>
>>> There's actually an attempt to address this: look up
>>> VHOST_NET_WEIGHT in the code. I take it, this isn't effective?
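
For reference, the check in drivers/vhost/net.c looks roughly like this
(paraphrased from memory, with the surrounding loop elided):

	/* Max number of bytes transferred before requeueing the job.
	 * This is meant to prevent one virtqueue from starving others. */
	#define VHOST_NET_WEIGHT 0x80000

	/* at the bottom of the handle_tx()/handle_rx() loops */
	total_len += len;
	if (unlikely(total_len >= VHOST_NET_WEIGHT)) {
		vhost_poll_queue(&vq->poll);	/* requeue and yield */
		break;
	}

It bounds how long one queue can hog the worker, but both queues still
share the single thread, so they never make progress in parallel.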
>>>
>>>> This is exacerbated when using any form of polling to read from the
>>>> queues (as we'll introduce in the next patch).  By spawning a thread
>>>> per-virtqueue, this is addressed.
>>>>
>>>> The only problem with this patch right now is how the wake up of the
>>>> threads is done.  It's essentially a broadcast and we have seen lock
>>>> contention as a result.
>>>
>>> On which lock?
>>
>> The mutex lock in the vhost_virtqueue struct.  This really shows up
>> when running with patch 2/2 and increasing the spin_threshold.  Both
>> threads wake up and try to acquire the mutex.  As the spin_threshold
>> increases you end up with one of the threads getting blocked for a
>> longer and longer time and unable to do any RX processing that might
>> be needed.
>>
>> Tom
>
> Weird, I had the impression each thread handles one vq.
> Isn't this the design?

Not the way the code is structured today.  There is a single producer/consumer
work queue, and both vq notifications and other work items may be placed on it.
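
Roughly, from drivers/vhost/vhost.c (simplified from memory; the sequence
counters used for flushing are omitted):

	struct vhost_dev {
		...
		spinlock_t work_lock;
		struct list_head work_list;	/* one queue for everything */
		struct task_struct *worker;	/* one thread draining it */
	};

	/* Both vq kicks and other work items funnel through here. */
	void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
	{
		unsigned long flags;

		spin_lock_irqsave(&dev->work_lock, flags);
		if (list_empty(&work->node)) {
			list_add_tail(&work->node, &dev->work_list);
			wake_up_process(dev->worker);
		}
		spin_unlock_irqrestore(&dev->work_lock, flags);
	}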

With a more invasive refactoring, it would be possible to use three threads:
one for background tasks and then one for each queue.
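
Something like the following, purely as a sketch (none of these fields
exist today):

	struct vhost_virtqueue {
		...
		/* hypothetical: a private worker per vq */
		struct task_struct *worker;
		struct list_head work_list;
		spinlock_t work_lock;
	};

vq kicks would land on that vq's private list, while dev-wide work
(e.g. flushes) would keep a third, shared thread, so that background
tasks couldn't stall either queue.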

But I assumed that the reason the code was originally structured this way
was that you saw some value in having a single producer/consumer queue for
everything...

Regards,

Anthony Liguori

