Message-ID: <571F1330.7030504@redhat.com>
Date:	Tue, 26 Apr 2016 15:05:20 +0800
From:	Jason Wang <jasowang@...hat.com>
To:	Pankaj Gupta <pagupta@...hat.com>
Cc:	mst@...hat.com, kvm@...r.kernel.org,
	virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] vhost: lockless enqueuing



On 04/26/2016 02:24 PM, Pankaj Gupta wrote:
> Hi Jason,
>
> Overall the patches look good. Just one doubt, noted below:
>> We currently use a spinlock to synchronize the work list, which may
>> cause unnecessary contention. So this patch switches to an llist to
>> remove this contention. Pktgen tests show about a 5% improvement:
>>
>> Before:
>> ~1300000 pps
>> After:
>> ~1370000 pps
>>
>> Signed-off-by: Jason Wang <jasowang@...hat.com>
>> ---
>>  drivers/vhost/vhost.c | 52
>>  +++++++++++++++++++++++++--------------------------
>>  drivers/vhost/vhost.h |  7 ++++---
>>  2 files changed, 29 insertions(+), 30 deletions(-)
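
For the quoted "lockless enqueuing" description above, a minimal userspace
sketch may help. This is not the kernel's <linux/llist.h>; the names
(struct work_item, work_enqueue, list_head) are made up for illustration.
It only shows the core idea: producers push onto a singly linked list by
swinging an atomic head pointer with compare-and-swap, so no spinlock is
taken on the enqueue path.

/* Hypothetical userspace sketch of llist-style lockless enqueue. */
#include <stdatomic.h>
#include <stddef.h>

struct work_item {
	struct work_item *next;
	void (*fn)(struct work_item *);
};

/* Shared list head; NULL means the list is empty. */
static _Atomic(struct work_item *) list_head = NULL;

/* Push to the head of the list without taking a lock, like llist_add(). */
static void work_enqueue(struct work_item *w)
{
	struct work_item *old = atomic_load(&list_head);

	do {
		w->next = old;
		/* Retry if another producer moved the head in the meantime. */
	} while (!atomic_compare_exchange_weak(&list_head, &old, w));
}

A consumer then detaches the whole list in one atomic exchange (the
llist_del_all() step in the diff below) and walks it privately, which is
why the enqueue side no longer contends on a spinlock.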

[...]

>> -		if (work) {
>> +		node = llist_del_all(&dev->work_list);
>> +		if (!node)
>> +			schedule();
>> +
>> +		node = llist_reverse_order(node);
> Can we avoid llist reverse here?
>

Probably not, because:

- we must process the work items in exactly the same order as they were
queued, otherwise flush won't work;
- llist can only add a node to the head of the list, so the detached list
comes back newest-first (see the sketch below).

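A rough userspace sketch of that point, reusing the hypothetical struct
work_item from the sketch above (again, not the kernel's
llist_reverse_order(), just an illustration): because every producer
pushes at the head, queuing A, B, C in that order leaves the detached
list as C -> B -> A, and one pass of pointer reversal restores the FIFO
order that flush depends on.

/* Reverse a detached singly linked list so the oldest work comes first. */
static struct work_item *work_reverse(struct work_item *head)
{
	struct work_item *prev = NULL;

	while (head) {
		struct work_item *next = head->next;

		head->next = prev;	/* re-point this node at the already-reversed part */
		prev = head;
		head = next;
	}
	return prev;	/* new head: oldest queued item first */
}

The reversal is O(n) over the detached batch, but it only runs in the
single consumer (the vhost worker thread), so it adds no contention back
on the enqueue path.
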
Thanks
