Date:   Fri, 11 Nov 2016 10:07:44 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     "Michael S. Tsirkin" <mst@...hat.com>
Cc:     netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] tuntap: rx batching



On Nov 10, 2016, at 00:38, Michael S. Tsirkin wrote:
> On Wed, Nov 09, 2016 at 03:38:31PM +0800, Jason Wang wrote:
>> The backlog was used for tuntap rx, but it could only process one
>> packet at a time since it was scheduled synchronously from sendmsg()
>> in process context. This leads to poor cache utilization, so this
>> patch tries to do some batching before calling rx NAPI. This is done
>> through:
>>
>> - accept MSG_MORE as a hint from the sendmsg() caller: if it is set,
>>    batch the packet temporarily in a linked list and submit them all
>>    once MSG_MORE is cleared.
>> - implement a tuntap-specific NAPI handler for processing this kind
>>    of batching. (This could be done by extending the backlog to
>>    support skb lists, but using a tun-specific handler looks cleaner
>>    and is easier to extend in the future.)
>>
>> Signed-off-by: Jason Wang <jasowang@...hat.com>
> So why do we need an extra queue?

The idea was borrowed from the backlog to allow some kind of bulking 
and to avoid taking the spinlock on each dequeue.

>   This is not what hardware devices do.
> How about adding the packet to queue unconditionally, deferring
> signalling until we get sendmsg without MSG_MORE?

Then you need to touch the spinlock when dequeuing each packet.

>
>
>> ---
>>   drivers/net/tun.c | 71 ++++++++++++++++++++++++++++++++++++++++++++++++++-----
>>   1 file changed, 65 insertions(+), 6 deletions(-)
>>

[...]

>>   	rxhash = skb_get_hash(skb);
>> -	netif_rx_ni(skb);
>> +	skb_queue_tail(&tfile->socket.sk->sk_write_queue, skb);
>> +
>> +	if (!more) {
>> +		local_bh_disable();
>> +		napi_schedule(&tfile->napi);
>> +		local_bh_enable();
> Why do we need to disable bh here? I thought napi_schedule can
> be called from any context.

Yes, it's unnecessary. Will remove.

Thanks
