Date:	Sun, 23 Mar 2008 19:48:29 +0100
From:	Eric Dumazet <dada1@...mosbay.com>
To:	David Miller <davem@...emloft.net>
CC:	netdev@...r.kernel.org
Subject: Re: [RFC,PATCH] loopback: calls netif_receive_skb() instead of netif_rx()

David Miller wrote:
> From: Eric Dumazet <dada1@...mosbay.com>
> Date: Sat, 01 Mar 2008 11:26:17 +0100
> 
>> [PATCH] loopback: calls netif_receive_skb() instead of netif_rx()
> 
> Eric, did you get a chance to stress this thing out for kernel
> stack usage like I asked?
> 

I noticed some paths in the kernel are very stack-aggressive, and on i386 with
CONFIG_4KSTACKS we were really in dangerous territory, even without my patch.

What we call 4K stacks is in fact 4K - sizeof(struct task_struct), so a little
more than 2K. (And that is not counting some insane configurations where
struct task_struct takes 3.5 KB; see CONFIG_LATENCYTOP for an example.)

So I cooked a different patch that explicitly tests the available stack space
instead of counting a depth value.
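
To make that concrete, here is a minimal sketch of such a test, assuming the
stack grows down; the helper name is mine and this is not the attached patch:

#include <linux/sched.h>

/* Hypothetical helper: estimate how much stack the current task has
 * left. The address of a local variable approximates the current
 * stack pointer; end_of_stack() gives the lowest usable address when
 * the stack grows down. */
static inline unsigned long stack_space_left(void)
{
	unsigned long marker;

	return (unsigned long)&marker - (unsigned long)end_of_stack(current);
}

On a CONFIG_STACK_GROWSUP architecture the subtraction would have to be
reversed, which is exactly where the issue below comes in.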

The problem is that this patch depends on CONFIG_STACK_GROWSUP, and I hit an
issue with it
(see http://kerneltrap.org/mailarchive/linux-kernel/2008/3/5/1079774)
to which I got no answer from Kyle or others.

So I had to disable the optimisation on the HPPA arch (it seems to be the only
arch that sets CONFIG_STACK_GROWSUP).

[PATCH] loopback: calls netif_receive_skb() instead of netif_rx()

The loopback transmit function loopback_xmit() currently calls netif_rx() to
queue an skb on the softnet queue, and arms a softirq so that the skb can be
handled later.

This has a cost on SMP, because we need to hold a reference on the device and
release it when the softirq dequeues the packet.
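
Conceptually, the deferred path costs something like this per packet (a
simplified sketch of the netif_rx()/process_backlog() bookkeeping, not the
literal net/core/dev.c code):

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/interrupt.h>

/* Sketch of the per-packet cost of the deferred (netif_rx) path. */
static int netif_rx_cost_sketch(struct sk_buff *skb,
				struct sk_buff_head *backlog)
{
	dev_hold(skb->dev);		/* atomic_inc(&dev->refcnt) */
	__skb_queue_tail(backlog, skb);	/* per-CPU softnet input queue */
	raise_softirq(NET_RX_SOFTIRQ);	/* defer to softirq context */
	return NET_RX_SUCCESS;
	/* Later, process_backlog() dequeues the skb, calls
	 * netif_receive_skb(skb), then dev_put(skb->dev). */
}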


The following patch calls netif_receive_skb() directly and avoids a lot of
atomic operations (atomic_inc(&dev->refcnt),
test_and_set_bit(NAPI_STATE_SCHED, &n->state), ...,
atomic_dec(&dev->refcnt), ...), cache line ping-pongs on the device refcnt,
and also the softirq overhead.

This gives a nice boost on tbench, for example (5% on my machine).

We check the available free stack space to decide whether to call
netif_receive_skb() directly or to queue the packet for later softirq
handling, once stack space is back to an acceptable level.

Signed-off-by: Eric Dumazet <dada1@...mosbay.com>
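
For illustration only (the real change is in the attached loopback.patch), the
decision could look roughly like this, reusing the hypothetical
stack_space_left() helper above; the 1KB threshold is an arbitrary assumption,
not a value taken from the patch:

#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define LOOPBACK_STACK_THRESHOLD	1024	/* illustrative value */

static int loopback_xmit(struct sk_buff *skb, struct net_device *dev)
{
	skb_orphan(skb);
	skb->protocol = eth_type_trans(skb, dev);

	if (stack_space_left() > LOOPBACK_STACK_THRESHOLD)
		netif_receive_skb(skb);	/* process inline, no softirq cost */
	else
		netif_rx(skb);		/* fall back: queue for softirq */

	return 0;
}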


[Attachment: loopback.patch (text/plain, 992 bytes)]
