Date:	Sat, 25 Sep 2010 11:12:25 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	David Miller <davem@...emloft.net>
Cc:	romieu@...zoreil.com, sgruszka@...hat.com, netdev@...r.kernel.org
Subject: Re: [PATCH 1/2 -next] r8169: allocate with GFP_KERNEL flag when
 able to sleep

On Saturday, 25 September 2010 at 00:13 -0700, David Miller wrote:
> From: Eric Dumazet <eric.dumazet@...il.com>
> Date: Sat, 25 Sep 2010 08:06:37 +0200
> 
> > Patch solves the suspend/resume, probably, but as soon as we receive
> > traffic, we can hit the allocation error anyway...
> 
> It allocates 1536 + N, where N can be NET_IP_ALIGN, or some small
> value like 8.
> 
> This is in the same ballpark as what tg3 allocates for RX buffers.
> 
> SLAB/SLUB/whatever just wants multi-order page allocations even
> for SKBs which are about this size.
> 
> Furthermore, the sleeping allocations we do at ->open() time to
> allocate the entire RX ring all at once will buddy up a lot of
> pages and make 1-order allocs more likely.

Yes, I forgot about this SLUB problem (I ended up using SLAB on servers
because of this order-3 problem with kmalloc(2048)).
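
(Roughly: the ~1536+N bytes of frame data get sizeof(struct skb_shared_info)
added on top by __alloc_skb(), which lands the request in the kmalloc-2048
cache; with SLUB's default min_objects heuristic that cache sits on 32KB,
i.e. order-3, slabs, so growing the cache wants 8 contiguous pages -- easy
at ->open() time, not so easy from softirq on a busy server.)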

bnx2 uses GFP_KERNEL allocations at init time, but tg3 uses GFP_ATOMIC
(because tp->lock is held).
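
To make the context difference concrete, the pattern I have in mind looks
roughly like this -- only a sketch, with made-up foo_* names, a minimal
private struct and an arbitrary ring size, not the actual bnx2/tg3 code:
the allocation helper takes a gfp_t, so ->open() can use GFP_KERNEL while
the NAPI-time refill stays GFP_ATOMIC.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define NUM_RX_DESC 256				/* arbitrary ring size for the sketch */

struct foo_private {
	struct net_device	*dev;
	unsigned int		rx_buf_sz;	/* ~1536 bytes for an r8169-like MTU */
	struct sk_buff		*rx_skb[NUM_RX_DESC];
};

static int foo_alloc_rx_skb(struct foo_private *tp, unsigned int i, gfp_t gfp)
{
	struct sk_buff *skb;

	skb = __netdev_alloc_skb(tp->dev, tp->rx_buf_sz + NET_IP_ALIGN, gfp);
	if (!skb)
		return -ENOMEM;

	skb_reserve(skb, NET_IP_ALIGN);
	tp->rx_skb[i] = skb;
	/* ... dma_map_single() and descriptor setup elided ... */
	return 0;
}

/* ->open() path: process context, nothing held, so sleeping is fine. */
static int foo_init_rx_ring(struct foo_private *tp)
{
	unsigned int i;

	for (i = 0; i < NUM_RX_DESC; i++) {
		int err = foo_alloc_rx_skb(tp, i, GFP_KERNEL);

		if (err)
			return err;	/* caller unwinds the ring */
	}
	return 0;
}

/* NAPI poll path: softirq context (tg3 additionally holds tp->lock here). */
static void foo_rx_refill(struct foo_private *tp, unsigned int i)
{
	/* On failure just leave the slot empty and retry on the next poll. */
	foo_alloc_rx_skb(tp, i, GFP_ATOMIC);
}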

The current problem with r8169 is that it copies all incoming frames.
I guess nobody noticed or cares about the performance drop.
(But commit c0cd884a is recent (2.6.34), so this is not yet in
production...)
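
For reference, the copy path boils down to something like this (again just
a sketch with hypothetical foo_* names, reusing the foo_private struct from
the sketch above, not the actual r8169 rx handler):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Copy-always receive: the DMA buffer stays in the ring and every frame
 * costs one allocation plus one full memcpy -- fine for small packets,
 * painful at 1500 bytes and high packet rates.
 */
static struct sk_buff *foo_rx_copy(struct foo_private *tp,
				   const void *rx_buf, unsigned int pkt_size)
{
	struct sk_buff *skb;

	skb = netdev_alloc_skb_ip_align(tp->dev, pkt_size);
	if (!skb)
		return NULL;	/* frame dropped, the ring buffer is reused anyway */

	/* (dma_sync_single_for_cpu() on the ring buffer elided) */
	skb_copy_to_linear_data(skb, rx_buf, pkt_size);
	skb_put(skb, pkt_size);
	return skb;		/* caller sets the protocol and calls netif_receive_skb() */
}

The non-copying alternative is to hand the ring skb up the stack and
allocate a replacement in its place, which is exactly where the GFP_ATOMIC
refill and its order-3 failures come back in.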

