[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:	Thu, 3 Jun 2010 15:01:08 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	"David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: [Bugme-new] [Bug 16083] New: swapper: Page allocation failure

On Thu, 03 Jun 2010 23:37:16 +0200
Eric Dumazet <eric.dumazet@...il.com> wrote:

> On Thursday, June 3, 2010 at 23:13 +0200, Eric Dumazet wrote:
> 
> > MTU=9000 on a system with 4K pages... Oh well...
> > 
> > maybe net/ipv6/mcast.c should cap dev->mtu to PAGE_SIZE-128 or
> > something, so that order-0 allocations are done.
> > 
> > 
> 
> Something like this patch (completely untested) :
> 
> [PATCH] ipv6: avoid high order allocations
> 
> With mtu=9000, mld_newpack() uses order-2 GFP_ATOMIC allocations, which
> are very unreliable on machines where PAGE_SIZE=4K.
> 
> Limit allocated skbs to at most one page (order-0 allocations).
> 

Maybe - I wouldn't know how desirable that is from the
impact-on-efficiency POV.  But I think most failures I've seen are for
regular old TCP/IPv4.  Often with e1000, which does larger-than-needed
allocations for (IIRC) weird alignment requirements.

> ---
>  net/ipv6/mcast.c |    5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
> index 59f1881..3484794 100644
> --- a/net/ipv6/mcast.c
> +++ b/net/ipv6/mcast.c
> @@ -1356,7 +1356,10 @@ static struct sk_buff *mld_newpack(struct net_device *dev, int size)
>  		     IPV6_TLV_PADN, 0 };
>  
>  	/* we assume size > sizeof(ra) here */
> -	skb = sock_alloc_send_skb(sk, size + LL_ALLOCATED_SPACE(dev), 1, &err);
> +	size += LL_ALLOCATED_SPACE(dev);
> +	/* limit our allocations to order-0 page */
> +	size = min(size, SKB_MAX_ORDER(0, 0));
> +	skb = sock_alloc_send_skb(sk, size, 1, &err);
>  
>  	if (!skb)
>  		return NULL;

An alternative which retains any performance benefit from the order-2
allocation would be:

	p = alloc_pages(__GFP_NOWARN|..., 2);
	if (!p)
		p = alloc_pages(..., 0);

if you see what I mean.

This would also fix any retry/timeout-related stalls which people might
experience when the order-2 allocation fails, so it might make things
better in general.


--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html