Message-Id: <20080402085437.7d9abf1f.akpm@linux-foundation.org>
Date: Wed, 2 Apr 2008 08:54:37 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Nick Piggin <nickpiggin@...oo.com.au>
Cc: Chris Snook <csnook@...hat.com>,
Dave Jones <davej@...emonkey.org.uk>,
Linux Kernel <linux-kernel@...r.kernel.org>,
netdev@...r.kernel.org, Peter Zijlstra <peterz@...radead.org>
Subject: Re: GFP_ATOMIC page allocation failures.
On Wed, 2 Apr 2008 20:12:58 +1100 Nick Piggin <nickpiggin@...oo.com.au> wrote:
> On Wednesday 02 April 2008 18:56, Andrew Morton wrote:
>
> > > Limiting this to once per boot should suffice for debugging purposes.
> > > Even if you manage to concoct a bug that always survives the first
> > > failure, you should be able to take the hint when you keep seeing this
> > > in dmesg.
> >
> > The appropriate thing to do here is to convert known-good drivers (such as
> > e1000[e]) to use __GFP_NOWARN.
> >
> > Unfortunately netdev_alloc_skb() went and assumed GFP_ATOMIC, but I guess
> > we can dive below the covers and use __netdev_alloc_skb():
>
> It's still actually nice to know how often it is happening, even for
> these known-good sites, because too much can indicate a problem and
> mean that you could actually bring performance up by tuning some things.
Yes, it's useful debugging. It tells us when we mucked up the page
allocator.
It also tells us when we mucked up the net driver - I doubt if we (or at
least, I) would have discovered that e1000 does a 32k allocation for a 5k(?)
frame if this warning wasn't coming out.
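The conversion Andrew describes above, dropping from netdev_alloc_skb() (which hardcodes GFP_ATOMIC) to __netdev_alloc_skb() so the driver can pass __GFP_NOWARN, could look roughly like the sketch below. The surrounding function and the length variable are illustrative, not taken from e1000 itself:

```c
/*
 * Hypothetical RX-refill helper for a driver whose allocation
 * failures are known to be recoverable.  Instead of
 *
 *	skb = netdev_alloc_skb(dev, len);
 *
 * which always allocates with plain GFP_ATOMIC and so triggers the
 * page allocation failure warning, call __netdev_alloc_skb()
 * directly and OR in __GFP_NOWARN to suppress the backtrace:
 */
static struct sk_buff *rx_refill_skb(struct net_device *dev,
				     unsigned int len)
{
	struct sk_buff *skb;

	skb = __netdev_alloc_skb(dev, len, GFP_ATOMIC | __GFP_NOWARN);
	if (!skb)
		return NULL;	/* caller retries on the next refill pass */

	return skb;
}
```

The trade-off is exactly the one debated in this thread: __GFP_NOWARN silences the diagnostic for that call site only, so a driver converted this way no longer contributes to the dmesg signal Nick wants to keep.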
> So I think that the messages should stay, and they should print out
> some header to say that it is only a warning and if not happening
> too often then it is not a problem, and if it is continually
> happening then please try X or Y or post a message to lkml...
Yes, I suppose so.
hm, tricky.