Message-ID: <1302662256.2811.27.camel@edumazet-laptop>
Date: Wed, 13 Apr 2011 04:37:36 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Changli Gao <xiaosuo@...il.com>,
Américo Wang <xiyou.wangcong@...il.com>,
Jiri Slaby <jslaby@...e.cz>, azurIt <azurit@...ox.sk>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, Jiri Slaby <jirislaby@...il.com>
Subject: Re: Regression from 2.6.36
On Tue, 2011-04-12 at 18:31 -0700, Andrew Morton wrote:
> On Wed, 13 Apr 2011 09:23:11 +0800 Changli Gao <xiaosuo@...il.com> wrote:
>
> > On Wed, Apr 13, 2011 at 6:49 AM, Andrew Morton
> > <akpm@...ux-foundation.org> wrote:
> > >
> > > It's somewhat unclear (to me) what caused this regression.
> > >
> > > Is it because the kernel is now doing large kmalloc()s for the fdtable,
> > > and this makes the page allocator go nuts trying to satisfy high-order
> > > page allocation requests?
> > >
> > > Is it because the kernel now will usually free the fdtable
> > > synchronously within the rcu callback, rather than deferring this to a
> > > workqueue?
> > >
> > > The latter seems unlikely, so I'm thinking this was a case of
> > > high-order-allocations-considered-harmful?
> > >
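For context: since the 2.6.36 change, alloc_fdmem() tries kmalloc()
first for every size and falls back to vmalloc() only on failure.
Roughly (a from-memory sketch, the exact flags may differ):

        #include <linux/slab.h>
        #include <linux/vmalloc.h>

        static void *alloc_fdmem(size_t size)
        {
                /* The high-order GFP_KERNEL attempt is what can push
                 * the page allocator into reclaim on a fragmented
                 * machine, before we ever fall back to vmalloc(). */
                void *data = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);

                if (data)
                        return data;
                return vmalloc(size);
        }
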
> >
> > Maybe, but I am not sure. Perhaps my patch causes too much internal
> > fragmentation. For example, when 5 pages are requested, 8 pages are
> > allocated and 3 of them are wasted, which eventually leads to memory
> > thrashing.
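
To make that arithmetic concrete: the buddy allocator only hands out
power-of-two blocks, so a 5-page request is served from an order-3
(8-page) block. A quick illustration (sketch only, show_rounding() is
a made-up helper):

        #include <linux/gfp.h>          /* get_order(), PAGE_SIZE */
        #include <linux/printk.h>

        static void show_rounding(void)
        {
                size_t want = 5 * PAGE_SIZE;
                unsigned int order = get_order(want);        /* 3 */
                size_t waste = (PAGE_SIZE << order) - want;  /* 3 pages */

                pr_info("order-%u block, %zu bytes unused\n", order, waste);
        }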
>
> That theory sounds less likely, but could be tested by using
> alloc_pages_exact().
>
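For reference, alloc_pages_exact() hands the tail of the rounded-up
block straight back to the buddy allocator, so only the pages actually
requested stay pinned. A minimal sketch of that test (error handling
trimmed, grab_table()/drop_table() are made-up names):

        #include <linux/errno.h>
        #include <linux/gfp.h>

        static void *table;

        static int grab_table(void)
        {
                /* Allocates an order-3 block for the 5 pages, then
                 * immediately frees the 3 trailing pages. */
                table = alloc_pages_exact(5 * PAGE_SIZE,
                                          GFP_KERNEL | __GFP_NOWARN);
                return table ? 0 : -ENOMEM;
        }

        static void drop_table(void)
        {
                /* Must pass the same size used at allocation time. */
                free_pages_exact(table, 5 * PAGE_SIZE);
        }
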
Very unlikely, though, since fdtable sizes are powers of two, unless
you hit sysctl_nr_open and it was changed (its default value is 2^20).
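
The sizing in alloc_fdtable() looks roughly like this (simplified from
memory; the real code also keeps the count a multiple of BITS_PER_LONG
when it clamps):

        #include <linux/fdtable.h>      /* sysctl_nr_open */
        #include <linux/log2.h>         /* roundup_pow_of_two() */

        static unsigned int fdtable_slots(unsigned int nr)
        {
                /* Round the slot count up to a power of two, so the
                 * buffer handed to the allocator is a power-of-two
                 * number of bytes as well. */
                nr = roundup_pow_of_two(nr + 1);
                if (nr > sysctl_nr_open)
                        nr = sysctl_nr_open;    /* simplified clamp */
                return nr;
        }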
--