Message-ID: <20110504025609.GA8532@localhost>
Date: Wed, 4 May 2011 10:56:09 +0800
From: Wu Fengguang <fengguang.wu@...el.com>
To: Dave Young <hidave.darkstar@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Minchan Kim <minchan.kim@...il.com>,
linux-mm <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Mel Gorman <mel@...ux.vnet.ibm.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Christoph Lameter <cl@...ux.com>,
Dave Chinner <david@...morbit.com>,
David Rientjes <rientjes@...gle.com>
Subject: Re: [RFC][PATCH] mm: cut down __GFP_NORETRY page allocation failures
On Wed, May 04, 2011 at 10:32:01AM +0800, Dave Young wrote:
> On Wed, May 4, 2011 at 9:56 AM, Dave Young <hidave.darkstar@...il.com> wrote:
> > On Thu, Apr 28, 2011 at 9:36 PM, Wu Fengguang <fengguang.wu@...el.com> wrote:
> >> Concurrent page allocations are suffering from high failure rates.
> >>
> >> On an 8p, 3GB RAM test box, when reading 1000 sparse files of size 1GB,
> >> the page allocation failures are
> >>
> >> nr_alloc_fail 733 # interleaved reads by 1 single task
> >> nr_alloc_fail 11799 # concurrent reads by 1000 tasks
> >>
> >> The concurrent read test script is:
> >>
> >> for i in `seq 1000`
> >> do
> >> truncate -s 1G /fs/sparse-$i
> >> dd if=/fs/sparse-$i of=/dev/null &
> >> done
> >>
> >
> > With a Core2 Duo, 3GB RAM and no swap partition, I cannot reproduce the alloc failures.
>
> Unsetting CONFIG_SCHED_AUTOGROUP and CONFIG_CGROUP_SCHED seems to affect
> the test results; now I see several nr_alloc_fail events (the dd tasks
> have not finished yet):
>
> dave@...kstar-32:$ grep fail /proc/vmstat
> nr_alloc_fail 4
> compact_pagemigrate_failed 0
> compact_fail 3
> htlb_buddy_alloc_fail 0
> thp_collapse_alloc_fail 4
>
> So the result is related to the CPU scheduler configuration.
Good catch! My kernel also has CONFIG_CGROUP_SCHED and
CONFIG_SCHED_AUTOGROUP disabled.
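
For anyone else trying to reproduce this, it helps to sample the failure
counters while the dd tasks run. A minimal sketch; note that nr_alloc_fail
only exists with this patch applied, while the other fields are standard
/proc/vmstat counters:

# Sample the allocation/compaction failure counters every 5 seconds
# while the concurrent readers above are running.
while true
do
	grep -E 'nr_alloc_fail|compact_fail|thp_collapse_alloc_fail' /proc/vmstat
	echo ----
	sleep 5
done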
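
It is also worth double checking how the running kernel was configured,
since the two scheduler options above clearly change the results. A
minimal sketch, assuming the config is exposed under /boot or via
CONFIG_IKCONFIG_PROC (both are distro-dependent):

# Report the scheduler cgroup/autogroup options of the running kernel,
# from whichever config source this system provides.
if [ -r /boot/config-$(uname -r) ]; then
	grep -E 'CONFIG_SCHED_AUTOGROUP|CONFIG_CGROUP_SCHED' /boot/config-$(uname -r)
elif [ -r /proc/config.gz ]; then
	zgrep -E 'CONFIG_SCHED_AUTOGROUP|CONFIG_CGROUP_SCHED' /proc/config.gz
fi
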
Thanks,
Fengguang