Message-Id: <1205863811.5032.26.camel@localhost>
Date: Tue, 18 Mar 2008 14:10:11 -0400
From: Lee Schermerhorn <Lee.Schermerhorn@...com>
To: Paul Menage <menage@...gle.com>
Cc: linux-kernel <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Rik van Riel <riel@...hat.com>
Subject: Re: [BUG?] 2.6.25-rc[23]-mm1 cgroup list corruption under load
with VM Scalability patches
On Wed, 2008-03-05 at 13:09 -0800, Paul Menage wrote:
> On Wed, Mar 5, 2008 at 11:37 AM, Lee Schermerhorn
> <Lee.Schermerhorn@...com> wrote:
> > list_del corruption in cgroup_exit() on a 16-cpu, 32GB ia64 NUMA platform.
> >
> > I've been seeing this for a while now, but we've had known problems
> > [page leaks, ...] with the VM scalability series. Now the system
> > appears to be running very well with these patches under stress loads
> > that previously would hang the system or OOM-kill tests even with plenty
> > of swap space left. Eventually [after 40-45 minutes], I hit a list
> > corruption in
> > cgroup_exit().
> >
> > I can't say for sure that our patches aren't causing this, but without
> > the splitlru+noreclaim patches I've been unable to keep the system up
> > long enough under the stress load to hit the problem.
> >
> > I looked in the mailing lists and found one other thread related to
> > cgroup list corruption:
> >
> > http://marc.info/?l=linux-kernel&m=119263666823236&w=4
> >
> > Paul looked into this and couldn't see anywhere that the lists are
> > manipulated w/o holding the css_set lock. I concur. I did find one
> > possible race in enabling the task cg_lists [see patch below], but this
> > did not solve the problem. And I did not hit the printk in the patch.
>
> No, that's not a (malign) race - cgroup_enable_task_cg_lists() is
> idempotent. In the case that you see, every thread seen in the
> do_each_thread() loop will already have a non-empty cg_list field, so
> it will be a no-op. So adding the additional check isn't wrong but
> it's not needed.
>
> I'll look again at the code to try to figure out where the problem is.
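[ For reference, a condensed sketch of the idempotent pattern Paul
describes above. The identifiers (cgroup_enable_task_cg_lists,
css_set_lock, cg_list, use_task_css_set_links) follow the 2.6.25-era
cgroup code, but the body is simplified for illustration rather than
being the verbatim kernel source: each task is linked only if its
cg_list is still empty, and only while holding css_set_lock, so a
second or concurrent later call is a harmless no-op. ]

	/*
	 * Condensed illustration, not verbatim kernel source: the
	 * list_add() is guarded by a list_empty() check under
	 * css_set_lock, so tasks that are already linked are skipped
	 * and calling this function again changes nothing.
	 */
	static void cgroup_enable_task_cg_lists(void)
	{
		struct task_struct *p, *g;

		write_lock(&css_set_lock);
		use_task_css_set_links = 1;
		do_each_thread(g, p) {
			task_lock(p);
			if (list_empty(&p->cg_list))
				list_add(&p->cg_list, &p->cgroups->tasks);
			task_unlock(p);
		} while_each_thread(g, p);
		write_unlock(&css_set_lock);
	}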
Paul:
Just wanted to let you know that I did manage to hit this list
corruption--same stack trace: cgroup_exit() from do_exit() ...--on
2.6.25-rc3-mm1 WITHOUT any of the VM scalability [split-lru/noreclaim-mlock]
patches applied. This occurred ~9 minutes into a fairly heavy 'usex'
load on my 16-cpu ia64 platform.
An x86_64 version [w/ prebuilt binaries of the tools used] of the stress
load is available here:
http://free.linux.hp.com/~lts/Temp/
There's a README there describing the contents of the tarball. I
haven't tried this load on x86_64 recently, so I don't know whether it
will trigger the problem there.
Lee