Message-Id: <1181578994.12368.10.camel@localhost.localdomain>
Date: Mon, 11 Jun 2007 11:23:14 -0500
From: Adam Litke <agl@...ibm.com>
To: dean gaudet <dean@...tic.org>
Cc: William Lee Irwin III <wli@...omorphy.com>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
linux-kernel@...r.kernel.org
Subject: Re: 2.6.21 numa policy and huge pages not working
On Sat, 2007-06-09 at 21:10 -0700, dean gaudet wrote:
> On Tue, 15 May 2007, William Lee Irwin III wrote:
>
> > On Tue, May 15, 2007 at 10:41:06PM -0700, dean gaudet wrote:
> > > Prior to 2.6.21 I could "numactl --interleave=all" and use SHM_HUGETLB, and
> > > the interleave policy would be respected. As of 2.6.21 it doesn't seem to
> > > respect the policy on SHM_HUGETLB requests.
> > > See the test program below.
> > > Output from pre-2.6.21:
> > > 2ab196200000 interleave=0-3 file=/2\040(deleted) huge dirty=32 N0=8 N1=8 N2=8 N3=8
> > > 2ab19a200000 default file=/SYSV00000000\040(deleted) dirty=16384 active=0 N0=4096 N1=4096 N2=4096 N3=4096
> > > Output from 2.6.21:
> > > 2b49b1c00000 default file=/10\040(deleted) huge dirty=32 N3=32
> > > 2b49b5c00000 default file=/SYSV00000000\040(deleted) dirty=16384 active=0 N0=4096 N1=4096 N2=4096 N3=4096
> > > Was this an intentional behaviour change? It seems to affect only
> > > SHM_HUGETLB allocations. (I haven't tested hugetlbfs yet.)
> > > Run with "numactl --interleave=all ./shmtest".
> >
> > This was not intentional. I'll search for where it broke.
>
> OK, I've narrowed it down some... maybe.
Thanks a lot for the detailed information. I am on it.
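
In the meantime, here's a minimal sketch of a reproducer along the lines
of your shmtest (the original program wasn't quoted above, so the details
are my reconstruction; 64 MB per segment matches the dirty= counts in your
output, assuming 2 MB huge pages):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#ifndef SHM_HUGETLB
#define SHM_HUGETLB 04000               /* from <linux/shm.h> */
#endif

#define SEG_SIZE (64UL << 20)           /* 64 MB per segment (assumed) */

/* Create, attach, and fully fault in an anonymous SysV segment. */
static void *attach_seg(size_t size, int extra_flags)
{
        int id = shmget(IPC_PRIVATE, size, IPC_CREAT | 0600 | extra_flags);
        void *p;

        if (id < 0) {
                perror("shmget");
                exit(1);
        }
        p = shmat(id, NULL, 0);
        shmctl(id, IPC_RMID, NULL);     /* segment is reaped on exit */
        if (p == (void *)-1) {
                perror("shmat");
                exit(1);
        }
        memset(p, 0, size);             /* touch every page so numa_maps
                                           shows the placement */
        return p;
}

int main(void)
{
        char cmd[64];

        attach_seg(SEG_SIZE, SHM_HUGETLB);      /* huge page segment */
        attach_seg(SEG_SIZE, 0);                /* normal pages, for comparison */

        /* dump per-node page counts for both mappings */
        snprintf(cmd, sizeof(cmd), "cat /proc/%d/numa_maps", (int)getpid());
        return system(cmd);
}

Run it as "numactl --interleave=all ./shmtest" with enough huge pages
reserved via /proc/sys/vm/nr_hugepages. On a good kernel both mappings
should show roughly equal Nx= counts across nodes; on 2.6.21 the huge
page segment ends up on a single node, as in your output above.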
--
Adam Litke - (agl at us.ibm.com)
IBM Linux Technology Center