Message-ID: <20121114122404.GN8218@suse.de>
Date: Wed, 14 Nov 2012 12:24:04 +0000
From: Mel Gorman <mgorman@...e.de>
To: Ingo Molnar <mingo@...nel.org>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Andrea Arcangeli <aarcange@...hat.com>,
Rik van Riel <riel@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
Hugh Dickins <hughd@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux-MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 00/31] Foundation for automatic NUMA balancing V2
On Tue, Nov 13, 2012 at 06:27:34PM +0100, Ingo Molnar wrote:
>
> * Mel Gorman <mgorman@...e.de> wrote:
>
> > > I'd also like to add another, structural side note: you
> > > mixed new vm-stats bits into the whole queue, needlessly
> > > blowing up the size and the mm/ specific portions of the
> > > tree. I'd suggest to post and keep those bits separately,
> > > preferably on top of what we have already once it has
> > > settled down. I'm keeping the 'perf bench numa' bits
> > > separate as well.
> >
> > The stats parts are fairly late in the queue. I noticed they
> > break the build for !CONFIG_BALANCE_NUMA but it was trivially
> > resolved. [...]
>
> Ok - the vm-stats bits are the last larger item remaining that
> I've seen - could you please redo any of your changes on top of
> the latest tip:numa/core tree, to make them easier for me to
> pick up?
>
I don't think it's that simple. I can rebase the stats patch on top without
too much effort, of course, but it's hardly a critical element. If the
stats were unavailable it would make no difference at all and no one would
lose any sleep over it. The greater issue for me is that, superficially, a
lot of the previous review comments still appear to apply:
- prot_none still appears to be hard-coded (change_prot_none, for example)
- pick_numa_rand is still not random
- the THP migration optimisation is ordered before the core patches; does
  schednuma depend on that optimisation? I don't know
- the balancing cannot be disabled from the kernel command line in case it
  goes pear-shaped (a sketch of what I mean follows this list)
- the new NUMA balancing is a massive monolithic patch with little
  commentary (I have not reached the point where I'm ready to pick apart
  how and why it works, and tests will not start until tonight)
- the page-flags splitout is still a monolithic patch (although that is
  not a major concern in this case)
- I think your scanner might not be restarting if the last VMA in the
  process is !vma_migratable; if true, it will not adapt to new
  information (rough sketch after this list)
- MIGRATE_FAULT is still there even though it is not clear it is even
  necessary
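
On the command-line point, what I have in mind is nothing more elaborate
than the usual boot-time switch. The sketch below is illustrative only;
the names numa_balancing_disabled and setup_numa_balancing are made up for
the example and not taken from either tree, it is just the shape of the
thing:

#include <linux/init.h>
#include <linux/types.h>
#include <linux/string.h>

/*
 * Illustrative only: a boot switch such as "numa_balancing=disable".
 * Placeholder names, not code from either tree.
 */
static bool numa_balancing_disabled;

static int __init setup_numa_balancing(char *str)
{
        if (str && !strcmp(str, "disable"))
                numa_balancing_disabled = true;
        return 1;
}
__setup("numa_balancing=", setup_numa_balancing);

The fault and scanning paths would then bail out early when the flag is
set, so a machine that goes sideways can at least be rescued without a
rebuild.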
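
On the scanner restart point, this is roughly the failure mode I suspect.
Again a sketch with invented names (task_numa_scan, mm->numa_scan_offset),
not your code; only find_vma(), the vm_next walk and vma_migratable() are
the real interfaces:

#include <linux/mm.h>
#include <linux/mm_types.h>
#include <linux/mempolicy.h>    /* vma_migratable() */

static void task_numa_scan(struct mm_struct *mm)
{
        unsigned long addr = mm->numa_scan_offset;      /* hypothetical field */
        struct vm_area_struct *vma;

        for (vma = find_vma(mm, addr); vma; vma = vma->vm_next) {
                if (!vma_migratable(vma))
                        continue;
                /* ... mark a chunk prot_none, remember where we stopped ... */
                mm->numa_scan_offset = vma->vm_end;
                return;
        }

        /*
         * If the walk falls off the end while skipping a final
         * !vma_migratable VMA, the offset must still be reset here. If it
         * is only reset on the "did work" path above, the scanner never
         * wraps back to the start of the address space and stops picking
         * up new information.
         */
        mm->numa_scan_offset = 0;
}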
etc. I didn't go back through the old thread. I know I have not applied the
same review standards to myself, and it may sound like I'm being
hypocritical, but I'm also not trying to merge. I also know that I'm
currently way behind in terms of overall performance, reflecting the
relative age of the tree.
> Your tree is slowly becoming a rebase of tip:numa/core and that
> will certainly cause problems.
>
How so? What I'm trying to do is build a tree that shows the logical
progression from the vanilla kernel to a working NUMA balancer. It's not
in linux-next colliding with your tree or causing a direct problem. I
intend to publish a git tree of it shortly but am not planning to ask for
it to be pulled because I know it's not ready.
> I'll backmerge any delta patches and rebase as necessary - but
> please do them as deltas on top of tip:numa/core to make things
> reviewable and easier to merge:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git numa/core
>
It is a stretch to describe as "reviewable" a git tree that requires a
significant number of scheduler patches to even apply and includes a
monolithic patch like "sched, numa, mm: Add adaptive NUMA affinity
support".
--
Mel Gorman
SUSE Labs