Message-ID: <YEDZU0lYVxntQ/rt@dhcp22.suse.cz>
Date: Thu, 4 Mar 2021 13:57:55 +0100
From: Michal Hocko <mhocko@...e.com>
To: Ben Widawsky <ben.widawsky@...el.com>
Cc: Feng Tang <feng.tang@...el.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
David Rientjes <rientjes@...gle.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Mike Kravetz <mike.kravetz@...cle.com>,
Randy Dunlap <rdunlap@...radead.org>,
Vlastimil Babka <vbabka@...e.cz>,
"Hansen, Dave" <dave.hansen@...el.com>,
Andi Kleen <ak@...ux.intel.com>,
"Williams, Dan J" <dan.j.williams@...el.com>
Subject: Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for
MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit

On Wed 03-03-21 09:22:50, Ben Widawsky wrote:
> On 21-03-03 18:14:30, Michal Hocko wrote:
> > On Wed 03-03-21 08:31:41, Ben Widawsky wrote:
> > > On 21-03-03 14:59:35, Michal Hocko wrote:
> > > > On Wed 03-03-21 21:46:44, Feng Tang wrote:
> > > > > On Wed, Mar 03, 2021 at 09:18:32PM +0800, Tang, Feng wrote:
> > > > > > On Wed, Mar 03, 2021 at 01:32:11PM +0100, Michal Hocko wrote:
> > > > > > > On Wed 03-03-21 20:18:33, Feng Tang wrote:
> > > > [...]
> > > > > > > > One thing I tried which can fix the slowness is:
> > > > > > > >
> > > > > > > > + gfp_mask &= ~(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM);
> > > > > > > >
> > > > > > > > which explicitly clears the 2 kinds of reclaim. I thought it was too
> > > > > > > > hacky, so I didn't mention it in the commit log.
> > > > > > >
> > > > > > > Clearing __GFP_DIRECT_RECLAIM would be the right way to achieve
> > > > > > > GFP_NOWAIT semantics. Why would you want to exclude kswapd as well?
> > > > > >
> > > > > > When I tried gfp_mask &= ~__GFP_DIRECT_RECLAIM, the slowness couldn't
> > > > > > be fixed.
> > > > >
> > > > > I just double-checked by rerunning the test: 'gfp_mask &= ~__GFP_DIRECT_RECLAIM'
> > > > > also speeds up the allocation a lot, though it is still a little slower than
> > > > > this patch. It seems I messed up some of the earlier tries; sorry for the confusion!
> > > > >
> > > > > Could this be used as the solution? Or the other approach of adding another
> > > > > fallback_nodemask? The latter would change the current API quite a bit.
> > > >
> > > > I haven't got to the whole series yet. The real question is whether the
> > > > first attempt to enforce the preferred mask is a general win. I would
> > > > argue that it resembles the existing single node preferred memory policy
> > > > because that one doesn't push heavily on the preferred node either. So
> > > > dropping just the direct reclaim mode makes some sense to me.
> > > >
> > > > IIRC this is something I was recommending in an early proposal of the
> > > > feature.
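
For reference, the flag relationships being discussed above look roughly
like this (an illustrative sketch based on include/linux/gfp.h; the exact
definitions can differ between kernel versions, so check your tree):

	/* Sketch only -- see include/linux/gfp.h for the real definitions. */
	#define __GFP_RECLAIM	(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM)
	#define GFP_NOWAIT	(__GFP_KSWAPD_RECLAIM)
	#define GFP_KERNEL	(__GFP_RECLAIM | __GFP_IO | __GFP_FS)

	/* Dropping only direct reclaim keeps the kswapd wakeup, i.e. roughly
	 * GFP_NOWAIT semantics for the first attempt: */
	gfp_mask &= ~__GFP_DIRECT_RECLAIM;

	/* Dropping both bits (the hack quoted above) additionally avoids
	 * waking kswapd on the preferred nodes: */
	gfp_mask &= ~(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM);
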
> > >
> > > My assumption [FWIW] is that the use cases we've outlined for multi-preferred
> > > would want heavier pushing on the preference mask. However, maybe the uapi
> > > could dictate how hard to try/not try.
> >
> > What does that mean and what is the expectation from the kernel to be
> > more or less cast in stone?
> >
>
> (I'm not positive I've understood your question, so correct me if I
> misunderstood)
>
> I'm not sure there is a cast-in-stone way to define it, nor should there be.
OK, I thought you wanted the behavior to diverge from the existing
MPOL_PREFERRED, which only prefers the configured node as a default but
leaves the allocator free to fall back to any other node under memory
pressure. For multiple preferred nodes the same should apply: only a
lightweight attempt is made on the preferred set before falling back to
the full nodeset. The paragraph I was replying to does not seem in line
with this, though.
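
To make that concrete, the two-pass behavior I have in mind is roughly
the following (a hypothetical sketch, not the actual implementation in
this series; the helper name and the way the preferred nodemask is
obtained are made up for illustration):

	/* Hypothetical sketch; names are illustrative, not from the series. */
	static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
						       int nid, nodemask_t *preferred)
	{
		struct page *page;

		/* First pass: preferred nodes only, without direct reclaim. */
		page = __alloc_pages_nodemask(gfp & ~__GFP_DIRECT_RECLAIM, order,
					      nid, preferred);
		if (page)
			return page;

		/* Second pass: the original gfp mask, any allowed node. */
		return __alloc_pages_nodemask(gfp, order, nid, NULL);
	}

The point is that the first pass stays lightweight, mirroring what
MPOL_PREFERRED does for a single node, while the second pass retains the
caller's original semantics.
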
> At the very
> least though, something in uapi that has a general mapping to GFP flags
> (specifically around reclaim) for the first round of allocation could make
> sense.
I do not think this is a good idea.
> In my head there are 3 levels of request possible for multiple nodes:
> 1. BIND: Those nodes or die.
> 2. Preferred hard: Those nodes and I'm willing to wait. Fallback if impossible.
> 3. Preferred soft: Those nodes but I don't want to wait.
I do agree that an intermediate "preference" can be helpful, because
binding is just too strict and its OOM semantics are far from ideal. But
this would need a new policy.
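
Purely to illustrate how the three levels above could map to policies
(only MPOL_BIND exists and MPOL_PREFERRED_MANY is what this series
proposes; the middle name is made up):

	MPOL_BIND		/* 1. those nodes or fail/OOM */
	MPOL_PREFERRED_HARD	/* 2. hypothetical: reclaim on the preferred
				 *    nodes first, fall back only afterwards */
	MPOL_PREFERRED_MANY	/* 3. this series: lightweight attempt on the
				 *    preferred nodes, then fall back */
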
> Current UAPI in the series doesn't define a distinction between 2 and 3. As I
> understand the change, Feng is defining the behavior to be #3, which makes #2
> not an option. I sort of punted on defining it entirely, in the beginning.
I really think it should be in line with the existing preferred policy
behavior.
--
Michal Hocko
SUSE Labs