Message-ID: <YnF0RyBaBSC1mdKo@casper.infradead.org>
Date: Tue, 3 May 2022 19:28:23 +0100
From: Matthew Wilcox <willy@...radead.org>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: Michal Hocko <mhocko@...e.com>, liam.howlett@...cle.com,
walken.cr@...il.com, hannes@...xchg.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: Memory allocation on speculative fastpaths
On Tue, May 03, 2022 at 09:39:05AM -0700, Paul E. McKenney wrote:
> On Tue, May 03, 2022 at 06:04:13PM +0200, Michal Hocko wrote:
> > On Tue 03-05-22 08:59:13, Paul E. McKenney wrote:
> > > Hello!
> > >
> > > Just following up from off-list discussions yesterday.
> > >
> > > The requirements to allocate on an RCU-protected speculative fastpath
> > > seem to be as follows:
> > >
> > > 1. Never sleep.
> > > 2. Never reclaim.
> > > 3. Leave emergency pools alone.
> > >
> > > Any others?
> > >
> > > If those rules suffice, and if my understanding of the GFP flags is
> > > correct (ha!!!), then the following GFP flags should cover this:
> > >
> > > __GFP_NOMEMALLOC | __GFP_NOWARN
> >
> > GFP_NOWAIT | __GFP_NOMEMALLOC | __GFP_NOWARN
>
> Ah, good point on GFP_NOWAIT, thank you!
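For concreteness, the combined mask lines up with the three requirements
roughly like this (a sketch only, not code from any patch; alloc_page()
just stands in for whatever allocation the fastpath actually does):

	gfp_t gfp = GFP_NOWAIT		/* never sleep, no direct reclaim */
		  | __GFP_NOMEMALLOC	/* leave emergency pools alone */
		  | __GFP_NOWARN;	/* failure is expected; don't warn */
	struct page *page = alloc_page(gfp);
	if (!page)
		return NULL;		/* caller falls back to the slow path */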
Johannes (I think it was?) made the point to me that if we have another
task very slowly freeing memory, a task in this path can take advantage
of that other task's hard work and never go into reclaim. So the
approach we should take is:
	/* assume the page table allocators can be told to use these flags */
	p4d_alloc(GFP_NOWAIT | __GFP_NOMEMALLOC | __GFP_NOWARN);
	pud_alloc(GFP_NOWAIT | __GFP_NOMEMALLOC | __GFP_NOWARN);
	pmd_alloc(GFP_NOWAIT | __GFP_NOMEMALLOC | __GFP_NOWARN);
	if (failure) {
		rcu_read_unlock();
		do_reclaim();
		return VM_FAULT_RETRY;
	}
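(The idea being: drop the RCU read lock, reclaim synchronously, and let
the fault be retried from the top, where the nowait allocations may now
succeed thanks to the freed memory.)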
... but all this is now moot since the approach we agreed to yesterday
is:
	rcu_read_lock();
	vma = vma_lookup();
	if (down_read_trylock(&vma->sem)) {
		rcu_read_unlock();
	} else {
		rcu_read_unlock();
		mmap_read_lock(mm);
		vma = vma_lookup();
		down_read(&vma->sem);
	}
... and we then execute the page table allocation under the protection of
the vma->sem.
At least, that's what I think we agreed to yesterday.
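To make that concrete, the allocation step under vma->sem might look
something like this (again just a sketch, not the agreed patch; the real
allocators take (mm, parent table, address) and can use their normal
sleeping GFP flags here because we are no longer inside rcu_read_lock()):

	p4d = p4d_alloc(mm, pgd, address);
	pud = p4d ? pud_alloc(mm, p4d, address) : NULL;
	pmd = pud ? pmd_alloc(mm, pud, address) : NULL;
	if (!pmd) {
		up_read(&vma->sem);
		return VM_FAULT_OOM;
	}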