Message-ID: <20200811104557.GA5301@pc636>
Date: Tue, 11 Aug 2020 12:45:57 +0200
From: Uladzislau Rezki <urezki@...il.com>
To: Michal Hocko <mhocko@...e.com>
Cc: Uladzislau Rezki <urezki@...il.com>,
LKML <linux-kernel@...r.kernel.org>, RCU <rcu@...r.kernel.org>,
linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
"Paul E . McKenney" <paulmck@...nel.org>,
Matthew Wilcox <willy@...radead.org>,
"Theodore Y . Ts'o" <tytso@....edu>,
Joel Fernandes <joel@...lfernandes.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>
Subject: Re: [RFC-PATCH 1/2] mm: Add __GFP_NO_LOCKS flag
On Tue, Aug 11, 2020 at 12:28:18PM +0200, Michal Hocko wrote:
> On Tue 11-08-20 11:42:51, Uladzislau Rezki wrote:
> > On Tue, Aug 11, 2020 at 11:37:13AM +0200, Uladzislau Rezki wrote:
> > > On Tue, Aug 11, 2020 at 10:19:17AM +0200, Michal Hocko wrote:
> [...]
> > > > Anyway, if the zone->lock is not a good fit for raw_spin_lock then the
> > > > only way I can see forward is to detect real (RT) atomic contexts and
> > > > bail out early before taking the lock in the allocator for NOWAIT/ATOMIC
> > > > requests.
> > > >
> > This is similar to what I have done with the "mm: Add __GFP_NO_LOCKS flag" patch.
> > I just did it for order-0 pages (other paths are impossible) and made it common
> > for any kernel.
> >
> > Because when you say "bail out early", I suspect that we would still like to
> > check the per-cpu-list cache.
>
> Bail out early means to do as much as possible until a raw non-compliant
> lock has to be taken.
>
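If I understand the "bail out early" idea correctly, it would mean placing a check
like the sketch below right in front of the spin_lock_irqsave(&zone->lock, flags).
Please note this is only a sketch to illustrate my understanding, the helper name is
made up by me and reliably detecting a "real" atomic context is exactly the open
question here:

<snip>
/*
 * Sketch only, not a real patch: for non-sleeping requests, give up
 * before zone->lock would have to be taken, since on PREEMPT_RT that
 * is a sleeping lock.
 */
static inline bool zone_lock_allowed(gfp_t gfp_flags)
{
	/* Sleepable callers may always take zone->lock. */
	if (gfpflags_allow_blocking(gfp_flags))
		return true;

	/*
	 * GFP_ATOMIC/GFP_NOWAIT from an atomic context (hard IRQ,
	 * raw_spin_lock held, preemption disabled, ...) must not
	 * acquire a sleeping lock on RT, so bail out here.
	 */
	return !IS_ENABLED(CONFIG_PREEMPT_RT) || preemptible();
}
<snip>

But even with such a check in place, the question is what can be done before it: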
<snip>
struct page *rmqueue(struct zone *preferred_zone,
			struct zone *zone, unsigned int order,
			gfp_t gfp_flags, unsigned int alloc_flags,
			int migratetype)
{
	unsigned long flags;
	struct page *page;

	if (likely(order == 0)) {
		page = rmqueue_pcplist(preferred_zone, zone, gfp_flags,
					migratetype, alloc_flags);
		goto out;
	}

	/*
	 * We most definitely don't want callers attempting to
	 * allocate greater than order-1 page units with __GFP_NOFAIL.
	 */
	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
	spin_lock_irqsave(&zone->lock, flags);
<snip>
Only for order-0 allocations can we check whether the CPU's pcp-list cache has
something, i.e. without taking any locks, it is lockless. "Pre-fetching" is not
possible, since it takes zone->lock in order to transfer pages from the buddy
allocator to the per-cpu lists. That is done in the rmqueue_bulk() function.
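
For completeness, the refill path looks roughly like below (heavily trimmed from my
reading of mm/page_alloc.c, so do not take the exact shape as authoritative):

<snip>
/*
 * Called from __rmqueue_pcplist() when the per-cpu list is empty,
 * in order to pull a batch of pages from the buddy allocator.
 */
static int rmqueue_bulk(struct zone *zone, unsigned int order,
			unsigned long count, struct list_head *list,
			int migratetype, unsigned int alloc_flags)
{
	int i, alloced = 0;

	/* This is the lock an atomic caller on RT must not take. */
	spin_lock(&zone->lock);
	for (i = 0; i < count; ++i) {
		struct page *page = __rmqueue(zone, order, migratetype,
						alloc_flags);
		if (unlikely(page == NULL))
			break;
		...
		alloced++;
	}
	...
	spin_unlock(&zone->lock);

	return alloced;
}
<snip>

So any "pre-fetching" into the per-cpu lists ends up under zone->lock anyway.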
--
Vlad Rezki