Message-ID: <20200810160739.GA29884@pc636>
Date: Mon, 10 Aug 2020 18:07:39 +0200
From: Uladzislau Rezki <urezki@...il.com>
To: Michal Hocko <mhocko@...e.com>
Cc: "Uladzislau Rezki (Sony)" <urezki@...il.com>,
LKML <linux-kernel@...r.kernel.org>, RCU <rcu@...r.kernel.org>,
linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
"Paul E . McKenney" <paulmck@...nel.org>,
Matthew Wilcox <willy@...radead.org>,
"Theodore Y . Ts'o" <tytso@....edu>,
Joel Fernandes <joel@...lfernandes.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>
Subject: Re: [RFC-PATCH 1/2] mm: Add __GFP_NO_LOCKS flag
> On Sun 09-08-20 22:43:53, Uladzislau Rezki (Sony) wrote:
> [...]
> > Limitations and concerns (Main part)
> > ====================================
> > The current memory-allocation interface presents the following
> > difficulties that this patch is designed to overcome:
> >
> > a) If built with CONFIG_PROVE_RAW_LOCK_NESTING, lockdep will
> > complain about a violation ("BUG: Invalid wait context") of the
> > nesting rules. It does the raw_spinlock vs. spinlock nesting
> > checks, i.e. it is not legal to acquire a spinlock_t while
> > holding a raw_spinlock_t.
> >
> > Internally kfree_rcu() uses a raw_spinlock_t (in the rcu-dev branch),
> > whereas the "page allocator" internally deals with a spinlock_t to
> > access its zones. The code can also be broken from a higher-level
> > point of view:
> > <snip>
> > raw_spin_lock(&some_lock);
> > kfree_rcu(some_pointer, some_field_offset);
> > <snip>
>
> Is there any fundamental problem with making the zone lock a raw_spin_lock?
>
Good point. Converting the regular spinlock to the raw_* variant could solve
the issue, and to me it seems partly reasonable. But there are other
questions if we do it:
a) What to do with kswapd and the "wake-up path" that uses a sleepable lock:
   wakeup_kswapd() -> wake_up_interruptible(&pgdat->kswapd_wait).
b) How would the RT people react to it? I guess they will not be happy.
As I described before, calling __get_free_page(0) with 0 as the gfp argument
would solve (a). How correct is that? From my point of view, the logic that
bypasses the wakeup path should be explicitly defined.
Or we can enter the allocator with (__GFP_HIGH|__GFP_ATOMIC), which bypasses
the __GFP_KSWAPD_RECLAIM wakeup as well; see the sketch below.
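For example, a minimal sketch of that second option (untested, the helper
name is made up; only __get_free_page() and the current definition of
GFP_ATOMIC as (__GFP_HIGH|__GFP_ATOMIC|__GFP_KSWAPD_RECLAIM) are assumed):
<snip>
#include <linux/gfp.h>

/*
 * Dropping __GFP_KSWAPD_RECLAIM keeps the high-priority atomic
 * semantics of GFP_ATOMIC but never calls wakeup_kswapd(), so the
 * sleepable kswapd_wait wake-up path is bypassed.
 */
static unsigned long get_free_page_no_kswapd(void)
{
	return __get_free_page(__GFP_HIGH | __GFP_ATOMIC);
}
<snip>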
Any thoughts here? Please comment.
Having the proposed flag will not hurt RT latency and will solve all the
concerns.
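To illustrate the usage (an untested sketch, the lock and function names are
made up; only the semantics of the proposed __GFP_NO_LOCKS from this series
are assumed), a caller that already holds a raw_spinlock_t would do:
<snip>
#include <linux/gfp.h>
#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(krc_lock);	/* illustrative raw lock */

static void *try_alloc_bulk_page(void)
{
	unsigned long flags;
	void *page;

	raw_spin_lock_irqsave(&krc_lock, flags);

	/*
	 * With the proposed flag the allocator never sleeps and never
	 * nests a spinlock_t under our raw_spinlock_t; it simply
	 * returns NULL under memory pressure.
	 */
	page = (void *)__get_free_page(__GFP_NO_LOCKS);

	raw_spin_unlock_irqrestore(&krc_lock, flags);

	return page;	/* NULL -> caller uses its fallback path */
}
<snip>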
> > b) If built with CONFIG_PREEMPT_RT. Please note, in that case spinlock_t
> > is converted into sleepable variant. Invoking the page allocator from
> > atomic contexts leads to "BUG: scheduling while atomic".
>
> [...]
>
> > Proposal
> > ========
> > 1) Add a GFP_* flag that ensures the allocator returns NULL rather
> > than acquiring its own spinlock_t. Having such a flag will address
> > limitations (a) and (b) described above. It will also make the
> > kfree_rcu() code common for RT and regular kernels, cleaner, with
> > less corner-case handling, and will reduce the code size.
>
> I do not think this is a good idea. Single purpose gfp flags that tend
> to heavily depend on the current implementation of the page allocator
> have turned out to be problematic. Users used to misunderstand their
> meaning resulting in a lot of abuse which was not trivial to remove.
> This flag seems to fall into exactly this sort of category. If there is a
> problem in nesting then that should be addressed rather than a new flag
> exported IMHO. If that is absolutely not possible for some reason then
> we can try to figure out what to do, but that really needs a very strong
> justification.
>
The problem that I see is that we cannot use the page allocator from atomic
contexts, which is our case:
<snip>
local_irq_save(flags) or preempt_disable() or raw_spinlock();
__get_free_page(GFP_ATOMIC);
<snip>
So if we can convert the page allocator to a raw_* lock it would be
appreciated, at least from our side, though IMHO not from the RT one. But as
I stated above, we need to sort out the raised questions if the conversion is
done; a rough sketch of what it would touch is below.
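Just to show what I mean by "convert" (a rough, untested sketch, assuming the
zone->lock field itself is switched to raw_spinlock_t in mmzone.h), the
locking sites in mm/page_alloc.c would change like this:
<snip>
#include <linux/mmzone.h>
#include <linux/spinlock.h>

/* Only compiles once struct zone's "lock" is a raw_spinlock_t. */
static void zone_lock_site_example(struct zone *zone)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&zone->lock, flags);	/* was spin_lock_irqsave() */
	/* ... buddy free-list manipulation stays unchanged ... */
	raw_spin_unlock_irqrestore(&zone->lock, flags);
}
<snip>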
What is your view?
Thank you for your help and feedback!
--
Vlad Rezki