Message-ID: <20200716182707.GA552227@google.com>
Date: Thu, 16 Jul 2020 14:27:07 -0400
From: Joel Fernandes <joel@...lfernandes.org>
To: Uladzislau Rezki <urezki@...il.com>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>, RCU <rcu@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
"Paul E . McKenney" <paulmck@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
"Theodore Y . Ts'o" <tytso@....edu>,
Matthew Wilcox <willy@...radead.org>,
Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>
Subject: Re: [PATCH 1/1] rcu/tree: Drop the lock before entering to page
allocator

On Thu, Jul 16, 2020 at 04:37:14PM +0200, Uladzislau Rezki wrote:
> On Thu, Jul 16, 2020 at 09:36:47AM -0400, Joel Fernandes wrote:
> > On Thu, Jul 16, 2020 at 11:19:13AM +0200, Uladzislau Rezki wrote:
> > > On Wed, Jul 15, 2020 at 07:13:33PM -0400, Joel Fernandes wrote:
> > > > On Wed, Jul 15, 2020 at 2:56 PM Sebastian Andrzej Siewior
> > > > <bigeasy@...utronix.de> wrote:
> > > > >
> > > > > On 2020-07-15 20:35:37 [+0200], Uladzislau Rezki (Sony) wrote:
> > > > > > @@ -3306,6 +3307,9 @@ kvfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp, void *ptr)
> > > > > > if (IS_ENABLED(CONFIG_PREEMPT_RT))
> > > > > > return false;
> > > > > >
> > > > > > + preempt_disable();
> > > > > > + krc_this_cpu_unlock(*krcp, *flags);
> > > > >
> > > > > Now you enter memory allocator with disabled preemption. This isn't any
> > > > > better but we don't have a warning for this yet.
> > > > > What happened to the part where I asked for a spinlock_t?
> > > >
> > > > Ulad,
> > > > Wouldn't the replacing of preempt_disable() with migrate_disable()
> > > > above resolve Sebastian's issue?
> > > >
> > > This is for the regular kernel only. That means migrate_disable() is
> > > equal to preempt_disable(), so there is no difference.
> >
> > But this will force preempt_disable() context into the low-level page
> > allocator on -RT kernels, which I believe is not what Sebastian wants. The
> > whole reason the spinlock vs raw-spinlock ordering matters is that on
> > RT, the spinlock is sleeping. So if you have:
> >
> > raw_spin_lock(..);
> > spin_lock(..); <-- can sleep on RT, so Sleep while atomic (SWA) violation.
> >
> > That's the main reason you are dropping the lock before calling the
> > allocator.
> >
> No. Please read the commit message of this patch. This is for the regular kernel.

Wait, so what is the hesitation to put migrate_disable() here? It even
serves as further documentation (annotation) that the goal here is to stay
on the same CPU, as you indicated in later emails.

And the documentation aspect is also something Sebastian brought up. A plain
preempt_disable() is frowned upon if there is an alternative API that
documents the usage.
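
To make this concrete, here is a rough sketch (not the actual patch; the
krc_this_cpu_lock() helper, the bnode variable and the GFP flags are only
assumed from the context of your series) of what I have in mind:

<snip>
	/*
	 * Sketch only: pin the task to this CPU so krcp stays valid,
	 * without disabling preemption on RT.
	 */
	migrate_disable();

	/* Drop the raw per-CPU lock before entering the page allocator. */
	krc_this_cpu_unlock(*krcp, *flags);

	/* Non-sleeping allocation attempt. */
	bnode = (struct kvfree_rcu_bulk_data *)
		__get_free_page(GFP_NOWAIT | __GFP_NOWARN);

	/* Re-take this CPU's lock and unpin. */
	*krcp = krc_this_cpu_lock(flags);
	migrate_enable();
<snip>

On a regular kernel this behaves exactly like the preempt_disable() version,
since migrate_disable() maps to preempt_disable() there; on RT it documents
the intent to stay on the same CPU while still letting the allocator take
its sleeping locks.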
> You did a patch:
>
> <snip>
> if (IS_ENABLED(CONFIG_PREEMPT_RT))
> return false;
> <snip>

I know, that's what we're discussing.
So again, why the hatred for migrate_disable()? :)
thanks,
- Joel