Message-ID: <20200422132853.GM17661@paulmck-ThinkPad-P72>
Date: Wed, 22 Apr 2020 06:28:53 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: joel@...lfernandes.org
Cc: Uladzislau Rezki <urezki@...il.com>, linux-kernel@...r.kernel.org,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Josh Triplett <josh@...htriplett.org>,
Lai Jiangshan <jiangshanlai@...il.com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
rcu@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH rcu/dev -fixes 2/4] rcu/tree: Skip entry into the page
allocator for PREEMPT_RT
On Wed, Apr 22, 2020 at 09:18:41AM -0400, joel@...lfernandes.org wrote:
>
>
> On April 22, 2020 6:35:36 AM EDT, Uladzislau Rezki <urezki@...il.com> wrote:
> >On Mon, Apr 20, 2020 at 11:38:35AM -0400, Joel Fernandes (Google) wrote:
> >> To keep kfree_rcu() path working on raw non-preemptible sections,
> >> prevent the optional entry into the allocator as it uses sleeping locks.
> >> In fact, even if the caller of kfree_rcu() is preemptible, this path
> >> still is not, as krcp->lock is a raw spinlock as done in previous
> >> patches. With additional page pre-allocation in the works, hitting this
> >> return is going to be much less likely soon so just prevent it for now
> >> so that PREEMPT_RT does not break. Note that page allocation here is an
> >> optimization and skipping it still makes kfree_rcu() work.
> >>
> >> Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
> >> Co-developed-by: Uladzislau Rezki <urezki@...il.com>
> >> Signed-off-by: Uladzislau Rezki <urezki@...il.com>
> >> Signed-off-by: Joel Fernandes (Google) <joel@...lfernandes.org>
> >> ---
> >> kernel/rcu/tree.c | 12 ++++++++++++
> >> 1 file changed, 12 insertions(+)
> >>
> >> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> >> index cf68d3d9f5b81..cd61649e1b001 100644
> >> --- a/kernel/rcu/tree.c
> >> +++ b/kernel/rcu/tree.c
> >> @@ -3092,6 +3092,18 @@ kfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp,
> >>  		if (!bnode) {
> >>  			WARN_ON_ONCE(sizeof(struct kfree_rcu_bulk_data) > PAGE_SIZE);
> >>
> >> +			/*
> >> +			 * To keep this path working on raw non-preemptible
> >> +			 * sections, prevent the optional entry into the
> >> +			 * allocator as it uses sleeping locks. In fact, even
> >> +			 * if the caller of kfree_rcu() is preemptible, this
> >> +			 * path still is not, as krcp->lock is a raw spinlock.
> >> +			 * With additional page pre-allocation in the works,
> >> +			 * hitting this return is going to be much less likely.
> >> +			 */
> >> +			if (IS_ENABLED(CONFIG_PREEMPT_RT))
> >> +				return false;
> >> +
> >>  			bnode = (struct kfree_rcu_bulk_data *)
> >>  				__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
> >>  		}
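
To spell out the constraint that comment describes, here is a much-simplified
sketch of the calling context (illustrative only, not the exact tree.c code;
the locking sequence and fallback details are approximated):

	/*
	 * Sketch: krcp->lock is a raw_spinlock_t after the earlier patches
	 * in this series, so everything between lock and unlock below is
	 * non-preemptible even on PREEMPT_RT and even if the kfree_rcu()
	 * caller itself was preemptible.
	 */
	void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
	{
		unsigned long flags;
		struct kfree_rcu_cpu *krcp;

		local_irq_save(flags);
		krcp = this_cpu_ptr(&krc);
		raw_spin_lock(&krcp->lock);

		/*
		 * The bulk path may want a fresh page.  On PREEMPT_RT the
		 * page allocator's internal locks are sleeping locks, which
		 * must not be taken under a raw spinlock, hence the
		 * IS_ENABLED(CONFIG_PREEMPT_RT) bail-out above.
		 */
		if (!kfree_call_rcu_add_ptr_to_bulk(krcp, head, func)) {
			/* Fall back to queueing head on the rcu_head list. */
			head->func = func;
			head->next = krcp->head;
			krcp->head = head;
		}

		raw_spin_unlock(&krcp->lock);
		local_irq_restore(flags);
	}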
> >This will not work correctly for CONFIG_PREEMPT_RT kernels if we just
> >fall back to the "rcu_head path" here; we need to preallocate at least
> >once. I can add caching on top of this series to address that.
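
For illustration only, one shape such per-CPU page caching could take (a
hypothetical sketch; the actual follow-up series may well differ):

	/* Hypothetical per-CPU cache of pre-allocated pages. */
	struct kfree_rcu_page_cache {
		struct llist_head pages;	/* cached free pages */
		int nr_pages;
	};

	/*
	 * Refill from preemptible context (e.g. a workqueue), where
	 * GFP_KERNEL allocation and its sleeping locks are fine.  A real
	 * implementation would serialize against the consumer, e.g. under
	 * krcp->lock.
	 */
	static void refill_page_cache(struct kfree_rcu_page_cache *c)
	{
		while (c->nr_pages < 2) {
			struct llist_node *n = (struct llist_node *)
				__get_free_page(GFP_KERNEL | __GFP_NOWARN);

			if (!n)
				break;
			llist_add(n, &c->pages);
			c->nr_pages++;
		}
	}

	/*
	 * The raw-locked kfree_rcu() path then only dequeues a cached page
	 * and never enters the allocator, so it stays safe on PREEMPT_RT.
	 */
	static void *get_cached_page(struct kfree_rcu_page_cache *c)
	{
		struct llist_node *n = llist_del_first(&c->pages);

		if (n)
			c->nr_pages--;
		return n;
	}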
>
> I discussed this with Vlad offline; this patch is fine for mainline as we
> don't have headless support yet, so it is good as-is. Future patches will
> add the additional caching first and then headless support on top of it,
> so skipping the allocation here is OK.
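
For background, "headless" refers to a possible future kfree_rcu() variant
that takes only the pointer, with no rcu_head embedded in the object. With
the current two-argument API there is always an rcu_head to fall back on
when no page can be obtained, which is why skipping the allocation is only
a lost optimization.  Hypothetical illustration, not an existing API:

	struct foo {
		struct rcu_head rh;	/* embedded head: fallback exists */
		int data;
	};

	kfree_rcu(p, rh);	/* current API: the bulk page is optional  */
	kfree_rcu(p);		/* possible headless form: would need a page */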
Sounds good!
But would one of the realtime guys be willing to give a Tested-by?
Thanx, Paul
> Thanks.
>
> - Joel
>
> >--
> >Vlad Rezki
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.