Message-ID: <YVRV2jhVIbGxd+JB@hirez.programming.kicks-ass.net>
Date: Wed, 29 Sep 2021 14:02:34 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Subject: Re: [patch 5/5] sched: Move mmdrop to RCU on RT

On Tue, Sep 28, 2021 at 02:24:32PM +0200, Thomas Gleixner wrote:
> --- a/include/linux/sched/mm.h
> +++ b/include/linux/sched/mm.h
> @@ -49,6 +49,26 @@ static inline void mmdrop(struct mm_stru
> __mmdrop(mm);
> }
>
> +#ifdef CONFIG_PREEMPT_RT
> +extern void __mmdrop_delayed(struct rcu_head *rhp);
> +
> +/*
> + * Invoked from finish_task_switch(). Delegates the heavy lifting on RT
> + * kernels via RCU.
> + */
> +static inline void mmdrop_sched(struct mm_struct *mm)
> +{
> + /* Provides a full memory barrier. See mmdrop() */
> + if (atomic_dec_and_test(&mm->mm_count))
> + call_rcu(&mm->delayed_drop, __mmdrop_delayed);
> +}
> +#else
> +static inline void mmdrop_sched(struct mm_struct *mm)
> +{
> + mmdrop(mm);
> +}
> +#endif
> +
> /**
> * mmget() - Pin the address space associated with a &struct mm_struct.
> * @mm: The address space to pin.
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -708,6 +708,19 @@ void __mmdrop(struct mm_struct *mm)
> }
> EXPORT_SYMBOL_GPL(__mmdrop);
>
> +#ifdef CONFIG_PREEMPT_RT
> +/*
> + * RCU callback for delayed mm drop. Not strictly RCU, but call_rcu() is
> + * by far the least expensive way to do that.
> + */
> +void __mmdrop_delayed(struct rcu_head *rhp)
> +{
> + struct mm_struct *mm = container_of(rhp, struct mm_struct, delayed_drop);
> +
> + __mmdrop(mm);
> +}
> +#endif

Would you mind terribly if I folded this into mm.h as a static inline?
The only risk that carries is that if mmdrop_sched() is called from
multiple translation units (it is not), we get multiple instances of this
function, but possibly even !LTO linkers can fix that for us.
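
Something like the below, roughly (untested sketch only; it reuses the
delayed_drop rcu_head and the comments from your patch, and assumes
sched/mm.h can see call_rcu(), i.e. pulls in rcupdate.h one way or
another):

#ifdef CONFIG_PREEMPT_RT
/*
 * RCU callback for delayed mm drop. Not strictly RCU, but call_rcu()
 * is by far the least expensive way to do that.
 *
 * Its address is taken by call_rcu(), so every translation unit that
 * uses mmdrop_sched() may end up with its own out-of-line copy.
 */
static inline void __mmdrop_delayed(struct rcu_head *rhp)
{
	struct mm_struct *mm = container_of(rhp, struct mm_struct, delayed_drop);

	__mmdrop(mm);
}

/*
 * Invoked from finish_task_switch(). Delegates the heavy lifting on RT
 * kernels via RCU.
 */
static inline void mmdrop_sched(struct mm_struct *mm)
{
	/* Provides a full memory barrier. See mmdrop() */
	if (atomic_dec_and_test(&mm->mm_count))
		call_rcu(&mm->delayed_drop, __mmdrop_delayed);
}
#else
static inline void mmdrop_sched(struct mm_struct *mm)
{
	mmdrop(mm);
}
#endif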