Open Source and information security mailing list archives
 
Date:   Tue, 6 Feb 2018 23:15:35 -0500
From:   Steven Rostedt <rostedt@...dmis.org>
To:     "tip-bot for Steven Rostedt (VMware)" <tipbot@...or.com>
Cc:     torvalds@...ux-foundation.org, akpm@...ux-foundation.org,
        hpa@...or.com, rostedt@...dmis.org, peterz@...radead.org,
        mingo@...nel.org, linux-kernel@...r.kernel.org, tglx@...utronix.de,
        efault@....de, pkondeti@...eaurora.org,
        linux-tip-commits@...r.kernel.org, stable@...r.kernel.org
Subject: Re: [tip:sched/urgent] sched/rt: Up the root domain ref count when
 passing it around via IPIs


I see this was just applied to Linus's tree. This should probably be
tagged for stable as well.

-- Steve


On Tue, 6 Feb 2018 03:54:42 -0800
"tip-bot for Steven Rostedt (VMware)" <tipbot@...or.com> wrote:

> Commit-ID:  364f56653708ba8bcdefd4f0da2a42904baa8eeb
> Gitweb:     https://git.kernel.org/tip/364f56653708ba8bcdefd4f0da2a42904baa8eeb
> Author:     Steven Rostedt (VMware) <rostedt@...dmis.org>
> AuthorDate: Tue, 23 Jan 2018 20:45:38 -0500
> Committer:  Ingo Molnar <mingo@...nel.org>
> CommitDate: Tue, 6 Feb 2018 10:20:33 +0100
> 
> sched/rt: Up the root domain ref count when passing it around via IPIs
> 
> When issuing an IPI RT push, where an IPI is sent to each CPU that has more
> than one RT task scheduled on it, the scheduler references the root domain's
> rto_mask, which contains all the CPUs within the root domain that have more
> than one RT task in the runnable state. The problem is that, after the IPIs
> are initiated, the rq->lock is released. This means that the root domain
> associated with the run queue could be freed while the IPIs are going around.
> 
> Add a sched_get_rd() and a sched_put_rd() that will increment and decrement
> the root domain's ref count respectively. This way when initiating the IPIs,
> the scheduler will up the root domain's ref count before releasing the
> rq->lock, ensuring that the root domain does not go away until the IPI round
> is complete.
> 
> Reported-by: Pavan Kondeti <pkondeti@...eaurora.org>
> Signed-off-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Linus Torvalds <torvalds@...ux-foundation.org>
> Cc: Mike Galbraith <efault@....de>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Fixes: 4bdced5c9a292 ("sched/rt: Simplify the IPI based RT balancing logic")
> Link: http://lkml.kernel.org/r/CAEU1=PkiHO35Dzna8EQqNSKW1fr1y1zRQ5y66X117MG06sQtNA@mail.gmail.com
> Signed-off-by: Ingo Molnar <mingo@...nel.org>
> ---
>  kernel/sched/rt.c       |  9 +++++++--
>  kernel/sched/sched.h    |  2 ++
>  kernel/sched/topology.c | 13 +++++++++++++
>  3 files changed, 22 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index 2fb627d..89a086e 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1990,8 +1990,11 @@ static void tell_cpu_to_push(struct rq *rq)
>  
>  	rto_start_unlock(&rq->rd->rto_loop_start);
>  
> -	if (cpu >= 0)
> +	if (cpu >= 0) {
> +		/* Make sure the rd does not get freed while pushing */
> +		sched_get_rd(rq->rd);
>  		irq_work_queue_on(&rq->rd->rto_push_work, cpu);
> +	}
>  }
>  
>  /* Called from hardirq context */
> @@ -2021,8 +2024,10 @@ void rto_push_irq_work_func(struct irq_work *work)
>  
>  	raw_spin_unlock(&rd->rto_lock);
>  
> -	if (cpu < 0)
> +	if (cpu < 0) {
> +		sched_put_rd(rd);
>  		return;
> +	}
>  
>  	/* Try the next RT overloaded CPU */
>  	irq_work_queue_on(&rd->rto_push_work, cpu);
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 2e95505..fb5fc45 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -691,6 +691,8 @@ extern struct mutex sched_domains_mutex;
>  extern void init_defrootdomain(void);
>  extern int sched_init_domains(const struct cpumask *cpu_map);
>  extern void rq_attach_root(struct rq *rq, struct root_domain *rd);
> +extern void sched_get_rd(struct root_domain *rd);
> +extern void sched_put_rd(struct root_domain *rd);
>  
>  #ifdef HAVE_RT_PUSH_IPI
>  extern void rto_push_irq_work_func(struct irq_work *work);
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 034cbed..519b024 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -259,6 +259,19 @@ void rq_attach_root(struct rq *rq, struct root_domain *rd)
>  		call_rcu_sched(&old_rd->rcu, free_rootdomain);
>  }
>  
> +void sched_get_rd(struct root_domain *rd)
> +{
> +	atomic_inc(&rd->refcount);
> +}
> +
> +void sched_put_rd(struct root_domain *rd)
> +{
> +	if (!atomic_dec_and_test(&rd->refcount))
> +		return;
> +
> +	call_rcu_sched(&rd->rcu, free_rootdomain);
> +}
> +
>  static int init_rootdomain(struct root_domain *rd)
>  {
>  	if (!zalloc_cpumask_var(&rd->span, GFP_KERNEL))
