Date:   Tue, 6 Feb 2018 23:14:40 -0500
From:   Steven Rostedt <rostedt@...dmis.org>
To:     "tip-bot for Steven Rostedt (VMware)" <tipbot@...or.com>
Cc:     pkondeti@...eaurora.org, hpa@...or.com,
        linux-kernel@...r.kernel.org, rostedt@...dmis.org, efault@....de,
        peterz@...radead.org, akpm@...ux-foundation.org,
        torvalds@...ux-foundation.org, mingo@...nel.org,
        tglx@...utronix.de, linux-tip-commits@...r.kernel.org,
        stable@...r.kernel.org
Subject: Re: [tip:sched/urgent] sched/rt: Use container_of() to get root
 domain in rto_push_irq_work_func()


I see this was just applied to Linus's tree. It probably should be
tagged for stable as well.

-- Steve


On Tue, 6 Feb 2018 03:54:16 -0800
"tip-bot for Steven Rostedt (VMware)" <tipbot@...or.com> wrote:

> Commit-ID:  ad0f1d9d65938aec72a698116cd73a980916895e
> Gitweb:     https://git.kernel.org/tip/ad0f1d9d65938aec72a698116cd73a980916895e
> Author:     Steven Rostedt (VMware) <rostedt@...dmis.org>
> AuthorDate: Tue, 23 Jan 2018 20:45:37 -0500
> Committer:  Ingo Molnar <mingo@...nel.org>
> CommitDate: Tue, 6 Feb 2018 10:20:33 +0100
> 
> sched/rt: Use container_of() to get root domain in rto_push_irq_work_func()
> 
> When the rto_push_irq_work_func() is called, it looks at the RT overloaded
> bitmask in the root domain via the runqueue (rq->rd). The problem is that
> during CPU up and down, nothing here stops rq->rd from changing between
> taking the rq->rd->rto_lock and releasing it. That means the lock that is
> released is not the same lock that was taken.
> 
> Instead of using this_rq()->rd to get the root domain, as the irq work is
> part of the root domain, we can simply get the root domain from the irq work
> that is passed to the routine:
> 
>  container_of(work, struct root_domain, rto_push_work)
> 
> This keeps the root domain consistent.
> 
> Reported-by: Pavan Kondeti <pkondeti@...eaurora.org>
> Signed-off-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Linus Torvalds <torvalds@...ux-foundation.org>
> Cc: Mike Galbraith <efault@....de>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Fixes: 4bdced5c9a292 ("sched/rt: Simplify the IPI based RT balancing logic")
> Link: http://lkml.kernel.org/r/CAEU1=PkiHO35Dzna8EQqNSKW1fr1y1zRQ5y66X117MG06sQtNA@mail.gmail.com
> Signed-off-by: Ingo Molnar <mingo@...nel.org>
> ---
>  kernel/sched/rt.c | 15 ++++++++-------
>  1 file changed, 8 insertions(+), 7 deletions(-)
> 
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index 862a513..2fb627d 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1907,9 +1907,8 @@ static void push_rt_tasks(struct rq *rq)
>   * the rt_loop_next will cause the iterator to perform another scan.
>   *
>   */
> -static int rto_next_cpu(struct rq *rq)
> +static int rto_next_cpu(struct root_domain *rd)
>  {
> -	struct root_domain *rd = rq->rd;
>  	int next;
>  	int cpu;
>  
> @@ -1985,7 +1984,7 @@ static void tell_cpu_to_push(struct rq *rq)
>  	 * Otherwise it is finishing up and an ipi needs to be sent.
>  	 */
>  	if (rq->rd->rto_cpu < 0)
> -		cpu = rto_next_cpu(rq);
> +		cpu = rto_next_cpu(rq->rd);
>  
>  	raw_spin_unlock(&rq->rd->rto_lock);
>  
> @@ -1998,6 +1997,8 @@ static void tell_cpu_to_push(struct rq *rq)
>  /* Called from hardirq context */
>  void rto_push_irq_work_func(struct irq_work *work)
>  {
> +	struct root_domain *rd =
> +		container_of(work, struct root_domain, rto_push_work);
>  	struct rq *rq;
>  	int cpu;
>  
> @@ -2013,18 +2014,18 @@ void rto_push_irq_work_func(struct irq_work *work)
>  		raw_spin_unlock(&rq->lock);
>  	}
>  
> -	raw_spin_lock(&rq->rd->rto_lock);
> +	raw_spin_lock(&rd->rto_lock);
>  
>  	/* Pass the IPI to the next rt overloaded queue */
> -	cpu = rto_next_cpu(rq);
> +	cpu = rto_next_cpu(rd);
>  
> -	raw_spin_unlock(&rq->rd->rto_lock);
> +	raw_spin_unlock(&rd->rto_lock);
>  
>  	if (cpu < 0)
>  		return;
>  
>  	/* Try the next RT overloaded CPU */
> -	irq_work_queue_on(&rq->rd->rto_push_work, cpu);
> +	irq_work_queue_on(&rd->rto_push_work, cpu);
>  }
>  #endif /* HAVE_RT_PUSH_IPI */
>  
