Date:	Mon, 5 May 2014 15:26:59 +0200
From:	Oleg Nesterov <oleg@...hat.com>
To:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: lock_task_sighand() && rcu_boost()

On 05/04, Paul E. McKenney wrote:
>
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -884,6 +884,27 @@ static inline void rcu_read_lock(void)
>  /**
>   * rcu_read_unlock() - marks the end of an RCU read-side critical section.
>   *
> + * In most situations, rcu_read_unlock() is immune from deadlock.
> + * However, in kernels built with CONFIG_RCU_BOOST, rcu_read_unlock()
> + * is responsible for deboosting, which it does via rt_mutex_unlock().
> + * Unfortunately, this function acquires the scheduler's runqueue and
> + * priority-inheritance spinlocks.  Thus, deadlock could result if the
> + * caller of rcu_read_unlock() already held one of these locks or any lock
> + * acquired while holding them.
> + *
> + * That said, RCU readers are never priority boosted unless they were
> + * preempted.  Therefore, one way to avoid deadlock is to make sure
> + * that preemption never happens within any RCU read-side critical
> + * section whose outermost rcu_read_unlock() is called with one of
> + * rt_mutex_unlock()'s locks held.
> + *
> + * Given that the set of locks acquired by rt_mutex_unlock() might change
> + * at any time, a somewhat more future-proofed approach is to make sure
> + * that preemption never happens within any RCU read-side critical
> + * section whose outermost rcu_read_unlock() is called with irqs
> + * disabled.  This approach relies on the fact that rt_mutex_unlock()
> + * currently only acquires irq-disabled locks.
> + *
>   * See rcu_read_lock() for more information.
>   */
>  static inline void rcu_read_unlock(void)

Great! And I agree with the "might change at any time" part.
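
To make the doc text concrete, a sketch (illustration only; "my_lock" and
do_something() are made up, not anything in the tree):

	unsigned long flags;

	/*
	 * Deadlock-prone if my_lock is one of the locks that
	 * rt_mutex_unlock() acquires, or nests inside them:
	 */
	rcu_read_lock();
	...				/* preemption here -> reader boosted */
	raw_spin_lock_irqsave(&my_lock, flags);
	do_something();
	rcu_read_unlock();		/* deboost via rt_mutex_unlock() -> deadlock */
	raw_spin_unlock_irqrestore(&my_lock, flags);

	/*
	 * Safe: irqs disabled across the whole read-side critical
	 * section, so the reader is never preempted, never boosted,
	 * and rcu_read_unlock() never calls rt_mutex_unlock():
	 */
	local_irq_save(flags);
	rcu_read_lock();
	raw_spin_lock(&my_lock);
	do_something();
	rcu_read_unlock();		/* no deboosting needed */
	raw_spin_unlock(&my_lock);
	local_irq_restore(flags);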

I'll update lock_task_sighand() after you push this change (or please feel
free to do this yourself). The cleanup is not that important, of course, but a
short comment referring to the documentation above can help another reader
understand the "unnecessary" local_irq_save/preempt_disable calls.
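
For reference, __lock_task_sighand() currently does roughly the following
(simplified sketch from memory, error paths trimmed); the comment I have in
mind would live above the local_irq_save():

	struct sighand_struct *__lock_task_sighand(struct task_struct *tsk,
						   unsigned long *flags)
	{
		struct sighand_struct *sighand;

		for (;;) {
			/*
			 * Disabling irqs also prevents preemption, so the
			 * RCU reader below can never be priority boosted,
			 * and the rcu_read_unlock() calls done under
			 * ->siglock never need rt_mutex_unlock()'s locks.
			 * See the rcu_read_unlock() comment header above.
			 */
			local_irq_save(*flags);
			rcu_read_lock();
			sighand = rcu_dereference(tsk->sighand);
			if (unlikely(sighand == NULL)) {
				rcu_read_unlock();
				local_irq_restore(*flags);
				break;
			}

			spin_lock(&sighand->siglock);
			if (likely(sighand == tsk->sighand)) {
				rcu_read_unlock();
				break;	/* irqs stay off, ->siglock held */
			}
			spin_unlock(&sighand->siglock);
			rcu_read_unlock();
			local_irq_restore(*flags);
		}

		return sighand;
	}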

Thanks Paul.

Oleg.

