Date:   Tue, 29 Oct 2019 07:10:34 -0400
From:   Qian Cai <cai@....pw>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     akpm@...ux-foundation.org, bigeasy@...utronix.de,
        tglx@...utronix.de, thgarnie@...gle.com, tytso@....edu,
        cl@...ux.com, penberg@...nel.org, rientjes@...gle.com,
        mingo@...hat.com, will@...nel.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, keescook@...omium.org
Subject: Re: [PATCH] sched: Avoid spurious lock dependencies



> On Oct 1, 2019, at 5:18 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> 
> Does the below adequately describe the situation?
> 
> ---
> Subject: sched: Avoid spurious lock dependencies
> 
> While seemingly harmless, __sched_fork() does hrtimer_init(), which,
> when DEBUG_OBJECTS is enabled, can end up doing allocations.
> 
> This then results in the following lock order:
> 
>  rq->lock
>    zone->lock.rlock
>      batched_entropy_u64.lock
> 
> This in turn can cause deadlocks when we do wakeups while holding that
> batched_entropy lock -- as the random code does.
> 
> Solve this by moving __sched_fork() out from under rq->lock. This is
> safe because nothing there relies on rq->lock, as is also evident from
> the other __sched_fork() callsite.
> 
> Fixes: b7d5dc21072c ("random: add a spinlock_t to struct batched_entropy")
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
> kernel/sched/core.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 7880f4f64d0e..1832fc0fbec5 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6039,10 +6039,11 @@ void init_idle(struct task_struct *idle, int cpu)
>    struct rq *rq = cpu_rq(cpu);
>    unsigned long flags;
> 
> +    __sched_fork(0, idle);
> +
>    raw_spin_lock_irqsave(&idle->pi_lock, flags);
>    raw_spin_lock(&rq->lock);
> 
> -    __sched_fork(0, idle);
>    idle->state = TASK_RUNNING;
>    idle->se.exec_start = sched_clock();
>    idle->flags |= PF_IDLE;

It looks like this patch has been forgotten. Do you want to repost it, so Ingo has a better chance of picking it up?
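
For anyone skimming the archive, below is a minimal userspace sketch of the ABBA inversion described in the quoted changelog. The names rq_lock and entropy_lock are hypothetical stand-ins for rq->lock and batched_entropy_u64.lock, and this is plain pthreads, not kernel code; it only illustrates why having both acquisition orders present can deadlock.

/*
 * thread_a: rq_lock -> entropy_lock   (analogy: init_idle() holding
 *                                      rq->lock while the DEBUG_OBJECTS
 *                                      allocation reaches the entropy lock)
 * thread_b: entropy_lock -> rq_lock   (analogy: random code doing a wakeup
 *                                      while holding the batch lock)
 *
 * Build with: cc -pthread abba.c
 * If the race window hits, the program never prints and is deadlocked.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t rq_lock      = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t entropy_lock = PTHREAD_MUTEX_INITIALIZER;

static void *thread_a(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&rq_lock);       /* take "rq->lock" first */
	usleep(1000);                       /* widen the race window */
	pthread_mutex_lock(&entropy_lock);  /* then the "entropy" lock */
	pthread_mutex_unlock(&entropy_lock);
	pthread_mutex_unlock(&rq_lock);
	return NULL;
}

static void *thread_b(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&entropy_lock);  /* take the "entropy" lock first */
	usleep(1000);
	pthread_mutex_lock(&rq_lock);       /* then "rq->lock" (inverse order) */
	pthread_mutex_unlock(&rq_lock);
	pthread_mutex_unlock(&entropy_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, thread_a, NULL);
	pthread_create(&b, NULL, thread_b, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	puts("finished without deadlocking this time");
	return 0;
}

Moving __sched_fork() out from under rq->lock, as the patch above does, removes the rq_lock -> entropy_lock ordering entirely, so only one acquisition order remains and the inversion cannot occur.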
