Message-ID: <CABk29NvicqM_c2ssYnDrEy_FPsfD5GH38rB_XHooErALOabe5g@mail.gmail.com>
Date:   Mon, 26 Apr 2021 15:21:36 -0700
From:   Josh Don <joshdon@...gle.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Joel Fernandes <joel@...lfernandes.org>,
        "Hyser,Chris" <chris.hyser@...cle.com>,
        Ingo Molnar <mingo@...nel.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Valentin Schneider <valentin.schneider@....com>,
        Mel Gorman <mgorman@...e.de>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 04/19] sched: Prepare for Core-wide rq->lock

> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index f732642e3e09..1a81e9cc9e5d 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -290,6 +290,10 @@ static void sched_core_assert_empty(void)
>  static void __sched_core_enable(void)
>  {
>         static_branch_enable(&__sched_core_enabled);
> +       /*
> +        * Ensure raw_spin_rq_*lock*() have completed before flipping.
> +        */
> +       synchronize_sched();

s/synchronize_sched()/synchronize_rcu()/ -- synchronize_sched() was
removed when the RCU flavors were consolidated in v5.1.
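
Since the consolidation, synchronize_rcu() also waits for
preempt-disabled regions, which is exactly the guarantee the comment
above is asking for. Roughly (a sketch using the names from this patch,
not text from it):

        /*
         * lock path                         enable path
         * ---------                         -----------
         * preempt_disable();                static_branch_enable(...);
         * if (sched_core_disabled())        synchronize_rcu();
         *         lock(&rq->__lock);        __sched_core_flip(true);
         * preempt_enable_no_resched();
         *
         * synchronize_rcu() cannot return while any CPU is inside its
         * preempt-disabled section, so the flip cannot race with a lock
         * acquisition that was based on a stale sched_core_disabled().
         */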

>         __sched_core_flip(true);
>         sched_core_assert_empty();
>  }
> @@ -449,16 +453,22 @@ void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
>  {
>         raw_spinlock_t *lock;
>
> +       preempt_disable();
>         if (sched_core_disabled()) {
>                 raw_spin_lock_nested(&rq->__lock, subclass);
> +               /* preempt *MUST* still be disabled here */
> +               preempt_enable_no_resched();
>                 return;
>         }

This approach looks good to me. I'm guessing you went this route,
rather than re-checking __rq_lockp() after taking the lock, in order to
keep the disabled case as cheap as possible?
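
For reference, the variant I was alluding to (just a sketch, and
assuming __rq_lockp() falls back to &rq->__lock when core sched is
disabled) would drop the fast path and always go through the retry
loop:

        void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
        {
                raw_spinlock_t *lock;

                for (;;) {
                        lock = __rq_lockp(rq);
                        raw_spin_lock_nested(lock, subclass);
                        /*
                         * __sched_core_flip() may have switched rq to a
                         * different lock while we were spinning; re-check
                         * and retry if so.
                         */
                        if (likely(lock == __rq_lockp(rq)))
                                return;
                        raw_spin_unlock(lock);
                }
        }

That keeps the locking self-contained, but the disabled (common) case
then pays an extra __rq_lockp() load and compare on every acquisition,
so the preempt_disable() route seems like the right trade.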

Recommend a comment noting that the preempt_disable() here pairs with
the synchronize_rcu() in __sched_core_enable().
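
Something like (wording is just a suggestion):

        /*
         * Pairs with the synchronize_rcu() in __sched_core_enable(): we
         * cannot be preempted between observing sched_core_disabled() and
         * taking rq->__lock, so the enable path is guaranteed to wait for
         * us before it flips the lock pointers.
         */
        preempt_disable();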

>
>         for (;;) {
>                 lock = __rq_lockp(rq);
>                 raw_spin_lock_nested(lock, subclass);
> -               if (likely(lock == __rq_lockp(rq)))
> +               if (likely(lock == __rq_lockp(rq))) {
> +                       /* preempt *MUST* still be disabled here */
> +                       preempt_enable_no_resched();
>                         return;
> +               }
>                 raw_spin_unlock(lock);
>         }
>  }
