Date:   Thu, 29 Apr 2021 16:03:30 +0800
From:   Aubrey Li <aubrey.intel@...il.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Joel Fernandes <joel@...lfernandes.org>,
        "Hyser,Chris" <chris.hyser@...cle.com>,
        Josh Don <joshdon@...gle.com>, Ingo Molnar <mingo@...nel.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Valentin Schneider <valentin.schneider@....com>,
        Mel Gorman <mgorman@...e.de>,
        Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 04/19] sched: Prepare for Core-wide rq->lock

On Thu, Apr 22, 2021 at 8:39 PM Peter Zijlstra <peterz@...radead.org> wrote:
>
> When switching on core-sched, CPUs need to agree which lock to use for
> their RQ.
>
> The new rule will be that rq->core_enabled will be toggled while
> holding all rq->__locks that belong to a core. This means we need to
> double check the rq->core_enabled value after each lock acquire and
> retry if it changed.
>
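
As I read this changelog, the snipped raw_spin_rq_lock() hunk
implements the double check with an acquire-and-recheck loop, roughly
like this (my paraphrase, not the exact code):

	for (;;) {
		raw_spinlock_t *lock = rq_lockp(rq);

		raw_spin_lock(lock);
		if (likely(lock == rq_lockp(rq)))
			break;	/* pointer can't change while we hold it */
		/* rq->core_enabled flipped under us; drop and retry */
		raw_spin_unlock(lock);
	}
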
> This also has implications for those sites that take multiple RQ
> locks; they need to be careful that the second lock doesn't end up
> being the first lock.
>
> Verify the lock pointer after acquiring the first lock, because if
> they're on the same core, holding any of the rq->__lock instances will
> pin the core state.
>
> While there, change the rq->__lock order to CPU number, instead of rq
> address; this greatly simplifies the next patch.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
>  kernel/sched/core.c  |   48 ++++++++++++++++++++++++++++++++++++++++++++++--
>  kernel/sched/sched.h |   41 +++++++++++------------------------------
>  2 files changed, 57 insertions(+), 32 deletions(-)
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
----snip----
> @@ -199,6 +224,25 @@ void raw_spin_rq_unlock(struct rq *rq)
>         raw_spin_unlock(rq_lockp(rq));
>  }
>
> +#ifdef CONFIG_SMP
> +/*
> + * double_rq_lock - safely lock two runqueues
> + */
> +void double_rq_lock(struct rq *rq1, struct rq *rq2)
> +{
> +       lockdep_assert_irqs_disabled();
> +
> +       if (rq1->cpu > rq2->cpu)

It's still a bit hard for me to digest this function. I guess ordering
on rq->cpu can't guarantee a consistent locking sequence when core
scheduling is enabled:

- cpu1 and cpu7 share lockA
- cpu2 and cpu8 share lockB

double_rq_lock(1,8) leads to lock(A) then lock(B)
double_rq_lock(7,2) leads to lock(B) then lock(A)
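
If two tasks run these concurrently, each can take its first lock and
then block waiting for the other's:

	T1: lock(A)               T2: lock(B)
	T1: lock(B) <- waits      T2: lock(A) <- waits

a classic ABBA deadlock.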

Change to the below to avoid it?
+       if (__rq_lockp(rq1) > __rq_lockp(rq2))
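
Something like this untested sketch, where __rq_lockp() is assumed to
be a helper that returns the underlying lock pointer regardless of
rq->core_enabled:

	void double_rq_lock(struct rq *rq1, struct rq *rq2)
	{
		lockdep_assert_irqs_disabled();

		/* order on the lock pointer itself, not the CPU number */
		if (__rq_lockp(rq1) > __rq_lockp(rq2))
			swap(rq1, rq2);

		raw_spin_rq_lock(rq1);
		if (rq_lockp(rq1) == rq_lockp(rq2))
			return;

		raw_spin_rq_lock_nested(rq2, SINGLE_DEPTH_NESTING);
	}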

Please correct me if I'm wrong.

Thanks,
-Aubrey

> +               swap(rq1, rq2);
> +
> +       raw_spin_rq_lock(rq1);
> +       if (rq_lockp(rq1) == rq_lockp(rq2))
> +               return;
> +
> +       raw_spin_rq_lock_nested(rq2, SINGLE_DEPTH_NESTING);
> +}
> +#endif
> +
