Message-ID: <f67ce85c73941bd5d35e8af84765c70f56ddcdf7.camel@infradead.org>
Date: Wed, 08 Dec 2021 16:57:07 +0000
From: David Woodhouse <dwmw2@...radead.org>
To: paulmck@...nel.org
Cc: Thomas Gleixner <tglx@...utronix.de>,
Andy Lutomirski <luto@...nel.org>,
"Schander, Johanna 'Mimoja' Amelie" <mimoja@...zon.com>,
LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
X86 ML <x86@...nel.org>, "H. Peter Anvin" <hpa@...or.com>,
hewenliang4@...wei.com, hushiyuan@...wei.com,
luolongjun@...wei.com, hejingxian <hejingxian@...wei.com>
Subject: Re: [PATCH] use x86 cpu park to speedup smp_init in kexec situation

On Wed, 2021-12-08 at 15:10 +0000, David Woodhouse wrote:
> @@ -4266,13 +4266,13 @@ void rcu_cpu_starting(unsigned int cpu)
>  		rcu_disable_urgency_upon_qs(rdp);
>  		/* Report QS -after- changing ->qsmaskinitnext! */
>  		rcu_report_qs_rnp(mask, rnp, rnp->gp_seq, flags);
> +		/* Er, why didn't we drop the lock here? */
> -	} else {
> -		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
>  	}
>
Oh, I see... how about this straw man then...
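
The idea, for anyone following along: ->ofl_seq behaves like a
seqcount. It is bumped to an odd value while a CPU is in its online
transition and back to an even value afterwards, and the grace-period
code waits until it sees an even value. Condensed into a pair of
hypothetical helpers (a paraphrase of what rcu_cpu_starting()
open-codes; these functions don't actually exist in tree.c):

/* Mark this rcu_node as having a CPU online operation in flight. */
static void ofl_seq_begin(struct rcu_node *rnp)
{
	WRITE_ONCE(rnp->ofl_seq, rnp->ofl_seq + 1);	/* Now odd. */
	WARN_ON_ONCE(!(rnp->ofl_seq & 0x1));
	smp_mb(); /* Order the bump before the online-time state updates. */
}

/* Mark the online operation on this rcu_node as complete. */
static void ofl_seq_end(struct rcu_node *rnp)
{
	smp_mb(); /* Order the online-time state updates before the bump. */
	WRITE_ONCE(rnp->ofl_seq, rnp->ofl_seq + 1);	/* Even again. */
	WARN_ON_ONCE(rnp->ofl_seq & 0x1);
}

The patch below just pulls the existing rnp->lock acquisition up above
the odd bump and moves the even bump back under the lock, so the whole
odd-to-even window is serialized against any other CPU coming up on the
same leaf node.
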
From 083c8fb2656e9fc60a17c9bfd538fcee4c5ebacc Mon Sep 17 00:00:00 2001
From: David Woodhouse <dwmw@...zon.co.uk>
Date: Tue, 16 Feb 2021 15:04:34 +0000
Subject: [PATCH 1/4] rcu: Expand locking around rcu_cpu_starting() to cover
 rnp->ofl_seq bump

To allow architectures to bring APs online in parallel, we need to
ensure that only one of them is going through rcu_cpu_starting() at a
time. Expand the coverage of the existing per-node lock to also cover
the manipulation of rnp->ofl_seq.

Signed-off-by: David Woodhouse <dwmw@...zon.co.uk>
---
 kernel/rcu/tree.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index ef8d36f580fc..544198c674f2 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4246,11 +4246,11 @@ void rcu_cpu_starting(unsigned int cpu)
 
 	rnp = rdp->mynode;
 	mask = rdp->grpmask;
+	raw_spin_lock_irqsave_rcu_node(rnp, flags);
 	WRITE_ONCE(rnp->ofl_seq, rnp->ofl_seq + 1);
 	WARN_ON_ONCE(!(rnp->ofl_seq & 0x1));
 	rcu_dynticks_eqs_online();
 	smp_mb(); // Pair with rcu_gp_cleanup()'s ->ofl_seq barrier().
-	raw_spin_lock_irqsave_rcu_node(rnp, flags);
 	WRITE_ONCE(rnp->qsmaskinitnext, rnp->qsmaskinitnext | mask);
 	newcpu = !(rnp->expmaskinitnext & mask);
 	rnp->expmaskinitnext |= mask;
@@ -4261,6 +4261,11 @@ void rcu_cpu_starting(unsigned int cpu)
 	rdp->rcu_onl_gp_seq = READ_ONCE(rcu_state.gp_seq);
 	rdp->rcu_onl_gp_flags = READ_ONCE(rcu_state.gp_flags);
 
+	smp_mb(); // Pair with rcu_gp_cleanup()'s ->ofl_seq barrier().
+	WRITE_ONCE(rnp->ofl_seq, rnp->ofl_seq + 1);
+	WARN_ON_ONCE(rnp->ofl_seq & 0x1);
+	smp_mb(); /* Ensure RCU read-side usage follows above initialization. */
+
 	/* An incoming CPU should never be blocking a grace period. */
 	if (WARN_ON_ONCE(rnp->qsmask & mask)) { /* RCU waiting on incoming CPU? */
 		rcu_disable_urgency_upon_qs(rdp);
@@ -4269,10 +4274,6 @@ void rcu_cpu_starting(unsigned int cpu)
 	} else {
 		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 	}
-	smp_mb(); // Pair with rcu_gp_cleanup()'s ->ofl_seq barrier().
-	WRITE_ONCE(rnp->ofl_seq, rnp->ofl_seq + 1);
-	WARN_ON_ONCE(rnp->ofl_seq & 0x1);
-	smp_mb(); /* Ensure RCU read-side usage follows above initialization. */
 }
 
 /*
--
2.31.1
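
For reference, the smp_mb() calls around the ->ofl_seq bumps pair with
the grace-period side, which waits for the counter to settle. In the
trees I'm looking at that wait lives in rcu_gp_init() and goes roughly
like this (paraphrased, not quoted verbatim):

	rcu_for_each_leaf_node(rnp) {
		smp_mb(); // Pair with the barrier before the odd bump.
		firstseq = READ_ONCE(rnp->ofl_seq);
		if (firstseq & 0x1)	/* Odd: a hotplug op is in flight. */
			while (firstseq == smp_load_acquire(&rnp->ofl_seq))
				schedule_timeout_idle(1);
		smp_mb(); // Pair with the barrier after the even bump.
	}

With the locking expanded as in the patch above, a second AP coming
online on the same leaf node simply spins on rnp->lock instead of
racing the odd/even transitions.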