Message-ID: <20170313124621.GA3328@twins.programming.kicks-ass.net>
Date: Mon, 13 Mar 2017 13:46:21 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: linux-kernel@...r.kernel.org, mingo@...hat.com, fweisbec@...il.com
Subject: Re: [PATCH] clock: Fix smp_processor_id() in preemptible bug

On Thu, Mar 09, 2017 at 10:37:32AM -0800, Paul E. McKenney wrote:
> And it does pass light testing. I will hammer it harder this evening.
>
> So please send a formal patch!

Changed it a bit...
---
Subject: sched/clock: Some clear_sched_clock_stable() vs hotplug wobbles

Paul reported two independent problems with clear_sched_clock_stable():

 - if we tickle it during hotplug (even though sched_clock was
   already marked unstable) we'll attempt to schedule_work(), and
   this explodes because RCU isn't watching the new CPU quite yet.

 - since we run all of __clear_sched_clock_stable() from workqueue
   context, there's a preempt problem: this_scd() ends up using
   smp_processor_id() while preemptible (see the sketch below).
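
For completeness, a minimal sketch of that second problem (illustrative
only, not part of the patch; buggy_work_fn() is a made-up name, while
this_scd() and struct sched_clock_data are the real ones):

static void buggy_work_fn(struct work_struct *work)
{
        /*
         * Workqueue callbacks run with preemption enabled, and
         * this_scd() is this_cpu_ptr(&sched_clock_data), so with
         * CONFIG_DEBUG_PREEMPT=y this triggers:
         *   BUG: using smp_processor_id() in preemptible code
         * and we can also migrate and scribble on another CPU's scd.
         */
        struct sched_clock_data *scd = this_scd();

        scd->tick_raw = sched_clock();  /* illustrative per-cpu write */
}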

Cure both by only doing the static_branch_disable() from a workqueue,
and only when it's still stable.

This still leaves the problem of what to do about hotplug actually
wrecking the TSC: if it was stable and now isn't, we will want to run
that work, which will then prod RCU the wrong way. Bloody hotplug.
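
Concretely, the chain to worry about looks like this (taking
mark_tsc_unstable() as the x86 entry point is an assumption about the
caller; the rest is the code below):

/*
 * new CPU coming online, RCU not yet watching it:
 *
 *   mark_tsc_unstable()
 *     clear_sched_clock_stable()
 *       __clear_sched_clock_stable()
 *         if (sched_clock_stable())                 // it *was* stable
 *                 schedule_work(&sched_clock_work); // RCU-unsafe here
 */
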
Reported-by: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
 kernel/sched/clock.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
index a08795e21628..fec0f58c8dee 100644
--- a/kernel/sched/clock.c
+++ b/kernel/sched/clock.c
@@ -141,7 +141,14 @@ static void __set_sched_clock_stable(void)
         tick_dep_clear(TICK_DEP_BIT_CLOCK_UNSTABLE);
 }
 
-static void __clear_sched_clock_stable(struct work_struct *work)
+static void __sched_clock_work(struct work_struct *work)
+{
+        static_branch_disable(&__sched_clock_stable);
+}
+
+static DECLARE_WORK(sched_clock_work, __sched_clock_work);
+
+static void __clear_sched_clock_stable(void)
 {
         struct sched_clock_data *scd = this_scd();
 
@@ -160,11 +167,11 @@ static void __clear_sched_clock_stable(struct work_struct *work)
                 scd->tick_gtod, gtod_offset,
                 scd->tick_raw, raw_offset);
 
-        static_branch_disable(&__sched_clock_stable);
         tick_dep_set(TICK_DEP_BIT_CLOCK_UNSTABLE);
-}
 
-static DECLARE_WORK(sched_clock_work, __clear_sched_clock_stable);
+        if (sched_clock_stable())
+                schedule_work(&sched_clock_work);
+}
 
 void clear_sched_clock_stable(void)
 {
@@ -173,7 +180,7 @@ void clear_sched_clock_stable(void)
         smp_mb(); /* matches sched_clock_init_late() */
 
         if (sched_clock_running == 2)
-                schedule_work(&sched_clock_work);
+                __clear_sched_clock_stable();
 }
 
 void sched_clock_init_late(void)