Message-ID: <1546864114.26963.5.camel@gmx.de>
Date: Mon, 07 Jan 2019 13:28:34 +0100
From: Mike Galbraith <efault@....de>
To: Peter Zijlstra <peterz@...radead.org>,
Tom Putzeys <tom.putzeys@...atlascopco.com>
Cc: "mingo@...hat.com" <mingo@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: CFS scheduler: spin_lock usage causes dead lock when
smp_apic_timer_interrupt occurs
On Mon, 2019-01-07 at 11:26 +0100, Peter Zijlstra wrote:
>
> I would expect lockdep to also complain about this...
And grumble it did.
commit df7e8acc0c9a84979a448d215b8ef889efe4ac5a
Author: Mike Galbraith <efault@....de>
Date: Fri May 4 08:14:38 2018 +0200
sched/fair: Fix CFS bandwidth control lockdep DEADLOCK report
CFS bandwidth control yields the inversion gripe below; moving the
bandwidth timers to hard interrupt context quells it.
|========================================================
|WARNING: possible irq lock inversion dependency detected
|4.16.7-rt1-rt #2 Tainted: G E
|--------------------------------------------------------
|sirq-hrtimer/0/15 just changed the state of lock:
| (&cfs_b->lock){+...}, at: [<000000009adb5cf7>] sched_cfs_period_timer+0x28/0x140
|but this lock was taken by another, HARDIRQ-safe lock in the past: (&rq->lock){-...}
|and interrupts could create inverse lock ordering between them.
|other info that might help us debug this:
| Possible interrupt unsafe locking scenario:
|        CPU0                    CPU1
|        ----                    ----
|   lock(&cfs_b->lock);
|                                local_irq_disable();
|                                lock(&rq->lock);
|                                lock(&cfs_b->lock);
|   <Interrupt>
|     lock(&rq->lock);
Cc: stable-rt@...r.kernel.org
Acked-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
Signed-off-by: Mike Galbraith <efault@....de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 960ad0ce77d7..420624c49f38 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5007,9 +5007,9 @@ void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
 	cfs_b->period = ns_to_ktime(default_cfs_period());
 	INIT_LIST_HEAD(&cfs_b->throttled_cfs_rq);
-	hrtimer_init(&cfs_b->period_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
+	hrtimer_init(&cfs_b->period_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED_HARD);
 	cfs_b->period_timer.function = sched_cfs_period_timer;
-	hrtimer_init(&cfs_b->slack_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	hrtimer_init(&cfs_b->slack_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
 	cfs_b->slack_timer.function = sched_cfs_slack_timer;
 }