Message-ID: <20230320233720.3488453-12-jstultz@google.com>
Date: Mon, 20 Mar 2023 23:37:19 +0000
From: John Stultz <jstultz@...gle.com>
To: LKML <linux-kernel@...r.kernel.org>
Cc: Valentin Schneider <valentin.schneider@....com>,
Joel Fernandes <joelaf@...gle.com>,
Qais Yousef <qyousef@...gle.com>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Valentin Schneider <vschneid@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
Zimuzo Ezeozue <zezeozue@...gle.com>,
Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Will Deacon <will@...nel.org>,
Waiman Long <longman@...hat.com>,
Boqun Feng <boqun.feng@...il.com>,
"Paul E . McKenney" <paulmck@...nel.org>, kernel-team@...roid.com,
"Connor O'Brien" <connoro@...gle.com>,
John Stultz <jstultz@...gle.com>
Subject: [PATCH v2 11/12] sched/rt: Fix proxy/current (push,pull)ability
From: Valentin Schneider <valentin.schneider@....com>

Proxy execution forms atomic pairs of tasks: a proxy (scheduling context)
and an owner (execution context). The proxy, along with the rest of the
blocked chain, follows the owner wrt CPU placement.

They can be the same task, in which case push/pull doesn't need any
modification. When they are different, however, consider:

  FIFO1 & FIFO42:

             ,->  RT42
             |     | blocked-on
             |     v
  proxied-by |   mutex
             |     | owner
             |     v
             `--  RT1

  RT1
  RT42

     CPU0            CPU1
      ^                ^
      |                |
  overloaded      !overloaded
  rq prio = 42    rq prio = 0

RT1 is eligible to be pushed to CPU1, but should that happen it will
"carry" RT42 along. Clearly, neither RT1 nor RT42 should be considered
push/pullable here.
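
To make the "carry" effect concrete, here is a toy userspace model
(illustrative names only, not kernel code): a push decision that looks
at RT1's own priority in isolation undercounts what CPU1 would actually
receive, because the blocked chain migrates along with the owner.

#include <stdio.h>

/* Toy model, not kernel code: a task and the proxy blocked on it. */
struct task {
	int prio;			/* RT priority */
	struct task *proxied_by;	/* task donating its sched context */
};

/* Highest priority the destination CPU inherits once the task lands,
 * i.e. the max over the task and its whole blocked chain. */
static int effective_prio(const struct task *t)
{
	int prio = t->prio;

	for (const struct task *p = t->proxied_by; p; p = p->proxied_by)
		if (p->prio > prio)
			prio = p->prio;
	return prio;
}

int main(void)
{
	struct task rt42 = { .prio = 42, .proxied_by = NULL };
	struct task rt1  = { .prio = 1,  .proxied_by = &rt42 };

	printf("RT1 alone:           prio %d\n", rt1.prio);		/* 1 */
	printf("RT1 + blocked chain: prio %d\n", effective_prio(&rt1));	/* 42 */
	return 0;
}
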
Furthermore, tasks becoming blocked on a mutex don't need an explicit
dequeue/enqueue cycle to be made (push/pull)able: they have to be running
to block on a mutex, thus they will eventually hit put_prev_task().
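
The tail of that path is the put_prev_task_rt() hunk below; its new test,
rendered as a compile-only sketch with toy types ("blocked" stands in for
task_is_blocked() from earlier in this series):

#include <stdbool.h>

/* Toy stand-in for the relevant task state, not kernel code. */
struct toy_task {
	bool on_rt_rq;		/* still enqueued on the RT runqueue */
	bool blocked;		/* blocked on a mutex */
	int  nr_cpus_allowed;
};

/*
 * Mirrors the new put_prev_task_rt() condition: a blocked task is made
 * pushable regardless of its affinity, since it only ever runs wherever
 * its mutex owner runs.
 */
static bool should_mark_pushable(const struct toy_task *p)
{
	return p->on_rt_rq && (p->nr_cpus_allowed > 1 || p->blocked);
}
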
XXX: pinned tasks becoming unblocked should be removed from the push/pull
lists, but those don't get to see __schedule() straight away.
Cc: Joel Fernandes <joelaf@...gle.com>
Cc: Qais Yousef <qyousef@...gle.com>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Juri Lelli <juri.lelli@...hat.com>
Cc: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Valentin Schneider <vschneid@...hat.com>
Cc: Steven Rostedt <rostedt@...dmis.org>
Cc: Ben Segall <bsegall@...gle.com>
Cc: Zimuzo Ezeozue <zezeozue@...gle.com>
Cc: Mel Gorman <mgorman@...e.de>
Cc: Daniel Bristot de Oliveira <bristot@...hat.com>
Cc: Will Deacon <will@...nel.org>
Cc: Waiman Long <longman@...hat.com>
Cc: Boqun Feng <boqun.feng@...il.com>
Cc: "Paul E . McKenney" <paulmck@...nel.org>
Cc: kernel-team@...roid.com
Signed-off-by: Valentin Schneider <valentin.schneider@....com>
Signed-off-by: Connor O'Brien <connoro@...gle.com>
Signed-off-by: John Stultz <jstultz@...gle.com>
---
kernel/sched/core.c | 36 ++++++++++++++++++++++++++----------
kernel/sched/rt.c | 22 +++++++++++++++++-----
2 files changed, 43 insertions(+), 15 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d0f86670bdf8..11138277c7c8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7062,12 +7062,28 @@ proxy(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
 	WARN_ON_ONCE(!owner->on_rq);
 	return owner;
 }
+
+static inline void proxy_tag_curr(struct rq *rq, struct task_struct *next)
+{
+	/*
+	 * pick_next_task() calls set_next_task() on the proxy at some
+	 * point, which ensures it is not push/pullable. However, the
+	 * proxy *and* the owner form an atomic pair wrt push/pull.
+	 *
+	 * Make sure owner is not pushable. Unfortunately we can only
+	 * deal with that by means of a dequeue/enqueue cycle. :-/
+	 */
+	dequeue_task(rq, next, DEQUEUE_NOCLOCK | DEQUEUE_SAVE);
+	enqueue_task(rq, next, ENQUEUE_NOCLOCK | ENQUEUE_RESTORE);
+}
 #else /* PROXY_EXEC */
 static struct task_struct *
 proxy(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
 {
 	return next;
 }
+
+static inline void proxy_tag_curr(struct rq *rq, struct task_struct *next) { }
 #endif /* PROXY_EXEC */

 /*
@@ -7116,6 +7132,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)
 	unsigned long prev_state;
 	struct rq_flags rf;
 	struct rq *rq;
+	bool proxied;
 	int cpu;
 	bool preserve_need_resched = false;

@@ -7189,20 +7206,11 @@ static void __sched notrace __schedule(unsigned int sched_mode)
 				atomic_inc(&rq->nr_iowait);
 				delayacct_blkio_start();
 			}
-		} else {
-			/*
-			 * XXX
-			 * Let's make this task, which is blocked on
-			 * a mutex, (push/pull)able (RT/DL).
-			 * Unfortunately we can only deal with that by
-			 * means of a dequeue/enqueue cycle. :-/
-			 */
-			dequeue_task(rq, prev, 0);
-			enqueue_task(rq, prev, 0);
 		}
 		switch_count = &prev->nvcsw;
 	}

+	proxied = !!prev->blocked_proxy;
 pick_again:
 	/*
 	 * If picked task is actually blocked it means that it can act as a
@@ -7244,6 +7252,10 @@ static void __sched notrace __schedule(unsigned int sched_mode)
 	 * changes to task_struct made by pick_next_task().
 	 */
 	rq_set_curr_rcu_init(rq, next);
+
+	if (unlikely(!task_current_proxy(rq, next)))
+		proxy_tag_curr(rq, next);
+
 	/*
 	 * The membarrier system call requires each architecture
 	 * to have a full memory barrier after updating
@@ -7268,6 +7280,10 @@ static void __sched notrace __schedule(unsigned int sched_mode)
 		/* Also unlocks the rq: */
 		rq = context_switch(rq, prev, next, &rf);
 	} else {
+		/* In case next was already curr but just got blocked_proxy */
+		if (unlikely(!proxied && next->blocked_proxy))
+			proxy_tag_curr(rq, next);
+
 		rq->clock_update_flags &= ~(RQCF_ACT_SKIP|RQCF_REQ_SKIP);

 		rq_unpin_lock(rq, &rf);
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index d1c5a022eae4..419270b0918e 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1537,9 +1537,21 @@ enqueue_task_rt(struct rq *rq, struct task_struct *p, int flags)

 	enqueue_rt_entity(rt_se, flags);

-	if (!task_current(rq, p) && p->nr_cpus_allowed > 1 &&
-	    !task_is_blocked(p))
-		enqueue_pushable_task(rq, p);
+	/*
+	 * Current can't be pushed away. Proxy is tied to current, so don't
+	 * push it either.
+	 */
+	if (task_current(rq, p) || task_current_proxy(rq, p))
+		return;
+
+	/*
+	 * Pinned tasks can't be pushed.
+	 * Affinity of blocked tasks doesn't matter.
+	 */
+	if (!task_is_blocked(p) && p->nr_cpus_allowed == 1)
+		return;
+
+	enqueue_pushable_task(rq, p);
 }

 static void dequeue_task_rt(struct rq *rq, struct task_struct *p, int flags)
@@ -1832,9 +1844,9 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p)

 	/*
 	 * The previous task needs to be made eligible for pushing
-	 * if it is still active
+	 * if it is still active. Affinity of a blocked task doesn't matter.
 	 */
-	if (on_rt_rq(&p->rt) && p->nr_cpus_allowed > 1)
+	if (on_rt_rq(&p->rt) && (p->nr_cpus_allowed > 1 || task_is_blocked(p)))
 		enqueue_pushable_task(rq, p);
 }

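
For reference, the enqueue-side rules above collapse into a single
predicate; a compile-only sketch with toy types (task_current_proxy()
and task_is_blocked() are the helpers added earlier in this series):

#include <stdbool.h>

/* Toy stand-in for the state enqueue_task_rt() consults. */
struct toy_rt_task {
	bool is_current;	/* task_current(rq, p)       */
	bool is_current_proxy;	/* task_current_proxy(rq, p) */
	bool is_blocked;	/* task_is_blocked(p)        */
	int  nr_cpus_allowed;
};

static bool enqueue_as_pushable(const struct toy_rt_task *p)
{
	/* Current can't be pushed away; the proxy is tied to current. */
	if (p->is_current || p->is_current_proxy)
		return false;
	/* Pinned tasks can't be pushed; a blocked task's affinity is moot. */
	if (!p->is_blocked && p->nr_cpus_allowed == 1)
		return false;
	return true;
}
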
--
2.40.0.rc1.284.g88254d51c5-goog