Message-Id: <20220308065110.636947-2-wenjian1@xiaomi.com>
Date: Tue, 8 Mar 2022 14:51:09 +0800
From: Jian Wen <wenjianhn@...il.com>
To: peterz@...radead.org
Cc: mingo@...hat.com, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, linux-kernel@...r.kernel.org,
Jian Wen <wenjian1@...omi.com>
Subject: [PATCH 1/2] sched: explicitly distinguish between TASK_INTERRUPTIBLE and TASK_UNINTERRUPTIBLE
A task cannot be in both TASK_INTERRUPTIBLE and TASK_UNINTERRUPTIBLE
at the same time, so the two schedstat checks are mutually exclusive.
Chain them with "else if" to make that explicit and to skip the
redundant second test.

This also makes the next patch easier to review.
Signed-off-by: Jian Wen <wenjian1@...omi.com>
---
kernel/sched/deadline.c | 3 +--
kernel/sched/fair.c | 2 +-
kernel/sched/rt.c | 3 +--
3 files changed, 3 insertions(+), 5 deletions(-)
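
Editorial note, not part of the patch to be applied: the small
user-space sketch below only illustrates why the two checks are
mutually exclusive. The dequeue_stat() helper and the program itself
are hypothetical; the TASK_* bit values mirror the definitions in
include/linux/sched.h.

#include <stdio.h>

#define TASK_INTERRUPTIBLE	0x0001
#define TASK_UNINTERRUPTIBLE	0x0002

/* Return which schedstat field a dequeue would update for a given
 * task state.  A sleeping task carries exactly one of the two sleep
 * bits, so "else if" is enough: the second test is skipped once the
 * first one has matched. */
static const char *dequeue_stat(unsigned int state)
{
	if (state & TASK_INTERRUPTIBLE)
		return "sleep_start";
	else if (state & TASK_UNINTERRUPTIBLE)
		return "block_start";
	return "none";
}

int main(void)
{
	printf("%s\n", dequeue_stat(TASK_INTERRUPTIBLE));	/* sleep_start */
	printf("%s\n", dequeue_stat(TASK_UNINTERRUPTIBLE));	/* block_start */
	return 0;
}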
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index d2c072b0ef01..e6fe3b46432a 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1543,8 +1543,7 @@ update_stats_dequeue_dl(struct dl_rq *dl_rq, struct sched_dl_entity *dl_se,
if (state & TASK_INTERRUPTIBLE)
__schedstat_set(p->stats.sleep_start,
rq_clock(rq_of_dl_rq(dl_rq)));
-
- if (state & TASK_UNINTERRUPTIBLE)
+ else if (state & TASK_UNINTERRUPTIBLE)
__schedstat_set(p->stats.block_start,
rq_clock(rq_of_dl_rq(dl_rq)));
}
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5146163bfabb..fcfb22c835e4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -989,7 +989,7 @@ update_stats_dequeue_fair(struct cfs_rq *cfs_rq, struct sched_entity *se, int fl
if (state & TASK_INTERRUPTIBLE)
__schedstat_set(tsk->stats.sleep_start,
rq_clock(rq_of(cfs_rq)));
- if (state & TASK_UNINTERRUPTIBLE)
+ else if (state & TASK_UNINTERRUPTIBLE)
__schedstat_set(tsk->stats.block_start,
rq_clock(rq_of(cfs_rq)));
}
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 7b4f4fbbb404..5c4160f8cb23 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1387,8 +1387,7 @@ update_stats_dequeue_rt(struct rt_rq *rt_rq, struct sched_rt_entity *rt_se,
if (state & TASK_INTERRUPTIBLE)
__schedstat_set(p->stats.sleep_start,
rq_clock(rq_of_rt_rq(rt_rq)));
-
- if (state & TASK_UNINTERRUPTIBLE)
+ else if (state & TASK_UNINTERRUPTIBLE)
__schedstat_set(p->stats.block_start,
rq_clock(rq_of_rt_rq(rt_rq)));
}
--
2.25.1