Message-ID: <166693933445.29415.8756864522023842425.tip-bot2@tip-bot2>
Date: Fri, 28 Oct 2022 06:42:14 -0000
From: "tip-bot2 for Waiman Long" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Waiman Long <longman@...hat.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: sched/core] sched: Add __releases annotations to affine_move_task()

The following commit has been merged into the sched/core branch of tip:

Commit-ID: 5584e8ac2c68280e5ac31d231c23cdb7dfa225db
Gitweb: https://git.kernel.org/tip/5584e8ac2c68280e5ac31d231c23cdb7dfa225db
Author: Waiman Long <longman@...hat.com>
AuthorDate: Thu, 22 Sep 2022 14:00:37 -04:00
Committer: Peter Zijlstra <peterz@...radead.org>
CommitterDate: Thu, 27 Oct 2022 11:01:21 +02:00

sched: Add __releases annotations to affine_move_task()

affine_move_task() assumes that task_rq_lock() has been called and it
does an implicit task_rq_unlock() before returning. Add the appropriate
__releases annotations to make this clear.

A typo in a comment is also fixed.

Signed-off-by: Waiman Long <longman@...hat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Link: https://lkml.kernel.org/r/20220922180041.1768141-2-longman@redhat.com
---
 kernel/sched/core.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 069da4a..f6f2807 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2690,6 +2690,8 @@ void release_user_cpus_ptr(struct task_struct *p)
  */
 static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flags *rf,
 			    int dest_cpu, unsigned int flags)
+	__releases(rq->lock)
+	__releases(p->pi_lock)
 {
 	struct set_affinity_pending my_pending = { }, *pending = NULL;
 	bool stop_pending, complete = false;
@@ -2999,7 +3001,7 @@ err_unlock:
 
 /*
  * Restrict the CPU affinity of task @p so that it is a subset of
- * task_cpu_possible_mask() and point @p->user_cpu_ptr to a copy of the
+ * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
  * old affinity mask. If the resulting mask is empty, we warn and walk
  * up the cpuset hierarchy until we find a suitable mask.
  */
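
For reference, __releases() (like __acquires()) is a sparse annotation: it
compiles away in normal builds and only tells sparse that the function exits
with the named lock context released, i.e. it unlocks on the caller's behalf.
Below is a minimal, hypothetical sketch of that pattern; the names
(example_lock, consume_and_unlock) are made up for illustration and are not
part of this patch:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);
static int example_count;

/* Entered with example_lock held by the caller; drops it before returning. */
static int consume_and_unlock(int delta)
	__releases(example_lock)
{
	int ret;

	example_count += delta;
	ret = example_count;

	/* Implicit unlock on behalf of the caller, as affine_move_task() does. */
	spin_unlock(&example_lock);
	return ret;
}

With sparse checking enabled (e.g. "make C=1"), a return path that skipped
the spin_unlock() would be reported as a context imbalance, which is what the
added annotations now make checkable for affine_move_task().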