Message-ID: <20241012165523.2856425-1-jstultz@google.com>
Date: Sat, 12 Oct 2024 09:54:37 -0700
From: John Stultz <jstultz@...gle.com>
To: LKML <linux-kernel@...r.kernel.org>
Cc: John Stultz <jstultz@...gle.com>, Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>, Metin Kaya <metin.kaya@....com>,
kernel test robot <lkp@...el.com>
Subject: [PATCH] locking: Fix warning from missing argument documentation
The kernel test robot complained that commit 8d8fcb8c6a67
("locking/mutex: Remove wakeups from under mutex::wait_lock"),
currently only in Peter's git tree, didn't update the kernel-doc
comments for the newly added wake_q argument.
So fix this up.
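For context (not part of the fix itself): the wake_q argument exists so
callers can queue waiters while holding wait_lock and defer the actual
wakeups until after the lock is dropped. A rough sketch of that pattern,
using the existing wake_q helpers with hypothetical lock/task names:

    #include <linux/sched/wake_q.h>
    #include <linux/spinlock.h>

    static void example_unlock_path(struct rt_mutex_base *lock,
                                    struct task_struct *top_waiter_task)
    {
            DEFINE_WAKE_Q(wake_q);
            unsigned long flags;

            raw_spin_lock_irqsave(&lock->wait_lock, flags);
            /* Queue the task instead of waking it under wait_lock. */
            wake_q_add(&wake_q, top_waiter_task);
            raw_spin_unlock_irqrestore(&lock->wait_lock, flags);

            /* Wakeups happen only after wait_lock has been released. */
            wake_up_q(&wake_q);
    }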
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Juri Lelli <juri.lelli@...hat.com>
Cc: Metin Kaya <metin.kaya@....com>
Fixes: 8d8fcb8c6a67 ("locking/mutex: Remove wakeups from under mutex::wait_lock")
Reported-by: kernel test robot <lkp@...el.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202410121433.jYN4ypTb-lkp@intel.com/
Signed-off-by: John Stultz <jstultz@...gle.com>
---
kernel/locking/rtmutex.c | 2 ++
kernel/locking/rtmutex_api.c | 1 +
2 files changed, 3 insertions(+)
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 8ada6567a141..c7de80ee1f9d 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1680,6 +1680,7 @@ static void __sched rt_mutex_handle_deadlock(int res, int detect_deadlock,
* @state: The task state for sleeping
* @chwalk: Indicator whether full or partial chainwalk is requested
* @waiter: Initializer waiter for blocking
+ * @wake_q: The wake_q to wake tasks after we release the wait_lock
*/
static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
struct ww_acquire_ctx *ww_ctx,
@@ -1815,6 +1816,7 @@ static __always_inline int __rt_mutex_lock(struct rt_mutex_base *lock,
/**
* rtlock_slowlock_locked - Slow path lock acquisition for RT locks
* @lock: The underlying RT mutex
+ * @wake_q: The wake_q to wake tasks after we release the wait_lock
*/
static void __sched rtlock_slowlock_locked(struct rt_mutex_base *lock,
struct wake_q_head *wake_q)
diff --git a/kernel/locking/rtmutex_api.c b/kernel/locking/rtmutex_api.c
index 747f2da16037..2bc14c049a64 100644
--- a/kernel/locking/rtmutex_api.c
+++ b/kernel/locking/rtmutex_api.c
@@ -275,6 +275,7 @@ void __sched rt_mutex_proxy_unlock(struct rt_mutex_base *lock)
* @lock: the rt_mutex to take
* @waiter: the pre-initialized rt_mutex_waiter
* @task: the task to prepare
+ * @wake_q: the wake_q to wake tasks after we release the wait_lock
*
* Starts the rt_mutex acquire; it enqueues the @waiter and does deadlock
* detection. It does not wait, see rt_mutex_wait_proxy_lock() for that.
--
2.47.0.rc1.288.g06298d1525-goog