Message-ID: <9c6099ba-8d95-f2f6-fe36-7333f706c0fc@huaweicloud.com>
Date: Fri, 25 Oct 2024 15:50:19 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: Muchun Song <songmuchun@...edance.com>, axboe@...nel.dk
Cc: mingo@...hat.com, peterz@...radead.org, chengming.zhou@...ux.dev,
muchun.song@...ux.dev, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, tj@...nel.org, "yukuai (C)"
<yukuai3@...wei.com>
Subject: Re: [PATCH] block: refactor rq_qos_wait()
Hi,
+CC Tejun
在 2024/10/24 12:35, Muchun Song 写道:
> When rq_qos_wait() was first introduced, it was easy to understand. But
> with some bug fixes applied, it is no longer easy for newcomers to
> understand the whole logic underneath those fixes. In this patch,
> rq_qos_wait() is refactored and more comments are added for better
> understanding. There are 4 points to the improvement:
>
> 1) Use waitqueue_active() instead of wq_has_sleeper() to eliminate the
> unnecessary memory barrier in wq_has_sleeper(), which is supposed
> to be used on the waker side. In this case, we do not need the
> barrier, so use the cheaper one to locklessly test for waiters on
> the queue.
>
> 2) There is already a macro DEFINE_WAIT_FUNC() to declare a
> wait_queue_entry with a specified waking function, but there is no
> counterpart for initializing one wait_queue_entry with a specified
> waking function. So introduce init_wait_func() for this, which could
> also be used elsewhere (like filemap.c). rq_qos_wait() can then use
> default_wake_function() to wake up waiters, which allows removing the
> ->task field from rq_qos_wait_data.
I think it's better to cook point 2 as a separate patch.
Either way, this patch LGTM.
Reviewed-by: Yu Kuai <yukuai3@...wei.com>
>
> 3) Remove acquire_inflight_cb() logic for the first waiter out of the
> while loop to make the code clear.
>
> 4) Add more comments to explain how to sync with different waiters and
> the waker.
>
> Signed-off-by: Muchun Song <songmuchun@...edance.com>
> ---
> block/blk-rq-qos.c | 82 +++++++++++++++++++++++++++++---------------
> include/linux/wait.h | 6 ++--
> 2 files changed, 58 insertions(+), 30 deletions(-)
>
> diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
> index 9b0aa7dd6779f..5d995d389eaf5 100644
> --- a/block/blk-rq-qos.c
> +++ b/block/blk-rq-qos.c
> @@ -196,7 +196,6 @@ bool rq_depth_scale_down(struct rq_depth *rqd, bool hard_throttle)
>
> struct rq_qos_wait_data {
> struct wait_queue_entry wq;
> - struct task_struct *task;
> struct rq_wait *rqw;
> acquire_inflight_cb_t *cb;
> void *private_data;
> @@ -218,7 +217,20 @@ static int rq_qos_wake_function(struct wait_queue_entry *curr,
> return -1;
>
> data->got_token = true;
> - wake_up_process(data->task);
> + /*
> + * autoremove_wake_function() removes the wait entry only when it
> + * actually changed the task state. We want the wait always removed.
> + * Remove explicitly and use default_wake_function().
> + */
> + default_wake_function(curr, mode, wake_flags, key);
> + /*
> + * Note that the order of operations is important as finish_wait()
> + * tests whether @curr is removed without grabbing the lock. This
> + * should be the last thing to do to make sure we will not have a
> + * UAF access to @data. And the semantics of memory barrier in it
> + * also make sure the waiter will see the latest @data->got_token
> + * once list_empty_careful() in finish_wait() returns true.
> + */
> list_del_init_careful(&curr->entry);
> return 1;
> }
> @@ -244,41 +256,55 @@ void rq_qos_wait(struct rq_wait *rqw, void *private_data,
> cleanup_cb_t *cleanup_cb)
> {
> struct rq_qos_wait_data data = {
> - .wq = {
> - .func = rq_qos_wake_function,
> - .entry = LIST_HEAD_INIT(data.wq.entry),
> - },
> - .task = current,
> - .rqw = rqw,
> - .cb = acquire_inflight_cb,
> - .private_data = private_data,
> + .rqw = rqw,
> + .cb = acquire_inflight_cb,
> + .private_data = private_data,
> + .got_token = false,
> };
> - bool has_sleeper;
> + bool first_waiter;
>
> - has_sleeper = wq_has_sleeper(&rqw->wait);
> - if (!has_sleeper && acquire_inflight_cb(rqw, private_data))
> + /*
> + * If there are no waiters in the waiting queue, try to increase the
> + * inflight counter if we can. Otherwise, prepare for adding ourselves
> + * to the waiting queue.
> + */
> + if (!waitqueue_active(&rqw->wait) && acquire_inflight_cb(rqw, private_data))
> return;
>
> - has_sleeper = !prepare_to_wait_exclusive(&rqw->wait, &data.wq,
> + init_wait_func(&data.wq, rq_qos_wake_function);
> + first_waiter = prepare_to_wait_exclusive(&rqw->wait, &data.wq,
> TASK_UNINTERRUPTIBLE);
> + /*
> + * Make sure there is at least one inflight process; otherwise, waiters
> + * will never be woken up. Since there may be no inflight process before
> + * adding ourselves to the waiting queue above, we need to try to
> + * increase the inflight counter for ourselves. And it is sufficient to
> + * guarantee that at least the first waiter to enter the waiting queue
> + * will re-check the waiting condition before going to sleep, thus
> + * ensuring forward progress.
> + */
> + if (!data.got_token && first_waiter && acquire_inflight_cb(rqw, private_data)) {
> + finish_wait(&rqw->wait, &data.wq);
> + /*
> + * We raced with rq_qos_wake_function() getting a token,
> + * which means we now have two. Put our local token
> + * and wake anyone else potentially waiting for one.
> + *
> + * Enough memory barrier in list_empty_careful() in
> + * finish_wait() is paired with list_del_init_careful()
> + * in rq_qos_wake_function() to make sure we will see
> + * the latest @data->got_token.
> + */
> + if (data.got_token)
> + cleanup_cb(rqw, private_data);
> + return;
> + }
> +
> + /* we are now relying on the waker to increase our inflight counter. */
> do {
> - /* The memory barrier in set_task_state saves us here. */
> if (data.got_token)
> break;
> - if (!has_sleeper && acquire_inflight_cb(rqw, private_data)) {
> - finish_wait(&rqw->wait, &data.wq);
> -
> - /*
> - * We raced with rq_qos_wake_function() getting a token,
> - * which means we now have two. Put our local token
> - * and wake anyone else potentially waiting for one.
> - */
> - if (data.got_token)
> - cleanup_cb(rqw, private_data);
> - return;
> - }
> io_schedule();
> - has_sleeper = true;
> set_current_state(TASK_UNINTERRUPTIBLE);
> } while (1);
> finish_wait(&rqw->wait, &data.wq);
> diff --git a/include/linux/wait.h b/include/linux/wait.h
> index 8aa3372f21a08..b008ca42b5903 100644
> --- a/include/linux/wait.h
> +++ b/include/linux/wait.h
> @@ -1206,14 +1206,16 @@ int autoremove_wake_function(struct wait_queue_entry *wq_entry, unsigned mode, i
>
> #define DEFINE_WAIT(name) DEFINE_WAIT_FUNC(name, autoremove_wake_function)
>
> -#define init_wait(wait) \
> +#define init_wait_func(wait, function) \
> do { \
> (wait)->private = current; \
> - (wait)->func = autoremove_wake_function; \
> + (wait)->func = function; \
> INIT_LIST_HEAD(&(wait)->entry); \
> (wait)->flags = 0; \
> } while (0)
>
> +#define init_wait(wait) init_wait_func(wait, autoremove_wake_function)
> +
> typedef int (*task_call_f)(struct task_struct *p, void *arg);
> extern int task_call_func(struct task_struct *p, task_call_f func, void *arg);
>
>