Message-ID: <4d4f0a3f-6906-0c9b-1b56-22b9ff8795d4@bytedance.com>
Date: Thu, 5 Jan 2023 15:28:17 +0800
From: hanjinke <hanjinke.666@...edance.com>
To: Tejun Heo <tj@...nel.org>
Cc: josef@...icpanda.com, axboe@...nel.dk, cgroups@...r.kernel.org,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
yinxin.x@...edance.com
Subject: Re: [External] Re: [PATCH v3] blk-throtl: Introduce sync and async
queues for blk-throtl
On 2023/1/5 6:11 AM, Tejun Heo wrote:
> Hello,
>
> On Mon, Dec 26, 2022 at 09:05:05PM +0800, Jinke Han wrote:
>> static void throtl_pending_timer_fn(struct timer_list *t);
>> +static inline struct bio *throtl_qnode_bio_list_pop(struct throtl_qnode *qn);
>
> Just define it before the first usage? Also, I think it'd be fine to let the
> compiler decide whether to inline.
>
>> +#define BLK_THROTL_SYNC(bio) (bio->bi_opf & (REQ_SYNC | REQ_META | REQ_PRIO))
>
> Nitpick but the above is used only in one place. Does it help to define it
> as a macro?
>
>> +/**
>> + * throtl_qnode_bio_peek - peek a bio for a qn
>> + * @qn: the qnode to peek from
>> + *
>> + * For read qn, just peek bio from the SYNC queue and return.
>> + * For write qn, we first ask the next_to_disp for bio and will pop a bio
>> + * to fill it if it's NULL. The next_to_disp is used to pin the bio for
>> + * next to dispatch. It is necessary. In the dispatching process, a peeked
>> + * bio may can't be dispatched due to lack of budget and has to wait, the
>> + * dispatching process may give up and the spin lock of the request queue
>> + * will be released. New bio may be queued in as the spin lock were released.
>> + * When it's time to dispatch the waiting bio, another bio may be selected to
>> + * check the limit and may be dispatched. If the dispatched bio is smaller
>> + * than the waiting bio, the bandwidth may be hard to satisfied as we may
>> + * trim the slice after each dispatch.
>> + * So pinning the next_to_disp to make sure that the waiting bio and the
>> + * dispatched one later always the same one in case that the spin lock of
>> + * queue was released and re-holded.
>
> Can you please format it better and proof-read it. I can mostly understand
> what it's saying but it can be improved quite a bit. Can you elaborate the
> starvation scenario further? What about the [a]sync queue split makes this
> more likely?
>
>> +/**
>> + * throtl_qnode_bio_pop: pop a bio from sync/async queue
>> + * @qn: the qnode to pop a bio from
>> + *
>> + * For write io qn, the target queue to pop was determined by the disp_sync_cnt.
>> + * Try to pop bio from target queue, fetch the bio and return it when it is not
>> + * empty. If the target queue empty, pop bio from another queue instead.
>
> How about:
>
> For reads, always pop from the ASYNC queue. For writes, target SYNC
> or ASYNC queue based on disp_sync_cnt. If empty, try the other
> queue.
>
>> +static inline struct bio *throtl_qnode_bio_list_pop(struct throtl_qnode *qn)
>> +{
>> + struct bio *bio;
>> + int from = SYNC;
>> +
>> + if (qn->disp_sync_cnt == THROTL_SYNC_FACTOR)
>> + from = ASYNC;
>
> ?: often is less readable but I wonder whether it'd be more readable here:
>
> from = qn->disp_sync_cnt == THROTL_SYNC_FACTOR ? ASYNC : SYNC;
>
>> +
>> + bio = bio_list_pop(&qn->bios[from]);
>> + if (!bio) {
>> + from = 1 - from;
>> + bio = bio_list_pop(&qn->bios[from]);
>> + }
>> +
>> + if ((qn->disp_sync_cnt < THROTL_SYNC_FACTOR) &&
>> + (from == SYNC))
>
> Why the line break? Also, this may be more personal preference but I'm not
> sure the parentheses are helping much here.
>
>> + qn->disp_sync_cnt++;
>> + else
>> + qn->disp_sync_cnt = 0;
>> +
>> + return bio;
>> +}
>
> Thanks.
>
Thanks. Your suggestions are detailed and helpful. I will apply them and
send v4.

Thanks.