Message-ID: <463e4707-d05c-7114-b04a-86b6ca01a234@huaweicloud.com>
Date: Wed, 23 Apr 2025 11:36:03 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: Matt Fleming <mfleming@...udflare.com>, Yu Kuai <yukuai1@...weicloud.com>
Cc: Keith Busch <kbusch@...nel.org>, Jens Axboe <axboe@...nel.dk>,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
kernel-team <kernel-team@...udflare.com>, "yukuai (C)" <yukuai3@...wei.com>
Subject: Re: 10x I/O await times in 6.12
Hi,
On 2025/04/22 18:45, Matt Fleming wrote:
> On Tue, 22 Apr 2025 at 04:03, Yu Kuai <yukuai1@...weicloud.com> wrote:
>>
>> So, either preemption taking a long time, or generating lots of bios to
>> plug taking a long time, can result in larger iostat IO latency. I still
>> think delaying setting the request start_time to blk_mq_flush_plug_list()
>> might be a reasonable fix.
>
> I'll try out your proposed fix also. Is it not possible for a task to
> be preempted during a blk_mq_flush_plug_list() call, e.g. in the
> driver layer?
Let's focus on your regression first. Preemption during flush plug doesn't
introduce a new gap; rq->start_time_ns is already set before that, from the
time cached in the plug.
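
For context, the cached-time path looks roughly like this (simplified from
block/blk.h in recent kernels; take it as a sketch rather than the exact
code):

	static inline u64 blk_time_get_ns(void)
	{
		struct blk_plug *plug = current->plug;

		if (!plug || !in_task())
			return ktime_get_ns();

		/* The first caller in this plug round caches the clock. */
		if (!plug->cur_ktime) {
			plug->cur_ktime = ktime_get_ns();
			current->flags |= PF_BLOCK_TS;
		}
		return plug->cur_ktime;
	}

rq->start_time_ns is assigned from this value when the bio is turned into a
request, which happens before blk_mq_flush_plug_list() runs, so a preemption
inside the flush only lengthens a gap that already exists.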
>
> I understand that you might not want to issue I/O on preempt, but
> that's a distinct problem from clearing the cached ktime, no? There is
> no upper bound on the amount of time a task might be scheduled out due
> to preempt, which means there is no limit to the staleness of that
> value. I would assume the only safe thing to do (like is done for
> various other timestamps) is to reset it when the task gets scheduled
> out.
Yes, it's reasonable to clear the cached time for the preempt case. What
I'm concerned about is that even if the task is never scheduled out, the
time can still be stale by milliseconds. I think that is possible if a lot
of bios end up in the same round of plug (a lot of IO merge).
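
For the preempt case, the rough idea would be to reuse the existing
invalidation helper (something like blk_plug_invalidate_ts() in blkdev.h)
but also call it from the preempt path; this is only a sketch, and where
exactly to hook it in the scheduler is the open question:

	/* Drop the cached time so the next blk_time_get_ns() re-reads the clock. */
	static inline void blk_plug_invalidate_ts(struct task_struct *tsk)
	{
		struct blk_plug *plug = tsk->plug;

		if (plug)
			plug->cur_ktime = 0;
		tsk->flags &= ~PF_BLOCK_TS;
	}

That would bound the staleness for a preempted task, but it doesn't help the
case above where many bios are merged in one plug round without the task
ever being scheduled out.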
Thanks,
Kuai