Message-ID: <CAGis_TV7gq1fHM0YFz798G91poeKQWYo2cZq0eEo7ydT1Qen+A@mail.gmail.com>
Date: Tue, 22 Apr 2025 11:45:04 +0100
From: Matt Fleming <mfleming@...udflare.com>
To: Yu Kuai <yukuai1@...weicloud.com>
Cc: Keith Busch <kbusch@...nel.org>, Jens Axboe <axboe@...nel.dk>, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, kernel-team <kernel-team@...udflare.com>,
"yukuai (C)" <yukuai3@...wei.com>
Subject: Re: 10x I/O await times in 6.12
On Tue, 22 Apr 2025 at 04:03, Yu Kuai <yukuai1@...weicloud.com> wrote:
>
> So, either preemption taking a long time, or generating lots of bios to
> plug taking a long time, can both result in larger iostat I/O latency. I
> still think delaying setting the request start_time until
> blk_mq_flush_plug_list() might be a reasonable fix.
I'll try out your proposed fix as well. Is it not possible for a task to
be preempted during a blk_mq_flush_plug_list() call, e.g. in the
driver layer?
I understand that you might not want to issue I/O on preempt, but
that's a distinct problem from clearing the cached ktime, no? There is
no upper bound on how long a task might be scheduled out due to
preemption, which means there is no limit to the staleness of that
value. I would assume the only safe thing to do (as is done for
various other timestamps) is to reset it when the task gets scheduled
out.