Message-ID: <CAGis_TXyPtFiE=pLrLRh1MV3meE4aETi6z36NWLrMkYKkcjGNQ@mail.gmail.com>
Date: Mon, 21 Apr 2025 19:35:24 +0100
From: Matt Fleming <mfleming@...udflare.com>
To: Keith Busch <kbusch@...nel.org>
Cc: Jens Axboe <axboe@...nel.dk>, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, kernel-team <kernel-team@...udflare.com>
Subject: Re: 10x I/O await times in 6.12
On Mon, 21 Apr 2025 at 16:22, Keith Busch <kbusch@...nel.org> wrote:
>
> On Mon, Apr 21, 2025 at 09:53:10AM +0100, Matt Fleming wrote:
> > Hey there,
> >
> > We're moving to 6.12 at Cloudflare and noticed that write await times
> > in iostat are 10x what they were in 6.6. After a bit of bpftracing
> > (script to find all plug times above 10ms below), it seems like this
> > is an accounting error caused by the plug->cur_ktime optimisation
> > rather than anything more material.
> >
> > It appears as though a task can enter __submit_bio() with ->plug set
> > and a cur_ktime value that is stale on the order of milliseconds. Is
> > this expected behaviour? It looks like it leads to inaccurate I/O times.
>
> There are places with a block plug that call cond_resched(), which
> doesn't invalidate the plug's cached ktime. You could end up with a
> stale ktime if your process is scheduled out.
Is that intentional? I know the cached time is invalidated when
calling schedule(). Does the invalidation need to get pushed down into
__schedule_loop()?