[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <CAHk-=wjmFw1EBOVAN8vffPDHKJH84zZOtwZrLpE=Tn2MD6kEgQ@mail.gmail.com>
Date: Mon, 18 Apr 2022 15:01:33 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Jens Axboe <axboe@...nel.dk>
Cc: Zhihao Cheng <chengzhihao1@...wei.com>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Al Viro <viro@...iv.linux.org.uk>,
Christoph Hellwig <hch@....de>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
yukuai3@...wei.com
Subject: Re: [PATCH v2] fs-writeback: writeback_sb_inodes:Recalculate 'wrote' according skipped pages
On Mon, Apr 18, 2022 at 2:16 PM Jens Axboe <axboe@...nel.dk> wrote:
>
> So as far as I can tell, we really have two options:
>
> 1) Don't preempt a task that has a plug active
> 2) Flush for any schedule out, not just going to sleep
>
> 1 may not be feasible if we're queueing lots of IO, which then leaves 2.
> Linus, do you remember what your original patch here was motivated by?
> I'm assuming it was an effiency thing, but do we really have a lot of
> cases of IO submissions being preempted a lot and hence making the plug
> less efficient than it should be at merging IO? Seems unlikely, but I
> could be wrong.
No, it goes all the way back to 2011, my memory for those kinds of
details doesn't go that far back.
That said, it clearly is about preemption, and I wonder if we had an
actual bug there.
IOW, it might well not just in the "gather up more IO for bigger
requests" thing, but about "the IO plug is per-thread and doesn't have
locking because of that".
So doing plug flushing from a preemptible kernel context might race
with it all being set up.
Explicit io_schedule() etc obviously doesn't have that issue.
Linus
Powered by blists - more mailing lists