Message-ID: <20200825004203.GJ12131@dread.disaster.area>
Date: Tue, 25 Aug 2020 10:42:03 +1000
From: Dave Chinner <david@...morbit.com>
To: Brian Foster <bfoster@...hat.com>
Cc: Christoph Hellwig <hch@...radead.org>,
Ritesh Harjani <riteshh@...ux.ibm.com>,
Anju T Sudhakar <anju@...ux.vnet.ibm.com>,
darrick.wong@...cle.com, linux-xfs@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
willy@...radead.org
Subject: Re: [PATCH] iomap: Fix the write_count in iomap_add_to_ioend().
On Mon, Aug 24, 2020 at 11:48:41AM -0400, Brian Foster wrote:
> On Mon, Aug 24, 2020 at 04:04:17PM +0100, Christoph Hellwig wrote:
> > On Mon, Aug 24, 2020 at 10:28:23AM -0400, Brian Foster wrote:
> > > Do I understand the current code (__bio_try_merge_page() ->
> > > page_is_mergeable()) correctly in that we're checking for physical page
> > > contiguity and not necessarily requiring a new bio_vec per physical
> > > page?
> >
> > Yes.
> >
>
> Ok. I also realize now that this occurs on a kernel without commit
> 07173c3ec276 ("block: enable multipage bvecs"). That is probably a
> contributing factor, but it's not clear to me whether it's feasible to
> backport whatever supporting infrastructure is required for that
> mechanism to work (I suspect not).
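
(For reference, the physical contiguity test in that path boils down
to something like the following; a simplified sketch of
page_is_mergeable() in block/bio.c, not the exact upstream code, and
it drops the same_page tracking and the Xen special case:)

	static bool page_is_mergeable(const struct bio_vec *bv,
			struct page *page, unsigned int off)
	{
		/* physical address just past the end of the tail bvec */
		phys_addr_t vec_end_addr = page_to_phys(bv->bv_page) +
				bv->bv_offset + bv->bv_len;

		/* mergeable iff the new segment starts exactly there */
		return vec_end_addr == page_to_phys(page) + off;
	}
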
>
> > > With regard to Dave's earlier point around seeing excessively sized bio
> > > chains.. If I set up a large memory box with high dirty mem ratios and
> > > do contiguous buffered overwrites over a 32GB range followed by fsync, I
> > > can see upwards of 1GB per bio and thus chains on the order of 32+ bios
> > > for the entire write. If I play games with how the buffered overwrite is
> > > submitted (i.e., in reverse), however, then I can occasionally reproduce
> > > a ~32GB chain of ~32k bios, which I think is what leads to problems in
> > > I/O completion on some systems. Granted, I don't reproduce soft lockup
> > > issues on my system with that behavior, so perhaps there's more to that
> > > particular issue.
> > >
> > > Regardless, it seems reasonable to me to at least have a conservative
> > > limit on the length of an ioend bio chain. Would anybody object to
> > > iomap_ioend growing a chain counter and perhaps forcing a switch to a
> > > new ioend if we chain something like more than 1k bios at once?
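
Something like the below is roughly what I'd imagine that to look
like (hypothetical sketch, untested; io_bio_count and IOEND_MAX_BIOS
don't exist upstream, and the counter would be bumped where
iomap_chain_bio() allocates and chains a new bio):

	#define IOEND_MAX_BIOS	1024	/* hypothetical cap */

	static bool
	iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
			sector_t sector)
	{
		/* ... existing extent/type/sector contiguity checks ... */

		/* cap the chain so per-ioend completion work is bounded */
		if (wpc->ioend->io_bio_count >= IOEND_MAX_BIOS)
			return false;
		return true;
	}
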
> >
> > So what exactly is the problem of processing a long chain in the
> > workqueue vs multiple small chains? Maybe we need a cond_resched()
> > here and there, but I don't see how we'd substantially change behavior.
> >
>
> The immediate problem is the watchdog detecting a hard lockup in bio
> completion:
>
> NMI watchdog: Watchdog detected hard LOCKUP on cpu 25
>
> This effectively lands at the following segment of iomap_finish_ioend():
>
> ...
> /* walk each page on bio, ending page IO on them */
> bio_for_each_segment_all(bv, bio, iter_all)
> iomap_finish_page_writeback(inode, bv->bv_page, error);
>
> I suppose we could add a cond_resched(), but is that safe directly
> inside of a ->bi_end_io() handler? Another option could be to dump large
> chains into the completion workqueue, but we may still need to track the
> length to do that. Thoughts?
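
As for a cond_resched() directly inside a ->bi_end_io handler: not
safe if the bio can complete in irq/softirq context. The workqueue
completion path is different, though; iomap_finish_ioends() runs in
process context, so rescheduling between ioends there is fine.
Roughly (a sketch of the existing loop with a resched point added):

	void iomap_finish_ioends(struct iomap_ioend *ioend, int error)
	{
		struct list_head tmp;

		list_replace_init(&ioend->io_list, &tmp);
		iomap_finish_ioend(ioend, error);

		while (!list_empty(&tmp)) {
			ioend = list_first_entry(&tmp, struct iomap_ioend,
					io_list);
			list_del_init(&ioend->io_list);
			iomap_finish_ioend(ioend, error);
			/* breathe between ioends; safe in process context */
			cond_resched();
		}
	}
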
We have ioend completion merging that will run the completion once
for all the pending ioend completions on that inode. IOWs, we do not
need to build huge chains at submission time to batch up completions
efficiently. However, huge bio chains at submission time do cause
issues with writeback fairness, pinning GBs of RAM as unreclaimable
for seconds because the pages are queued for completion while we are
still submitting the bio chain and submission is being throttled by
the block layer writeback throttle, etc. Not to mention the latency
of stable pages in a situation like this: an mmap() write fault
could stall for many seconds waiting for a huge bio chain to finish
submission and run completion processing, even when the IO for the
given page we faulted on completed before the page fault
occurred...
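
(Roughly what that merging looks like; simplified from the XFS
completion worker and iomap_ioend_try_merge(), with the
merge_private callback omitted, so not the exact upstream code:)

	/* sort, then merge physically adjacent ioends of the same status */
	list_sort(NULL, &completions, iomap_ioend_compare);
	while (!list_empty(&completions)) {
		ioend = list_first_entry(&completions, struct iomap_ioend,
				io_list);
		list_del_init(&ioend->io_list);
		iomap_ioend_try_merge(ioend, &completions);
		/* one completion pass now covers the whole merged range */
		xfs_end_ioend(ioend);
	}
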
Hence I think we really do need to cap the length of the bio
chains here so that we start completing and ending page writeback on
large writeback ranges long before the writeback code finishes
submitting the range it was asked to write back.
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com