Date: Tue, 4 Jun 2024 12:53:19 -0400
From: Josef Bacik <josef@...icpanda.com>
To: Bernd Schubert <bernd.schubert@...tmail.fm>
Cc: Miklos Szeredi <miklos@...redi.hu>,
	Jingbo Xu <jefflexu@...ux.alibaba.com>,
	"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	lege.wang@...uarmicro.com,
	"Matthew Wilcox (Oracle)" <willy@...radead.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [HELP] FUSE writeback performance bottleneck

On Tue, Jun 04, 2024 at 04:13:25PM +0200, Bernd Schubert wrote:
> 
> 
> On 6/4/24 12:02, Miklos Szeredi wrote:
> > On Tue, 4 Jun 2024 at 11:32, Bernd Schubert <bernd.schubert@...tmail.fm> wrote:
> > 
> >> Back to the background for the copy: it copies pages to avoid blocking
> >> on memory reclaim. But with that allocation it in fact increases memory
> >> pressure even more. Isn't the right solution to mark those pages as not
> >> reclaimable and to avoid blocking on them? Which is what the tmp pages
> >> do, just not in a beautiful way.
> > 
> > Copying to the tmp page is the same as marking the pages as
> > non-reclaimable and non-syncable.
> > 
> > Conceptually it would be nice to only copy when there's something
> > actually waiting for writeback on the page.
> > 
> > Note: normally the WRITE request would be copied to userspace along
> > with the contents of the pages very soon after starting writeback.
> > After this the contents of the page no longer matter, and we can just
> > clear writeback without doing the copy.
> > 
> > But if the request gets stuck in the input queue before being copied
> > to userspace, then deadlock can still happen if the server blocks on
> > direct reclaim and won't continue with processing the queue.   And
> > sync(2) will also block in that case.
> >
> > So we'd somehow need to handle stuck WRITE requests.   I don't see an
> > easy way to do this "on demand", when something actually starts
> > waiting on PG_writeback.  Alternatively the page copy could be done
> > after a timeout, which is ugly, but much easier to implement.
> 
> I think the timeout method would only work if we have already allocated
> the pages; under memory pressure, page allocation might not work well.
> But then this still seems to be a workaround, because we don't use any
> less memory with these copied pages.
> I'm going to look into mm/ to see if there isn't a better solution.
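
To make sure we're all picturing the same thing, the copy in question is
roughly the following pattern (a simplified sketch, not the literal
fs/fuse/file.c code; queue_fuse_write() stands in for building and queuing
the WRITE request):

#include <linux/gfp.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>

void queue_fuse_write(struct page *tmp_page);	/* hypothetical stand-in */

static int fuse_writepage_sketch(struct page *page)
{
	struct page *tmp_page;

	/* The extra allocation that adds to memory pressure. */
	tmp_page = alloc_page(GFP_NOFS | __GFP_HIGHMEM);
	if (!tmp_page)
		return -ENOMEM;

	/* The copy: the WRITE request owns tmp_page, not the original page. */
	copy_highpage(tmp_page, page);
	queue_fuse_write(tmp_page);

	/*
	 * The request no longer references the original page, so writeback
	 * on it can be ended immediately and reclaim / sync(2) never wait
	 * on the userspace server.
	 */
	folio_end_writeback(page_folio(page));
	return 0;
}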

I've thought a bit about this, and I still don't have a good solution, so I'm
going to throw out my random thoughts and see if they help us get to a good
spot.

1. Generally we are moving away from GFP_NOFS/GFP_NOIO in favor of
   memalloc_*_save/memalloc_*_restore, so the process itself is marked as being
   in these contexts.  We could do something similar for FUSE, though this gets
   hairy with servers that hand request handling off to other threads (which is
   all of the FUSE file systems we have internally).  We'd need some way to
   apply this to an entire process group, but this could be a workable
   solution; a rough sketch of the scoping pattern follows the list.

2. Per-request timeouts.  This is something we're planning on tackling for other
   reasons, but it could fit nicely here to say "if this fuse fs has a
   per-request timeout, skip the copy".  That way we at least have an upper
   bound on how long we would be "deadlocked".  I don't love this approach
   because it's still a deadlock until the timeout elapses, but it's an idea.

3. Since we're limiting writeout per the BDI, we could just say FUSE is special
   and allow only one memory-reclaim-related writeout at a time.  We flag when
   we're doing a write via memory reclaim, and if we try to trigger writeout
   via memory reclaim again we simply reject it to avoid the deadlock.  The
   downside is that non-FUSE things triggering direct reclaim through FUSE
   would have to reclaim something else instead, and if the dirty FUSE pages
   are the ones causing the problem we could spin a bunch, evicting pages we
   don't care about and thrashing a bit.  A rough sketch of this is also
   below.
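
For #1, the scoping I mean is the existing memalloc one; the open part is
applying the equivalent per-task mark to the server's whole thread/process
group rather than to a single kernel context.  Rough sketch of the per-task
pattern (fuse_handle_one_request() is a made-up stand-in):

#include <linux/sched/mm.h>

void fuse_handle_one_request(void);	/* made-up stand-in */

static void handle_request_nofs_scoped(void)
{
	unsigned int flags;

	/*
	 * Allocations inside this scope implicitly behave as GFP_NOFS, so
	 * reclaim triggered from them can't recurse back into filesystem
	 * writeback.
	 */
	flags = memalloc_nofs_save();
	fuse_handle_one_request();
	memalloc_nofs_restore(flags);
}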

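And for #3, the gating could look something like the below.  The
FUSE_RECLAIM_WRITE bit, the flags word, and queue_fuse_write() are all made
up for illustration, and the bit would really be cleared from the request's
completion once the server answers the WRITE:

#include <linux/fs.h>
#include <linux/writeback.h>
#include <linux/bitops.h>

#define FUSE_RECLAIM_WRITE	0	/* made-up bit number */

void queue_fuse_write(struct page *page);	/* hypothetical stand-in */

static int fuse_writepage_gated(unsigned long *reclaim_flags,
				struct page *page,
				struct writeback_control *wbc)
{
	if (wbc->for_reclaim &&
	    test_and_set_bit(FUSE_RECLAIM_WRITE, reclaim_flags)) {
		/*
		 * A reclaim-initiated write is already in flight for this
		 * connection; push the page back so reclaim picks another
		 * victim instead of risking the deadlock.
		 */
		redirty_page_for_writepage(wbc, page);
		return AOP_WRITEPAGE_ACTIVATE;
	}

	/* Completion of this WRITE would clear FUSE_RECLAIM_WRITE. */
	queue_fuse_write(page);
	return 0;
}
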
As I said, all of these have downsides; I think #1 is probably the most
workable, but I haven't thought about it super thoroughly.  Thanks,

Josef
