Date:   Tue, 30 May 2023 11:18:51 -0700
From:   Chris Li <chrisl@...nel.org>
To:     Johannes Weiner <hannes@...xchg.org>
Cc:     Domenico Cerasuolo <cerasuolodomenico@...il.com>,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        sjenning@...hat.com, ddstreet@...e.org, vitaly.wool@...sulko.com,
        yosryahmed@...gle.com, kernel-team@...com
Subject: Re: [PATCH] mm: zswap: shrink until can accept

On Tue, May 30, 2023 at 11:55:19AM -0400, Johannes Weiner wrote:
> On Tue, May 30, 2023 at 07:51:23AM -0700, Chris Li wrote:
> > Thanks for pointing out -ENOMEM shouldn't be persistent.
> > Points taken.
> > 
> > The original point of not retrying the persistent error
> > still holds.
> 
> Okay, but what persistent errors are you referring to?

Maybe ENOMEM is a bad example. What if the swap device just went
bad and can't complete new IO writes?

> Aside from -ENOMEM, writeback_entry will fail on concurrent swap
> invalidation or a racing swapin fault. In both cases we should
> absolutely keep trying other entries until the goal is met.

How about a narrower fix that recognizes those error cases and makes
the inner loop continue on those errors?
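
For concreteness, a minimal userspace sketch of what that narrower fix
could look like (the names writeback_entry and can_accept mirror the
kernel code, but the loop body, signatures, and error pattern here are
all made up for illustration):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Stand-in for zswap's per-entry writeback: in the kernel it can fail
 * with -EEXIST/-EAGAIN when racing a swap invalidation or swapin. */
static int writeback_entry(int i)
{
	if (i % 3 == 0)
		return -EEXIST;	/* transient: lost a race on this entry */
	return 0;		/* entry written back */
}

/* Stand-in for zswap_can_accept(): goal reached after 4 writebacks. */
static bool can_accept(int reclaimed)
{
	return reclaimed >= 4;
}

/* Narrower fix: keep going on known-transient errors, stop on
 * anything else (e.g. a persistently failing swap device). */
static int shrink(void)
{
	int i = 0, reclaimed = 0;

	while (!can_accept(reclaimed)) {
		int ret = writeback_entry(i++);

		if (ret == -EEXIST || ret == -EAGAIN)
			continue;	/* racing entry: try the next one */
		if (ret)
			return ret;	/* persistent error: give up */
		reclaimed++;
	}
	return 0;
}
```

That way a bad device terminates the loop immediately, while races
just move on to the next entry.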

> > > Should it be fixed before merging this patch? I don't think the
> > > ordering matters. Right now the -ENOMEM case invokes OOM, so it isn't
> > > really persistent either. Retrying a few times in that case certainly
> > > doesn't seem to make things worse.
> > 
> > If you already know the error is persistent, retrying is wasting
> > CPU. It can potentially hold locks during the retry, which can
> > slow someone else down.
> 
> That's a bit of a truism. How does this pertain to the zswap reclaim
> situation?

See the narrower fix alternative above.
> 
> > > > > As I was writing to Yosry, the differentiation would be a great improvement
> > > > > here, I just have a patch set in the queue that moves the inner reclaim loop
> > > > > from the zpool driver up to zswap. With that, updating the error handling
> > > > > would be more convenient as it would be done in one place instead of three.
> > > > 
> > > > This has tricky complications as well. The current shrink interface
> > > > doesn't support continuing from the previous error position. If you want
> > > > to avoid a repeat attempt if the page has a writeback error, you kind
> > > > of need a way to skip that page.
> > > 
> > > A page that fails to reclaim is put back to the tail of the LRU, so
> > > for all intents and purposes it will be skipped. In the rare and
> > 
> > Do you mean the page is treated as hot again?
> > 
> > Wouldn't that be undesirable from the app's point of view?
> 
> That's current backend LRU behavior. Is it optimal? That's certainly
> debatable. But it's tangential to this patch. The point is that
> capping retries to a fixed number of failures works correctly as a
> safety precaution and introduces no (new) undesirable behavior.
> 
> It's entirely moot once we refactor the backend page LRU to the zswap
> entry LRU. The only time we'll fail to reclaim an entry is if we race
> with something already freeing it, so it doesn't really matter where
> we put it.

Agree with you there. We got a bit sidetracked.

> > > extreme case where it's the only page left on the list, I again don't
> > > see how retrying a few times will make the situation worse.
> > > 
> > > In practice, IMO there is little upside in trying to be more
> > > discerning about the error codes. Simple seems better here.
> > 
> > Just trying to think about what should be the precise loop termination
> > condition here.
> > 
> > I still feel blindly trying a few times is a very imprecise condition.
> 
> The precise termination condition is when can_accept() returns true
> again. The safety cap is only added as precaution to avoid infinite
> loops if something goes wrong or unexpected, now or in the future.

In my mind, that statement already suggests can_accept() alone is not
*precise*, given that we still need a guard against infinite loops.
E.g. do we know what the optimal cap value is, and why that value
is optimal?
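
To make that concrete, here is how I read the patch's termination
logic, reduced to a userspace sketch (the cap value, the names, and
the failure pattern are all made up for illustration, not taken from
the actual patch):

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_RECLAIM_RETRIES 16	/* illustrative cap, not from the patch */

static int attempts;

/* Stand-in for zswap_can_accept(): pool is back under the threshold
 * after five writeback attempts. */
static bool can_accept(void)
{
	return attempts >= 5;
}

/* Stand-in writeback: every other attempt fails. */
static int writeback_one(void)
{
	return (++attempts % 2) ? -1 : 0;
}

/* Loop until can_accept(); the failure cap is only a safety net
 * against an infinite loop, not the real termination condition. */
static int shrink_worker(void)
{
	int failures = 0;

	while (!can_accept()) {
		if (writeback_one() && ++failures == MAX_RECLAIM_RETRIES)
			return -1;	/* too many failures: bail out */
	}
	return 0;
}
```

Whatever cap we pick, every failure below it is retried unconditionally,
regardless of why the writeback failed.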

Putting the definition of precise aside, I do see that the unconditional
retry can have unwanted effects.

Chris
