Message-ID: <20230530185451.GA101722@cmpxchg.org>
Date:   Tue, 30 May 2023 14:54:51 -0400
From:   Johannes Weiner <hannes@...xchg.org>
To:     Chris Li <chrisl@...nel.org>
Cc:     Domenico Cerasuolo <cerasuolodomenico@...il.com>,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        sjenning@...hat.com, ddstreet@...e.org, vitaly.wool@...sulko.com,
        yosryahmed@...gle.com, kernel-team@...com
Subject: Re: [PATCH] mm: zswap: shrink until can accept

On Tue, May 30, 2023 at 11:18:51AM -0700, Chris Li wrote:
> On Tue, May 30, 2023 at 11:55:19AM -0400, Johannes Weiner wrote:
> > On Tue, May 30, 2023 at 07:51:23AM -0700, Chris Li wrote:
> > > Thanks for pointing out -ENOMEM shouldn't be persistent.
> > > Points taken.
> > > 
> > > The original point of not retrying the persistent error
> > > still holds.
> > 
> > Okay, but what persistent errors are you referring to?
> 
> Maybe ENOMEM is a bad example. How about if the swap device
> just went bad and can't complete new IO writes?

This is actually outside the scope of zswap, and handled by the
swapcache (end_swap_bio_write).

Once the IO is submitted, zswap will ax its copy and leave the rest to
the swapcache. If the swapout then fails on an IO error, the system
behaves the same way as if zswap had never been involved to begin with.

From a zswap perspective, there are no persistent errors in moving a
zswap entry back into the swapcache. Not just currently, but generally.
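
To make that concrete, here is a heavily condensed sketch of the
writeback path (paraphrasing zswap_writeback_entry() in mm/zswap.c;
argument lists and control flow are simplified, not the exact code):

	/* zswap_writeback_entry(), condensed sketch */
	page = __read_swap_cache_async(swpentry, GFP_KERNEL, ...);
	if (!page)
		return -ENOMEM;		/* transient: allocation failed */
	if (!page_was_allocated)
		return -EEXIST;		/* transient: racing swapin won */

	/* ... decompress zswap's copy into the swapcache page ... */

	/*
	 * Submit the IO. From here on, completion, including any IO
	 * error, is handled by end_swap_bio_write(), exactly as for
	 * a regular swapout.
	 */
	__swap_writepage(page, &wbc);

	/* zswap's copy can now be dropped */

A bad swap device thus surfaces as an IO error in the swapcache
layer, not as a persistent zswap writeback failure.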

> > Aside from -ENOMEM, writeback_entry will fail on concurrent swap
> > invalidation or a racing swapin fault. In both cases we should
> > absolutely keep trying other entries until the goal is met.
> 
> How about a narrower fix recognizing those error cases and making
> the inner loop continue in those errors?

Right, I just don't really see the value proposition of this
complication, and I see some downsides (see below). No single entry
error should ever cause us to stop the wider reclaim loop.

> > > > extreme case where it's the only page left on the list, I again don't
> > > > see how retrying a few times will make the situation worse.
> > > > 
> > > > In practice, IMO there is little upside in trying to be more
> > > > discerning about the error codes. Simple seems better here.
> > > 
> > > Just trying to think about what should be the precise loop termination
> > > condition here.
> > > 
> > > I still feel blindly trying a few times is a very imprecise condition.
> > 
> > The precise termination condition is when can_accept() returns true
> > again. The safety cap is only added as a precaution to avoid infinite
> > loops if something goes wrong or unexpected, now or in the future.
> 
> In my mind, that statement already suggests can_accept() is not
> *precise*, considering the need to avoid infinite loops.
> e.g. Do we know what the optimal cap value is, and why that value
> is optimal?

Oh but it is precise. That's the goal we want to accomplish.

The cap is just so that in case something unexpectedly goes wrong (a
bug), we fail gracefully and don't lock up the machine. The same
reason we prefer WARN_ONs over BUG_ONs if we can, to avoid
crashes. That's really all there is to it, and it strikes me as a
reasonable and robust design choice. It's fine to limp along or be
suboptimal after such a bug happens; the bar is avoiding an infinite
loop, nothing else.
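
Concretely, the loop shape being proposed is something like this (an
illustrative sketch, not a verbatim quote of the patch;
MAX_RECLAIM_RETRIES stands in for whichever cap value gets picked):

	int failures = 0;

	do {
		if (zswap_reclaim_entry(pool)) {
			/*
			 * Any single-entry failure: count it and move
			 * on. The cap only exists to fail gracefully
			 * if a bug ever makes failures persist across
			 * the entire LRU.
			 */
			if (++failures == MAX_RECLAIM_RETRIES)
				break;
		}
		cond_resched();
	} while (!zswap_can_accept());

The while condition is the real termination criterion; in correct
operation the cap never fires.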

Your suggestion of whitelisting certain errors is more complicated,
but also less robust: in case an entry error does by some accident
become persistent for the whole LRU, we're locking up the host. We'd
rather catch a bug like this by seeing spikes in the reclaim failure
rate than by losing production machines.
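
For contrast, the whitelisting variant would look something like this
(hypothetical; the whitelisted error values are illustrative):

	int ret;

	do {
		ret = zswap_reclaim_entry(pool);
		if (ret && ret != -EAGAIN && ret != -EEXIST)
			break;	/* assume anything else is persistent */
	} while (!zswap_can_accept());

If one of the "transient" errors ever does become persistent for
every entry on the LRU, this version spins forever; with the cap, the
same bug shows up as a spike in the reclaim failure rate instead of a
hung machine.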

> Putting the definition of precise aside, I do see the unconditional
> retry can have unwanted effects.

I hope I could address this above. But if not, please share your
concerns.

Thanks!
