Open Source and information security mailing list archives
Date:   Thu, 7 Dec 2023 16:42:59 -0800
From:   Nhat Pham <nphamcs@...il.com>
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     tj@...nel.org, lizefan.x@...edance.com, hannes@...xchg.org,
        cerasuolodomenico@...il.com, yosryahmed@...gle.com,
        sjenning@...hat.com, ddstreet@...e.org, vitaly.wool@...sulko.com,
        mhocko@...nel.org, roman.gushchin@...ux.dev, shakeelb@...gle.com,
        muchun.song@...ux.dev, hughd@...gle.com, corbet@....net,
        konrad.wilk@...cle.com, senozhatsky@...omium.org, rppt@...nel.org,
        linux-mm@...ck.org, kernel-team@...a.com,
        linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
        david@...t.cz, chrisl@...nel.org
Subject: Re: [PATCH v6] zswap: memcontrol: implement zswap writeback disabling

>
> This does seem to be getting down into the weeds.  How would a user
> know (or even suspect) that these things are happening to them?  Perhaps
> it would be helpful to tell people where to go look to determine this.

When I tested this feature during its development, I primarily looked
at the swapin/major fault counters to see whether I was experiencing
swapping IO, and whether that IO went away once writeback was
disabled. We can also poll these counters over time and plot
them/compute their rate of change. I assumed this was standard
practice and not very zswap-specific, so I did not spell it out in the
zswap documentation.
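Concretely, the rate-of-change computation could be sketched like this
(a minimal example, assuming the pswpin/pgmajfault counter names as
they appear in /proc/vmstat; two samples are inlined here for
illustration, where on a live system they would come from consecutive
reads of /proc/vmstat):

```shell
# Two /proc/vmstat-style samples taken some interval apart
# (inlined here; on a real system: sample1=$(cat /proc/vmstat), etc.)
sample1="pswpin 1000
pgmajfault 5000"
sample2="pswpin 1150
pgmajfault 5200"

# delta <counter> <old-sample> <new-sample>
# prints how much the named counter advanced between the two samples
delta() {
  old=$(printf '%s\n' "$2" | awk -v k="$1" '$1==k{print $2}')
  new=$(printf '%s\n' "$3" | awk -v k="$1" '$1==k{print $2}')
  echo $((new - old))
}

echo "swapin delta:   $(delta pswpin "$sample1" "$sample2")"
echo "majfault delta: $(delta pgmajfault "$sample1" "$sample2")"
```

Dividing each delta by the sampling interval gives the per-second
rate; a sustained nonzero swapin rate with writeback disabled is the
signal to look for.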

>
> Also, it would be quite helpful of the changelog were to give us some
> idea of how important this tunable is.  What sort of throughput
> differences might it cause and under what circumstances?

For the most part, this feature is motivated by internal parties who
have already formed their opinions about swapping - workloads that are
highly sensitive to IO, especially those running on servers with
really slow disks (for instance, massive but slow HDDs). For these
folks, it's impossible to convince them to even entertain zswap if
swapping comes as part of the package. Writeback disabling is quite
useful in these situations - in a mixed-workload deployment, they can
disable writeback for the more IO-sensitive workloads and enable it
for the background workloads.

(Maybe we should include the paragraph above as part of the changelog?)
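For such a mixed-workload deployment, the setup could look roughly
like this (a sketch, assuming the per-cgroup memory.zswap.writeback
interface this patch adds, with 0 = writeback disabled; the cgroup
names are hypothetical):

```shell
# IO-sensitive service: keep pages in zswap, never write back to disk
echo 0 > /sys/fs/cgroup/latency-sensitive/memory.zswap.writeback

# Background batch jobs: allow zswap to evict cold pages to swap
echo 1 > /sys/fs/cgroup/background/memory.zswap.writeback
```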

I don't have any concrete numbers though - any numbers I can pull out
are from highly artificial tasks that only serve to test the
correctness of the implementation. Disabling zswap.writeback is of
course faster in these situations (up to 33%!) - but that's basically
just saying HDDs are slow, which is neither informative nor
surprising, so I did not include it in the changelog.
