Message-ID: <CAK8P3a2Gz5H_fcNtW0yCCjO1cRNa0nyd568sDYR0nNphu49YqQ@mail.gmail.com>
Date:   Mon, 9 Mar 2020 14:33:26 +0100
From:   Arnd Bergmann <arnd@...db.de>
To:     Russell King - ARM Linux admin <linux@...linux.org.uk>
Cc:     Nishanth Menon <nm@...com>,
        Santosh Shilimkar <santosh.shilimkar@...cle.com>,
        Tero Kristo <t-kristo@...com>,
        Linux ARM <linux-arm-kernel@...ts.infradead.org>,
        Michal Hocko <mhocko@...e.com>,
        Rik van Riel <riel@...riel.com>,
        Catalin Marinas <catalin.marinas@....com>,
        Santosh Shilimkar <ssantosh@...nel.org>,
        Dave Chinner <david@...morbit.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>,
        Yafang Shao <laoar.shao@...il.com>,
        Al Viro <viro@...iv.linux.org.uk>,
        Johannes Weiner <hannes@...xchg.org>,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        kernel-team@...com, Kishon Vijay Abraham I <kishon@...com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Roman Gushchin <guro@...com>
Subject: Re: [PATCH] vfs: keep inodes with page cache off the inode shrinker LRU

On Sun, Mar 8, 2020 at 3:20 PM Russell King - ARM Linux admin
<linux@...linux.org.uk> wrote:
> On Sun, Mar 08, 2020 at 11:58:52AM +0100, Arnd Bergmann wrote:
> > On Fri, Mar 6, 2020 at 9:36 PM Nishanth Menon <nm@...com> wrote:
> > > On 13:11-20200226, santosh.shilimkar@...cle.com wrote:
>
> > - extend zswap to use all the available high memory for swap space
> >   when highmem is disabled.
>
> I don't think that's a good idea.  Running Debian stable kernels on my
> 8GB laptop, I run into problems when leaving firefox running, long
> before even half of the 16GB of swap gets consumed - the entire machine
> slows down very quickly once it starts swapping more than about 2GB.
> It seems the kernel has become quite bad at selecting pages to evict.
>
> It gets to the point where any git operation has to fight for RAM,
> despite my not touching anything other than git.
>
> The behaviour is much as if firefox were locking memory into core, but
> that doesn't seem to be what's actually going on.  I've never really
> got to the bottom of it, though.
>
> This is with 64-bit kernel and userspace.

I agree there is something going wrong on your machine, but I
don't really see how that relates to my suggestion.
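
To make the suggestion concrete: zswap today is only a compressed cache
in front of a real swap device, capped at a fraction of RAM. Roughly
like this (untested, values purely illustrative, and assuming lz4 is
built in):

    # enable zswap at runtime; it still needs a backing swap device
    echo 1   > /sys/module/zswap/parameters/enabled
    echo lz4 > /sys/module/zswap/parameters/compressor
    echo 20  > /sys/module/zswap/parameters/max_pool_percent

What I had in mind is extending that so the compressed pool can grow
into the memory that would otherwise have been highmem.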

> So, I'd suggest that trading off RAM available through highmem for VM
> space available through zswap is likely a bad idea if you have a
> workload that requires 4GB of RAM on a 32-bit machine.

Leaving aside that every workload is different, I was thinking of
these general observations:

- If we are looking at a future without highmem, then it's better to
  use the extra memory for something than to leave it unused. zswap
  seems like a reasonable use.

- A lot of embedded systems are configured with no swap at all, which
  can be for good or not-so-good reasons. Having some swap space
  available often improves things, even if it comes out of RAM (see
  the zram sketch after this list).

- A particularly important case to optimize for is 2GB of RAM with
  LPAE enabled. With CONFIG_VMSPLIT_2G and highmem, this leads to
  paradoxical -ENOMEM failures when the 256MB of highmem is full
  even though plenty of lowmem is still available. Disabling highmem
  avoids that, at the cost of losing 12% of the RAM (illustrative
  .config fragment after this list).

- With 4GB+ of RAM and CONFIG_VMSPLIT_2G or CONFIG_VMSPLIT_3G, using
  gigabytes of RAM for swap space would usually be worse than using
  it as highmem, but once we have VMSPLIT_4G_4G, it's the same
  situation as above, with 6% of RAM going to zswap instead of
  highmem.
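
For the swap-from-RAM point, the existing zram driver already shows
the idea; an untested sketch, with made-up sizes (note the algorithm
has to be set before the disk size):

    modprobe zram
    echo lz4  > /sys/block/zram0/comp_algorithm
    echo 512M > /sys/block/zram0/disksize
    mkswap /dev/zram0
    swapon -p 100 /dev/zram0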
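
And for the 2GB LPAE case, the trade-off is just a Kconfig choice; an
illustrative fragment, not taken from any real defconfig:

    CONFIG_ARM_LPAE=y
    # CONFIG_VMSPLIT_3G is not set
    CONFIG_VMSPLIT_2G=y
    CONFIG_PAGE_OFFSET=0x80000000
    # CONFIG_HIGHMEM is not set
    # ^ roughly 1.75GB of lowmem after the vmalloc reserve,
    #   giving up the top 256MB of RAM entirely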

       Arnd
