Message-ID: <CALvZod5LBi5V6q1uHUTSNnLz64HbD499a+OZvdYsUcmcWSt8Jg@mail.gmail.com>
Date: Wed, 27 Apr 2022 16:36:22 -0700
From: Shakeel Butt <shakeelb@...gle.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Minchan Kim <minchan@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>, Roman Gushchin <guro@...com>,
Seth Jennings <sjenning@...hat.com>,
Dan Streetman <ddstreet@...e.org>,
Linux MM <linux-mm@...ck.org>,
Cgroups <cgroups@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Kernel Team <kernel-team@...com>
Subject: Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
On Wed, Apr 27, 2022 at 3:32 PM Johannes Weiner <hannes@...xchg.org> wrote:
>
> On Wed, Apr 27, 2022 at 05:20:31PM -0400, Johannes Weiner wrote:
> > On Wed, Apr 27, 2022 at 01:29:34PM -0700, Minchan Kim wrote:
> > > Hi Johannes,
> > >
> > > On Wed, Apr 27, 2022 at 12:00:15PM -0400, Johannes Weiner wrote:
> > > > Currently it requires poking at debugfs to figure out the size and
> > > > population of the zswap cache on a host. There are no counters for
> > > > reads and writes against the cache. As a result, it's difficult to
> > > > understand zswap behavior on production systems.
> > > >
> > > > Print zswap memory consumption and how many pages are zswapped out in
> > > > /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
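> > > >
> > > > For illustration (values hypothetical), the new entries look like:
> > > >
> > > >   Zswap:         1024 kB
> > > >   Zswapped:      4096 kB
> > > >
> > > > with matching zswpin/zswpout counts appearing in /proc/vmstat.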
> > > >
> > > > Signed-off-by: Johannes Weiner <hannes@...xchg.org>
> > > > ---
> > > > fs/proc/meminfo.c | 7 +++++++
> > > > include/linux/swap.h | 5 +++++
> > > > include/linux/vm_event_item.h | 4 ++++
> > > > mm/vmstat.c | 4 ++++
> > > > mm/zswap.c | 13 ++++++-------
> > > > 5 files changed, 26 insertions(+), 7 deletions(-)
> > > >
> > > > diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> > > > index 6fa761c9cc78..6e89f0e2fd20 100644
> > > > --- a/fs/proc/meminfo.c
> > > > +++ b/fs/proc/meminfo.c
> > > > @@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> > > >
> > > > show_val_kb(m, "SwapTotal: ", i.totalswap);
> > > > show_val_kb(m, "SwapFree: ", i.freeswap);
> > > > +#ifdef CONFIG_ZSWAP
> > > > + seq_printf(m, "Zswap: %8lu kB\n",
> > > > + (unsigned long)(zswap_pool_total_size >> 10));
> > > > + seq_printf(m, "Zswapped: %8lu kB\n",
> > > > + (unsigned long)atomic_read(&zswap_stored_pages) <<
> > > > + (PAGE_SHIFT - 10));
> > > > +#endif
> > >
> > > I agree it would be very handy to have the memory consumption in meminfo.
> > >
> > > https://lore.kernel.org/all/YYwZXrL3Fu8%2FvLZw@google.com/
> > >
> > > If we really go with this zswap-only metric instead of the general
> > > term "Compressed", I'd like to post a "Zram:" entry in this patchset
> > > for the same reason. Do you think that's a better idea than
> > > introducing a general term like "Compressed:" or something else?
> >
> > I'm fine with changing it to Compressed. If somebody cares about a
> > more detailed breakdown, we can add Zswap, Zram subsets as needed.
>
> It does raise the question of what to do about cgroup, though. Should
> the control files (memory.zswap.current & memory.zswap.max) apply to
> zram in the future? If so, we should rename them, too.
>
> I'm not too familiar with zram; maybe you can provide some
> background. AFAIU, Google uses zram quite widely, which makes it all
> the more confusing that there is no container support for it yet.
>
> Could you shed some light?
>
I can shed some light on our datacenter workloads. We use cgroup
(still on v1) and zswap. To the workloads/applications, swap (or
zswap) is transparent in the sense that they are charged exactly the
same no matter how much of their memory is zswapped out: the usage
they see is v1's memsw.usage_in_bytes. We dynamically grow the swap
size when it runs low, so we are not really worried about one job
hogging the swap space.
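
For illustration (hypothetical cgroup path and value), the number a
job is billed is simply:

    $ cat /sys/fs/cgroup/memory/job0/memory.memsw.usage_in_bytes
    8589934592

That charge does not move as pages get compressed into zswap; only
the memory/swap split underneath it changes.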

Regarding stats, we do keep them internally: the compressed size and
the number of pages in zswap. The compressed size is actually used for
OOM victim selection, because in the presence of compression-based
swap, memsw (or v2's swap usage) does not tell you how much memory
evicting a job would actually release. For example, suppose two jobs
'A' and 'B' each have 100 pages compressed, but A's 100 pages compress
to, say, 10 pages while B's compress to 70 pages. It is then
preferable to kill B, as that releases 70 pages. (This is a very
simplified explanation of what we actually do; see the sketch below.)
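
To make the arithmetic concrete, here is a minimal sketch (4K pages
assumed; the jobs and numbers are hypothetical, and this is not our
actual selection code):

  #include <stdio.h>

  #define PAGE_KB 4UL /* assume 4K pages */

  struct job {
          const char *name;
          unsigned long stored; /* pages the job has in zswap */
          unsigned long pool;   /* pages they occupy once compressed */
  };

  int main(void)
  {
          /* jobs A and B from the example above */
          struct job jobs[] = { { "A", 100, 10 }, { "B", 100, 70 } };

          for (int i = 0; i < 2; i++)
                  printf("%s: swap charge %lu kB, kill frees %lu kB\n",
                         jobs[i].name, jobs[i].stored * PAGE_KB,
                         jobs[i].pool * PAGE_KB);
          return 0;
  }

Both jobs show the same 400 kB swap charge, but killing B frees 280
kB of compressed pool while killing A frees only 40 kB.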