Message-ID: <ZveLM6EINpVWwJZD@casper.infradead.org>
Date: Sat, 28 Sep 2024 05:50:59 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Kanchana P Sridhar <kanchana.p.sridhar@...el.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
hannes@...xchg.org, nphamcs@...il.com, chengming.zhou@...ux.dev,
usamaarif642@...il.com, shakeel.butt@...ux.dev,
ryan.roberts@....com, ying.huang@...el.com, 21cnbao@...il.com,
akpm@...ux-foundation.org, nanhai.zou@...el.com,
wajdi.k.feghali@...el.com, vinodh.gopal@...el.com
Subject: Re: [PATCH v8 5/8] mm: zswap: Modify zswap_stored_pages to be
atomic_long_t.

On Fri, Sep 27, 2024 at 07:57:49PM -0700, Yosry Ahmed wrote:
> On Fri, Sep 27, 2024 at 7:16 PM Kanchana P Sridhar
> <kanchana.p.sridhar@...el.com> wrote:
> >
> > For zswap_store() to support large folios, we need to be able to do
> > a batch update of zswap_stored_pages upon successful store of all pages
> > in the folio. For this, we need to add folio_nr_pages(), which returns
> > a long, to zswap_stored_pages.
>
> Do we really need this? A lot of places in the kernel assign the
> result of folio_nr_pages() to an int (thp_nr_pages(),
> split_huge_pages_all(), etc). I don't think we need to worry about
> folio_nr_pages() exceeding INT_MAX for a while.

You'd be surprised. Let's assume we add support for PUD-sized pages
(personally I think this is too large to make sense, but some people can't
be told). On arm64, we can have a 64KB page size, so that's 13 bits per
level for a total of 2^26 pages per PUD. That feels uncomfortably close
to 2^31 to me.
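
For the record, the arithmetic works out like this (a throwaway user-space
sketch, not kernel code, assuming 64KB pages and 8-byte page table entries):

#include <stdio.h>

int main(void)
{
	unsigned long page_shift = 16;					/* 64KB pages */
	unsigned long ptrs_per_level = 1UL << (page_shift - 3);	/* 8192 entries = 13 bits */
	unsigned long pages_per_pmd = ptrs_per_level;			/* 2^13 */
	unsigned long pages_per_pud = ptrs_per_level * ptrs_per_level;	/* 2^26 */

	printf("PMD: %lu pages = %lu MB\n", pages_per_pmd,
	       (pages_per_pmd << page_shift) >> 20);
	printf("PUD: %lu pages = %lu TB\n", pages_per_pud,
	       (pages_per_pud << page_shift) >> 40);
	return 0;
}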
Anywhere you've found that's using an int to store folio_nr_pages() is
somewhere we should probably switch to long. And this, btw, is why I've
moved from using an int to store folio_size() to using size_t. A PMD is
already 512MB (with a 64KB page size), and so a PUD will be 4TB.
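
Spelled out, the pattern I'd like to see is roughly this (a sketch only;
the helper is hypothetical, not something from the patch):

#include <linux/mm.h>
#include <linux/printk.h>

/* Hypothetical helper, just to show the types being argued for. */
static void folio_report_size(struct folio *folio)
{
	long nr = folio_nr_pages(folio);	/* long, not int */
	size_t bytes = folio_size(folio);	/* size_t, not int: a PUD folio would be 4TB */

	pr_info("folio: %ld pages, %zu bytes\n", nr, bytes);
}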
thp_nr_pages() is not a good example. I'll be happy when we kill it;
we're actually almost there.
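
For anyone skimming the thread, the batched update described in the quoted
patch boils down to something like this (a minimal sketch, assuming
zswap_stored_pages becomes an atomic_long_t; the helper name is made up):

#include <linux/atomic.h>
#include <linux/mm.h>

static atomic_long_t zswap_stored_pages = ATOMIC_LONG_INIT(0);

/* Hypothetical helper: bump the counter once per folio instead of once per
 * page.  folio_nr_pages() returns long, hence the atomic_long_t counter. */
static void zswap_account_stored_folio(struct folio *folio)
{
	atomic_long_add(folio_nr_pages(folio), &zswap_stored_pages);
}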