Message-ID: <20190204133300.GA21860@bombadil.infradead.org>
Date: Mon, 4 Feb 2019 05:33:00 -0800
From: Matthew Wilcox <willy@...radead.org>
To: Uladzislau Rezki <urezki@...il.com>
Cc: Michal Hocko <mhocko@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
LKML <linux-kernel@...r.kernel.org>,
Thomas Garnier <thgarnie@...gle.com>,
Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>,
Steven Rostedt <rostedt@...dmis.org>,
Joel Fernandes <joelaf@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...e.hu>, Tejun Heo <tj@...nel.org>
Subject: Re: [PATCH 1/1] mm/vmalloc: convert vmap_lazy_nr to atomic_long_t
On Mon, Feb 04, 2019 at 11:49:56AM +0100, Uladzislau Rezki wrote:
> On Fri, Feb 01, 2019 at 01:45:28PM +0100, Michal Hocko wrote:
> > On Thu 31-01-19 17:24:52, Uladzislau Rezki (Sony) wrote:
> > > vmap_lazy_nr variable has atomic_t type that is 4 bytes integer
> > > value on both 32 and 64 bit systems. lazy_max_pages() deals with
> > > "unsigned long" that is 8 bytes on 64 bit system, thus vmap_lazy_nr
> > > should be 8 bytes on 64 bit as well.
> >
> > But do we really need 64b number of _pages_? I have hard time imagine
> > that we would have that many lazy pages to accumulate.
> >
> That is more about using the same variable type, and thus the same size,
> in both 32- and 64-bit address spaces.
>
> <snip>
> static void free_vmap_area_noflush(struct vmap_area *va)
> {
> int nr_lazy;
>
> nr_lazy = atomic_add_return((va->va_end - va->va_start) >> PAGE_SHIFT,
> &vmap_lazy_nr);
> ...
> if (unlikely(nr_lazy > lazy_max_pages()))
> try_purge_vmap_area_lazy();
> <snip>
>
> va_end/va_start are "unsigned long", whereas atomic_t (vmap_lazy_nr) is "int".
> The same goes for lazy_max_pages(): it returns an "unsigned long" value.
>
> Answering your question: on 64-bit, the "vmalloc" address space is ~8589719406
> pages if PAGE_SIZE is 4096, i.e. a regular 4-byte integer is not enough to
> hold it. I agree it is hard to imagine, but it also depends on how much
> physical memory a system has; it would have to be terabytes. I am not sure
> whether such systems exist.
There are certainly systems with more than 16TB of memory out there.
The question is whether we want to allow individual vmaps of 16TB.
We currently have a 32TB vmap space (on x86-64), so that's one limit.
Should we restrict it further to avoid this ever wrapping past a 32-bit
limit?