Message-ID: <20190204180626.danletd4uh3rxnyd@pc636>
Date: Mon, 4 Feb 2019 19:06:26 +0100
From: Uladzislau Rezki <urezki@...il.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: Uladzislau Rezki <urezki@...il.com>,
Michal Hocko <mhocko@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
LKML <linux-kernel@...r.kernel.org>,
Thomas Garnier <thgarnie@...gle.com>,
Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>,
Steven Rostedt <rostedt@...dmis.org>,
Joel Fernandes <joelaf@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...e.hu>, Tejun Heo <tj@...nel.org>
Subject: Re: [PATCH 1/1] mm/vmalloc: convert vmap_lazy_nr to atomic_long_t
Hello, Matthew.
On Mon, Feb 04, 2019 at 05:33:00AM -0800, Matthew Wilcox wrote:
> On Mon, Feb 04, 2019 at 11:49:56AM +0100, Uladzislau Rezki wrote:
> > On Fri, Feb 01, 2019 at 01:45:28PM +0100, Michal Hocko wrote:
> > > On Thu 31-01-19 17:24:52, Uladzislau Rezki (Sony) wrote:
> > > > The vmap_lazy_nr variable has atomic_t type, which is a 4-byte integer
> > > > on both 32 and 64 bit systems. lazy_max_pages() deals with
> > > > "unsigned long", which is 8 bytes on 64 bit systems, thus vmap_lazy_nr
> > > > should be 8 bytes on 64 bit as well.
> > >
> > > But do we really need a 64b number of _pages_? I have a hard time imagining
> > > that we would have that many lazy pages to accumulate.
> > >
> > That is more about using the same variable type, and thus the same size,
> > in 32/64 bit address spaces.
> >
> > <snip>
> > static void free_vmap_area_noflush(struct vmap_area *va)
> > {
> > int nr_lazy;
> >
> > nr_lazy = atomic_add_return((va->va_end - va->va_start) >> PAGE_SHIFT,
> > &vmap_lazy_nr);
> > ...
> > if (unlikely(nr_lazy > lazy_max_pages()))
> > try_purge_vmap_area_lazy();
> > <snip>
> >
> > va_end/va_start are "unsigned long", whereas atomic_t (vmap_lazy_nr) is "int".
> > The same goes for lazy_max_pages(): it returns an "unsigned long" value.
> >
> > Answering your question: on 64 bit, the "vmalloc" address space is ~8589719406
> > pages if PAGE_SIZE is 4096, roughly 2^33 pages, well above the 2^31 - 1 maximum
> > of a signed 4-byte integer, i.e. a regular 4-byte integer is not enough to hold
> > it. I agree it is hard to imagine, but it also depends on how much physical
> > memory a system has; it would have to be terabytes. I am not sure if such
> > systems exist.
>
> There are certainly systems with more than 16TB of memory out there.
> The question is whether we want to allow individual vmaps of 16TB.
Honestly, I do not know. But from what I can see, we are allowed to
create individual mappings as large as the physical memory we have,
if I am not missing something.
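
Just to put a number on it (my own back-of-the-envelope arithmetic, not
from the patch): 16 TB of lazily freed mappings at a 4 KB page size is
2^44 / 2^12 = 2^32 pages, which already wraps a signed 32-bit counter
(maximum 2^31 - 1).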
>
> We currently have a 32TB vmap space (on x86-64), so that's one limit.
> Should we restrict it further to avoid this ever wrapping past a 32-bit
> limit?
We could restrict vmap space to 1 << 32 pages on 64 bit systems, but then
probably all architectures would have to follow that rule and be patched
accordingly. Apart from that, I am not sure how KASAN calculates the start
point for its allocation, i.e. its offset within the VMALLOC_START -
VMALLOC_END address space. The same goes for the kernel module mapping
space (if it is built to allocate from vmalloc space).
Also, since atomic_t is a signed integer it can be negative, so we would
have to cast to "unsigned int" everywhere we deal with "vmap_lazy_nr".
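
To illustrate, here is a rough sketch of what the atomic_long_t conversion
would look like, based on the snippet quoted above (only the counter type
and the atomic_* helpers change; the surrounding logic in mm/vmalloc.c is
assumed to stay as it is):

<snip>
static atomic_long_t vmap_lazy_nr = ATOMIC_LONG_INIT(0);

static void free_vmap_area_noflush(struct vmap_area *va)
{
        unsigned long nr_lazy;

        /* Account the lazily freed range in pages, in a long-sized counter. */
        nr_lazy = atomic_long_add_return((va->va_end - va->va_start) >>
                                        PAGE_SHIFT, &vmap_lazy_nr);
        ...
        if (unlikely(nr_lazy > lazy_max_pages()))
                try_purge_vmap_area_lazy();
}
<snip>

That would also remove the need for any "unsigned int" casting, since the
counter and lazy_max_pages() would then have the same width.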
Thank you.
--
Vlad Rezki