Message-ID: <ZCO/gTgw9PUuU+mG@MiWiFi-R3L-srv>
Date: Wed, 29 Mar 2023 12:33:05 +0800
From: Baoquan He <bhe@...hat.com>
To: Uladzislau Rezki <urezki@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
LKML <linux-kernel@...r.kernel.org>,
Lorenzo Stoakes <lstoakes@...il.com>,
Christoph Hellwig <hch@...radead.org>,
Matthew Wilcox <willy@...radead.org>,
Dave Chinner <david@...morbit.com>,
Oleksiy Avramchenko <oleksiy.avramchenko@...y.com>
Subject: Re: [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray
On 03/28/23 at 02:34pm, Uladzislau Rezki wrote:
......
> > > @@ -2003,8 +2037,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> > > bitmap_set(vb->used_map, 0, (1UL << order));
> > > INIT_LIST_HEAD(&vb->free_list);
> > >
> > > - vb_idx = addr_to_vb_idx(va->va_start);
> > > - err = xa_insert(&vmap_blocks, vb_idx, vb, gfp_mask);
> > > + vbq = addr_to_vbq(va->va_start);
> > > + err = xa_insert(&vbq->vmap_blocks, va->va_start, vb, gfp_mask);
> >
> > Using va->va_start as the index to access the xarray may cost extra memory.
> > Imagine we get a virtual address at VMALLOC_START; its region is
> > [VMALLOC_START, VMALLOC_START+4095]. In the xarray, its sequence order
> > is 0. Whereas with va->va_start, the index is 0xffffc90000000000UL on x86_64
> > with 4-level paging mode. That means for the first page-size vmalloc area,
> > storing it into the xarray needs about 10 levels of xa_node, just for that
> > one page. With the old addr_to_vb_idx(), its index is 0, so only one level
> > of height is needed. One xa_node is about 72 bytes, so it could take more
> > time and memory to access it via va->va_start. Not sure if my understanding is correct.
> >
> > static unsigned long addr_to_vb_idx(unsigned long addr)
> > {
> > 	addr -= VMALLOC_START & ~(VMAP_BLOCK_SIZE-1);
> > 	addr /= VMAP_BLOCK_SIZE;
> > 	return addr;
> > }
> >
> If the size of the array depends on the index "length", then indeed it will require
> more memory. On the other hand, we can keep the old addr_to_vb_idx() function
> in order to "cut" the va->va_start index.
Yeah, the extra ~10 levels of xa_node are unnecessary if we keep the old
addr_to_vb_idx(). And the prolonged path will cost more time to reach the
wanted leaf node. E.g. on x86_64 with 4-level paging mode, the vmalloc area
is 32TB. With the old calculation, the index range is [0, 8M], so at most
4 levels of xa_node are enough to cover it.
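
Just to sanity check the arithmetic, below is a quick userspace sketch, not
kernel code: VMALLOC_START, the 4MB VMAP_BLOCK_SIZE and the 64-slot xa_node
chunk are assumed values for x86_64 with 4-level paging. It counts how many
xa_node levels the raw va_start index needs versus the compressed index:

#include <stdio.h>

/*
 * Back-of-envelope check only, not kernel code. VMALLOC_START and the
 * 4MB VMAP_BLOCK_SIZE below are assumed values for x86_64 with 4-level
 * paging; XA_CHUNK_SHIFT of 6 assumes the default 64-slot xa_node.
 */
#define VMALLOC_START	0xffffc90000000000UL
#define VMAP_BLOCK_SIZE	(4UL << 20)
#define VMALLOC_SIZE	(32UL << 40)		/* 32TB vmalloc area */
#define XA_CHUNK_SHIFT	6			/* 64 slots per xa_node */

/* Same index compression as the old helper in mm/vmalloc.c. */
static unsigned long addr_to_vb_idx(unsigned long addr)
{
	addr -= VMALLOC_START & ~(VMAP_BLOCK_SIZE - 1);
	addr /= VMAP_BLOCK_SIZE;
	return addr;
}

/* Number of xa_node levels needed for the tree to hold max_index. */
static int xa_levels(unsigned long max_index)
{
	int levels = 1;

	while (max_index >> XA_CHUNK_SHIFT) {
		max_index >>= XA_CHUNK_SHIFT;
		levels++;
	}
	return levels;
}

int main(void)
{
	unsigned long va_start = VMALLOC_START;	/* first vmap block */
	unsigned long max_idx = VMALLOC_SIZE / VMAP_BLOCK_SIZE - 1;

	printf("raw index %#lx needs %d levels\n",
	       va_start, xa_levels(va_start));
	printf("compressed index %#lx needs %d level(s)\n",
	       addr_to_vb_idx(va_start), xa_levels(addr_to_vb_idx(va_start)));
	printf("top of compressed range %#lx needs %d levels\n",
	       max_idx, xa_levels(max_idx));
	return 0;
}

It prints 11 levels for the raw address, 1 level for compressed index 0, and
4 levels for the top of the [0, 8M] range, roughly matching the numbers above.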