Message-ID: <ZG82ch1AdrAbpkJ6@pc636>
Date: Thu, 25 May 2023 12:20:34 +0200
From: Uladzislau Rezki <urezki@...il.com>
To: Dave Chinner <david@...morbit.com>
Cc: Uladzislau Rezki <urezki@...il.com>,
Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, Baoquan He <bhe@...hat.com>,
Lorenzo Stoakes <lstoakes@...il.com>,
Christoph Hellwig <hch@...radead.org>,
Matthew Wilcox <willy@...radead.org>,
"Liam R . Howlett" <Liam.Howlett@...cle.com>,
"Paul E . McKenney" <paulmck@...nel.org>,
Joel Fernandes <joel@...lfernandes.org>,
Oleksiy Avramchenko <oleksiy.avramchenko@...y.com>,
linux-xfs@...r.kernel.org
Subject: Re: [PATCH 0/9] Mitigate a vmap lock contention
On Thu, May 25, 2023 at 07:56:56AM +1000, Dave Chinner wrote:
> On Wed, May 24, 2023 at 11:50:12AM +0200, Uladzislau Rezki wrote:
> > On Wed, May 24, 2023 at 03:04:28AM +0900, Hyeonggon Yoo wrote:
> > > On Tue, May 23, 2023 at 05:12:30PM +0200, Uladzislau Rezki wrote:
> > > And I would like to ask some side questions:
> > >
> > > 1. Is the vm_[un]map_ram() API still worthwhile with this patchset?
> > >
> > It is up to the community to decide. As I see it, XFS needs it too. Maybe
> > in the future it can be removed (who knows), if the vmalloc code itself
> > can deliver the same performance as the vm_map* APIs.
>
> vm_map* APIs cannot be replaced with vmalloc, they cover a very
> different use case. i.e. vmalloc allocates mapped memory,
> vm_map_ram() maps allocated memory....
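>
> In other words, a minimal sketch of the two patterns (a hypothetical
> caller, not taken from any real user; the page allocation itself and
> error handling are omitted):
>
>	struct page *pages[16];	/* assume these were already allocated */
>	void *addr;
>
>	/* vmalloc() allocates the backing pages _and_ maps them */
>	void *buf = vmalloc(16 * PAGE_SIZE);
>	vfree(buf);
>
>	/* vm_map_ram() only maps pages the caller already owns */
>	addr = vm_map_ram(pages, 16, NUMA_NO_NODE);
>	if (addr)
>		vm_unmap_ram(addr, 16);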
>
> > The vm_map_ram() and friends interface was added because of vmalloc's drawbacks.
>
> No. vm_map*() were scalability improvements added in 2009 to replace
> vmap() and vunmap(), to avoid the global lock contention in the vmap
> allocator that XFS had been working around for years with its own
> internal vmap cache....
>
> commit 95f8e302c04c0b0c6de35ab399a5551605eeb006
> Author: Nicholas Piggin <npiggin@...il.com>
> Date: Tue Jan 6 14:43:09 2009 +1100
>
> [XFS] use scalable vmap API
>
> Implement XFS's large buffer support with the new vmap APIs. See the vmap
> rewrite (db64fe02) for some numbers. The biggest improvement that comes from
> using the new APIs is avoiding the global KVA allocation lock on every call.
>
> Signed-off-by: Nick Piggin <npiggin@...e.de>
> Reviewed-by: Christoph Hellwig <hch@...radead.org>
> Signed-off-by: Lachlan McIlroy <lachlan@....com>
>
> vmap/vunmap() themselves were introduced in 2.5.32 (2002) and before
> that XFS was using remap_page_array() and vfree() in exactly the
> same way it uses vm_map_ram() and vm_unmap_ram() today....
>
> XFS has a long, long history of causing virtual memory allocator
> scalability and contention problems. As you can see, this isn't our
> first rodeo...
>
Let me be more specific; sorry, it looks like there is a misunderstanding.
I am talking about removing the vb_alloc()/vb_free() per-CPU stuff, if
alloc_vmap_area() gives the same performance:

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d50c551592fc..a1687bbdad30 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2503,12 +2503,6 @@ void vm_unmap_ram(const void *mem, unsigned int count)
 
 	kasan_poison_vmalloc(mem, size);
 
-	if (likely(count <= VMAP_MAX_ALLOC)) {
-		debug_check_no_locks_freed(mem, size);
-		vb_free(addr, size);
-		return;
-	}
-
 	va = find_unlink_vmap_area(addr);
 	if (WARN_ON_ONCE(!va))
 		return;
@@ -2539,12 +2533,6 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
 	unsigned long addr;
 	void *mem;
 
-	if (likely(count <= VMAP_MAX_ALLOC)) {
-		mem = vb_alloc(size, GFP_KERNEL);
-		if (IS_ERR(mem))
-			return NULL;
-		addr = (unsigned long)mem;
-	} else {
 		struct vmap_area *va;
 		va = alloc_vmap_area(size, PAGE_SIZE,
 				VMALLOC_START, VMALLOC_END,
@@ -2554,7 +2542,6 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
 		addr = va->va_start;
 		mem = (void *)addr;
-	}
 
 	if (vmap_pages_range(addr, addr + size, PAGE_KERNEL,
 				pages, PAGE_SHIFT) < 0) {
+ other related parts.
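
If the vb_alloc()/vb_free() fast path went away as sketched above (together
with those other related parts), vm_map_ram() would collapse to roughly the
following. A sketch only, not the actual change: the trailing
alloc_vmap_area() arguments (node, GFP_KERNEL, VMAP_RAM) are assumed from
the tree this diff is against, and the kasan annotations are omitted:

	void *vm_map_ram(struct page **pages, unsigned int count, int node)
	{
		unsigned long size = count << PAGE_SHIFT;
		struct vmap_area *va;
		unsigned long addr;
		void *mem;

		/* every request, small or large, now takes the regular path */
		va = alloc_vmap_area(size, PAGE_SIZE,
				VMALLOC_START, VMALLOC_END,
				node, GFP_KERNEL, VMAP_RAM);
		if (IS_ERR(va))
			return NULL;

		addr = va->va_start;
		mem = (void *)addr;

		if (vmap_pages_range(addr, addr + size, PAGE_KERNEL,
				pages, PAGE_SHIFT) < 0) {
			vm_unmap_ram(mem, count);
			return NULL;
		}

		return mem;
	}

Whether that single path can match the per-CPU vmap_block cache for small
requests is exactly the open question here.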
--
Uladzislau Rezki