Message-ID: <20210322144042.GO1719932@casper.infradead.org>
Date: Mon, 22 Mar 2021 14:40:42 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Christoph Hellwig <hch@....de>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
Namjae Jeon <namjae.jeon@...sung.com>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-cifs@...r.kernel.org,
linux-cifsd-devel@...ts.sourceforge.net, smfrench@...il.com,
hyc.lee@...il.com, viro@...iv.linux.org.uk, hch@...radead.org,
ronniesahlberg@...il.com, aurelien.aptel@...il.com,
aaptel@...e.com, sandeen@...deen.net, dan.carpenter@...cle.com,
colin.king@...onical.com, rdunlap@...radead.org,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Steve French <stfrench@...rosoft.com>
Subject: Re: [PATCH 3/5] cifsd: add file operations
On Mon, Mar 22, 2021 at 02:57:18PM +0100, Christoph Hellwig wrote:
> On Mon, Mar 22, 2021 at 06:03:21PM +0900, Sergey Senozhatsky wrote:
> > On (21/03/22 08:15), Matthew Wilcox wrote:
> > >
> > > What's the scenario for which your allocator performs better than slub?
> > >
> >
> > IIRC request and reply buffers can be up to 4M in size. So this stuff
> > just allocates a number of fat buffers and keeps them around so that
> > it doesn't have to vmalloc(4M) for every request and every response.
>
> Do we have any data suggesting it is faster than vmalloc?
Oh, I have no trouble believing it's faster than vmalloc. Here's
the fast(!) path that always has memory available, never does retries.
I'm calling out the things I perceive as expensive on the right hand side.
Also, I'm taking the 4MB size as the example.
vmalloc()
__vmalloc_node()
__vmalloc_node_range()
__get_vm_area_node()
[allocates vm_struct]
alloc_vmap_area()
[allocates vmap_area]
[takes free_vmap_area_lock]
__alloc_vmap_area()
          find_vmap_lowest_match()
[walks free_vmap_area_root]
[takes vmap_area_lock]
__vmalloc_area_node()
... array_size is 8KiB, we call __vmalloc_node
__vmalloc_node
        [everything we did above, all over again,
	 two more allocations, two more lock acquires]
alloc_pages_node(), 1024 times
vmap_pages_range_noflush()
vmap_range_noflush()
[allocate at least two pages for PTEs]
There's definitely some low-hanging fruit here.  __vmalloc_area_node()
should probably call kvmalloc_node() instead of __vmalloc_node() for
table sizes > 4KiB. But a lot of this is inherent to how vmalloc works,
and we need to put a cache in front of it. Just not this one.
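For comparison, the kind of cache a consumer like ksmbd wants here is tiny: keep a few pre-allocated big buffers on a locked freelist and only hit the allocator when it runs dry. A minimal userspace sketch of that shape, with a pthread mutex standing in for a spinlock and malloc standing in for vmalloc (all names here are hypothetical, not the ksmbd code):

```c
#include <pthread.h>
#include <stdlib.h>

#define BUF_SIZE (4UL << 20)	/* 4MB request/response buffers */
#define POOL_MAX 8		/* keep at most this many cached */

struct buf_pool {
	pthread_mutex_t lock;
	void *free[POOL_MAX];	/* cached buffers */
	int nr_free;
};

static void *pool_get(struct buf_pool *p)
{
	void *buf = NULL;

	pthread_mutex_lock(&p->lock);
	if (p->nr_free > 0)
		buf = p->free[--p->nr_free];	/* fast path: one lock, no alloc */
	pthread_mutex_unlock(&p->lock);

	if (!buf)
		buf = malloc(BUF_SIZE);		/* slow path: hit the allocator */
	return buf;
}

static void pool_put(struct buf_pool *p, void *buf)
{
	pthread_mutex_lock(&p->lock);
	if (p->nr_free < POOL_MAX) {
		p->free[p->nr_free++] = buf;	/* keep it for the next request */
		buf = NULL;
	}
	pthread_mutex_unlock(&p->lock);

	free(buf);	/* pool was full; free(NULL) is a no-op otherwise */
}
```

The fast path is one lock round-trip and an array pop, versus the multiple allocations, tree walks, and page-table writes in the vmalloc trace above; the point is that a cache of this shape belongs in front of vmalloc generically, not reinvented in each filesystem.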