Message-ID: <CAPhsuW6B3O_kWjWwr+UfYO3WRkznFqBNtcecFCSECBSiZBJDsA@mail.gmail.com>
Date: Thu, 18 May 2023 11:47:46 -0700
From: Song Liu <song@...nel.org>
To: Kent Overstreet <kent.overstreet@...ux.dev>
Cc: Mike Rapoport <rppt@...nel.org>, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Rick Edgecombe <rick.p.edgecombe@...el.com>,
Thomas Gleixner <tglx@...utronix.de>,
Vlastimil Babka <vbabka@...e.cz>, linux-kernel@...r.kernel.org,
x86@...nel.org
Subject: Re: [RFC PATCH 1/5] mm: intorduce __GFP_UNMAPPED and unmapped_alloc()
On Thu, May 18, 2023 at 10:24 AM Kent Overstreet
<kent.overstreet@...ux.dev> wrote:
>
> On Thu, May 18, 2023 at 10:00:39AM -0700, Song Liu wrote:
> > On Thu, May 18, 2023 at 9:48 AM Kent Overstreet
> > <kent.overstreet@...ux.dev> wrote:
> > >
> > > On Thu, May 18, 2023 at 09:33:20AM -0700, Song Liu wrote:
> > > > I am working on patches based on the discussion in [1]. I am planning to
> > > > send v1 for review in a week or so.
> > >
> > > Hey Song, I was reviewing that thread too,
> > >
> > > Are you taking a different approach based on Thomas's feedback? I think
> > > he had some fair points in that thread.
> >
> > Yes, the API is based on Thomas's suggestion; roughly 90% of it came out of those discussions.
> >
> > >
> > > My own feeling is that the buddy allocator is our tool for allocating
> > > larger variable sized physically contiguous allocations, so I'd like to
> > > see something based on that - I think we could do a hybrid buddy/slab
> > > allocator approach, like we have for regular memory allocations.
> >
> > I am planning to implement the allocator based on this (reuse
> > vmap_area logic):
>
> Ah, you're still doing the vmap_area approach.
>
> Mike's approach looks like it'll be _much_ lighter weight and higher
> performance, to me. vmalloc is known to be slow compared to the buddy
> allocator, and with Mike's approach we're only modifying mappings once
> per 2 MB chunk.
>
> I don't see anything in your code for sub-page sized allocations either, so
> perhaps I should keep going with my slab allocator.

The vmap_area approach does handle sub-page allocations. In patch 5/5 of set [2],
we showed that multiple BPF programs can share the same page with some
kernel text (_etext).

> Could you share your thoughts on your approach vs. Mike's? I'm newer to
> this area of the code than you two so maybe there's an angle I've missed
> :)

AFAICT, a tree based solution (vmap_area) is more efficient than a bitmap
based solution.

First, for a 2MiB page with a 64B chunk size, we need a bitmap of

    2MiB / 64B = 32k bits = 4kB,

while the tree based solution can adapt to the number of allocations within
this 2MiB page. Also, searching for a free range within a 4kB bitmap may
actually be slower than searching the tree.
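
To make the numbers concrete (illustration only, not code from either patch
set), the bitmap cost is fixed per pool regardless of how many allocations
actually live in it:

    /* Illustration only: fixed per-pool bitmap footprint at 64B granularity. */
    #define POOL_SIZE    (2UL << 20)               /* one 2MiB pool           */
    #define CHUNK_SIZE   64UL                      /* allocation granularity  */
    #define NR_CHUNKS    (POOL_SIZE / CHUNK_SIZE)  /* 32768 chunks            */
    #define BITMAP_BYTES (NR_CHUNKS / 8)           /* 4096 bytes, per pool    */

A tree, by contrast, costs roughly one node per live allocation, so a
sparsely used pool pays much less.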

Second, a bitmap based solution cannot handle > 2MiB allocations cleanly,
while a tree based solution can. For example, if a big driver needs 3MiB, the
tree based allocator can allocate 4MiB for it and use the remaining 1MiB for
smaller allocations.
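
As a rough sketch of that example (illustration only, hypothetical sizes and
names, not an API from either series), the request is backed by whole 2MiB
mappings and the unused tail stays allocatable:

    /* Illustration only: a 3MiB request backed by whole 2MiB mappings. */
    #define GRANULE  (2UL << 20)                                      /* 2MiB mapping granule        */
    #define REQUEST  (3UL << 20)                                      /* big driver needs 3MiB       */
    #define BACKING  (((REQUEST + GRANULE - 1) / GRANULE) * GRANULE)  /* rounds up to 4MiB mapped    */
    #define LEFTOVER (BACKING - REQUEST)                              /* 1MiB stays in the free tree */

A per-2MiB bitmap has no clean way to describe that leftover range, while the
tree simply keeps it as another free node.
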
Thanks,
Song
[2] https://lore.kernel.org/linux-mm/20221107223921.3451913-6-song@kernel.org/