Message-ID: <Z1-riu65--CviPba@casper.infradead.org>
Date: Mon, 16 Dec 2024 04:24:42 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Uladzislau Rezki <urezki@...il.com>
Cc: Kefeng Wang <wangkefeng.wang@...wei.com>, zuoze <zuoze1@...wei.com>,
gustavoars@...nel.org, akpm@...ux-foundation.org,
linux-hardening@...r.kernel.org, linux-mm@...ck.org,
keescook@...omium.org
Subject: Re: [PATCH -next] mm: usercopy: add a debugfs interface to bypass
the vmalloc check.
On Wed, Dec 04, 2024 at 09:51:07AM +0100, Uladzislau Rezki wrote:
> I think, when I have more free cycles, I will check it from a performance
> point of view, because I do not know how efficient a maple tree is when
> it comes to lookups, insertion and removal.
Maple tree has a fanout of around 8-12 at each level, while an rbtree has
a fanout of two (arguably 3, since we might find the node). Let's say you
have 1000 vmalloc areas. A perfectly balanced rbtree would have 9 levels
(and might well be 11+ levels if imperfectly balanced -- and part of the
advantage of rbtrees over AVL trees is that they can be less balanced
so need fewer rotations). A perfectly balanced maple tree would have
only 3 levels.
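
To put rough numbers on that: depth is about ceil(log_fanout(N)), give or
take one level for how you count the root. A quick illustrative helper
(nothing from the actual tree code) makes the comparison concrete:

        /* Levels needed to cover n entries at a given per-node fanout. */
        static unsigned int tree_levels(unsigned long n, unsigned int fanout)
        {
                unsigned int levels = 0;
                unsigned long capacity = 1;

                while (capacity < n) {
                        capacity *= fanout;
                        levels++;
                }
                return levels;
        }

        /* tree_levels(1000, 2) == 10, tree_levels(1000, 10) == 3 */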
Addition/removal is more expensive. We biased the implementation heavily
towards lookup, so we chose to keep it very compact. Most users (and
particularly the VMA tree which was our first client) do more lookups
than modifications; a real application takes many more pagefaults than
it does calls to mmap/munmap/mprotect/etc.
> As an RCU-safe data structure, yes, searching is improved in that there
> is no need to take a spinlock. As noted earlier, I do not know whether a
> maple tree allows finding data when, instead of the key it is associated
> with, we pass something that is within a searchable area: [va_start:va_end].
That's what maple trees do; they store non-overlapping ranges. So you
can look up any address in a range and it will return you the pointer
associated with that range. Just like you'd want for a page fault ;-)
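
For the vmalloc case that could look something like the sketch below --
purely illustrative, with made-up function and tree names, but using the
existing mtree_store_range()/mtree_load() helpers (note the last index is
inclusive):

        #include <linux/maple_tree.h>
        #include <linux/vmalloc.h>

        /* Hypothetical tree for the sketch, not the real vmalloc one. */
        static DEFINE_MTREE(vmap_area_mt);

        /* Store the area keyed by its whole range. */
        static int vmap_area_store(struct vmap_area *va)
        {
                return mtree_store_range(&vmap_area_mt, va->va_start,
                                         va->va_end - 1, va, GFP_KERNEL);
        }

        /* Any address within [va_start, va_end) returns the same vmap_area. */
        static struct vmap_area *vmap_area_lookup(unsigned long addr)
        {
                return mtree_load(&vmap_area_mt, addr);
        }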