Date:   Thu, 3 Feb 2022 17:29:07 +0000
From:   Liam Howlett <liam.howlett@...cle.com>
To:     Mark Hemment <markhemm@...glemail.com>
CC:     "maple-tree@...ts.infradead.org" <maple-tree@...ts.infradead.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH v5 07/70] Maple Tree: Add new data structure

* Mark Hemment <markhemm@...glemail.com> [220203 11:02]:
> On Wed, 2 Feb 2022 at 02:42, Liam Howlett <liam.howlett@...cle.com> wrote:
> >
> > From: "Liam R. Howlett" <Liam.Howlett@...cle.com>
> >
> > The maple tree is an RCU-safe range based B-tree designed to use modern
> > processor cache efficiently.  There are a number of places in the kernel
> > where a non-overlapping range-based tree would be beneficial, especially
> > one with a simple interface.  The first user covered in this
> > patch set is the vm_area_struct, where three data structures are
> > replaced by the maple tree: the augmented rbtree, the vma cache, and the
> > linked list of VMAs in the mm_struct.  The long term goal is to reduce
> > or remove the mmap_sem contention.
> >
> > The tree has a branching factor of 10 for non-leaf nodes and 16 for leaf
> > nodes.  With the increased branching factor, it is significantly shorter than
> > the rbtree, so it has fewer cache misses.  The removal of the linked list
> > between subsequent entries also reduces cache misses and the need to pull
> > in the previous and next VMA during many tree alterations.
> >
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
> > Signed-off-by: Liam R. Howlett <Liam.Howlett@...cle.com>
> > ---
> >  Documentation/core-api/index.rst              |    1 +
> >  Documentation/core-api/maple_tree.rst         |  196 +
> >  MAINTAINERS                                   |   12 +
> >  include/linux/maple_tree.h                    |  673 ++
> >  include/trace/events/maple_tree.h             |  123 +
> >  lib/Kconfig.debug                             |    9 +
> >  lib/Makefile                                  |    3 +-
> >  lib/maple_tree.c                              | 6943 +++++++++++++++++
> >  tools/testing/radix-tree/.gitignore           |    2 +
> >  tools/testing/radix-tree/Makefile             |   13 +-
> >  tools/testing/radix-tree/generated/autoconf.h |    1 +
> >  tools/testing/radix-tree/linux/maple_tree.h   |    7 +
> >  tools/testing/radix-tree/maple.c              |   59 +
> >  .../radix-tree/trace/events/maple_tree.h      |    3 +
> >  14 files changed, 8042 insertions(+), 3 deletions(-)
> >  create mode 100644 Documentation/core-api/maple_tree.rst
> >  create mode 100644 include/linux/maple_tree.h
> >  create mode 100644 include/trace/events/maple_tree.h
> >  create mode 100644 lib/maple_tree.c
> >  create mode 100644 tools/testing/radix-tree/linux/maple_tree.h
> >  create mode 100644 tools/testing/radix-tree/maple.c
> >  create mode 100644 tools/testing/radix-tree/trace/events/maple_tree.h
> 
> ...
> > +Allocating Nodes
> > +----------------
> > +
> > +Allocations are usually handled internally by the tree; however, if
> > +allocations need to occur before a write, calling mas_entry_count() will
> > +allocate the worst-case number of nodes needed to insert the provided
> > +number of ranges.  This also causes the tree to enter mass insertion mode.
> > +Once insertions are complete, calling mas_destroy() on the maple state will
> > +free the unused allocations.
> 
> mas_entry_count() has been renamed to mas_expected_entries().
> (Note: test code is still using mas_entry_count()).

Thanks, I just discovered the test code issue on my side.
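
For anyone following along, a rough sketch of how the preallocation path
described above is meant to be used.  This assumes the renamed
mas_expected_entries() together with MA_STATE(), mas_set_range(), mas_store()
and mas_destroy() as they appear in this series; the bulk_insert() helper and
its parameters are made up for illustration and details may differ in later
revisions:

	/*
	 * Sketch only: preallocate nodes for nr ranges, store them under the
	 * tree lock, then release any unused preallocations.
	 */
	#include <linux/maple_tree.h>

	static DEFINE_MTREE(example_mt);

	static int bulk_insert(unsigned long *starts, unsigned long *ends,
			       void **entries, unsigned long nr)
	{
		MA_STATE(mas, &example_mt, 0, 0);
		unsigned long i;
		int ret;

		mas_lock(&mas);
		/* Worst-case node allocation for nr range insertions. */
		ret = mas_expected_entries(&mas, nr);
		if (ret)
			goto unlock;

		for (i = 0; i < nr; i++) {
			mas_set_range(&mas, starts[i], ends[i]);
			mas_store(&mas, entries[i]);
		}
	unlock:
		/* Frees whatever preallocated nodes were not consumed. */
		mas_destroy(&mas);
		mas_unlock(&mas);
		return ret;
	}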

