Message-ID: <20181206181954.GG11220@vader>
Date:   Thu, 6 Dec 2018 10:19:54 -0800
From:   Omar Sandoval <osandov@...ndov.com>
To:     Steven Sistare <steven.sistare@...cle.com>
Cc:     mingo@...hat.com, peterz@...radead.org, subhra.mazumdar@...cle.com,
        dhaval.giani@...cle.com, daniel.m.jordan@...cle.com,
        pavel.tatashin@...rosoft.com, matt@...eblueprint.co.uk,
        umgwanakikbuti@...il.com, riel@...hat.com, jbacik@...com,
        juri.lelli@...hat.com, valentin.schneider@....com,
        vincent.guittot@...aro.org, quentin.perret@....com,
        linux-kernel@...r.kernel.org, Jens Axboe <axboe@...com>
Subject: Re: [PATCH v3 01/10] sched: Provide sparsemask, a reduced contention
 bitmap

On Thu, Dec 06, 2018 at 11:07:46AM -0500, Steven Sistare wrote:
> On 11/27/2018 8:19 PM, Omar Sandoval wrote:
> > On Tue, Nov 27, 2018 at 10:16:56AM -0500, Steven Sistare wrote:
> >> On 11/9/2018 7:50 AM, Steve Sistare wrote:
> >>> From: Steve Sistare <steve.sistare@...cle.com>
> >>>
> >>> Provide struct sparsemask and functions to manipulate it.  A sparsemask is
> >>> a sparse bitmap.  It reduces cache contention vs the usual bitmap when many
> >>> threads concurrently set, clear, and visit elements, by reducing the number
> >>> of significant bits per cacheline.  For each 64 byte chunk of the mask,
> >>> only the first K bits of the first word are used, and the remaining bits
> >>> are ignored, where K is a creation time parameter.  Thus a sparsemask that
> >>> can represent a set of N elements is approximately (N/K * 64) bytes in
> >>> size.
> >>>
> >>> Signed-off-by: Steve Sistare <steven.sistare@...cle.com>
> >>> ---
> >>>  include/linux/sparsemask.h | 260 +++++++++++++++++++++++++++++++++++++++++++++
> >>>  lib/Makefile               |   2 +-
> >>>  lib/sparsemask.c           | 142 +++++++++++++++++++++++++
> >>>  3 files changed, 403 insertions(+), 1 deletion(-)
> >>>  create mode 100644 include/linux/sparsemask.h
> >>>  create mode 100644 lib/sparsemask.c
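
(Aside, for anyone skimming the thread: the layout described in the
commit message works out to roughly the following.  Field names here
are illustrative, reconstructed from the description rather than
copied from the patch:)

        /*
         * Each 64-byte chunk exposes only its first K bits; the rest of
         * the cacheline is dead space that keeps concurrent writers on
         * separate cachelines.
         */
        struct sparsemask_chunk {
                unsigned long word;             /* low K bits are live */
                unsigned long pad[7];           /* rest of the cacheline */
        };

        struct sparsemask {
                int nelems;                     /* N: capacity in bits */
                int density;                    /* K: live bits per chunk */
                struct sparsemask_chunk chunk[];/* ~N/K chunks, contiguous */
        };

        /* bit i lives in chunk[i / K] at position i % K, so the whole
         * mask occupies about (N/K) * 64 bytes */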
> >>
> >> Hi Peter and Ingo,
> >>   I need your opinion: would you prefer that I keep the new sparsemask type, 
> >> or fold it into the existing sbitmap type?  There is some overlap between the 
> >> two, but mostly in trivial one line functions. The main differences are:
> > 
> > Adding Jens and myself.
> > 
> >>   * sparsemask defines iterators that allow an inline loop body, like cpumask,
> >>   whereas the sbitmap iterator forces us to define a callback function for
> >>   the body, which is awkward.
> >>
> >>   * sparsemask is slightly more efficient.  The struct and variable length
> >>   bitmap are allocated contiguously,
> > 
> > That just means you have the pointer indirection elsewhere :) The users
> > of sbitmap embed it in whatever structure they have.
>  
> Yes, the sparsemask can be embedded in one place, but in my use case I also cache
> pointers to the mask from elsewhere, and those sites incur the cost of 2 indirections
> to perform bitmap operations.
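
(Concretely: struct sbitmap keeps its words behind a pointer (sb->map),
so a cached pointer to the mask takes two dependent loads to reach a
bit, versus one when the bitmap is contiguous with the struct.  The
sparsemask names below are the illustrative ones sketched earlier in
the thread:)

        struct sbitmap *sb = cached;    /* the cached mask pointer */
        w = sb->map[i].word;            /* load sb->map, then the word */

        struct sparsemask *sm = cached;
        w = sm->chunk[i].word;          /* chunk[] is in-line: one load */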
> 
> >>   and sbitmap uses an extra field "depth"
> >>   per bitmap cacheline.
> > 
> > The depth field is memory which would otherwise be unused, and it's only
> > used for sbitmap_get(), so it doesn't have any cost if you're using it
> > like a cpumask.
> > 
> >>   * The order of arguments is different for the sparsemask accessors and
> >>   sbitmap accessors.  sparsemask mimics cpumask which is used extensively
> >>   in the sched code.
> >>
> >>   * Much of the sbitmap code supports queueing, sleeping, and waking on bit
> >>   allocation, which is N/A for scheduler load balancing.  However, we
> >>   can call the basic functions which do not use queueing.
> >>
> >> I could add the sparsemask iterators to sbitmap (90 lines), and define
> >> a thin layer to change the argument order to mimic cpumask, but that
> >> essentially recreates sparsemask.
> > 
> > We only use sbitmap_for_each_set() in a few places. Maybe a for_each()
> > style macro would be cleaner for those users, too, in which case I
> > wouldn't be opposed to changing it. The cpumask argument order thing is
> > a bit annoying, though.
> > 
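
(To make the two points concrete: the iterator styles and argument
orders under discussion look roughly like this.  The sbitmap and
cpumask calls follow the existing in-tree APIs; sparsemask_for_each()
is sketched from Steve's description, not quoted from the patch:)

        /* sbitmap: the loop body must be a separate callback */
        static bool visit(struct sbitmap *sb, unsigned int bit, void *data)
        {
                /* ... loop body ... */
                return true;            /* true means keep iterating */
        }

        sbitmap_for_each_set(&mask, visit, &ctx);

        /* cpumask-style inline body, as sparsemask proposes */
        sparsemask_for_each(&mask, bit) {
                /* ... loop body ... */
        }

        /* and the argument-order difference */
        cpumask_set_cpu(cpu, mask);     /* cpumask: bit first, mask last */
        sbitmap_set_bit(sb, bitnr);     /* sbitmap: mask first, bit last */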
> >> Also, pushing sparsemask into sbitmap would limit our freedom to evolve the
> >> type to meet the future needs of sched, as sbitmap has its own maintainer,
> >> and is used by drivers, so changes to its API and ABI will be frowned upon.
> > 
> > It's a generic data structure, so of course Jens and I have no problem
> > with changing it to meet more needs :) Personally, I'd prefer to only
> > have one datastructure for this, but I suppose it depends on whether
> > Peter and Ingo think the argument order is important enough.
> 
> The argument order is a minor thing, not a blocker to adoption, but efficiency 
> is important in the core scheduler code.  I actually did the work to add a
> for_each macro with an inline body to sbitmap, and converted my patches to use sbitmap.
> But then I noticed your very recent patch adding the cleared word to each cacheline, 
> which must be loaded and ANDed with each bitset word in the for_each traversal,
> adding more overhead which we don't need for the scheduler use case, on top of the
> extra indirection noted above. You might add more such things in the future (a
> "deferred set" word?) to support the needs of the block drivers who are the 
> intended clients of sbitmap.
> 
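
(For context, the overhead Steve describes: with the recent change,
each sbitmap cacheline carries a "cleared" word of batched lazy
clears, and the traversal masks it out of every word it visits.
Sketched here with the field layout simplified, not quoted from
sbitmap.h:)

        struct sbitmap_word {
                unsigned long word;     /* set bits */
                unsigned long cleared;  /* cleared but not yet folded in */
        } ____cacheline_aligned_in_smp;

        /* per word visited in the for_each traversal: */
        unsigned long live = READ_ONCE(map->word) & ~READ_ONCE(map->cleared);

        /* a scheduler-only mask needs just READ_ONCE(map->word) */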
> Your sbitmap is more than a simple bitmap abstraction, and for the scheduler we
> just need simple.  Therefore, I propose to trim sparsemask to the bare minimum,
> and move it to kernel/sched for use by sched only.  It was 400 lines, but
> will be 200, and 80 of those are comments.
> 
> If anyone objects, please speak now.

Yes, after the recent changes, I think it's reasonable to have a
separate implementation for sched.
