Message-ID: <7a3e87ac-db63-27c5-8490-2330637e59b1@oracle.com>
Date:   Tue, 27 Nov 2018 10:16:56 -0500
From:   Steven Sistare <steven.sistare@...cle.com>
To:     mingo@...hat.com, peterz@...radead.org
Cc:     subhra.mazumdar@...cle.com, dhaval.giani@...cle.com,
        daniel.m.jordan@...cle.com, pavel.tatashin@...rosoft.com,
        matt@...eblueprint.co.uk, umgwanakikbuti@...il.com,
        riel@...hat.com, jbacik@...com, juri.lelli@...hat.com,
        valentin.schneider@....com, vincent.guittot@...aro.org,
        quentin.perret@....com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 01/10] sched: Provide sparsemask, a reduced contention
 bitmap

On 11/9/2018 7:50 AM, Steve Sistare wrote:
> From: Steve Sistare <steve.sistare@...cle.com>
> 
> Provide struct sparsemask and functions to manipulate it.  A sparsemask is
> a sparse bitmap.  It reduces cache contention vs the usual bitmap when many
> threads concurrently set, clear, and visit elements, by reducing the number
> of significant bits per cacheline.  For each 64 byte chunk of the mask,
> only the first K bits of the first word are used, and the remaining bits
> are ignored, where K is a creation time parameter.  Thus a sparsemask that
> can represent a set of N elements is approximately (N/K * 64) bytes in
> size.
> 
> Signed-off-by: Steve Sistare <steven.sistare@...cle.com>
> ---
>  include/linux/sparsemask.h | 260 +++++++++++++++++++++++++++++++++++++++++++++
>  lib/Makefile               |   2 +-
>  lib/sparsemask.c           | 142 +++++++++++++++++++++++++
>  3 files changed, 403 insertions(+), 1 deletion(-)
>  create mode 100644 include/linux/sparsemask.h
>  create mode 100644 lib/sparsemask.c

Hi Peter and Ingo,
  I need your opinion: would you prefer that I keep the new sparsemask type, 
or fold it into the existing sbitmap type?  There is some overlap between the 
two, but mostly in trivial one-line functions.  The main differences are:

  * sparsemask defines iterators that allow an inline loop body, like cpumask,
  whereas the sbitmap iterator forces us to define a callback function for
  the body, which is awkward (see the sketch after this list).

  * sparsemask is slightly more efficient.  The struct and variable-length
  bitmap are allocated contiguously, whereas sbitmap stores an extra "depth"
  field per bitmap cacheline.

  * The order of arguments differs between the sparsemask and sbitmap
  accessors.  sparsemask mimics cpumask, which is used extensively in the
  sched code.

  * Much of the sbitmap code supports queueing, sleeping, and waking on bit
  allocation, which is N/A for scheduler load balancing.  However, we can
  call the basic functions, which do not use queueing.
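
To make the iterator and layout points concrete, here is a rough user-space
sketch, with illustrative names rather than the actual sparsemask or sbitmap
functions: a contiguous struct-plus-bitmap with K significant bits per
64-byte chunk, and a cpumask-style iterator whose loop body stays inline at
the call site.

#include <stdlib.h>

#define SMASK_CHUNK_BYTES       64      /* one cacheline per chunk */
#define SMASK_WORDS_PER_CHUNK   (SMASK_CHUNK_BYTES / sizeof(unsigned long))

struct sparsemask_sketch {
        int nelems;             /* number of representable elements */
        int density;            /* K: significant bits per chunk, 1..64 */
        unsigned long bits[];   /* bitmap allocated contiguously with struct */
};

static struct sparsemask_sketch *smask_alloc(int nelems, int density)
{
        int chunks = (nelems + density - 1) / density;
        struct sparsemask_sketch *m;

        /* One allocation: header plus roughly (nelems / density) * 64 bytes
         * of bitmap, e.g. nelems = 1024, density = 8 -> 128 chunks = 8192
         * bytes.  (Cacheline alignment handling is omitted in this sketch.) */
        m = calloc(1, sizeof(*m) + chunks * SMASK_CHUNK_BYTES);
        if (m) {
                m->nelems = nelems;
                m->density = density;
        }
        return m;
}

static void smask_set(struct sparsemask_sketch *m, int e)
{
        /* Only the first 'density' bits of each chunk's first word are used;
         * the rest of the 64-byte chunk is padding that keeps hot bits on
         * separate cachelines. */
        m->bits[(e / m->density) * SMASK_WORDS_PER_CHUNK] |= 1UL << (e % m->density);
}

static int smask_next(const struct sparsemask_sketch *m, int e)
{
        for (e++; e < m->nelems; e++)
                if (m->bits[(e / m->density) * SMASK_WORDS_PER_CHUNK] &
                    (1UL << (e % m->density)))
                        return e;
        return m->nelems;
}

/* cpumask-style iterator: the loop body stays inline at the call site. */
#define for_each_smask(e, m)                                            \
        for ((e) = smask_next((m), -1); (e) < (m)->nelems;              \
             (e) = smask_next((m), (e)))

A caller can then write for_each_smask(cpu, mask) { ... } directly, whereas
the sbitmap walk (sbitmap_for_each_set) takes a callback function plus a data
pointer, pushing the loop body out of line.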

I could add the sparsemask iterators to sbitmap (90 lines) and define a
thin layer to change the argument order to mimic cpumask (sketched below),
but that essentially recreates sparsemask.
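
If we did go that route, the thin layer would be little more than wrappers
that flip the argument order to the cpumask convention, along these lines
(the wrapper names here are illustrative, not from the patch):

#include <linux/sbitmap.h>

/* Hypothetical wrappers showing the shape of the argument-order adaptation
 * on top of the existing sbitmap helpers. */
static inline void smask_wrap_set(int elem, struct sbitmap *mask)
{
        sbitmap_set_bit(mask, elem);            /* sbitmap order: (mask, bit) */
}

static inline void smask_wrap_clear(int elem, struct sbitmap *mask)
{
        sbitmap_clear_bit(mask, elem);
}

static inline int smask_wrap_test(int elem, struct sbitmap *mask)
{
        return sbitmap_test_bit(mask, elem);
}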

Also, pushing sparsemask into sbitmap would limit our freedom to evolve the
type to meet the future needs of sched: sbitmap has its own maintainer and
is used by drivers, so changes to its API and ABI would be frowned upon.

FWIW, here is the amount of code involved:

include/linux/sbitmap.h
  250 lines basic operations
  284 lines for queueing
  ---
  534 lines total

lib/sbitmap.c
  201 lines basic operations
  380 lines for queueing
  ---
  581 lines total

include/linux/sparsemask.h
  260 lines total
  https://lkml.org/lkml/2018/11/9/1176

lib/sparsemask.c
  142 lines total
  https://lkml.org/lkml/2018/11/9/1176

- Steve
