Message-ID: <5790661b-869c-68bd-86fa-62f580e84be1@uwaterloo.ca>
Date:   Wed, 21 Jul 2021 15:55:59 -0400
From:   Thierry Delisle <tdelisle@...terloo.ca>
To:     Peter Oskolkov <posk@...k.io>
CC:     Peter Oskolkov <posk@...gle.com>, Andrei Vagin <avagin@...gle.com>,
        Ben Segall <bsegall@...gle.com>, Jann Horn <jannh@...gle.com>,
        Jim Newsome <jnewsome@...project.org>,
        Joel Fernandes <joel@...lfernandes.org>,
        <linux-api@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Paul Turner <pjt@...gle.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Peter Buhr <pabuhr@...terloo.ca>
Subject: Re: [RFC PATCH 4/4 v0.3] sched/umcg: RFC: implement UMCG syscalls

 > Yes, this is naturally supported in the current patchset on the kernel
 > side, and is supported in libumcg (to be posted, later when the kernel
 > side is settled); internally at Google, some applications use
 > different "groups" of workers/servers per NUMA node.

Good to know. Cforall has the same feature, where we refer to these groups
as "clusters". https://doi.org/10.1002/spe.2925 (Section 7)

 > Please see the attached atomic_stack.h file - I use it in my tests,
 > things seem to be working. Specifically, atomic_stack_gc does the
 > cleanup. For the kernel side of things, see the third patch in this
 > patchset.

I don't believe the atomic_stack_gc function is robust enough to offer any
guarantee. I believe that once a node is unlinked, its next pointer should
be reset immediately, e.g., by writing 0xDEADDEADDEADDEAD. Do your tests
still work if the next pointer is reset immediately on reclaimed nodes?

As far as I can tell, the reclaimed nodes in atomic_stack_gc still contain
valid next fields. I believe there is a race which can lead to the kernel
reading reclaimed nodes. If atomic_stack_gc does not reset these fields,
the bug could go unnoticed in testing.
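
To make the idea concrete, here is a minimal sketch of the poisoning I have
in mind; the node layout and the node_reclaim() helper are made up for
illustration, not taken from atomic_stack.h:

#include <stdatomic.h>
#include <stdint.h>

#define POISON_NEXT ((uintptr_t)0xDEADDEADDEADDEADull)

struct node {
        _Atomic(uintptr_t) next;   /* next node in the stack, or 0 */
};

/* Called for every node removed during gc: poison the link immediately
 * so any late reader walks into an obviously invalid pointer instead of
 * silently traversing reclaimed memory. */
static inline void node_reclaim(struct node *n)
{
        atomic_store_explicit(&n->next, POISON_NEXT, memory_order_release);
}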

A more aggressive test is to put each node in a different page and remove
read permissions when the node is reclaimed. I'm not sure this approach
applies when the kernel is the one doing the reading.
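
For illustration only, a sketch of the page-per-node approach, assuming a
hypothetical node type and a plain userspace allocation (this obviously
does not carry over directly to kernel-side accesses):

#include <stdatomic.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

struct node { _Atomic(uintptr_t) next; };

/* One node per page so read permission can be dropped per node. */
static struct node *node_alloc(void)
{
        void *p = mmap(NULL, (size_t)sysconf(_SC_PAGESIZE),
                       PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return p == MAP_FAILED ? NULL : p;
}

/* On reclaim, make any late userspace read of the node fault. */
static void node_reclaim_hard(struct node *n)
{
        mprotect(n, (size_t)sysconf(_SC_PAGESIZE), PROT_NONE);
}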


 > To keep the kernel side light and simple. To also protect the kernel
 > from spinning if userspace misbehaves. Basically, the overall approach
 > is to delegate most of the work to the userspace, and keep the bare
 > minimum in the kernel.

I'll try to keep this in mind then.


After some thought, I'll suggest a scheme that should significantly reduce
complexity. As I understand it, the idle_workers_ptr nodes are linked to
form one or more Multi-Producer Single-Consumer queues. If each queue head
is augmented with a single volatile tid-sized word, servers that want to go
idle can simply write their id into that word. When the kernel adds
something to the idle_workers_ptr list, it simply does an XCHG with 0 or
any INVALID_TID. This scheme only supports one server blocking per
idle_workers_ptr list. To keep the "kernel side light and simple", you can
simply require that any extra servers synchronize among each other to pick
which server is responsible for waiting on behalf of everyone.
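
As a sketch of what I mean, using made-up names and userspace C11 atomics
rather than the actual kernel primitives:

#include <stdatomic.h>
#include <stdint.h>

#define INVALID_TID 0u

struct idle_workers_head {
        _Atomic(uintptr_t) first;       /* MPSC list of idle workers */
        _Atomic(uint32_t)  idle_server; /* tid of the one parked server */
};

/* Userspace, server side: advertise ourselves, then block. */
static void server_park(struct idle_workers_head *h, uint32_t my_tid)
{
        atomic_store_explicit(&h->idle_server, my_tid, memory_order_release);
        /* ... block until the kernel wakes us ... */
}

/* Kernel side (sketched in the same notation): after appending a worker
 * to the list, claim and wake at most one parked server. */
static void kernel_notify(struct idle_workers_head *h)
{
        uint32_t tid = atomic_exchange_explicit(&h->idle_server, INVALID_TID,
                                                memory_order_acq_rel);
        if (tid != INVALID_TID) {
                /* wake_server(tid);  -- hypothetical wakeup */
        }
}

The XCHG both claims and clears the word, so at most one server is woken
per notification, which is what limits the scheme to one blocked server
per idle_workers_ptr list.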

