Date:   Thu, 24 Jun 2021 10:33:07 +0100
From:   Valentin Schneider <valentin.schneider@....com>
To:     Will Deacon <will@...nel.org>
Cc:     linux-kernel@...r.kernel.org,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>,
        Morten Rasmussen <morten.rasmussen@....com>,
        Qais Yousef <qais.yousef@....com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Quentin Perret <qperret@...gle.com>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        kernel-team@...roid.com
Subject: Re: [PATCH 2/2] sched: Plug race between SCA, hotplug and migration_cpu_stop()

Hi,

On 22/06/21 14:51, Will Deacon wrote:
> Hi Valentin,
>
> I've been looking at this on and off and I'm afraid I'm still not convinced,
> almost certainly due to my own ignorance, but hey :)
>

Nah, this thing is nasty; I still haven't regrown the hair I lost the last
time I stared at it.

> On Wed, Jun 02, 2021 at 08:43:31PM +0100, Valentin Schneider wrote:
>> Harumph...
>>
>> So something like all CPUs but one are running their take_cpu_down()
>> callback because one is still running migration_cpu_stop(), i.e.:
>>
>>   CPU0                   CPU1                ...             CPUN
>>   <stopper>              <stopper>                           <stopper>
>>     migration_cpu_stop()   take_cpu_down()@MULTI_STOP_PREPARE    take_cpu_down()@MULTI_STOP_PREPARE
>>
>> If CPU0 hits that else if (pending) condition, it'll queue a
>> migration_cpu_stop() elsewhere (say CPU1), then run the take_cpu_down()
>> callback which follows in its work->list.
>>
>> If the CPU being brought down is anything else than CPU1, it shouldn't
>> really matter. If it *is* CPU1, then I think we've got some guarantees.
>>
>> Namely, there's no (read: there shouldn't be any) way for a task to
>> still be on CPU1 at this point; per sched_cpu_wait_empty(),
>> migration-disabled tasks and pcpu kthreads must vacate the CPU before it
>> then (migrate_disable regions must be bounded, and pcpu kthreads are
>> expected to be moved away by their respective owners).
>
> I agree with that, but the stopper is still running on CPU1 and so
> migration_cpu_stop() could still queue work there after sched_cpu_wait_empty()
> has returned but before stop_machine_park(), afaict.
>

Right, the stopper should flush its work list before parking itself - in the
above scenario, migration_cpu_stop() is already on CPU1's work list when
CPU1's stopper finishes running take_cpu_down(), so that callback should run
before any parking happens.

> Actually, it looks like migration_cpu_stop() ignores the return value of
> stop_one_cpu_nowait(), so if the stopper thread has been parked I think
> we'll quietly do nothing there as well.
>

There are a handful of loosely related cogs (including those added by the
patch) that *I think* give us sufficient guarantees that this can't happen.
Let me try to structure this somehow, and please point out any inane bit.

1) Per the synchronize_rcu() in CPUHP_AP_ACTIVE.teardown(), the return
   value of is_cpu_allowed(p, cpu) is stable within a migration_cpu_stop()
   invocation for any p allowed on *active* CPUs.
2) Per the serialization of stopper callbacks, the return value of
   is_cpu_allowed(p, cpu) is stable within a migration_cpu_stop()
   invocation for any p allowed on *online* CPUs: migration_cpu_stop()
   has to complete before any CPU can do an online -> !online transition.

  (I would also add here that I don't expect any task to be migrated to an
  !active && online dying CPU via migration_cpu_stop(): the only tasks
  allowed on such CPUs are pcpu kthreads, which will be parked rather than
  migrated, and migrate_disabled() tasks, which, unsurprisingly, aren't
  migrated either - they just get to *stay* on their CPU longer than regular
  tasks.)

3) Per 1) + 2), the move_queued_task() call in migration_cpu_stop() cannot
   silently fail.


4) No task other than a hotplug thread or a stopper thread may run on a
   CPU beyond CPUHP_AP_SCHED_WAIT_EMPTY.teardown().
5) Per 4), no (queued) task p passed to migration_cpu_stop() can have
   task_cpu(p) be a dying CPU with hotplug state below
   CPUHP_AP_SCHED_WAIT_EMPTY.
6) Per 5), migration_cpu_stop() cannot invoke stop_one_cpu_nowait() on a
   CPU with a parked stopper.
7) Per 6), a migration_cpu_stop() self-queue cannot be silently discarded
   and will always end up being executed.

Per 3) + 7), we're good?
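
FWIW, if we wanted 7) to be checkable at runtime rather than implied,
something like the below would do it. Purely illustrative - it's not in the
patch, and I'm going from memory for the pending->arg / pending->stop_work
names - but stop_one_cpu_nowait() does return false when the target stopper
is unavailable, so a WARN there would at least turn a silently discarded
self-queue into a splat:

        WARN_ON_ONCE(!stop_one_cpu_nowait(task_cpu(p), migration_cpu_stop,
                                          &pending->arg, &pending->stop_work));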
