Message-ID: <20190904114929.GV2386@hirez.programming.kicks-ass.net>
Date: Wed, 4 Sep 2019 13:49:29 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc: paulmck <paulmck@...ux.ibm.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Oleg Nesterov <oleg@...hat.com>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
"Russell King, ARM Linux" <linux@...linux.org.uk>,
Chris Metcalf <cmetcalf@...hip.com>,
Chris Lameter <cl@...ux.com>, Kirill Tkhai <tkhai@...dex.ru>,
Mike Galbraith <efault@....de>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [RFC PATCH 1/2] Fix: sched/membarrier: p->mm->membarrier_state
racy load
On Wed, Sep 04, 2019 at 01:28:19PM +0200, Peter Zijlstra wrote:
> @@ -196,6 +198,17 @@ static int membarrier_register_global_expedited(void)
>  		 */
>  		smp_mb();
>  	} else {
> +		struct task_struct *g, *t;
> +
> +		read_lock(&tasklist_lock);
> +		do_each_thread(g, t) {
> +			if (t->mm == mm) {
> +				atomic_or(MEMBARRIER_STATE_GLOBAL_EXPEDITED,
> +					  &t->membarrier_state);
> +			}
> +		} while_each_thread(g, t);
> +		read_unlock(&tasklist_lock);
> +
>  		/*
>  		 * For multi-mm user threads, we need to ensure all
>  		 * future scheduler executions will observe the new

Arguably, because this is exposed to unprivileged users and is a potential
preemption latency issue, we could do it in 3 passes:

 - RCU, mark all found lacking, count
 - RCU, mark all found lacking, count
 - if that last count is non-zero, take tasklist_lock and do a final pass

That way, it becomes much harder to trigger the bad case.
Do we worry about that?
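
Something like the completely untested sketch below; it assumes the
per-task ->membarrier_state from the patch quoted above, and the helper
name is made up:

#include <linux/rcupdate.h>
#include <linux/sched/mm.h>
#include <linux/sched/signal.h>
#include <linux/sched/task.h>

static void membarrier_mark_global_expedited(struct mm_struct *mm)
{
	struct task_struct *g, *t;
	int pass, missed = 0;

	/*
	 * Two lockless passes under RCU: mark whatever we can see and
	 * count how many threads were still lacking the flag.
	 */
	for (pass = 0; pass < 2; pass++) {
		missed = 0;
		rcu_read_lock();
		for_each_process_thread(g, t) {
			if (t->mm != mm)
				continue;
			if (!(atomic_read(&t->membarrier_state) &
			      MEMBARRIER_STATE_GLOBAL_EXPEDITED)) {
				atomic_or(MEMBARRIER_STATE_GLOBAL_EXPEDITED,
					  &t->membarrier_state);
				missed++;
			}
		}
		rcu_read_unlock();
	}

	/*
	 * Only if the second pass still found unmarked threads do we pay
	 * for tasklist_lock, so concurrent fork()/exit() cannot slip
	 * threads past us.
	 */
	if (missed) {
		read_lock(&tasklist_lock);
		for_each_process_thread(g, t) {
			if (t->mm == mm)
				atomic_or(MEMBARRIER_STATE_GLOBAL_EXPEDITED,
					  &t->membarrier_state);
		}
		read_unlock(&tasklist_lock);
	}
}

That way an unprivileged user hammering the registration path only takes
tasklist_lock when the second RCU pass still found stragglers, which should
make the worst-case latency much harder to trigger.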