Message-ID: <20200720153855.GS10769@hirez.programming.kicks-ass.net>
Date:   Mon, 20 Jul 2020 17:38:55 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Oleg Nesterov <oleg@...hat.com>
Cc:     Jiri Slaby <jirislaby@...nel.org>,
        Christian Brauner <christian.brauner@...ntu.com>,
        christian@...uner.io, "Eric W. Biederman" <ebiederm@...ssion.com>,
        Linux kernel mailing list <linux-kernel@...r.kernel.org>,
        Mel Gorman <mgorman@...e.de>,
        Dave Jones <davej@...emonkey.org.uk>,
        Paul Gortmaker <paul.gortmaker@...driver.com>
Subject: Re: 5.8-rc*: kernel BUG at kernel/signal.c:1917

On Mon, Jul 20, 2020 at 05:35:15PM +0200, Oleg Nesterov wrote:
> On 07/20, Oleg Nesterov wrote:
> >
> > On 07/20, Peter Zijlstra wrote:
> > >
> > > --- a/kernel/sched/core.c
> > > +++ b/kernel/sched/core.c
> > > @@ -4193,9 +4193,6 @@ static void __sched notrace __schedule(bool preempt)
> > >  	local_irq_disable();
> > >  	rcu_note_context_switch(preempt);
> > >
> > > -	/* See deactivate_task() below. */
> > > -	prev_state = prev->state;
> > > -
> > >  	/*
> > >  	 * Make sure that signal_pending_state()->signal_pending() below
> > >  	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
> > > @@ -4223,7 +4220,8 @@ static void __sched notrace __schedule(bool preempt)
> > >  	 * We must re-load prev->state in case ttwu_remote() changed it
> > >  	 * before we acquired rq->lock.
> > >  	 */
> > > -	if (!preempt && prev_state && prev_state == prev->state) {
> > > +	prev_state = prev->state;
> > > +	if (!preempt && prev_state) {
> >
> > Heh ;) Peter, you know what? I did the same change and tried to understand
> > why it was wrong and what I had missed.
> >
> > Thanks, now I can relax. But my head hurts too; I'll probably try to re-read
> > this code and your other emails tomorrow.
> 
> Yes, I can no longer read this code today ;)
> 
> But now it seems to me that (in theory) we need READ_ONCE(prev->state) here,
> and probably WRITE_ONCE(on_rq) in deactivate_task(), to ensure the ctrl-dep?
> 
> Probably not, I got lost.

So, task_struct::state is declared volatile (we should probably 'fix'
that some day), which means it doesn't require READ_ONCE() -- in fact,
the volatile qualifier caused a bunch of re-reads in the old code, which
made the loadavg race more likely.
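
For context, the relevant declarations in the 5.8-era include/linux/sched.h
look roughly like this (a sketch from memory, not a verbatim quote):

	struct task_struct {
		...
		/* -1 unrunnable, 0 runnable, >0 stopped: */
		volatile long		state;
		...
		/* 0, TASK_ON_RQ_QUEUED (1) or TASK_ON_RQ_MIGRATING (2): */
		int			on_rq;
		...
	};

The volatile qualifier is what makes every plain access to ->state a fresh
load, so READ_ONCE() would be redundant there.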

->on_rq is only ever written as 0, 1 or 2, so there's no possible store-tearing.
But possibly, yes, WRITE_ONCE() would be nicer.
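
If we did add the annotation, it would be something along these lines
(just a sketch against the 5.8-era deactivate_task(), not a tested patch):

	void deactivate_task(struct rq *rq, struct task_struct *p, int flags)
	{
		/* 0 when going to sleep, TASK_ON_RQ_MIGRATING otherwise */
		WRITE_ONCE(p->on_rq,
			   (flags & DEQUEUE_SLEEP) ? 0 : TASK_ON_RQ_MIGRATING);

		dequeue_task(rq, p, flags);
	}

(TASK_ON_RQ_QUEUED is 1 and TASK_ON_RQ_MIGRATING is 2, hence the 0,1,2
above.)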
