Date:   Mon, 3 Jun 2019 18:19:22 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Oleg Nesterov <oleg@...hat.com>
Cc:     Jens Axboe <axboe@...nel.dk>, Qian Cai <cai@....pw>,
        akpm@...ux-foundation.org, hch@....de, gkohli@...eaurora.org,
        mingo@...hat.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] block: fix a crash in do_task_dead()

On Mon, Jun 03, 2019 at 06:09:53PM +0200, Oleg Nesterov wrote:
> On 06/03, Peter Zijlstra wrote:
> >
> > It now also has concurrency on wakeup; but afaict that's harmless, we'll
> > get racing stores of p->state = TASK_RUNNING, much the same as if there
> > was a remote wakeup vs a wait-loop terminating early.
> >
> > I suppose the tracepoint consumers might have to deal with some
> > artifacts there, but that's their problem.
> 
> I guess you mean that trace_sched_waking/wakeup can be reported twice if
> try_to_wake_up(current) races with ttwu_remote(). And ttwu_stat().

Right, one local and one remote, and you get those events twice.
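
Roughly this interleaving (a sketch of the race, with the remote waker
assumed on another CPU; not from the patch):

	CPU0 (p == current)             CPU1 (remote waker)
	---------------------------     ---------------------------
	trace_sched_waking(p);          trace_sched_waking(p);
	p->state = TASK_RUNNING;        ttwu_remote():
	trace_sched_wakeup(p);            p->state = TASK_RUNNING;
	                                  trace_sched_wakeup(p);

Both stores write TASK_RUNNING, so the race itself is harmless; the
tracepoint consumers just see each event twice.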

> > > --- a/kernel/sched/core.c
> > > +++ b/kernel/sched/core.c
> > > @@ -1990,6 +1990,28 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
> > >  	unsigned long flags;
> > >  	int cpu, success = 0;
> > >  
> > > +	if (p == current) {
> > > +		/*
> > > +		 * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
> > > +		 * == smp_processor_id()'. Together this means we can special
> > > +		 * case the whole 'p->on_rq && ttwu_remote()' case below
> > > +		 * without taking any locks.
> > > +		 *
> > > +		 * In particular:
> > > +		 *  - we rely on Program-Order guarantees for all the ordering,
> > > +		 *  - we're serialized against set_special_state() by virtue of
> > > +		 *    it disabling IRQs (this allows not taking ->pi_lock).
> > > +		 */
> > > +		if (!(p->state & state))
> > > +			goto out;
> > > +
> > > +		success = 1;
> > > +		trace_sched_waking(p);
> > > +		p->state = TASK_RUNNING;
> > > +		trace_sched_woken(p);
>                 ^^^^^^^^^^^^^^^^^
> trace_sched_wakeup(p) ?

Uhm, yah.

> I see nothing wrong... but probably this is because I don't fully understand
> this change. In particular, I don't really understand who else can benefit from
> this optimization...

Pretty much every wait-loop where the wakeup happens from IRQ context
on the same CPU, before we've hit schedule().
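
For illustration, a minimal sketch of such a loop ('done' and the IRQ
handler are made up, not from the patch):

	/* waiter, running on the CPU that will take the IRQ */
	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (done)
			break;
		/*
		 * If the IRQ fires here and its handler does
		 * wake_up_process() on this task, try_to_wake_up()
		 * runs with p == current on this CPU and hits the
		 * new lockless fast path.
		 */
		schedule();
	}
	__set_current_state(TASK_RUNNING);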

Now, I've no idea if that's many, but I'd much rather keep this magic
inside try_to_wake_up() than spread it around.
