Date:   Thu, 13 Apr 2017 09:48:40 -0400
From:   Steven Rostedt <rostedt@...dmis.org>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Ingo Molnar <mingo@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH v2] sched: Have do_idle() call __schedule() without
 enabling preemption

On Thu, 13 Apr 2017 10:44:53 +0200
Peter Zijlstra <peterz@...radead.org> wrote:

> On Wed, Apr 12, 2017 at 02:27:44PM -0400, Steven Rostedt wrote:
> > + * schedule_idle() is similar to schedule_preempt_disabled() except
> > + * that it never enables preemption.  
> 
> That's not right. The primary distinction is that it doesn't call
> sched_submit_work().

That has nothing to do with fixing synchronize_rcu_tasks(), which is
the entire point of my patch, so it is *not* the primary distinction.
The bug fix is keeping schedule from enabling preemption and calling
other functions in that window. Not calling sched_submit_work() is
just an added optimization benefit.

The point of the patch is to stop idle from enabling preemption, which
it doesn't need to do, since sched_submit_work() is a nop for it. I'll
update my change log to mention that.
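
For reference, the idle-only entry point boils down to a loop around
__schedule() that never opens a preemption window. Roughly (a sketch
based on the patch description, not the exact v2 hunk):

/*
 * Idle-only schedule entry: skip sched_submit_work() (a nop for the
 * idle task) and never enable preemption around __schedule().
 */
void __sched schedule_idle(void)
{
	do {
		/* false: not a preemption-triggered reschedule */
		__schedule(false);
	} while (need_resched());
}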

> 
> And because that function is a no-op for the idle thread, the idle
> thread can do without calling that and therefore avoid the preemption
> window.
> 
> You also need a few words about fake idle threads, search play_idle()
> callers.

Thanks, this is the first I've heard of these. I'll go look at them.

> 
> You could also make schedule_idle() more robust by adding a WARN for the
> blk_schedule_flush_plug() condition.

Why? Coming from do_idle(), the call through schedule_preempt_disabled()
never got that far in sched_submit_work():

static inline void sched_submit_work(struct task_struct *tsk)
{
	if (!tsk->state || tsk_is_pi_blocked(tsk))
		return;
	/*
	 * If we are going to sleep and we have plugged IO queued,
	 * make sure to submit it to avoid deadlocks.
	 */
	if (blk_needs_flush_plug(tsk))
		blk_schedule_flush_plug(tsk);
}

Isn't tsk->state always zero for the idle task?

A better check would be WARN_ON(tsk->state).
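
Concretely, that check would just sit at the top of schedule_idle(),
something like (sketch; current stands in for tsk here):

	/*
	 * Only the idle task, which is always TASK_RUNNING (state == 0),
	 * should get here; anything about to sleep would have needed
	 * sched_submit_work().
	 */
	WARN_ON(current->state);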


> 
> 
> Your Changelog is still entirely too long and rambling but fails to
> mention the fundamentally important stuff :-(


Remember, this patch is to fix a bug, not to optimize idle, although
the optimization is an added benefit. The bug I am fixing, which is
currently in linux-next, is that the idle thread breaks
synchronize_rcu_tasks() by calling schedule() with preemption enabled.
That's what my ramblings in the change log are about.
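
For comparison, the preemption window being removed is the one opened
by schedule_preempt_disabled(), which (paraphrasing kernel/sched/core.c
of that era) looks roughly like:

void __sched schedule_preempt_disabled(void)
{
	/* re-enable preemption without triggering a reschedule */
	sched_preempt_enable_no_resched();
	schedule();
	preempt_disable();
}

Between the enable and the re-disable, idle runs with preemption
enabled and goes through the full schedule() path; closing that window
from do_idle() is the actual fix.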

-- Steve
