Message-ID: <1343289850.26034.79.camel@twins>
Date:	Thu, 26 Jul 2012 10:04:10 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Tejun Heo <tj@...nel.org>
Cc:	Peter Boonstoppel <pboonstoppel@...dia.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Paul Gortmaker <paul.gortmaker@...driver.com>,
	Henrique de Moraes Holschuh <ibm-acpi@....eng.br>,
	Andy Walls <awalls@...metrocast.net>,
	Diwakar Tundlam <dtundlam@...dia.com>,
	Oleg Nesterov <oleg@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...e.hu>
Subject: Re: [PATCH 1/1] kthread: disable preemption during complete()

On Wed, 2012-07-25 at 15:40 -0700, Tejun Heo wrote:
> (cc'ing Oleg and Peter)

Right, if you're playing games with preemption, always add the rt and
sched folks.. added mingo and tglx.

> On Wed, Jul 25, 2012 at 03:35:32PM -0700, Peter Boonstoppel wrote:
> > After a kthread is created it sets its state to TASK_UNINTERRUPTIBLE
> > and signals the requester using complete(). However, since complete()
> > wakes up the requesting thread, the new kthread can be preempted
> > before it calls schedule(). The preemption does not remove the task
> > from the runqueue; only a direct call to schedule() does that.
> > 
> > This is a problem if, directly after kthread creation, you try to do
> > a kthread_bind(): it will block in HZ steps until the thread is off
> > the runqueue (see the sketch below).
> > 
> > This patch disables preemption across complete(): since we call
> > schedule() directly afterwards, the task actually goes to sleep in
> > TASK_UNINTERRUPTIBLE instead of being preempted first. This speeds
> > up kthread creation/binding during cpu hotplug significantly.

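For context on the "block in HZ steps" part: kthread_bind() ends up in
wait_task_inactive(), which, roughly speaking, sleeps a tick at a time
for as long as the target is still on a runqueue. A heavily simplified
sketch of that loop (paraphrased from the scheduler, not the actual
kernel code):

	/* wait until @p has really left the runqueue */
	while (p->on_rq) {
		ktime_t to = ktime_set(0, NSEC_PER_SEC / HZ);

		/* sleep for one tick, then check again */
		set_current_state(TASK_UNINTERRUPTIBLE);
		schedule_hrtimeout(&to, HRTIMER_MODE_REL);
	}

So every wakeup preemption that leaves the new kthread sitting on the
runqueue costs the binder up to a full tick here.
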
tglx has patches that make the kthread create/destroy stuff from hotplug
go away.. that seems like the better approach.


> > Signed-off-by: Peter Boonstoppel <pboonstoppel@...dia.com>
> > ---
> >  kernel/kthread.c |   11 +++++++++++
> >  1 files changed, 11 insertions(+), 0 deletions(-)
> > 
> > diff --git a/kernel/kthread.c b/kernel/kthread.c
> > index b579af5..757d8dd 100644
> > --- a/kernel/kthread.c
> > +++ b/kernel/kthread.c
> > @@ -16,6 +16,7 @@
> >  #include <linux/mutex.h>
> >  #include <linux/slab.h>
> >  #include <linux/freezer.h>
> > +#include <linux/preempt.h>
> >  #include <trace/events/sched.h>
> >  
> >  static DEFINE_SPINLOCK(kthread_create_lock);
> > @@ -113,7 +114,17 @@ static int kthread(void *_create)
> >  	/* OK, tell user we're spawned, wait for stop or wakeup */
> >  	__set_current_state(TASK_UNINTERRUPTIBLE);
> >  	create->result = current;
> > +
> > +	/*
> > +	 * Disable preemption so we enter TASK_UNINTERRUPTIBLE after
> > +	 * complete() instead of possibly being preempted. This speeds
> > +	 * up clients that do a kthread_bind() directly after
> > +	 * creation.
> > +	 */
> > +	preempt_disable();
> 
> Shouldn't this happen before setting current state to UNINTERRUPTIBLE?
> What prevents preemption happening right above preempt_disable()?

Nothing, but it also doesn't matter that much: you could get preempted
right before preempt_disable() and end up in the same place.

The main thing is avoiding the wakeup preemption from the complete()
because we're going to sleep right after anyway.
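
To make the mechanism explicit, the chain on a CONFIG_PREEMPT kernel
looks roughly like this (paraphrased call chain, not verbatim kernel
code):

	complete(&create->done);
	/*  -> spin_lock_irqsave(&x->wait.lock, flags)
	 *  -> try_to_wake_up(<requester>)
	 *       -> check_preempt_curr()    // may set TIF_NEED_RESCHED
	 *                                  // on the new kthread
	 *  -> spin_unlock_irqrestore(&x->wait.lock, flags)
	 *       -> preempt_enable()
	 *            -> preempt_schedule() // the wakeup preemption:
	 *                                  // scheduled out while still
	 *                                  // on the runqueue
	 */

With the preempt count held across complete() the unlock can't
reschedule, and preempt_enable_no_resched() then drops the count
without the resched check, leaving all of it to the schedule() that
follows.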

The comment doesn't really make that clear.
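
Something along these lines would spell it out (the same hunk with a
comment that states the intent; the comment wording is a suggestion,
not from the patch):

	__set_current_state(TASK_UNINTERRUPTIBLE);
	create->result = current;

	/*
	 * complete() wakes the requester, and on a preemptible kernel
	 * that wakeup may preempt us on the spot, leaving us on the
	 * runqueue in TASK_UNINTERRUPTIBLE. Suppress that wakeup
	 * preemption: we call schedule() right below anyway, and a
	 * kthread_bind() on the other side would otherwise block in
	 * HZ steps waiting for us to get off the runqueue.
	 */
	preempt_disable();
	complete(&create->done);
	preempt_enable_no_resched();

	schedule();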

> >  	complete(&create->done);
> > +	preempt_enable_no_resched();
> > +
> >  	schedule();

Other than that it seems fine, although I know tglx just loves new
preempt_enable_no_resched() sites ;-)