Message-ID: <1368146829.7373.146.camel@gandalf.local.home>
Date:	Thu, 09 May 2013 20:47:09 -0400
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Frederic Weisbecker <fweisbec@...il.com>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	"Paul E. McKenney" <paulmck@...ibm.com>,
	Ingo Molnar <mingo@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...ux.intel.com>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] x86/sched/dynamic-ticks: Call new
 schedule_irq_disable() for scheduling in entry_64.S

On Fri, 2013-05-10 at 02:22 +0200, Frederic Weisbecker wrote:
>  
> > @@ -654,6 +653,7 @@ sysret_check:
> >  	LOCKDEP_SYS_EXIT
> >  	DISABLE_INTERRUPTS(CLBR_NONE)
> >  	TRACE_IRQS_OFF
> > +sysret_check_irqsoff:
> >  	movl TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET),%edx
> >  	andl %edi,%edx
> >  	jnz  sysret_careful
> > @@ -675,12 +675,10 @@ sysret_check:
> >  sysret_careful:
> >  	bt $TIF_NEED_RESCHED,%edx
> >  	jnc sysret_signal
> > -	TRACE_IRQS_ON
> > -	ENABLE_INTERRUPTS(CLBR_NONE)
> >  	pushq_cfi %rdi
> > -	SCHEDULE_USER
> > +	call schedule_irq_disabled
> >  	popq_cfi %rdi
> > -	jmp sysret_check
> > +	jmp sysret_check_irqsoff
> 
> Don't we still want to call LOCKDEP_SYS_EXIT? The previously
> scheduled task might have taken some locks.

Sure, but would that still catch anything here? I thought LOCKDEP_SYS_EXIT
only cared about the current task holding locks when going into user
land, which it shouldn't here. As for your concern about the previous
task: the scheduler should have caught that, as you shouldn't hold any
spin locks across a schedule. And it doesn't matter if the previous task
held a mutex; the previous task is still in kernel space.
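
For reference, the check behind LOCKDEP_SYS_EXIT is basically just this
(a rough sketch from memory, not the exact source): it only fires when
*current* is about to return to user space with locks still held.

void lockdep_sys_exit(void)
{
	struct task_struct *curr = current;

	if (unlikely(curr->lockdep_depth)) {
		/* BUG: lock held when returning to user space! */
		printk("%s/%d is leaving the kernel with locks still held!\n",
		       curr->comm, curr->pid);
	}
}

So the previous task's locks are invisible to it anyway.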

>  
> > +/*
> > + * This is the entry point to schedule() from kernel irq-off context.
> > + * Note that this is called with and returns with irqs disabled. This
> > + * protects us against recursive calls from irq context.
> > + */
> > +asmlinkage void __sched schedule_irq_disabled(void)
> > +{
> > +	enum ctx_state prev_state;
> > +
> > +	prev_state = exception_enter();
> 
> So why have you turned user_enter() into exception_enter()?
> Is it to make it work anywhere, and not just for preemption at
> user resume time?

Pretty much. I just copied the way preempt_schedule_irq() did this.
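
For reference, exception_enter()/exception_exit() just save and restore
the context tracking state around a forced user_exit(), so they work no
matter what context we came from. Simplified (from memory, details may
differ):

static inline enum ctx_state exception_enter(void)
{
	enum ctx_state prev_ctx;

	prev_ctx = this_cpu_read(context_tracking.state);
	user_exit();			/* force kernel context */

	return prev_ctx;
}

static inline void exception_exit(enum ctx_state prev_ctx)
{
	if (prev_ctx == IN_USER)
		user_enter();		/* restore only if we came from user */
}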

> 
> > +
> > +	local_irq_enable();
> > +	__schedule();
> 
> Are you sure it's fine to call __schedule() instead of schedule()?
> I don't know much about the blk work handled there, but it may be
> worth considering.

Note, this isn't a call to schedule() made by normal kernel code. This
is very much like a preemption. In fact, it is a preemption, and my
first patch used preempt_schedule_irq() instead, but as that's built for
CONFIG_PREEMPT, it seemed a bit hacky to use it outside of that context.
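
For comparison, preempt_schedule_irq() is roughly the following (from
memory, approximate); schedule_irq_disabled() is the same dance minus
the PREEMPT_ACTIVE accounting:

asmlinkage void __sched preempt_schedule_irq(void)
{
	struct thread_info *ti = current_thread_info();
	enum ctx_state prev_state;

	/* Catch callers which need to be fixed */
	BUG_ON(ti->preempt_count || !irqs_disabled());

	prev_state = exception_enter();

	do {
		add_preempt_count(PREEMPT_ACTIVE);
		local_irq_enable();
		__schedule();
		local_irq_disable();
		sub_preempt_count(PREEMPT_ACTIVE);
	} while (need_resched());

	exception_exit(prev_state);
}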

This is only called when the task is being preempted, but because it's
going to user land, it's OK. The blk code there is more worried about
something that did some work, set itself to TASK_UNINTERRUPTIBLE, and
then called schedule() (think completions). That is, the blk code that
needs to be submitted would be the code that wakes up this task.

We should never be in anything but TASK_RUNNING when going into user
space.
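
In fact, the only thing schedule() adds over __schedule() is
sched_submit_work(), which bails out immediately for a TASK_RUNNING
task (state == 0). Roughly:

static inline void sched_submit_work(struct task_struct *tsk)
{
	if (!tsk->state || tsk_is_pi_blocked(tsk))
		return;
	/*
	 * If we are going to sleep and we have plugged IO queued,
	 * make sure to submit it to avoid deadlocks.
	 */
	if (blk_needs_flush_plug(tsk))
		blk_schedule_flush_plug(tsk);
}

asmlinkage void __sched schedule(void)
{
	struct task_struct *tsk = current;

	sched_submit_work(tsk);
	__schedule();
}

Since we're always TASK_RUNNING here, skipping it should be harmless.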

> 
> > +	local_irq_disable();
> > +
> > +	exception_exit(prev_state);
> > +}
> 
> The patch looks pretty good otherwise.

Thanks!

-- Steve


