Message-Id: <1248939385.6391.7.camel@twins>
Date:	Thu, 30 Jul 2009 09:36:25 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Gregory Haskins <ghaskins@...ell.com>
Cc:	linux-kernel@...r.kernel.org, rostedt@...dmis.org, mingo@...e.hu,
	akpm@...ux-foundation.org, tglx@...utronix.de
Subject: Re: [PATCH] sched: enhance the pre/post scheduling logic

On Wed, 2009-07-29 at 11:08 -0400, Gregory Haskins wrote:

> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 3ab08e4..df14cae 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1045,7 +1045,6 @@ struct sched_class {
>  			      struct rq *busiest, struct sched_domain *sd,
>  			      enum cpu_idle_type idle);
>  	void (*pre_schedule) (struct rq *this_rq, struct task_struct *task);
> -	int (*needs_post_schedule) (struct rq *this_rq);
>  	void (*post_schedule) (struct rq *this_rq);
>  	void (*task_wake_up) (struct rq *this_rq, struct task_struct *task);

awesome, one method fewer ;-)
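
For context: the needs_post_schedule() query is replaced by an
rq->post_schedule flag that the new post_schedule() helper below tests
and clears. A minimal sketch of how a class might arm it, assuming the
rt class sets the flag from its pre_schedule hook (the actual call site
is not in the quoted hunks, so the placement here is hypothetical):

/* Hypothetical placement, for illustration only */
static void pre_schedule_rt(struct rq *rq, struct task_struct *prev)
{
	/* try to pull RT tasks here if we lower this rq's prio */
	if (unlikely(rt_overloaded(rq)))
		pull_rt_task(rq);

	/* arm the flag instead of answering needs_post_schedule() later */
	rq->post_schedule = has_pushable_tasks(rq);
}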

> +#ifdef CONFIG_SMP
> +
> +/* assumes rq->lock is held */
> +static inline void pre_schedule(struct rq *rq, struct task_struct *prev)
> +{
> +	if (prev->sched_class->pre_schedule)
> +		prev->sched_class->pre_schedule(rq, prev);
> +}
> +
> +/* rq->lock is NOT held, but preemption is disabled */
> +static inline void post_schedule(struct rq *rq)
> +{
> +	if (rq->post_schedule) {
> +		unsigned long flags;
> +
> +		spin_lock_irqsave(&rq->lock, flags);
> +		if (rq->curr->sched_class->post_schedule)
> +			rq->curr->sched_class->post_schedule(rq);
> +		spin_unlock_irqrestore(&rq->lock, flags);
> +
> +		rq->post_schedule = 0;
> +	}
> +}
> +
> +#else
>  
> -	return post_schedule;
> +static inline void pre_schedule(struct rq *rq, struct task_struct *p)
> +{
> +}
> +
> +static inline void post_schedule(struct rq *rq)
> +{
>  }
>  
> +#endif

Wouldn't you sleep much safer at night if both versions were to check
those assumptions under SCHED_DEBUG? :-)
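
Concretely, something like this (a sketch of that suggestion, not code
from the patch; assert_spin_locked() and preemptible() are existing
kernel helpers, their placement here is the assumption):

static inline void pre_schedule(struct rq *rq, struct task_struct *prev)
{
#ifdef CONFIG_SCHED_DEBUG
	/* "assumes rq->lock is held" */
	assert_spin_locked(&rq->lock);
#endif
	if (prev->sched_class->pre_schedule)
		prev->sched_class->pre_schedule(rq, prev);
}

static inline void post_schedule(struct rq *rq)
{
#ifdef CONFIG_SCHED_DEBUG
	/* "rq->lock is NOT held, but preemption is disabled" */
	WARN_ON_ONCE(preemptible());
#endif
	/* ... body as in the patch ... */
}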

> @@ -2844,14 +2873,14 @@ asmlinkage void schedule_tail(struct task_struct *prev)
>  	__releases(rq->lock)
>  {
>  	struct rq *rq = this_rq();
> -	int post_schedule;
>  
> -	post_schedule = finish_task_switch(rq, prev);
> +	finish_task_switch(rq, prev);
>  
> -#ifdef CONFIG_SMP
> -	if (post_schedule)
> -		current->sched_class->post_schedule(rq);
> -#endif
> +	/*
> +	 * FIXME: do we need to worry about rq being invalidated by the
> +	 * task_switch?
> +	 */
> +	post_schedule(rq);
>  
>  #ifdef __ARCH_WANT_UNLOCKED_CTXSW
>  	/* In this case, finish_task_switch does not reenable preemption */

You know I really can't take patches with FIXMEs in ;-)

I think only switch_to() messes with your stacks; finish_task_switch()
should be safe, but double-check me.
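
For the schedule() path, where switch_to() does run and the pre-switch
stack is gone, the usual defence is to re-read the percpu pointers
after the switch instead of reusing values captured before it. A
sketch of what that fragment might look like inside schedule(), not
taken from the patch as posted:

	context_switch(rq, prev, next);	/* unlocks the rq */
	/*
	 * The context switch may have moved us to another stack (and
	 * possibly another CPU); refetch the *_rq() pointers before
	 * touching rq again.
	 */
	cpu = smp_processor_id();
	rq = cpu_rq(cpu);
	post_schedule(rq);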


OK, so I stuck the patch in anyway...