Message-ID: <20170531160636.4zbbzjjbhhcxep7w@rob-hp-laptop>
Date:   Wed, 31 May 2017 11:06:36 -0500
From:   Rob Herring <robh@...nel.org>
To:     Nicolas Pitre <nicolas.pitre@...aro.org>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        linux-kernel@...r.kernel.org
Subject: Re: [6/7] sched/rt: make it configurable

On Mon, May 29, 2017 at 05:03:01PM -0400, Nicolas Pitre wrote:
> On most small systems where user space is tightly controlled, the realtime
> scheduling class can often be dispensed with to reduce the kernel footprint.
> Let's make it configurable.
> 
> Signed-off-by: Nicolas Pitre <nico@...aro.org>
> ---

>  static inline int rt_prio(int prio)
>  {
> -	if (unlikely(prio < MAX_RT_PRIO))
> +	if (IS_ENABLED(CONFIG_SCHED_RT) && unlikely(prio < MAX_RT_PRIO))
>  		return 1;
>  	return 0;
>  }
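
For reference: IS_ENABLED() folds to a constant 0 or 1 at compile
time, so with CONFIG_SCHED_RT=n this function becomes "return 0;" and
the optimizer can discard anything guarded by it. A minimal userspace
sketch of the idiom, where MY_IS_ENABLED and the MY_* names are
hypothetical stand-ins, not the real kernel macros:

	#include <stdio.h>

	#define MY_CONFIG_SCHED_RT 0		/* flip to 1 to "enable" */
	#define MY_IS_ENABLED(opt) (opt)	/* stand-in for IS_ENABLED() */
	#define MY_MAX_RT_PRIO 100

	static int my_rt_prio(int prio)
	{
		/* whole branch is dead code when MY_CONFIG_SCHED_RT is 0 */
		if (MY_IS_ENABLED(MY_CONFIG_SCHED_RT) && prio < MY_MAX_RT_PRIO)
			return 1;
		return 0;
	}

	int main(void)
	{
		printf("%d\n", my_rt_prio(50));	/* prints 0: RT "compiled out" */
		return 0;
	}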

>  #ifdef CONFIG_PREEMPT_NOTIFIERS
>  	INIT_HLIST_HEAD(&p->preempt_notifiers);
> @@ -3716,13 +3720,18 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
>  		p->sched_class = &dl_sched_class;
>  	} else
>  #endif
> +#ifdef CONFIG_SCHED_RT
>  	if (rt_prio(prio)) {

This #ifdef is not necessary since rt_prio() is already conditioned on
CONFIG_SCHED_RT: it compiles to a constant 0 when the option is off, so
the compiler drops the whole branch (see the sketch after this hunk).

>  		if (oldprio < prio)
>  			queue_flag |= ENQUEUE_HEAD;
>  		p->sched_class = &rt_sched_class;
> -	} else {
> +	} else
> +#endif
> +	{
> +#ifdef CONFIG_SCHED_RT
>  		if (rt_prio(oldprio))
>  			p->rt.timeout = 0;
> +#endif
>  		p->sched_class = &fair_sched_class;
>  	}
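
With the #ifdefs dropped, the whole hunk could stay as it was upstream.
A sketch (assuming rt_sched_class and task_struct's rt entity remain
declared when CONFIG_SCHED_RT=n, so the dead references still compile):

	if (rt_prio(prio)) {
		/* compile-time false with CONFIG_SCHED_RT=n: branch dropped */
		if (oldprio < prio)
			queue_flag |= ENQUEUE_HEAD;
		p->sched_class = &rt_sched_class;
	} else {
		if (rt_prio(oldprio))
			p->rt.timeout = 0;
		p->sched_class = &fair_sched_class;
	}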
>  
> @@ -3997,6 +4006,23 @@ static int __sched_setscheduler(struct task_struct *p,
>  
>  	/* May grab non-irq protected spin_locks: */
>  	BUG_ON(in_interrupt());
> +
> +	/*
> +	 * When the RT scheduling class is disabled, let's make sure kernel threads
> +	 * wanting RT still get lowest nice value to give them highest available
> +	 * priority rather than simply returning an error. Obviously we can't test
> +	 * rt_policy() here as it is always false in that case.
> +	 */
> +	if (!IS_ENABLED(CONFIG_SCHED_RT) && !user &&
> +	    (policy == SCHED_FIFO || policy == SCHED_RR)) {
> +		static const struct sched_attr k_attr = {
> +			.sched_policy = SCHED_NORMAL,
> +			.sched_nice = MIN_NICE,
> +		};
> +		attr = &k_attr;
> +		policy = SCHED_NORMAL;
> +	}
> +
>  recheck:
>  	/* Double check policy once rq lock held: */
>  	if (policy < 0) {
> @@ -5726,7 +5752,9 @@ void __init sched_init_smp(void)
>  	sched_init_granularity();
>  	free_cpumask_var(non_isolated_cpus);
>  
> +#ifdef CONFIG_SCHED_RT
>  	init_sched_rt_class();
> +#endif

You can provide an empty inline stub for the !CONFIG_SCHED_RT case
instead.
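
Something like this (a sketch; assuming the stub sits next to the
existing declaration, e.g. in kernel/sched/sched.h):

	#ifdef CONFIG_SCHED_RT
	extern void init_sched_rt_class(void);
	#else
	static inline void init_sched_rt_class(void) { }	/* compiles away */
	#endif

Then the call site in sched_init_smp() needs no #ifdef at all.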

>  #ifdef CONFIG_SCHED_DL
>  	init_sched_dl_class();
>  #endif

And here in the earlier patch.

> @@ -5832,7 +5860,9 @@ void __init sched_init(void)
>  	}
>  #endif /* CONFIG_CPUMASK_OFFSTACK */
>  
> +#ifdef CONFIG_SCHED_RT
>  	init_rt_bandwidth(&def_rt_bandwidth, global_rt_period(), global_rt_runtime());
> +#endif

And so on...
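
The same stub trick covers calls with arguments, e.g. (a sketch; the
signature follows the existing three-argument init_rt_bandwidth() call
above, and it assumes def_rt_bandwidth stays declared so the dead
reference still parses):

	#ifdef CONFIG_SCHED_RT
	extern void init_rt_bandwidth(struct rt_bandwidth *rt_b,
				      u64 period, u64 runtime);
	#else
	static inline void init_rt_bandwidth(struct rt_bandwidth *rt_b,
					     u64 period, u64 runtime) { }
	#endif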

Rob
