Date:	Mon, 15 Oct 2007 13:42:04 -0400 (EDT)
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Gregory Haskins <ghaskins@...ell.com>
cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	RT <linux-rt-users@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
	LKML <linux-kernel@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 1/7] RT: Add a per-cpu rt_overload indication



--
On Fri, 12 Oct 2007, Gregory Haskins wrote:

> The system currently evaluates all online CPUs whenever one or more enters
> an rt_overload condition.  This suffers from scalability limitations as
> the # of online CPUs increases.  So we introduce a cpumask to track
> exactly which CPUs need RT balancing.
>
> Signed-off-by: Gregory Haskins <ghaskins@...ell.com>
> CC: Peter W. Morreale <pmorreale@...ell.com>

Acked-by: Steven Rostedt <rostedt@...dmis.org>

This patch seems reasonable to me. I'll pull it into my queue unless
there are any NACKs.

-- Steve

> ---
>
>  kernel/sched.c |   12 +++++++++---
>  1 files changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 86ff36d..0a1ad0e 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -632,6 +632,7 @@ static inline struct rq *this_rq_lock(void)
>
>  #if defined(CONFIG_PREEMPT_RT) && defined(CONFIG_SMP)
>  static __cacheline_aligned_in_smp atomic_t rt_overload;
> +static cpumask_t rto_cpus;
>  #endif
>
>  static inline void inc_rt_tasks(struct task_struct *p, struct rq *rq)
> @@ -640,8 +641,11 @@ static inline void inc_rt_tasks(struct task_struct *p, struct rq *rq)
>  	if (rt_task(p)) {
>  		rq->rt_nr_running++;
>  # ifdef CONFIG_SMP
> -		if (rq->rt_nr_running == 2)
> +		if (rq->rt_nr_running == 2) {
> +			cpu_set(rq->cpu, rto_cpus);
> +			smp_wmb();
>  			atomic_inc(&rt_overload);
> +		}
>  # endif
>  	}
>  #endif
> @@ -654,8 +658,10 @@ static inline void dec_rt_tasks(struct task_struct *p, struct rq *rq)
>  		WARN_ON(!rq->rt_nr_running);
>  		rq->rt_nr_running--;
>  # ifdef CONFIG_SMP
> -		if (rq->rt_nr_running == 1)
> +		if (rq->rt_nr_running == 1) {
>  			atomic_dec(&rt_overload);
> +			cpu_clear(rq->cpu, rto_cpus);
> +		}
>  # endif
>  	}
>  #endif
> @@ -1624,7 +1630,7 @@ static void balance_rt_tasks(struct rq *this_rq, int this_cpu)
>  	 */
>  	next = pick_next_task(this_rq, this_rq->curr);
>
> -	for_each_online_cpu(cpu) {
> +	for_each_cpu_mask(cpu, rto_cpus) {
>  		if (cpu == this_cpu)
>  			continue;
>  		src_rq = cpu_rq(cpu);
>
>
>
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
