Message-ID: <20120817202648.GA13304@linux.vnet.ibm.com>
Date:	Fri, 17 Aug 2012 13:26:48 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc:	linux-kernel@...r.kernel.org, mingo@...nel.org, pjt@...gle.com,
	tglx@...utronix.de, seto.hidetoshi@...fujitsu.com
Subject: Re: [PATCH RFC] sched: Make migration_call() safe for
 stop_machine()-free hotplug

On Thu, Aug 16, 2012 at 02:55:11PM -0700, Paul E. McKenney wrote:
> On Thu, Aug 16, 2012 at 12:17:10PM -0700, Paul E. McKenney wrote:
> 
> [ . . . ]
> 
> > Another attempted patch below.
> 
> But this time without the brain-dead "using smp_processor_id() in
> preemptible" bug.

And the below version passes moderate rcutorture testing.
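
For the record, that splat comes from CONFIG_DEBUG_PREEMPT: a bare
smp_processor_id() in preemptible code warns because the task could
migrate to another CPU right after the id is read.  A minimal sketch of
the usual pattern, nothing beyond <linux/smp.h> assumed:

	#include <linux/smp.h>

	static void touch_this_cpu_data(void)
	{
		int cpu;

		cpu = get_cpu();	/* smp_processor_id() with preemption disabled */
		/*
		 * The task cannot migrate until put_cpu(), so it is safe
		 * to index per-CPU state with "cpu" here, e.g. cpu_rq(cpu).
		 */
		put_cpu();		/* re-enable preemption */
	}

Holding a raw spinlock or running with interrupts disabled pins the task
just as well, which is what the CPU_DEAD hunk below relies on for the
counter transfer.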

							Thanx, Paul

> ------------------------------------------------------------------------
> 
> sched: Make migration_call() safe for stop_machine()-free hotplug
> 
> The CPU_DYING branch of migration_call() relies on the fact that
> CPU-hotplug offline operations currently use stop_machine(), which
> will no longer hold once stop_machine()-free hotplug is in place.
> This commit therefore moves that work (migrate_nr_uninterruptible()
> and calc_global_load_remove()) to the CPU_DEAD notifier, which runs
> once the outgoing CPU is quiescent.  This requires a small change to
> migrate_nr_uninterruptible() so that it folds counts into the CPU
> running the notifier instead of an arbitrarily chosen CPU.
> 
> Signed-off-by: Paul E. McKenney <paul.mckenney@...aro.org>
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index d325c4b..d09c4e0 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -5303,12 +5303,12 @@ void idle_task_exit(void)
>   * While a dead CPU has no uninterruptible tasks queued at this point,
>   * it might still have a nonzero ->nr_uninterruptible counter, because
>   * for performance reasons the counter is not strictly tracking tasks to
> - * their home CPUs. So we just add the counter to another CPU's counter,
> + * their home CPUs. So we just add the counter to the running CPU's counter,
>   * to keep the global sum constant after CPU-down:
>   */
>  static void migrate_nr_uninterruptible(struct rq *rq_src)
>  {
> -	struct rq *rq_dest = cpu_rq(cpumask_any(cpu_active_mask));
> +	struct rq *rq_dest = cpu_rq(smp_processor_id());
>  
>  	rq_dest->nr_uninterruptible += rq_src->nr_uninterruptible;
>  	rq_src->nr_uninterruptible = 0;
> @@ -5613,9 +5613,19 @@ migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu)
>  		migrate_tasks(cpu);
>  		BUG_ON(rq->nr_running != 1); /* the migration thread */
>  		raw_spin_unlock_irqrestore(&rq->lock, flags);
> +		break;
>  
> -		migrate_nr_uninterruptible(rq);
> -		calc_global_load_remove(rq);
> +	case CPU_DEAD:
> +		{
> +			struct rq *dest_rq = cpu_rq(smp_processor_id());
> +
> +			local_irq_save(flags);
> +			raw_spin_lock(&dest_rq->lock);
> +			migrate_nr_uninterruptible(rq);
> +			calc_global_load_remove(rq);
> +			raw_spin_unlock(&dest_rq->lock);
> +			local_irq_restore(flags);
> +		}
>  		break;
>  #endif
>  	}
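
For readability, this is roughly how the two touched spots read with the
hunks above applied (surrounding code elided):

	static void migrate_nr_uninterruptible(struct rq *rq_src)
	{
		/* Fold the dead CPU's count into the CPU running the notifier. */
		struct rq *rq_dest = cpu_rq(smp_processor_id());

		rq_dest->nr_uninterruptible += rq_src->nr_uninterruptible;
		rq_src->nr_uninterruptible = 0;
	}

	/* ... in migration_call(), following the CPU_DYING case ... */
	case CPU_DEAD:
		{
			struct rq *dest_rq = cpu_rq(smp_processor_id());

			/* Hold the destination rq lock with irqs off while folding counts. */
			local_irq_save(flags);
			raw_spin_lock(&dest_rq->lock);
			migrate_nr_uninterruptible(rq);
			calc_global_load_remove(rq);
			raw_spin_unlock(&dest_rq->lock);
			local_irq_restore(flags);
		}
		break;

Serializing on the destination runqueue's lock with interrupts off is
presumably what stands in for the exclusion that stop_machine() used to
provide for this bookkeeping.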

