Date:	Thu, 27 Jan 2011 17:22:40 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Milton Miller <miltonm@....com>
Cc:	Anton Blanchard <anton@...ba.org>, xiaoguangrong@...fujitsu.com,
	mingo@...e.hu, jaxboe@...ionio.com, npiggin@...il.com,
	rusty@...tcorp.com.au, akpm@...ux-foundation.org,
	torvalds@...ux-foundation.org, paulmck@...ux.vnet.ibm.com,
	benh@...nel.crashing.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] consolidate writes in smp_call_funtion_interrupt

On Tue, 2011-01-18 at 15:06 -0600, Milton Miller wrote:

> Index: common/kernel/smp.c
> ===================================================================
> --- common.orig/kernel/smp.c	2011-01-17 20:16:18.000000000 -0600
> +++ common/kernel/smp.c	2011-01-17 20:17:50.000000000 -0600
> @@ -193,6 +193,7 @@ void generic_smp_call_function_interrupt
>  	 */
>  	list_for_each_entry_rcu(data, &call_function.queue, csd.list) {
>  		int refs;
> +		void (*func) (void *info);
>  
>  		/*
>  		 * Since we walk the list without any locks, we might
> @@ -212,24 +213,32 @@ void generic_smp_call_function_interrupt
>  		if (atomic_read(&data->refs) == 0)
>  			continue;
>  
> +		func = data->csd.func;			/* for later warn */
>  		data->csd.func(data->csd.info);
>  
> +		/*
> +		 * If the cpu mask is not still set then it enabled interrupts,
> +		 * we took another smp interrupt, and executed the function
> +		 * twice on this cpu.  In theory that copy decremented refs.
> +		 */
> +		if (!cpumask_test_and_clear_cpu(cpu, data->cpumask)) {
> +			WARN(1, "%pS enabled interrupts and double executed\n",
> +			     func);
> +			continue;
> +		}
> +
>  		refs = atomic_dec_return(&data->refs);
>  		WARN_ON(refs < 0);
>  
>  		if (refs)
>  			continue;
>  
> +		WARN_ON(!cpumask_empty(data->cpumask));
> +
> +		raw_spin_lock(&call_function.lock);
> +		list_del_rcu(&data->csd.list);
> +		raw_spin_unlock(&call_function.lock);
> +
>  		csd_unlock(&data->csd);
>  	}
>  
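
As a rough userspace model of the cpumask/refs protocol this hunk relies on
(a sketch only: C11 atomics and fences stand in for the cpumask ops, atomic_t
and smp_wmb()/smp_rmb(), and every name below is invented for illustration,
it is not the kernel code):

/*
 * Toy model of the smp_call_function data element: one flag per cpu in
 * place of data->cpumask, one counter in place of data->refs.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

struct call_data {
	void (*func)(void *info);
	void *info;
	atomic_bool cpumask[NR_CPUS];	/* stands in for data->cpumask */
	atomic_int refs;		/* stands in for data->refs */
};

/* Sender side: set the cpumask bits first, then let refs become non-zero. */
static void publish(struct call_data *d, int ncpus)
{
	for (int cpu = 0; cpu < ncpus; cpu++)
		atomic_store_explicit(&d->cpumask[cpu], true,
				      memory_order_relaxed);
	atomic_thread_fence(memory_order_release);	/* models smp_wmb() */
	atomic_store_explicit(&d->refs, ncpus, memory_order_relaxed);
	/* in the kernel, list_add_rcu() under call_function.lock follows */
}

/* Receiver side, shaped like the loop body above, including the new check. */
static void ipi_handler(struct call_data *d, int cpu)
{
	void (*func)(void *info);

	if (!atomic_load_explicit(&d->cpumask[cpu], memory_order_relaxed))
		return;
	atomic_thread_fence(memory_order_acquire);	/* models smp_rmb() */
	if (atomic_load_explicit(&d->refs, memory_order_relaxed) == 0)
		return;

	func = d->func;				/* saved for the later warn */
	d->func(d->info);

	/*
	 * Models cpumask_test_and_clear_cpu(): if our bit is already gone,
	 * the callback enabled interrupts, we took another smp interrupt
	 * and already ran (and dropped a ref) once.
	 */
	if (!atomic_exchange_explicit(&d->cpumask[cpu], false,
				      memory_order_acq_rel)) {
		fprintf(stderr, "%p enabled interrupts and double executed\n",
			(void *)func);
		return;
	}

	if (atomic_fetch_sub_explicit(&d->refs, 1, memory_order_acq_rel) == 1)
		printf("last ref dropped; the kernel would list_del_rcu() "
		       "under the lock and csd_unlock() here\n");
}

static void say_hi(void *info)
{
	printf("callback: %s\n", (const char *)info);
}

int main(void)
{
	static struct call_data d = { .func = say_hi, .info = "hi" };

	publish(&d, 2);
	ipi_handler(&d, 0);
	ipi_handler(&d, 1);
	ipi_handler(&d, 0);	/* bit already clear: skipped up front */
	return 0;
}

The repeated ipi_handler(&d, 0) just skips because its bit is already clear;
the "double executed" warning only fires when the bit disappears between
running the callback and the test-and-clear, i.e. the nested-interrupt case
the hunk is guarding against.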


So after this we have (left column: generic_smp_call_function_interrupt()
on a cpu receiving the IPI; right column: the sender, smp_call_function_many(),
publishing the element):

list_for_each_entry_rcu()
rbd (read_barrier_depends, from the rcu list walk)
  !->cpumask			  ->cpumask = 
rmb				wmb
  !->refs			  ->refs =
  ->func()			wmb
mb				  list_add_rcu()
  ->refs--
  if (!->refs)
    list_del_rcu()

So even if we see it as an old ref: when we see a valid cpumask bit and a
valid ref, we execute the function, clear our cpumask bit, decrement the
ref and delete the entry, even though it might not yet be added?


(old-ref cpu)			(cpu reusing the element)
				->cpumask =
  if (!->cpumask)
				->refs = 
  if (!->refs)

  ->func()
  ->refs--
  if (!->refs)
    list_del_rcu()
				list_add_rcu()
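
Spelled out as a single-threaded replay of that interleaving (toy model again:
one cpumask bit, one ref, C11 atomics, invented names; barriers are omitted
because one thread replays both columns):

#include <assert.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_bool my_cpumask_bit;	/* our bit in data->cpumask */
static atomic_int refs;			/* data->refs */

int main(void)
{
	/* we (the old-ref cpu) still hold a pointer to the element from an
	 * earlier pass over the rcu list */

	atomic_store(&my_cpumask_bit, 1);	/* other cpu: ->cpumask =     */
	assert(atomic_load(&my_cpumask_bit));	/* us: if (!->cpumask) passes */

	atomic_store(&refs, 1);			/* other cpu: ->refs =        */
	assert(atomic_load(&refs) != 0);	/* us: if (!->refs) passes    */

	/* us: ->func() -- we run the new call's function through the old
	 * reference, before the element is back on the queue */

	atomic_store(&my_cpumask_bit, 0);	/* us: clear our cpumask bit  */
	if (atomic_fetch_sub(&refs, 1) == 1)	/* us: ->refs-- reaches zero  */
		printf("would list_del_rcu() an element that has not been "
		       "list_add_rcu()'d for this use yet\n");

	/* other cpu: list_add_rcu() of an element whose cpumask and refs
	 * are already back to zero */
	return 0;
}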

Then what happens?