Date:	Fri, 22 Aug 2008 11:47:39 +1000
From:	Nick Piggin <nickpiggin@...oo.com.au>
To:	Jeremy Fitzhardinge <jeremy@...p.org>
Cc:	Ingo Molnar <mingo@...e.hu>, Andi Kleen <andi@...stfloor.org>,
	"Pallipadi, Venkatesh" <venkatesh.pallipadi@...el.com>,
	Suresh Siddha <suresh.b.siddha@...el.com>,
	Jens Axboe <jens.axboe@...cle.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Rusty Russell <rusty@...tcorp.com.au>
Subject: Re: [PATCH 1/2] smp_call_function: don't use lock in call_function_data

On Friday 22 August 2008 10:29, Jeremy Fitzhardinge wrote:
> There's no need for a lock in call_function_data, since it's only used
> to decrement-and-test a counter.  Use an atomic instead.

Actually, I wanted to convert the cpu_clear operation to be non-atomic and
keep it under the lock; that way the spinlock would save one atomic
operation. I simply forgot about this after Jens took over the patchset.

We could get rid of that WARN_ON branch in 2.6.28, I expect, unless we
see it trigger.
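
Concretely, the variant I had in mind was roughly this (untested sketch,
reusing the old refs/lock fields; whether __clear_bit() on the mask bits
is the right non-atomic helper here is an open question):

		int refs;

		spin_lock(&data->lock);
		/* non-atomic clear is fine while data->lock is held */
		__clear_bit(cpu, cpus_addr(data->cpumask));
		WARN_ON(data->refs == 0);
		refs = --data->refs;
		spin_unlock(&data->lock);

		if (refs)
			continue;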

>
> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@...rix.com>
> ---
>  kernel/smp.c |   17 ++++-------------
>  1 file changed, 4 insertions(+), 13 deletions(-)
>
> ===================================================================
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -10,6 +10,7 @@
>  #include <linux/rcupdate.h>
>  #include <linux/rculist.h>
>  #include <linux/smp.h>
> +#include <asm/atomic.h>
>
>  bool __read_mostly smp_single_ipi_queue = false;
>
> @@ -37,8 +38,7 @@
>
>  struct call_function_data {
>  	struct call_single_data csd;
> -	spinlock_t lock;
> -	unsigned int refs;
> +	atomic_t refs;
>  	cpumask_t cpumask;
>  	struct rcu_head rcu_head;
>  };
> @@ -125,21 +125,13 @@
>  	 */
>  	rcu_read_lock();
>  	list_for_each_entry_rcu(data, &queue->list, csd.list) {
> -		int refs;
> -
>  		if (!cpu_isset(cpu, data->cpumask))
>  			continue;
>
>  		data->csd.func(data->csd.info);
>
> -		spin_lock(&data->lock);
>  		cpu_clear(cpu, data->cpumask);
> -		WARN_ON(data->refs == 0);
> -		data->refs--;
> -		refs = data->refs;
> -		spin_unlock(&data->lock);
> -
> -		if (refs)
> +		if (!atomic_dec_and_test(&data->refs))
>  			continue;
>
>  		spin_lock(&queue->lock);
> @@ -379,10 +371,9 @@
>  		slowpath = 1;
>  	}
>
> -	spin_lock_init(&data->lock);
>  	data->csd.func = func;
>  	data->csd.info = info;
> -	data->refs = num_cpus;
> +	atomic_set(&data->refs, num_cpus);
>  	data->cpumask = mask;
>
>  	spin_lock_irqsave(&queue->lock, flags);
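
For anyone skimming the hunks above: atomic_dec_and_test() decrements and
returns true only when the result is zero, so only the last CPU to clear
itself from the mask goes on to take queue->lock and unlink the entry.
Illustration of the semantics only (not kernel code from this patch):

	atomic_t refs = ATOMIC_INIT(2);

	atomic_dec_and_test(&refs);	/* 2 -> 1, returns false */
	atomic_dec_and_test(&refs);	/* 1 -> 0, returns true: last user */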