Message-Id: <1233251741.4495.111.camel@laptop>
Date:	Thu, 29 Jan 2009 18:55:41 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Steven Rostedt <rostedt@...dmis.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Rusty Russell <rusty@...tcorp.com.au>, npiggin@...e.de,
	Ingo Molnar <mingo@...e.hu>,
	Thomas Gleixner <tglx@...utronix.de>,
	Arjan van de Ven <arjan@...radead.org>,
	jens.axboe@...cle.com
Subject: Re: [PATCH -v2] use per cpu data for single cpu ipi calls

On Thu, 2009-01-29 at 18:47 +0100, Peter Zijlstra wrote:
> On Thu, 2009-01-29 at 09:21 -0800, Linus Torvalds wrote:
> > On Thu, 29 Jan 2009, Steven Rostedt wrote:
> > > 
> > > The caller must wait until the LOCK bit is cleared before setting
> > > it. When it is cleared, there is no IPI function using it.
> > > A spinlock is used to synchronize the setting of the bit between
> > > callers. Since only one callee can run at a time, and it is the
> > > only thing that clears the bit, the IPI side does not need any
> > > locking.
> > 
> > That spinlock cannot be right. It is provably wrong for so many reasons..
> > 
> > Think about it. We're talking about a per-CPU lock, which already makes no 
> > sense: we're only locking against our own CPU, and we've already disabled 
> > preemption for totally unrelated reasons.
> > 
> > And the only way locking can make sense against our own CPU is if we lock 
> > against interrupts - but the lock isn't actually irq-safe, so if you are 
> > trying to lock against interrupts, you are (a) doing it wrong (you should 
> > disable interrupts, not use a spinlock) and (b) causing a deadlock if it 
> > ever happens.
> 
> 
> > +                       else {
> > +                               data = &per_cpu(csd_data, cpu);
> > +                               spin_lock(&per_cpu(csd_data_lock, cpu));
> > +                               while (data->flags & CSD_FLAG_LOCK)
> > +                                       cpu_relax();
> > +                               data->flags = CSD_FLAG_LOCK;
> > +                               spin_unlock(&per_cpu(csd_data_lock, cpu));
> > +                       }
> 
> I think your argument would hold if he did:
> 
>   data = &__get_cpu_var(csd_data);
> 
> But now he's actually grabbing the remote cpu's csd, and thus needs
> atomicity around that remote csd -- which two cpus could contend for.
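
Without that lock, two callers targeting the same remote cpu can both
observe CSD_FLAG_LOCK clear and claim the same csd. A sketch of the
interleaving (assuming both pick cpu 2; f0/i0 and f1/i1 are
hypothetical func/info pairs, not from the patch):

	CPU0					CPU1
	data = &per_cpu(csd_data, 2);		data = &per_cpu(csd_data, 2);
	/* sees LOCK clear */			/* sees LOCK clear */
	data->flags = CSD_FLAG_LOCK;		data->flags = CSD_FLAG_LOCK;
	data->func = f0; data->info = i0;	data->func = f1; data->info = i1;
	arch_send_call_function_single_ipi(2);	/* one call lost or corrupted */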

So the below should do

---
 kernel/smp.c |    6 +-----
 1 files changed, 1 insertions(+), 5 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index 9bce851..9eead6c 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -201,8 +201,6 @@ void generic_smp_call_function_single_interrupt(void)
 }
 
 static DEFINE_PER_CPU(struct call_single_data, csd_data);
-static DEFINE_PER_CPU(spinlock_t, csd_data_lock) =
-	__SPIN_LOCK_UNLOCKED(csd_lock);
 
 /*
  * smp_call_function_single - Run a function on a specific CPU
@@ -259,12 +257,10 @@ int smp_call_function_single(int cpu, void (*func) (void *info), void *info,
 			if (data)
 				data->flags = CSD_FLAG_ALLOC;
 			else {
-				data = &per_cpu(csd_data, cpu);
-				spin_lock(&per_cpu(csd_data_lock, cpu));
+				data = &per_cpu(csd_data, me);
 				while (data->flags & CSD_FLAG_LOCK)
 					cpu_relax();
 				data->flags = CSD_FLAG_LOCK;
-				spin_unlock(&per_cpu(csd_data_lock, cpu));
 			}
 		} else {
 			data = &d;

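For reference, the idea behind dropping the lock entirely: with the
local csd, get_cpu() in smp_call_function_single() has already disabled
preemption, so only one caller per cpu reaches this path, and per
Steven's description only the IPI callee ever clears CSD_FLAG_LOCK.
The same code as the hunk above, annotated:

	data = &per_cpu(csd_data, me);		/* me == this cpu, preempt off */
	while (data->flags & CSD_FLAG_LOCK)	/* previous call still in      */
		cpu_relax();			/* flight; callee clears LOCK  */
	data->flags = CSD_FLAG_LOCK;		/* sole setter on this cpu     */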
