Message-Id: <1159372520.13500.7.camel@localhost.localdomain>
Date:	Wed, 27 Sep 2006 11:55:19 -0400
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Roland Kuhn <rkuhn@....physik.tu-muenchen.de>
Cc:	"Linux-Kernel@...r. Kernel. Org" <linux-kernel@...r.kernel.org>
Subject: Re: question about __raw_spin_lock()

Trying to catch up on LKML (only 5234 messages to go :P)

On Wed, 2006-09-06 at 10:25 +0200, Roland Kuhn wrote:
> Dear experts!
> 
> Trying to inform myself about the locking possibilities on i386, I  
> read linux/include/asm-i386/spinlock.h. I'm no expert on inline  
> assembly, so I'm asking myself what would happen on a very big system  
> if more than 128 threads are waiting on the same raw_spinlock_t.  
> Would the 129th locker erroneously succeed immediately?

Any machine today that has 129 processors had better be a 64-bit
machine, and this is not an issue on x86_64.  If you do have a 128+ CPU
i386 machine, then modify the kernel yourself :)
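
For anyone curious, here is roughly what that byte-based lock is doing,
written as a C sketch with GCC builtins instead of the actual inline
assembly (the type and function names below are mine, not the kernel's):

/*
 * Rough C equivalent of the old asm-i386 __raw_spin_lock, which is a
 * "lock; decb" followed by a "jns" test and a "rep; nop" wait loop on
 * a signed byte.  slock starts at 1 (unlocked); unlock stores 1 back.
 */
typedef struct { volatile signed char slock; } sketch_spinlock_t;

static void sketch_spin_lock(sketch_spinlock_t *lock)
{
	for (;;) {
		/* the "decb"/"jns": we own the lock if the decrement
		 * did not take the byte negative */
		if (__sync_fetch_and_sub(&lock->slock, 1) > 0)
			return;
		/* we drove it negative: spin until the owner stores 1 */
		while (lock->slock <= 0)
			asm volatile("rep; nop");	/* pause */
	}
}

static void sketch_spin_unlock(sketch_spinlock_t *lock)
{
	lock->slock = 1;
}

Since every CPU that loses the race subtracts one from that signed byte
(and unlock stores 1 rather than adding it back), enough simultaneous
waiters can wrap the byte past -128, at which point the "did it go
negative" test passes spuriously -- which is exactly the overflow you
are asking about, and why it only matters at CPU counts no i386 box
will ever have.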

> 
> As a side note: I was looking because I have to implement a simple  
> lock between processes in shared memory, and unfortunately I cannot  
> use the NPTL :-( SysV semaphores presumably are much too heavy to  
> protect simple linked list operations (no scanning of the list under  
> the lock, just inserting on one end and removing from the other).  
> Does anybody have a better idea than spinning in user space---with a  
> nanosleep now and then---or does this have problems I'm not aware of?

Userspace spinlocks are really bad.  Unless you pin every thread that
shares the lock to its own CPU, one thread can preempt the thread that
owns the lock and then spin waiting for the owner to release it, while
the owner is waiting to run on the very processor the spinning thread
is occupying.  In the RT world, this can spell death.
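
To make the failure mode concrete, the "spin in userspace with a
nanosleep now and then" lock you describe would look something like
this (hypothetical names, shown only to illustrate the problem):

#include <time.h>

/*
 * In the shared-memory case this flag would live in the shared
 * segment.  If the holder gets preempted, every waiter keeps cycling
 * through this loop while the holder waits for a CPU to run on.
 */
static volatile int locked;		/* 0 = free, 1 = held */

static void naive_lock(void)
{
	while (__sync_lock_test_and_set(&locked, 1)) {
		struct timespec ts = { 0, 1000 };	/* ~1us back-off */
		nanosleep(&ts, NULL);
	}
}

static void naive_unlock(void)
{
	__sync_lock_release(&locked);
}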

Look into using futexes.  They are fast userspace mutexes; I believe
the pthread mutex code is built on them.  They can also be made robust
and can support priority inheritance.  They stay entirely in userspace
unless there is contention, and only then do they go into the kernel.
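
Here is a bare-bones sketch of the idea for your shared-memory case.
It is a simplified form of the scheme in Ulrich Drepper's "Futexes Are
Tricky" paper, not the pthread implementation, and it leaves out the
robust and PI features; all names are mine:

#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

/* lock word states: 0 = free, 1 = held, 2 = held with waiters */

static int futex_wait(int *addr, int val)
{
	return syscall(SYS_futex, addr, FUTEX_WAIT, val, NULL, NULL, 0);
}

static int futex_wake(int *addr, int n)
{
	return syscall(SYS_futex, addr, FUTEX_WAKE, n, NULL, NULL, 0);
}

void shm_lock(int *f)		/* *f lives in the shared mapping */
{
	int c = __sync_val_compare_and_swap(f, 0, 1);

	while (c != 0) {
		/* mark "held with waiters", then sleep in the kernel */
		if (c == 2 || __sync_val_compare_and_swap(f, 1, 2) != 0)
			futex_wait(f, 2);
		c = __sync_val_compare_and_swap(f, 0, 2);
	}
}

void shm_unlock(int *f)
{
	if (__sync_fetch_and_sub(f, 1) != 1) {	/* somebody is waiting */
		*f = 0;
		futex_wake(f, 1);
	}
}

The uncontended lock and unlock are a single atomic op each; futex(2)
is only entered when a task actually has to sleep or be woken, and
since the lock word sits in the shared mapping, any process that maps
it can take the lock.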

-- Steve


