Open Source and information security mailing list archives
Date: Thu, 5 Jan 2017 14:24:36 -0500
From: Waiman Long <longman@...hat.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>, "H. Peter Anvin" <hpa@...or.com>,
	linux-kernel@...r.kernel.org,
	Daniel Bristot de Oliveira <bristot@...hat.com>
Subject: Re: [RFC PATCH 0/7] locking/rtqspinlock: Realtime queued spinlocks

On 01/05/2017 01:50 PM, Steven Rostedt wrote:
> On Thu, 5 Jan 2017 12:07:21 -0500
> Waiman Long <longman@...hat.com> wrote:
>
>> I do make the assumption that spinlock critical sections are behaving
>> well enough. Apparently, that is not a valid assumption. I sent these
>> RFC patches out to see if it was an idea worth pursuing. If not, I can
>> drop these patches. Anyway, thanks for the feedback.
>
> Yes, the assumption is incorrect. There are places that can hold a spin
> lock for several hundreds of microseconds. If you can't preempt them,
> you'll never get below several hundreds of microseconds in latency.
>
> And it would be hard to pick and choose (we already do this to decide
> what can be a raw_spin_lock), because you need to audit all use cases
> of a spin_lock as well as all the locks taken while holding that
> spin_lock.
>
> -- Steve

Thanks for the information. It has come to my attention that a scalability problem may be present in the -RT kernel because of the longer wait times on the raw_spin_lock side as the number of CPUs increases. I will look into this some more to see if my patch set can help under those circumstances.

Cheers,
Longman