Date:   Tue,  3 Jan 2017 13:00:23 -0500
From:   Waiman Long <longman@...hat.com>
To:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        "H. Peter Anvin" <hpa@...or.com>
Cc:     linux-kernel@...r.kernel.org, Waiman Long <longman@...hat.com>
Subject: [RFC PATCH 0/7] locking/rtqspinlock: Realtime queued spinlocks

This patchset introduces a new variant of queued spinlocks - the
realtime queued spinlocks. The purpose of this new variant is to
support spinlocks in a realtime environment where high-priority RT
tasks should be allowed to complete their work ASAP. This means
keeping spinlock waiting time as short as possible.

Non-RT tasks will wait for spinlocks in the MCS waiting queue as
usual. RT tasks and interrupts will spin directly on the spinlocks
and use the priority value in the pending byte to arbitrate who gets
the lock first.

Patch 1 removes the unused spin_lock_bh_nested() API.

Patch 2 introduces the basic realtime queued spinlocks where the
pending byte is used for storing the priority of the highest priority
RT task that is waiting on the spinlock. All the RT tasks will spin
directly on the spinlock instead of waiting in the queue.

Patch 3 moves all interrupt context lock waiters to RT spinning.
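For example, the waiter priority could be derived as below. The
helper name is an assumption; in_interrupt(), rt_task() and
MAX_RT_PRIO are existing kernel symbols:

#include <linux/preempt.h>
#include <linux/sched.h>

/*
 * Interrupt handlers cannot block and outrank any task, so give
 * them a fixed priority above all RT tasks. RT tasks use their own
 * rt_priority (1..MAX_RT_PRIO-1); everyone else queues as usual.
 */
static inline unsigned int rt_waiter_prio(void)
{
	if (in_interrupt())
		return MAX_RT_PRIO;	/* static, above every RT task */
	return rt_task(current) ? current->rt_priority : 0;
}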

Patch 4 overrides the spin_lock_nested() call with special code to
enable RT lock spinning for nested spinlocks.
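One way such an override could look, assuming a new Kconfig symbol
and a helper name (both hypothetical, not from the patches):

#ifdef CONFIG_RT_QSPINLOCK		/* hypothetical Kconfig symbol */
#undef  spin_lock_nested
#define spin_lock_nested(lock, subclass)	\
	rt_spin_lock_nested(lock, subclass)	/* RT-spinning variant */
#endif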

Patch 5 handles priority boosting: a queued waiter periodically checks
its own priority, and if it has been boosted to RT it unqueues from the
waiting queue and does RT spinning instead.
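A rough sketch of that periodic check inside the MCS wait loop. The
unqueue helper and the check period are assumptions; rt_spin_lock()
refers to the sketch above:

#define PRIO_CHECK_PERIOD	(1UL << 10)	/* iterations, illustrative */

	unsigned long loops = 0;

	while (!READ_ONCE(node->locked)) {
		if (!(++loops % PRIO_CHECK_PERIOD) && rt_task(current)) {
			/*
			 * We were boosted to RT: leave the queue and
			 * spin on the lock word instead.
			 */
			rt_unqueue_node(lock, node);	/* hypothetical */
			rt_spin_lock(lock, current->rt_priority);
			return;
		}
		cpu_relax();
	}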

Patch 6 allows voluntary CPU preemption to happen when a CPU is
waiting for a spinlock.
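Presumably along these lines: a waiter that sees need_resched() while
it is still waiting (and thus holds nothing) can leave the queue and
yield. need_resched(), schedule() and the preempt_*() calls are real
kernel APIs; the loop shape and requeue helpers are assumptions:

	while (!READ_ONCE(node->locked)) {
		if (need_resched()) {
			rt_unqueue_node(lock, node);	/* hypothetical */
			preempt_enable();
			schedule();			/* yield the CPU */
			preempt_disable();
			rt_requeue_node(lock, node);	/* hypothetical */
		}
		cpu_relax();
	}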

Patch 7 enables event counts to be collected by the qspinlock stat
package so that we can monitor what has happened within the kernel.
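In the style of the existing per-cpu counters in qspinlock_stat.h, a
new event might be counted like this (the enum and event names below
are made up for illustration):

#include <linux/percpu.h>

enum rt_qlock_stats {
	rtqstat_rt_spin,	/* illustrative event: RT direct spin */
	rtqstat_num
};

static DEFINE_PER_CPU(unsigned long, rtqstats[rtqstat_num]);

static inline void rtqstat_inc(enum rt_qlock_stats stat)
{
	this_cpu_inc(rtqstats[stat]);
}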

With a locking microbenchmark running on a 2-socket 36-core E5-2699
v3 system, the elapsed times to complete a 2M-iteration locking loop
per non-RT thread were as follows:

   # of threads   qspinlock   rt-qspinlock  % change
   ------------   ---------   ------------  --------
        2           0.29s        1.97s       +580%
        3           1.46s        2.05s        +40%
        4           1.81s        2.38s        +31%
        5           2.36s        2.87s        +22%
        6           2.73s        3.58s        +31%
        7           3.17s        3.74s        +18%
        8           3.67s        4.70s        +28%
        9           3.89s        5.28s        +36%
       10           4.35s        6.58s        +51%

As expected, the RT qspinlock is slower than the non-RT qspinlock.

This patchset doesn't yet include any patch to modify the call sites
of spin_lock_nested() to pass in the outer lock of the nested spinlock
pair. That will be included in a later version of this patchset
once it is determined that RT qspinlocks are worth pursuing.

Only minimal testing to build and boot the patched kernel was
done. More extensive testing will be done with later versions of
this patchset.

Waiman Long (7):
  locking/spinlock: Remove the unused spin_lock_bh_nested API
  locking/rtqspinlock: Introduce realtime queued spinlocks
  locking/rtqspinlock: Use static RT priority when in interrupt context
  locking/rtqspinlock: Override spin_lock_nested with special RT variants
  locking/rtqspinlock: Handle priority boosting
  locking/rtqspinlock: Voluntarily yield CPU when need_sched()
  locking/rtqspinlock: Enable collection of event counts

 arch/x86/Kconfig                 |  18 +-
 include/linux/spinlock.h         |  43 +++-
 include/linux/spinlock_api_smp.h |   9 +-
 include/linux/spinlock_api_up.h  |   1 -
 kernel/Kconfig.locks             |   9 +
 kernel/locking/qspinlock.c       |  51 +++-
 kernel/locking/qspinlock_rt.h    | 543 +++++++++++++++++++++++++++++++++++++++
 kernel/locking/qspinlock_stat.h  |  81 +++++-
 kernel/locking/spinlock.c        |   8 -
 9 files changed, 721 insertions(+), 42 deletions(-)
 create mode 100644 kernel/locking/qspinlock_rt.h

-- 
1.8.3.1
