Message-ID: <1524242934.5239.1.camel@gmx.de>
Date:   Fri, 20 Apr 2018 18:48:54 +0200
From:   Mike Galbraith <efault@....de>
To:     Peter Zijlstra <peterz@...radead.org>,
        Davidlohr Bueso <dave@...olabs.net>
Cc:     tglx@...utronix.de, mingo@...nel.org, longman@...hat.com,
        linux-kernel@...r.kernel.org, Davidlohr Bueso <dbues@...e.de>
Subject: Re: [PATCH 2/2] rtmutex: Reduce top-waiter blocking on a lock

On Fri, 2018-04-20 at 17:50 +0200, Peter Zijlstra wrote:
> On Tue, Apr 10, 2018 at 09:27:50AM -0700, Davidlohr Bueso wrote:
> > By applying well-known spin-on-lock-owner techniques, we can avoid the
> > blocking overhead while a task is trying to take the rtmutex. The idea
> > is that as long as the owner is running, there is a fair chance it will
> > release the lock soon, and thus a task trying to acquire the rtmutex is
> > better off spinning instead of blocking immediately after the fastpath.
> > This is similar to what we use for other locks, borrowed from -rt. The
> > main difference (due to the obvious realtime constraints) is that
> > top-waiter spinning must account for any new higher-priority waiter,
> > and therefore cannot steal the lock and avoid the pi-dance. As such,
> > there will be at most one spinning waiter on a contended lock.
> > 
> > Conditions to stop spinning and block are simple:
> > 
> > (1) Upon need_resched()
> > (2) Current lock owner blocks
> > (3) The top-waiter has changed while spinning.
> > 
> > The unlock side remains unchanged, as wake_up_process() can safely deal
> > with calls where the task is not actually blocked (TASK_NORMAL). As
> > such, the only cost is some unnecessary overhead dealing with the
> > wake_q, but it ensures that no wakeups are missed between the spinning
> > step and the unlock side.
> > 
> > This passes runs of the pi_stress program with increasing thread-group
> > counts.
> 
> Is this similar to what we have in RT (which, IIRC, has an optimistic
> spinning implementation as well)?

For the RT spinlock replacement, the top waiter can spin.
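
For reference, a minimal sketch of what such a spin-then-block slow path
could look like. Everything below is illustrative, not the actual patch:
the helper name rt_mutex_top_waiter_spin() is made up, the bare
owner->on_cpu read stands in for proper owner tracking, and the
RCU/locking details are omitted.

/*
 * Illustrative sketch only.  Spin while the lock owner is running on a
 * CPU, and fall back to blocking on any of the three conditions from
 * the changelog above.
 */
static bool rt_mutex_top_waiter_spin(struct rt_mutex *lock,
				     struct rt_mutex_waiter *waiter)
{
	struct task_struct *owner;

	for (;;) {
		/* (1) We were asked to reschedule; stop spinning. */
		if (need_resched())
			return false;

		/* (3) A higher-priority waiter displaced us; block. */
		if (rt_mutex_top_waiter(lock) != waiter)
			return false;

		owner = rt_mutex_owner(lock);
		if (!owner)
			return true;	/* released; go try to take it */

		/* (2) The owner itself went to sleep; blocking is better. */
		if (!owner->on_cpu)
			return false;

		cpu_relax();
	}
}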

> ISTR there being some contention over the exact semantics of (3) many
> years ago. IIRC the question was whether an equal-priority task was
> allowed to steal, because lock stealing can lead to fairness issues.
> One would expect two FIFO-50 tasks to be 'fair' wrt lock acquisition
> and not starve one another.
> 
> Therefore I think we only allowed higher-prio tasks to steal and kept
> FIFO order for equal-priority tasks.

Yup, lateral steal is expressly forbidden for RT classes.

+#define STEAL_NORMAL  0
+#define STEAL_LATERAL 1
+
+/*
+ * Decide whether @waiter may take @lock ahead of the current top
+ * waiter.  Lateral (equal-priority) steals are only allowed in
+ * STEAL_LATERAL mode, and never for RT tasks.
+ */
+static inline int
+rt_mutex_steal(struct rt_mutex *lock, struct rt_mutex_waiter *waiter, int mode)
+{
+       struct rt_mutex_waiter *top_waiter = rt_mutex_top_waiter(lock);
+
+       /* The top waiter itself, or any higher-priority waiter, may take it. */
+       if (waiter == top_waiter || rt_mutex_waiter_less(waiter, top_waiter))
+               return 1;
+
+       /*
+        * Note that RT tasks are excluded from lateral-steals
+        * to prevent the introduction of an unbounded latency.
+        */
+       if (mode == STEAL_NORMAL || rt_task(waiter->task))
+               return 0;
+
+       /* Lateral steal: allowed only at equal priority. */
+       return rt_mutex_waiter_equal(waiter, top_waiter);
+}
+
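
The acquire slow path then picks the mode by context: in -RT the
spinlock-replacement paths allow lateral steals, while regular rtmutexes
use STEAL_NORMAL. A hypothetical call site (the is_rtlock flag is an
assumption for illustration, not the exact -RT code):

	/* Hypothetical caller: rtlock paths may steal laterally. */
	if (!rt_mutex_steal(lock, waiter,
			    is_rtlock ? STEAL_LATERAL : STEAL_NORMAL))
		return 0;	/* cannot take the lock, keep waiting */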
