Date:   Fri, 14 Apr 2017 14:43:30 -0400
From:   Steven Rostedt <rostedt@...dmis.org>
To:     Alex Shi <alex.shi@...aro.org>
Cc:     peterz@...radead.org, mingo@...hat.com, corbet@....net,
        "open list:LOCKING PRIMITIVES" <linux-kernel@...r.kernel.org>,
        "open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
        Sebastian Siewior <bigeasy@...utronix.de>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 1/3] rtmutex: comments update

On Fri, 14 Apr 2017 16:52:11 +0800
Alex Shi <alex.shi@...aro.org> wrote:

> >> -Plist
> >> ------
> >> -
> >> -Before I go further and talk about how the PI chain is stored through lists
> >> -on both mutexes and processes, I'll explain the plist.  This is similar to
> >> -the struct list_head functionality that is already in the kernel.
> >> -The implementation of plist is out of scope for this document, but it is
> >> -very important to understand what it does.
> >> -
> >> -There are a few differences between plist and list, the most important one
> >> -being that plist is a priority sorted linked list.  This means that the
> >> -priorities of the plist are sorted, such that it takes O(1) to retrieve the
> >> -highest priority item in the list.  Obviously this is useful to store processes
> >> -based on their priorities.
> >> -
> >> -Another difference, which is important for implementation, is that, unlike
> >> -list, the head of the list is a different element than the nodes of a list.
> >> -So the head of the list is declared as struct plist_head and nodes that will
> >> -be added to the list are declared as struct plist_node.
> >> -
> >> +If the G process has highest priority in the chain, any right lock owners  
> > 
> > "any right lock owners" doesn't make sense. You mean owners to the
> > right side of the tree of G?  
> 
> Yes, how about this?
> +If the G process has highest priority in the chain, any rightside lock owners
> +in the tree branch need to increase its' priority as high as G.

If task G is the highest priority task in the chain, then all the tasks
up the chain (A and B in this example) must have their priorities
increased to that of G.
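
To make that concrete, here is a toy C model of the boosting (the
struct layout, pi_boost() and the A/B/G setup are made up for
illustration, not the kernel's actual rtmutex code, and it uses
"bigger number == higher priority" for readability):

    #include <stdio.h>

    struct lock;

    struct task {
            const char *name;
            int prio;                /* toy: bigger == higher priority */
            struct lock *blocked_on; /* lock we are waiting on, or NULL */
    };

    struct lock {
            struct task *owner;
    };

    /* Walk up the chain and boost every owner to at least the
     * waiter's priority. */
    static void pi_boost(struct task *waiter)
    {
            struct lock *l = waiter->blocked_on;

            while (l && l->owner) {
                    struct task *owner = l->owner;

                    if (owner->prio < waiter->prio) {
                            owner->prio = waiter->prio;
                            printf("%s boosted to %d\n",
                                   owner->name, owner->prio);
                    }
                    l = owner->blocked_on; /* owner may be blocked too */
            }
    }

    int main(void)
    {
            struct lock l1 = { 0 }, l2 = { 0 };
            struct task a = { "A", 10, NULL };  /* owns l1 */
            struct task b = { "B", 20, &l1 };   /* owns l2, waits on l1 */
            struct task g = { "G", 90, &l2 };   /* waits on l2 */

            l1.owner = &a;
            l2.owner = &b;

            pi_boost(&g); /* boosts B and then A up to G's priority */
            return 0;
    }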

> 
> >   
> >> +need to increase its' priority as high as G.
> >>  

[..]

> >   
> >> +
> >>  
> >>  Waking up in the loop
> >>  ---------------------
> >>  
> >> -The schedule can then wake up for a few reasons.
> >> -  1) we were given pending ownership of the mutex.
> >> -  2) we received a signal and was TASK_INTERRUPTIBLE
> >> -  3) we had a timeout and was TASK_INTERRUPTIBLE
> >> -
> >> -In any of these cases, we continue the loop and once again try to grab the
> >> -ownership of the mutex.  If we succeed, we exit the loop, otherwise we continue
> >> -and on signal and timeout, will exit the loop, or if we had the mutex stolen
> >> -we just simply add ourselves back on the lists and go back to sleep.
> >> +The schedule can then wake up for a few reasons, included:  
> > 
> > s/few/couple of/
> > 
> > s/, included//  
> 
> Thanks!
> 
> I've rewritten this section as follows; any comments? :)
> 
> Waking up in the loop
> ---------------------
> 
> The schedule can then wake up for a couple of reasons:

The task can then wake up for a couple of reasons:

>   1) The previous lock owner released the lock, and we are top_waiter now

  and the task is now the top_waiter

>   2) we received a signal or timeout
> 
> For the first reason, we could get the lock in acquisition retry and back to 
> TASK_RUNNING state.

Actually that's not quite true.

In the first case, the task will try again to acquire the lock. If it
does, then it will take itself off the waiters tree and set itself back
to the TASK_RUNNING state. If the lock was acquired by another task
before this task could get the lock, then it will go back to sleep and
wait to be woken again.

> For the second reason, if task is in TASK_INTERRUPTIBLE 
> state, we will give up the lock acquisition, and also back to TASK_RUNNING. 

The second case only applies to tasks that take the mutex in a mode
that allows them to wake up before getting the lock, either due to a
signal or a timeout (i.e. rt_mutex_timed_futex_lock()). When woken, the
task will try to take the lock again; if it succeeds, it will return
with the lock held, otherwise it will return with -EINTR if it was
woken by a signal, or -ETIMEDOUT if it timed out.
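
Roughly, the loop behaves like this toy C sketch (the helper names --
try_take(), got_signal(), timed_out(), sleep_until_woken() -- are
stand-ins I made up, not real kernel functions):

    #include <errno.h>
    #include <stdio.h>

    /* Stubbed environment: in this toy run the second attempt wins
     * the lock, and neither a signal nor a timeout occurs. */
    static int attempts;
    static int try_take(void)           { return ++attempts > 1; }
    static int got_signal(void)         { return 0; }
    static int timed_out(void)          { return 0; }
    static void sleep_until_woken(void) { /* stands in for schedule() */ }

    static int wait_loop(void)
    {
            for (;;) {
                    if (try_take())
                            return 0;          /* we were top_waiter and won */
                    if (got_signal())
                            return -EINTR;     /* interruptible, woken by signal */
                    if (timed_out())
                            return -ETIMEDOUT; /* timed variant expired */
                    sleep_until_woken();       /* lock was stolen: back to sleep */
            }
    }

    int main(void)
    {
            printf("wait_loop() = %d\n", wait_loop());
            return 0;
    }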

> Otherwise we will yield cpu and back to sleep.

Nuke the above sentence.

> 
> 
> >   
> >> +  1) we received a signal and was TASK_INTERRUPTIBLE
> >> +  2) we had a timeout and was TASK_INTERRUPTIBLE  
> > 
>

> >>  This document was originally written for 2.6.17-rc3-mm1
> >> +was updated on 4.11-rc4.
> >> diff --git a/Documentation/locking/rt-mutex.txt b/Documentation/locking/rt-mutex.txt
> >> index 243393d..1481f97 100644  
> > 
> > I'm not looking at the other document right now.  
> 
> Maybe it's better to split this document into another patch.

Yes please.

-- Steve
