Message-ID: <4643CD44.1090903@gmail.com>
Date:	Fri, 11 May 2007 09:56:20 +0800
From:	Li Yu <raise.sail@...il.com>
To:	Steven Rostedt <srostedt@...hat.com>
CC:	LKML <linux-kernel@...r.kernel.org>,
	Esben Nielsen <nielsen.esben@...glemail.com>
Subject: Re: Hi, I have one question about rt_mutex.

Steven Rostedt wrote:
> Li Yu wrote:
>>> Now since mutexes can be defined by user-land applications, we don't
>>> want a DOS type of application that nests large amounts of mutexes to
>>> create a large PI chain, and have the code holding spin locks while
>>> looking at a large amount of data. So to prevent this, the
>>> implementation not only implements a maximum lock depth, but also
>>> only holds at most two different locks at a time, as it walks the PI
>>> chain. More about this below.
>> After reading the implementation of rt_mutex_adjust_prio_chain(), I
>> found that we really do enforce a maximum lock depth (1024 by
>> default), but I cannot see any check for duplicates of the same lock.
>> Is this doc inconsistent with the code?
>
> Nope, the code and the doc are still the same.
>
> The thing that was most difficult in writing that document was finding
> a way to talk about the user locks (futex - fast user mutex) and the
> kernel locks (spin_locks) without confusing the two.  The max depth is
> in reference to the user futex, but the comment about the "at most two
> different locks" is referencing the kernel's spin_locks.
>
> I don't remember talking about looking for "lock duplication", by
> which I think you are referring to circular deadlocks. I didn't cover
> that in the document, and I believe I even mentioned that I would not
> cover the debug aspect of the code, which handles catching circular
> deadlocks.
>
> But back to the "no more than two kernel locks held". This is very
> important. Some PI implementations require all locks in the PI chain
> to have their internal locks held (as in spin_locks).  But letting
> user space determine the number of spin locks held can cause large
> latencies for the rest of the system.  So we designed a method that
> only needs to hold two internal spin_locks in the PI chain at a time.
> The kernel doesn't care if the user application is abusing itself
> (holding too many of its own user locks).  But the kernel does care if
> a user application can affect other unrelated applications.
>
> As Esben already mentioned, the PI chain walk even lets the task
> taking the user mutex schedule without holding any kernel locks.  This
> is very key. It keeps the latency down on setting up a PI chain, which
> can be very expensive.
>
> Note: Esben helped a lot in the development of the final design of
> rtmutex.c.
>
> -- Steve
First, thanks for such a good and timely explanation from you two gurus.

Er, I think the two locks you mentioned are task->pi_lock and
rt_mutex->wait_lock.
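
To check my understanding, here is a heavily simplified sketch of the
hand-over-hand pattern I read out of rt_mutex_adjust_prio_chain(). This
is not the real kernel code; the function name walk_pi_chain and the
elided steps are just my own placeholders:

/* Sketch only: walk the PI chain while never holding more than two
 * spin locks (task->pi_lock and lock->wait_lock) at any one time. */
static int walk_pi_chain(struct task_struct *task)
{
	struct rt_mutex_waiter *waiter;
	struct rt_mutex *lock;
	int depth = 0;

	for (;;) {
		if (++depth > max_lock_depth)
			return -EDEADLK;	/* cap user-controlled nesting */

		spin_lock(&task->pi_lock);		/* lock #1 */
		waiter = task->pi_blocked_on;
		if (!waiter) {
			spin_unlock(&task->pi_lock);
			return 0;			/* reached end of chain */
		}
		lock = waiter->lock;

		/* Trylock, so a failure just drops lock #1 and retries
		 * instead of risking an ABBA deadlock. */
		if (!spin_trylock(&lock->wait_lock)) {	/* lock #2 */
			spin_unlock(&task->pi_lock);
			continue;
		}

		/* ... requeue the waiter, boost the owner's priority ... */

		spin_unlock(&task->pi_lock);
		task = rt_mutex_owner(lock);		/* step to the owner */
		spin_unlock(&lock->wait_lock);
	}
}

If I read it right, this is why only the futex nesting needs the 1024
cap, while the kernel side never holds more than these two spin locks
no matter how deep the chain is.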

> The max depth is in reference to the user futex, but the comment
> about the "at most two different locks" is referencing the kernel's
> spin_locks.

This sentence makes my world clear from now on ;)

However, I found that sys_futex() does not use rt_mutex, so what is the
meaning of the "user futex" you mentioned? In fact, I have not found
any usage of rt_mutex in the kernel code at all. Or will some beautiful
story happen in the future?
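
Or did you mean the PI futex operations (FUTEX_LOCK_PI and
FUTEX_UNLOCK_PI)? If so, I may have missed that path. A minimal
user-side sketch of how I would call them, assuming a kernel and
headers with PI-futex support (glibc has no wrapper, so the raw
syscall is used):

#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t futex_word;	/* 0 = unlocked; on success the kernel
				 * stores the owner's TID here */

int main(void)
{
	/* On contention, FUTEX_LOCK_PI attaches the waiter to a kernel
	 * rt_mutex so the lock owner can be priority-boosted. */
	if (syscall(SYS_futex, &futex_word, FUTEX_LOCK_PI, 0, NULL,
		    NULL, 0) == -1)
		perror("FUTEX_LOCK_PI");

	/* ... critical section ... */

	if (syscall(SYS_futex, &futex_word, FUTEX_UNLOCK_PI, 0, NULL,
		    NULL, 0) == -1)
		perror("FUTEX_UNLOCK_PI");

	return 0;
}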

Good luck.

- Li Yu



