Message-ID: <4DCFF186.1070404@gmail.com>
Date:	Sun, 15 May 2011 18:30:14 +0300
From:	Török Edwin <edwintorok@...il.com>
To:	Ingo Molnar <mingo@...e.hu>, Michel Lespinasse <walken@...gle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	David Howells <dhowells@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Lucas De Marchi <lucas.demarchi@...fusion.mobi>,
	Randy Dunlap <randy.dunlap@...cle.com>,
	Gerald Schaefer <gerald.schaefer@...ibm.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>
CC:	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: rw_semaphore down_write a lot faster if wrapped by mutex ?!

On 05/15/2011 05:34 PM, Török Edwin wrote:
> Hi semaphore/mutex maintainers,
> 
> Looks like rw_semaphore's down_write is not as efficient as it could be.
> It can have a latency in the milliseconds range, but if I wrap it in yet
> another mutex then it becomes faster (100 us range).
> 
> One difference I noticed between the rwsem and mutex is that the mutex
> code does optimistic spinning. But adding something similar to the
> rw_sem code didn't improve timings (it made things worse).
> My guess is that this has something to do with excessive scheduler
> ping-pong (spurious wakeups, scheduling a task that won't be able to
> take the semaphore, etc.), but I'm not sure which tools are best to
> confirm or rule this out: perf sched, perf lock, or ftrace?
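
For reference, the wrapping described above amounts to something like this
(a minimal sketch of the idea only; the outer mutex and the wrapper names
are made up for illustration, not the actual change I tested):

#include <linux/mutex.h>
#include <linux/rwsem.h>

/* Hypothetical outer mutex: writers serialize on it before touching the
 * rwsem, so the rwsem itself sees at most one contending writer at a time. */
static DEFINE_MUTEX(mmap_write_outer);

static inline void wrapped_down_write(struct rw_semaphore *sem)
{
        mutex_lock(&mmap_write_outer);  /* other writers queue on the mutex */
        down_write(sem);                /* only one writer reaches the rwsem */
}

static inline void wrapped_up_write(struct rw_semaphore *sem)
{
        up_write(sem);
        mutex_unlock(&mmap_write_outer);
}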

Hmm, with the added mutex the reader side of mmap_sem only sees one
contending locker at a time (the rest of write side contention is hidden
by the mutex), so this might give the readers a better chance to run,
even in the face of heavy write-side contention.
The up_write will see that there are no more writers and will always wake
the readers, whereas without the mutex it would wake the other writer instead.

Perhaps rw_semaphore should have a flag to prefer waking readers over
writers, or take the number of waiting readers into account when choosing
between waking a reader and waking a writer.
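
Something like this, purely as a sketch of the decision (none of these
names or fields exist in the rwsem code today, they only illustrate the idea):

#include <linux/types.h>

/* Hypothetical per-semaphore wake policy. */
struct rwsem_wake_policy {
        bool prefer_readers;            /* proposed opt-in flag */
};

static bool wake_readers_first(const struct rwsem_wake_policy *p,
                               unsigned int readers_waiting,
                               unsigned int writers_waiting)
{
        if (p->prefer_readers && readers_waiting)
                return true;
        if (!writers_waiting)
                return readers_waiting > 0;
        /* otherwise weigh how many readers one woken writer would keep asleep */
        return readers_waiting > writers_waiting;
}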

Waking a writer will cause additional latency, because more readers will
go to sleep:
 latency = (enqueued_readers / enqueued_writers)
           * (avg_write_hold_time + context_switch_time)

Whereas waking (all) the readers will delay the writer only by:
 latency = avg_reader_hold_time + context_switch_time
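
To make that concrete with made-up numbers (not measurements from this
workload): with 8 readers and 2 writers enqueued, a 100 us average write
hold time, a 20 us average read hold time and a 5 us context switch,
waking a writer costs the waiting readers roughly
(8 / 2) * (100 + 5) = 420 us, while waking all the readers delays the
writer by only about 20 + 5 = 25 us.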

If the semaphore code could (approximately) measure these, then maybe it
could make a better choice for future lock requests based on (recent)
lock contention history.
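
A rough sketch of what that could look like (again entirely hypothetical;
the fields, the averaging and the decision rule below are not part of the
current rwsem code, they only restate the two estimates above):

#include <linux/types.h>

/* Hypothetical bookkeeping: exponentially weighted averages of hold times
 * (in ns) plus current queue depths, updated on each release and enqueue. */
struct rwsem_stats {
        u64 avg_write_hold_ns;
        u64 avg_read_hold_ns;
        u64 ctx_switch_ns;
        unsigned int readers_waiting;
        unsigned int writers_waiting;
};

static void ewma_update(u64 *avg, u64 sample)
{
        /* new = 7/8 * old + 1/8 * sample, cheap enough for a release path */
        *avg = (*avg - (*avg >> 3)) + (sample >> 3);
}

/* Compare the two latency estimates above and wake whichever side is
 * expected to be cheaper for everyone else. */
static bool should_wake_readers(const struct rwsem_stats *s)
{
        u64 wake_writer_cost, wake_readers_cost;

        if (!s->writers_waiting)
                return true;
        if (!s->readers_waiting)
                return false;

        wake_writer_cost = (s->readers_waiting *
                            (s->avg_write_hold_ns + s->ctx_switch_ns)) /
                           s->writers_waiting;
        wake_readers_cost = s->avg_read_hold_ns + s->ctx_switch_ns;

        return wake_readers_cost < wake_writer_cost;
}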

Best regards,
--Edwin
