Date: Fri, 22 Mar 2019 10:01:53 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Waiman Long <longman@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
	Will Deacon <will.deacon@....com>, Thomas Gleixner <tglx@...utronix.de>,
	Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
	"linux-alpha@...r.kernel.org" <linux-alpha@...r.kernel.org>,
	linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
	linux-c6x-dev@...ux-c6x.org, uclinux-h8-devel@...ts.sourceforge.jp,
	linux-hexagon@...r.kernel.org, linux-ia64@...r.kernel.org,
	linux-m68k <linux-m68k@...ts.linux-m68k.org>, linux-mips@...r.kernel.org,
	nios2-dev@...ts.rocketboards.org, openrisc@...ts.librecores.org,
	linux-parisc@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
	linux-riscv@...ts.infradead.org, linux-s390@...r.kernel.org,
	Linux-sh list <linux-sh@...r.kernel.org>, sparclinux@...r.kernel.org,
	linux-um@...ts.infradead.org, linux-xtensa@...ux-xtensa.org,
	linux-arch <linux-arch@...r.kernel.org>,
	"the arch/x86 maintainers" <x86@...nel.org>, Arnd Bergmann <arnd@...db.de>,
	Borislav Petkov <bp@...en8.de>, "H. Peter Anvin" <hpa@...or.com>,
	Davidlohr Bueso <dave@...olabs.net>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Tim Chen <tim.c.chen@...ux.intel.com>
Subject: Re: [PATCH v5 1/3] locking/rwsem: Remove arch specific rwsem files

On Fri, Mar 22, 2019 at 7:30 AM Waiman Long <longman@...hat.com> wrote:
>
>  19 files changed, 133 insertions(+), 930 deletions(-)

Lovely. And it all looks sane to me.

So ack.

The only comment I have is about __down_read_trylock(), which probably
isn't critical enough to actually care about, but:

> +static inline int __down_read_trylock(struct rw_semaphore *sem)
> +{
> +	long tmp;
> +
> +	while ((tmp = atomic_long_read(&sem->count)) >= 0) {
> +		if (tmp == atomic_long_cmpxchg_acquire(&sem->count, tmp,
> +				tmp + RWSEM_ACTIVE_READ_BIAS)) {
> +			return 1;
> +		}
> +	}
> +	return 0;
> +}

So this seems to

 (a) read the line early (the whole cacheline in shared state issue)

 (b) read the line again unnecessarily in the while loop

Now, (a) might be explained by "well, maybe we do trylock even with
existing readers", although I continue to think that the case we
should optimize for is simply the uncontended one, where we don't even
have multiple readers.

But (b) just seems silly.

So I wonder if it shouldn't just be

	long tmp = 0;

	do {
		long new = atomic_long_cmpxchg_acquire(&sem->count, tmp,
				tmp + RWSEM_ACTIVE_READ_BIAS);
		if (likely(new == tmp))
			return 1;
		tmp = new;
	} while (tmp >= 0);
	return 0;

which would seem simpler and solve both issues. Hmm?

But honestly, I didn't check what our uses of down_read_trylock() look
like. We have more of them than I expected, and I _think_ the normal
case is the "nobody else holds the lock", but that's just a gut
feeling. Some of them _might_ be performance-critical.

There's the one on mmap_sem in the fault handling path, for example.
And yes, I'd expect the normal case to very much be "no other readers
or writers" for that one.

NOTE! The above code snippet is absolutely untested, and might be
completely wrong. Take it as a "something like this" rather than
anything else.

                 Linus
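To make the trade-off concrete, below is a minimal userspace sketch of the
two shapes being discussed. It uses C11 atomics rather than the kernel's
atomic_long_* API, the rw_semaphore layout and RWSEM_ACTIVE_READ_BIAS value
are simplified stand-ins, and the memory orderings only approximate
cmpxchg_acquire; it models the access pattern, not the real kernel code.
The patch's version issues a plain load first (pulling the cacheline in
shared state) and reloads on every retry, while the cmpxchg-first version
speculates on an uncontended count of 0 and reuses the value a failed
cmpxchg hands back:

#include <stdatomic.h>

/* Simplified stand-in; the real value is arch/config dependent. */
#define RWSEM_ACTIVE_READ_BIAS	1L

struct rw_semaphore {
	_Atomic long count;	/* >= 0 means no writer holds it */
};

/* Patch's shape: plain load first, cmpxchg second, reload every retry. */
static int trylock_load_then_cas(struct rw_semaphore *sem)
{
	long tmp;

	while ((tmp = atomic_load(&sem->count)) >= 0) {
		/* On failure C11 also writes the observed value into tmp,
		 * but the loop condition reloads anyway -- point (b). */
		if (atomic_compare_exchange_strong_explicit(&sem->count,
				&tmp, tmp + RWSEM_ACTIVE_READ_BIAS,
				memory_order_acquire, memory_order_relaxed))
			return 1;
	}
	return 0;
}

/* Suggested shape: speculate on count == 0 and go straight for the
 * cmpxchg, so the very first access already requests the cacheline in
 * exclusive state; a failed CAS tells us the current count for free. */
static int trylock_cas_first(struct rw_semaphore *sem)
{
	long tmp = 0;

	do {
		if (atomic_compare_exchange_strong_explicit(&sem->count,
				&tmp, tmp + RWSEM_ACTIVE_READ_BIAS,
				memory_order_acquire, memory_order_relaxed))
			return 1;
		/* tmp now holds the value the failed CAS observed. */
	} while (tmp >= 0);
	return 0;
}

In the uncontended case the second shape touches the line exactly once,
already asking for exclusive ownership, which is precisely the "no other
readers or writers" case the mmap_sem fault path is expected to hit.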