Open Source and information security mailing list archives
Date: Wed, 02 Dec 2015 13:02:44 -0500 (EST)
From: David Miller <davem@...emloft.net>
To: rweikusat@...ileactivedefense.com
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] af_unix: fix entry locking in unix_dgram_recvmsg

From: Rainer Weikusat <rweikusat@...ileactivedefense.com>
Date: Tue, 01 Dec 2015 17:02:33 +0000

> Rainer Weikusat <rw@...pelsaurus.mobileactivedefense.com> writes:
>
> [...]
>
>> Insofar as I understand the comment in this code block correctly,
>>
>> 	err = mutex_lock_interruptible(&u->readlock);
>> 	if (unlikely(err)) {
>> 		/* recvmsg() in non blocking mode is supposed to return -EAGAIN
>> 		 * sk_rcvtimeo is not honored by mutex_lock_interruptible()
>> 		 */
>> 		err = noblock ? -EAGAIN : -ERESTARTSYS;
>> 		goto out;
>> 	}
>>
>> setting a receive timeout for an AF_UNIX datagram socket also doesn't
>> work as intended because of this: in case of n readers with the same
>> timeout, the nth reader will end up blocking n times the timeout.
>
> Test program which confirms this. It starts four concurrent reads on
> the same socket with a receive timeout of 3s. This means the whole
> program should take a little more than 3s to execute, as each read
> should time out at about the same time. But it takes 12s instead, as
> the reads pile up on the readlock mutex and each then gets its own
> timeout once it can enter the receive loop.

I'm fine with your changes.

So with your patch, the "N * timeout" behavior, where N is the number of
queued reading threads, no longer occurs? Do they all now properly get
released at the appropriate timeout?
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html