Date: Wed, 4 Jul 2018 16:08:33 +0200
From: Manfred Spraul <manfred@...orfullife.com>
To: Dmitry Vyukov <dvyukov@...gle.com>
Cc: Davidlohr Bueso <dave@...olabs.net>,
	syzbot <syzbot+2827ef6b3385deb07eaf@...kaller.appspotmail.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	"Eric W. Biederman" <ebiederm@...ssion.com>,
	Kees Cook <keescook@...omium.org>,
	LKML <linux-kernel@...r.kernel.org>,
	linux@...inikbrodowski.net,
	syzkaller-bugs <syzkaller-bugs@...glegroups.com>,
	Al Viro <viro@...iv.linux.org.uk>,
	Eric Dumazet <edumazet@...gle.com>
Subject: Re: ipc/msg: zalloc struct msg_queue when creating a new msq

Hello Dmitry,

On 07/04/2018 12:03 PM, Dmitry Vyukov wrote:
> On Wed, Jul 4, 2018 at 11:18 AM, Manfred Spraul
> <manfred@...orfullife.com> wrote:
>>
>> There are 2 relevant values: kern_ipc_perm.id and kern_ipc_perm.seq.
>>
>> For kern_ipc_perm.id, it is possible to move the access into the
>> codepaths that hold the lock.
>>
>> For kern_ipc_perm.seq, there are two options:
>> 1) Set it before publication.
>> 2) Initialize it to an invalid value, and correct that at the end.
>>
>> I'm in favor of option 2; it avoids having to decide whether or not to
>> roll back the next sequence number:
>>
>> The purpose of the sequence counter is to minimize the risk that e.g. a
>> semop() will write into a newly created array.
>> I intentionally write "minimize the risk", as it is by design impossible
>> to guarantee that this cannot happen, e.g. if semop() sleeps at the
>> instruction before the syscall.
>>
>> Therefore, we can set seq to ULONG_MAX; then ipc_checkid() will always
>> fail and the corruption is avoided.
>>
>> What do you think?
>>
>> And, obviously:
>> Just setting seq to 0 is dangerous, as the first allocated sequence
>> number is 0, and if that object is destroyed, then the newly created
>> object temporarily has sequence number 0 as well.
>
> Hi Manfred,
>
> It still looks fishy to me. This code published uninitialized uids
> for years (which led not only to accidentally accessing wrong
> objects, but also to privilege escalation). Now it publishes an
> uninitialized id/seq. The first proposed fix still did not make it
> correct. I can't say that I see a bug in your patch, but initializing
> id/seq in a racy manner rings bells for me. Say, if we write/read seq
> ahead of id, can a reader still get access to a wrong object?
> It all suggests some design flaw to me. Could ipc_idr_alloc() do full
> initialization, i.e. also do what ipc_buildid() does? This would
> ensure that we publish a fully constructed object in the first place.
> We already have cleanup for ipc_idr_alloc(), which is idr_remove(), so
> if we care about seq space conservation even in error conditions
> (ENOMEM?), idr_remove() could accept an additional flag saying "this
> object should not have been used by sane users yet, so retake its
> seq". Did I get your concern about seq properly?

You have convinced me, I'll rewrite the patch:

1) kern_ipc_perm.seq should be accessible under rcu_read_lock(), which
means replacing ipc_buildid() with two functions: one that initializes
kern_ipc_perm.seq, and one that sets kern_ipc_perm.id.

2) The accesses to kern_ipc_perm.id must be moved to the positions where
the lock is held. This is trivial.

3) We need a clear table that describes which variables can be accessed
under rcu_read_lock() and which need ipc_lock_object(), e.g.:
kern_ipc_perm.id would end up under ipc_lock_object(), while
kern_ipc_perm.seq and the xuid fields can be read under rcu_read_lock().
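[Editor's note: for illustration, a minimal sketch of what point 1 could
look like, splitting ipc_buildid() into a pre-publication step and a step
done under the object lock. The helper names ipc_init_seq()/ipc_set_id()
and the exact arithmetic are assumptions for this sketch, not the actual
patch.]

    /*
     * Sketch only, hypothetical helpers: seq gets its final value before
     * the object becomes visible to rcu_read_lock() readers, while id is
     * only written later, with ipc_lock_object() held.
     */
    static void ipc_init_seq(struct ipc_ids *ids, struct kern_ipc_perm *new)
    {
    	/* called before publication, so a lockless reader never sees garbage */
    	new->seq = ids->seq++;
    	if (ids->seq > IPCID_SEQ_MAX)
    		ids->seq = 0;
    }

    static void ipc_set_id(struct kern_ipc_perm *new, int idx)
    {
    	/* called with ipc_lock_object(new) held; idx is the idr slot index */
    	new->id = SEQ_MULTIPLIER * new->seq + idx;
    }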
Everything that can be accessed without ipc_lock_object() must be
initialized before publication of a new object.

Or, as all accesses to kern_ipc_perm.id are in rare codepaths: I'll
remove kern_ipc_perm.id entirely and build the id on demand.

Ok?

--
    Manfred
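[Editor's note: a rough sketch of the "build the id on demand" alternative
mentioned above. This is only an assumption of what it could look like;
the helper name ipc_get_id() and the idx parameter are made up for
illustration.]

    /*
     * Sketch only: if kern_ipc_perm.id is removed, the user-visible id can
     * be recomputed whenever it is needed from the object's seq and the idr
     * slot (idx) at which it was inserted, both of which are stable once
     * the object is published.
     */
    static inline int ipc_get_id(const struct kern_ipc_perm *ipcp, int idx)
    {
    	return SEQ_MULTIPLIER * ipcp->seq + idx;
    }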