lists.openwall.net — Open Source and information security mailing list archives
Date: Sun, 17 Feb 2019 03:41:21 +0000
From: Arthur Gautier <baloo@...di.net>
To: Al Viro <viro@...iv.linux.org.uk>
Cc: Andy Lutomirski <luto@...capital.net>, Thomas Gleixner <tglx@...utronix.de>,
	Jann Horn <jannh@...gle.com>, the arch/x86 maintainers <x86@...nel.org>,
	Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
	kernel list <linux-kernel@...r.kernel.org>, Pascal Bouchareine <pascal@...di.net>
Subject: Re: [PATCH] x86: uaccess: fix regression in unsafe_get_user

On Sat, Feb 16, 2019 at 11:47:02PM +0000, Al Viro wrote:
> On Sat, Feb 16, 2019 at 02:50:15PM -0800, Andy Lutomirski wrote:
> >
> > What is the actual problem? We're not actually demand-faulting this
> > data, are we? Are we just overrunning the buffer because the from_user
> > helpers are too clever? Can we fix it for real by having the fancy
> > helpers do *aligned* loads so that they don't overrun the buffer?
> > Heck, this might be faster, too.
>
> Unaligned _stores_ are not any cheaper, and you'd get one hell of
> extra arithmetics from trying to avoid both. Check something
> like e.g. memcpy() on alpha, where you really have to keep all
> accesses aligned, both on load and on store side.
>
> Can't we just pad the buffers a bit? Making sure that name_buf
> and symlink_buf are _not_ followed by unmapped pages shouldn't
> be hard. Both are allocated by kmalloc(), so...

We cannot change the alignment rules here. The input string we're
reading comes from a cpio-formatted file, and the format is defined by
cpio(5); the input buffer is only defined to be 4-byte aligned. Nothing
much we can do there, I'm afraid.

-- 
\o/ Arthur
 G  Gandi.net
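Andy's suggestion above — have the reader issue only naturally aligned word loads, so it can never touch the unmapped page that follows the buffer — can be sketched in userspace C. This is a hypothetical illustration (the function name and structure are mine, not the kernel's helpers): since pages are word-aligned, an aligned 8-byte load never straddles a page boundary, so the word containing the terminating NUL is the last word ever read.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch, not the kernel code: word-at-a-time strlen that
 * only issues naturally aligned 8-byte loads.  An aligned load cannot
 * cross a page boundary, so it cannot fault on an unmapped page just
 * past the terminating NUL. */
static size_t aligned_wat_strlen(const char *s)
{
    const char *p = s;

    /* Walk byte-by-byte up to the first 8-byte boundary. */
    while ((uintptr_t)p & 7) {
        if (*p == '\0')
            return (size_t)(p - s);
        p++;
    }

    /* Every load from here on is aligned and stays within one page. */
    for (;;) {
        uint64_t w;
        memcpy(&w, p, sizeof(w)); /* compiles to one aligned load */

        /* Classic "word has a zero byte" bit trick. */
        if ((w - 0x0101010101010101ULL) & ~w & 0x8080808080808080ULL) {
            while (*p)
                p++;
            return (size_t)(p - s);
        }
        p += sizeof(w);
    }
}
```

The cost Al points at is visible even in this toy: the byte-by-byte prologue is the "extra arithmetics" you pay to regain alignment, and here the source is only guaranteed 4-byte aligned by cpio(5), so the prologue runs on most inputs.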