lists.openwall.net - Open Source and information security mailing list archives
Date: Tue, 14 Jul 2020 09:12:11 +0200
From: Christoph Hellwig <hch@....de>
To: Geert Uytterhoeven <geert@...ux-m68k.org>
Cc: Mark Rutland <mark.rutland@....com>, Christoph Hellwig <hch@....de>,
	Nick Hu <nickhu@...estech.com>, Greentime Hu <green.hu@...il.com>,
	Vincent Chen <deanbo422@...il.com>, Paul Walmsley <paul.walmsley@...ive.com>,
	Palmer Dabbelt <palmer@...belt.com>, Andrew Morton <akpm@...ux-foundation.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	linux-riscv <linux-riscv@...ts.infradead.org>,
	Linux-Arch <linux-arch@...r.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 5/6] uaccess: add force_uaccess_{begin,end} helpers

On Mon, Jul 13, 2020 at 03:19:42PM +0200, Geert Uytterhoeven wrote:
> > This used to set KERNEL_DS, and now it sets USER_DS, which looks wrong
> > superficially.
>
> Thanks for noticing, and sorry for missing that myself.
>
> The same issue is present for SuperH:
>
> -	set_fs(KERNEL_DS);
> +	oldfs = force_uaccess_begin();
>
> So the patch description should be:
>
> "Add helpers to wrap the get_fs/set_fs magic for undoing any damage
> done by set_fs(KERNEL_DS)."
>
> and leave alone users setting KERNEL_DS?

Yes, this was broken.  Fixed for the next version.

> > If the new behaviour is fine it suggests that the old behaviour was
> > wrong, or that this is superfluous and could go entirely.
> >
> > Geert?
>
> Nope, on m68k, TLB cache operations operate on the current address space.
> Hence to flush a kernel TLB entry, you have to switch to KERNEL_DS first.
>
> If we're guaranteed to be already using KERNEL_DS, I guess the
> address space handling can be removed.  But can we be sure?

We can't be sure yet.  But with a lot of my pending work we should be
able to get there in the not too far future.