lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <202210071241.445289C5@keescook>
Date: Fri, 7 Oct 2022 15:47:44 -0700
From: Kees Cook <keescook@...omium.org>
To: "Jason A. Donenfeld" <Jason@...c4.com>
Cc: linux-kernel@...r.kernel.org, patches@...ts.linux.dev,
	Andreas Noever <andreas.noever@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
	Borislav Petkov <bp@...en8.de>, Catalin Marinas <catalin.marinas@....com>,
	Christoph Böhmwalder <christoph.boehmwalder@...bit.com>,
	Christoph Hellwig <hch@....de>, Christophe Leroy <christophe.leroy@...roup.eu>,
	Daniel Borkmann <daniel@...earbox.net>, Dave Airlie <airlied@...hat.com>,
	Dave Hansen <dave.hansen@...ux.intel.com>,
	"David S . Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
	Florian Westphal <fw@...len.de>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	"H . Peter Anvin" <hpa@...or.com>, Heiko Carstens <hca@...ux.ibm.com>,
	Helge Deller <deller@....de>, Herbert Xu <herbert@...dor.apana.org.au>,
	Huacai Chen <chenhuacai@...nel.org>, Hugh Dickins <hughd@...gle.com>,
	Jakub Kicinski <kuba@...nel.org>,
	"James E . J . Bottomley" <jejb@...ux.ibm.com>, Jan Kara <jack@...e.com>,
	Jason Gunthorpe <jgg@...pe.ca>, Jens Axboe <axboe@...nel.dk>,
	Johannes Berg <johannes@...solutions.net>, Jonathan Corbet <corbet@....net>,
	Jozsef Kadlecsik <kadlec@...filter.org>, KP Singh <kpsingh@...nel.org>,
	Marco Elver <elver@...gle.com>, Mauro Carvalho Chehab <mchehab@...nel.org>,
	Michael Ellerman <mpe@...erman.id.au>, Pablo Neira Ayuso <pablo@...filter.org>,
	Paolo Abeni <pabeni@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
	Richard Weinberger <richard@....at>, Russell King <linux@...linux.org.uk>,
	Theodore Ts'o <tytso@....edu>,
	Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
	Thomas Gleixner <tglx@...utronix.de>, Thomas Graf <tgraf@...g.ch>,
	Ulf Hansson <ulf.hansson@...aro.org>, Vignesh Raghavendra <vigneshr@...com>,
	WANG Xuerui <kernel@...0n.name>, Will Deacon <will@...nel.org>,
	Yury Norov <yury.norov@...il.com>, dri-devel@...ts.freedesktop.org,
	kasan-dev@...glegroups.com, kernel-janitors@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org, linux-block@...r.kernel.org,
	linux-crypto@...r.kernel.org, linux-doc@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, linux-media@...r.kernel.org,
	linux-mips@...r.kernel.org, linux-mm@...ck.org, linux-mmc@...r.kernel.org,
	linux-mtd@...ts.infradead.org, linux-nvme@...ts.infradead.org,
	linux-parisc@...r.kernel.org, linux-rdma@...r.kernel.org,
	linux-s390@...r.kernel.org, linux-um@...ts.infradead.org,
	linux-usb@...r.kernel.org, linux-wireless@...r.kernel.org,
	linuxppc-dev@...ts.ozlabs.org, loongarch@...ts.linux.dev,
	netdev@...r.kernel.org, sparclinux@...r.kernel.org, x86@...nel.org,
	Jan Kara <jack@...e.cz>
Subject: Re: [PATCH v4 2/6] treewide: use prandom_u32_max() when possible

On Fri, Oct 07, 2022 at 12:01:03PM -0600, Jason A. Donenfeld wrote:
> Rather than incurring a division or requesting too many random bytes for
> the given range, use the prandom_u32_max() function, which only takes
> the minimum required bytes from the RNG and avoids divisions.
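[Editorial note: the bound-without-division trick the patch description refers to is, essentially, taking the high 32 bits of a widening multiply. A minimal Python sketch of that reduction; the helper name and the stand-in 32-bit input are illustrative, not kernel code:]

```python
def prandom_u32_max_sketch(rand32: int, bound: int) -> int:
    """Map a uniform 32-bit value into [0, bound) without a modulo,
    using a widening multiply and shift: (u64)rand32 * bound >> 32."""
    assert 0 <= rand32 < 2**32
    return (rand32 * bound) >> 32

# The result never reaches the bound, and the mapping is monotone:
assert prandom_u32_max_sketch(0, 100) == 0
assert prandom_u32_max_sketch(2**32 - 1, 100) == 99
assert all(prandom_u32_max_sketch(r, 16) < 16 for r in range(0, 2**32, 2**24))
```

Because only a multiply and a shift are needed, this avoids both the division of `rand % bound` and the rejection loop that an exactly-uniform bounded generator would require.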
I actually meant splitting the by-hand stuff by subsystem, but nearly all
of these can be done mechanically too, so it shouldn't be bad. Notes
below...

> [...]
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index 92bcc1768f0b..87203429f802 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -595,7 +595,7 @@ unsigned long __get_wchan(struct task_struct *p)
>  unsigned long arch_align_stack(unsigned long sp)
>  {
>  	if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
> -		sp -= get_random_int() & ~PAGE_MASK;
> +		sp -= prandom_u32_max(PAGE_SIZE);
>  	return sp & ~0xf;
>  }

@mask@
expression MASK;
@@

-	(get_random_int() & ~(MASK))
+	prandom_u32_max(MASK)

> diff --git a/arch/loongarch/kernel/vdso.c b/arch/loongarch/kernel/vdso.c
> index f32c38abd791..8c9826062652 100644
> --- a/arch/loongarch/kernel/vdso.c
> +++ b/arch/loongarch/kernel/vdso.c
> @@ -78,7 +78,7 @@ static unsigned long vdso_base(void)
>  	unsigned long base = STACK_TOP;
>
>  	if (current->flags & PF_RANDOMIZE) {
> -		base += get_random_int() & (VDSO_RANDOMIZE_SIZE - 1);
> +		base += prandom_u32_max(VDSO_RANDOMIZE_SIZE);
>  		base = PAGE_ALIGN(base);
>  	}

@minus_one@
expression FULL;
@@

-	(get_random_int() & ((FULL) - 1))
+	prandom_u32_max(FULL)

> diff --git a/arch/parisc/kernel/vdso.c b/arch/parisc/kernel/vdso.c
> index 63dc44c4c246..47e5960a2f96 100644
> --- a/arch/parisc/kernel/vdso.c
> +++ b/arch/parisc/kernel/vdso.c
> @@ -75,7 +75,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm,
>
>  	map_base = mm->mmap_base;
>  	if (current->flags & PF_RANDOMIZE)
> -		map_base -= (get_random_int() & 0x1f) * PAGE_SIZE;
> +		map_base -= prandom_u32_max(0x20) * PAGE_SIZE;
>
>  	vdso_text_start = get_unmapped_area(NULL, map_base, vdso_text_len, 0, 0);

These are more fun, but Coccinelle can still do them with a little
Pythonic help:

// Find a potential literal
@literal_mask@
expression LITERAL;
identifier randfunc =~ "get_random_int|prandom_u32|get_random_u32";
position p;
@@

	(randfunc()@p & (LITERAL))

// Add one to the literal.
@script:python add_one@
literal << literal_mask.LITERAL;
RESULT;
@@

if literal.startswith('0x'):
	value = int(literal, 16) + 1
	coccinelle.RESULT = cocci.make_expr("0x%x" % (value))
elif literal[0] in '123456789':
	value = int(literal, 10) + 1
	coccinelle.RESULT = cocci.make_expr("%d" % (value))
else:
	print("I don't know how to handle: %s" % (literal))

// Replace the literal mask with the calculated result.
@plus_one@
expression literal_mask.LITERAL;
position literal_mask.p;
expression add_one.RESULT;
identifier FUNC;
@@

-	(FUNC()@p & (LITERAL))
+	prandom_u32_max(RESULT)

> diff --git a/drivers/mtd/tests/stresstest.c b/drivers/mtd/tests/stresstest.c
> index cb29c8c1b370..d2faaca7f19d 100644
> --- a/drivers/mtd/tests/stresstest.c
> +++ b/drivers/mtd/tests/stresstest.c
> @@ -45,9 +45,8 @@ static int rand_eb(void)
>  	unsigned int eb;
>
>  again:
> -	eb = prandom_u32();
>  	/* Read or write up 2 eraseblocks at a time - hence 'ebcnt - 1' */
> -	eb %= (ebcnt - 1);
> +	eb = prandom_u32_max(ebcnt - 1);
>  	if (bbt[eb])
>  		goto again;
>  	return eb;

This can also be done mechanically:

@multi_line@
identifier randfunc =~ "get_random_int|prandom_u32|get_random_u32";
identifier RAND;
expression E;
@@

-	RAND = randfunc();
	... when != RAND
-	RAND %= (E);
+	RAND = prandom_u32_max(E);

@collapse_ret@
type TYPE;
identifier VAR;
expression E;
@@

 {
-	TYPE VAR;
-	VAR = (E);
-	return VAR;
+	return E;
 }

@drop_var@
type TYPE;
identifier VAR;
@@

 {
-	TYPE VAR;
	... when != VAR
 }

> diff --git a/fs/ext2/ialloc.c b/fs/ext2/ialloc.c
> index 998dd2ac8008..f4944c4dee60 100644
> --- a/fs/ext2/ialloc.c
> +++ b/fs/ext2/ialloc.c
> @@ -277,8 +277,7 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent)
>  	int best_ndir = inodes_per_group;
>  	int best_group = -1;
>
> -	group = prandom_u32();
> -	parent_group = (unsigned)group % ngroups;
> +	parent_group = prandom_u32_max(ngroups);
>  	for (i = 0; i < ngroups; i++) {
>  		group = (parent_group + i) % ngroups;
>  		desc = ext2_get_group_desc (sb, group, NULL);

Okay, that one is too much for me -- checking that "group" is never used
after the assignment removal is likely possible, but beyond my cocci
know-how. :)

> diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
> index f73e5eb43eae..36d5bc595cc2 100644
> --- a/fs/ext4/ialloc.c
> +++ b/fs/ext4/ialloc.c
> @@ -463,10 +463,9 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent,
>  			hinfo.hash_version = DX_HASH_HALF_MD4;
>  			hinfo.seed = sbi->s_hash_seed;
>  			ext4fs_dirhash(parent, qstr->name, qstr->len, &hinfo);
> -			grp = hinfo.hash;
> +			parent_group = hinfo.hash % ngroups;
>  		} else
> -			grp = prandom_u32();
> -		parent_group = (unsigned)grp % ngroups;
> +			parent_group = prandom_u32_max(ngroups);
>  		for (i = 0; i < ngroups; i++) {
>  			g = (parent_group + i) % ngroups;
>  			get_orlov_stats(sb, g, flex_size, &stats);

Much less easy mechanically. :)

> diff --git a/lib/test_hexdump.c b/lib/test_hexdump.c
> index 0927f44cd478..41a0321f641a 100644
> --- a/lib/test_hexdump.c
> +++ b/lib/test_hexdump.c
> @@ -208,7 +208,7 @@ static void __init test_hexdump_overflow(size_t buflen, size_t len,
>  static void __init test_hexdump_overflow_set(size_t buflen, bool ascii)
>  {
>  	unsigned int i = 0;
> -	int rs = (get_random_int() % 2 + 1) * 16;
> +	int rs = prandom_u32_max(2) + 1 * 16;
>
>  	do {
>  		int gs = 1 << i;

This looks wrong.
Cocci says:

-	int rs = (get_random_int() % 2 + 1) * 16;
+	int rs = (prandom_u32_max(2) + 1) * 16;

> diff --git a/lib/test_vmalloc.c b/lib/test_vmalloc.c
> index 4f2f2d1bac56..56ffaa8dd3f6 100644
> --- a/lib/test_vmalloc.c
> +++ b/lib/test_vmalloc.c
> @@ -151,9 +151,7 @@ static int random_size_alloc_test(void)
>  	int i;
>
>  	for (i = 0; i < test_loop_count; i++) {
> -		n = prandom_u32();
> -		n = (n % 100) + 1;
> -
> +		n = prandom_u32_max(n % 100) + 1;
>  		p = vmalloc(n * PAGE_SIZE);
>
>  		if (!p)

This looks wrong. Cocci says:

-	n = prandom_u32();
-	n = (n % 100) + 1;
+	n = prandom_u32_max(100) + 1;

> @@ -293,16 +291,12 @@ pcpu_alloc_test(void)
>  		return -1;
>
>  	for (i = 0; i < 35000; i++) {
> -		unsigned int r;
> -
> -		r = prandom_u32();
> -		size = (r % (PAGE_SIZE / 4)) + 1;
> +		size = prandom_u32_max(PAGE_SIZE / 4) + 1;
>
>  		/*
>  		 * Maximum PAGE_SIZE
>  		 */
> -		r = prandom_u32();
> -		align = 1 << ((r % 11) + 1);
> +		align = 1 << (prandom_u32_max(11) + 1);
>
>  		pcpu[i] = __alloc_percpu(size, align);
>  		if (!pcpu[i])

> @@ -393,14 +387,11 @@ static struct test_driver {
>
>  static void shuffle_array(int *arr, int n)
>  {
> -	unsigned int rnd;
>  	int i, j;
>
>  	for (i = n - 1; i > 0; i--) {
> -		rnd = prandom_u32();
> -
>  		/* Cut the range. */
> -		j = rnd % i;
> +		j = prandom_u32_max(i);
>
>  		/* Swap indexes. */
>  		swap(arr[i], arr[j]);

Yup, agrees with Cocci on these.

-- 
Kees Cook
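[Editorial note: the shuffle_array() hunk quoted above is the classic Fisher-Yates shuffle loop. A sketch of the same pattern in Python, with random.randrange(i) standing in for prandom_u32_max(i); both return a value in [0, i):]

```python
import random

def shuffle_array(arr: list) -> None:
    """In-place shuffle mirroring the kernel's shuffle_array() loop."""
    for i in range(len(arr) - 1, 0, -1):
        # Cut the range: pick j uniformly from [0, i).
        j = random.randrange(i)
        # Swap indexes.
        arr[i], arr[j] = arr[j], arr[i]

vals = list(range(10))
shuffle_array(vals)
assert sorted(vals) == list(range(10))  # still a permutation of the input
```

Note that, like the kernel loop, this draws j from [0, i) rather than [0, i], so element i is always moved; the conversion in the patch preserves that existing behavior rather than changing the shuffle's distribution.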