Message-ID: <202310062111.809AB4E56@keescook>
Date: Fri, 6 Oct 2023 21:12:06 -0700
From: Kees Cook <keescook@...omium.org>
To: Lukas Loidolt <e1634039@...dent.tuwien.ac.at>
Cc: linux-hardening@...r.kernel.org, linux-kernel@...r.kernel.org,
	Daniel Marth <daniel.marth@...o.tuwien.ac.at>
Subject: Re: Missing cache considerations in randstruct performance feature

On Sat, Oct 07, 2023 at 12:30:01AM +0200, Lukas Loidolt wrote:
> In my tests, however, the performance version behaves more or less like the
> full version of randstruct.

Can you try this patch?

commit d73a3244700d3c945cedea7e1fb7042243c41e08
Author:     Kees Cook <keescook@...omium.org>
AuthorDate: Fri Oct 6 21:09:28 2023 -0700
Commit:     Kees Cook <keescook@...omium.org>
CommitDate: Fri Oct 6 21:09:28 2023 -0700

    randstruct: Fix gcc-plugin performance mode to stay in group

    The performance mode of the gcc-plugin randstruct was shuffling struct
    members outside of the cache-line groups. Limit the range to the
    specified group indexes.

    Cc: linux-hardening@...r.kernel.org
    Reported-by: Lukas Loidolt <e1634039@...dent.tuwien.ac.at>
    Closes: https://lore.kernel.org/all/f3ca77f0-e414-4065-83a5-ae4c4d25545d@student.tuwien.ac.at
    Signed-off-by: Kees Cook <keescook@...omium.org>

diff --git a/scripts/gcc-plugins/randomize_layout_plugin.c b/scripts/gcc-plugins/randomize_layout_plugin.c
index 951b74ba1b24..178831917f01 100644
--- a/scripts/gcc-plugins/randomize_layout_plugin.c
+++ b/scripts/gcc-plugins/randomize_layout_plugin.c
@@ -191,7 +191,7 @@ static void partition_struct(tree *fields, unsigned long length, struct partitio
 
 static void performance_shuffle(tree *newtree, unsigned long length, ranctx *prng_state)
 {
-	unsigned long i, x;
+	unsigned long i, x, index;
 	struct partition_group size_group[length];
 	unsigned long num_groups = 0;
 	unsigned long randnum;
@@ -206,11 +206,14 @@ static void performance_shuffle(tree *newtree, unsigned long length, ranctx *prn
 	}
 
 	for (x = 0; x < num_groups; x++) {
-		for (i = size_group[x].start + size_group[x].length - 1; i > size_group[x].start; i--) {
+		for (index = size_group[x].length - 1; index > 0; index--) {
 			tree tmp;
+
+			i = size_group[x].start + index;
 			if (DECL_BIT_FIELD_TYPE(newtree[i]))
 				continue;
 			randnum = ranval(prng_state) % (i + 1);
+			randnum += size_group[x].start;
 			// we could handle this case differently if desired
 			if (DECL_BIT_FIELD_TYPE(newtree[randnum]))
 				continue;

-- 
Kees Cook
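
For readers who want to see the shape of the fix outside of GCC plugin
internals, below is a minimal standalone C sketch of the same in-group
Fisher-Yates shuffle. It is illustrative only: plain ints stand in for GCC
tree nodes, rand() stands in for the plugin's ranval() PRNG, and the
struct group type, group layout, and seed are invented for the demo. Note
that to guarantee the swap target never leaves the group, the sketch bounds
the random pick by the within-group index (index + 1) before adding the
group's start offset.

/* grouped_shuffle.c -- illustrative sketch only; not the plugin code. */
#include <stdio.h>
#include <stdlib.h>

struct group {
	unsigned long start;   /* index of the group's first element */
	unsigned long length;  /* number of elements in the group */
};

/* Fisher-Yates shuffle applied independently inside each group. */
static void shuffle_in_groups(int *elem, const struct group *grp,
			      unsigned long num_groups)
{
	unsigned long x, index, i, randnum;

	for (x = 0; x < num_groups; x++) {
		for (index = grp[x].length - 1; index > 0; index--) {
			int tmp;

			i = grp[x].start + index;
			/* Bound the pick by the within-group index, then
			 * shift it by the group's start, so the swap
			 * target can never leave the group. */
			randnum = rand() % (index + 1);
			randnum += grp[x].start;

			tmp = elem[i];
			elem[i] = elem[randnum];
			elem[randnum] = tmp;
		}
	}
}

int main(void)
{
	int elem[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };
	/* Two hypothetical "cache-line" groups: elements 0-3 and 4-7. */
	struct group grp[2] = { { 0, 4 }, { 4, 4 } };
	unsigned long i;

	srand(1);	/* fixed seed so the demo output is reproducible */
	shuffle_in_groups(elem, grp, 2);

	for (i = 0; i < 8; i++)
		printf("%d ", elem[i]);
	printf("\n");
	return 0;
}

Running it prints a permutation in which values 0-3 stay in the first four
slots and values 4-7 in the last four: the "stay in group" property the
patch is after, whereas shuffling across group boundaries would defeat the
cache-line-friendly intent of the performance mode.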