Message-ID: <202310081002.F5A5D6F8A@keescook>
Date: Sun, 8 Oct 2023 10:04:00 -0700
From: Kees Cook <keescook@...omium.org>
To: Lukas Loidolt <e1634039@...dent.tuwien.ac.at>
Cc: linux-hardening@...r.kernel.org, linux-kernel@...r.kernel.org,
	Daniel Marth <daniel.marth@...o.tuwien.ac.at>
Subject: Re: Missing cache considerations in randstruct performance feature

On Sat, Oct 07, 2023 at 12:38:28PM +0200, Lukas Loidolt wrote:
> On 07.10.23 06:12, Kees Cook wrote:
> > On Sat, Oct 07, 2023 at 12:30:01AM +0200, Lukas Loidolt wrote:
> > > In my tests, however, the performance version behaves more or less like the
> > > full version of randstruct.
> > 
> > Can you try this patch?
> > 
> > 
> > commit d73a3244700d3c945cedea7e1fb7042243c41e08
> > Author:     Kees Cook <keescook@...omium.org>
> > AuthorDate: Fri Oct 6 21:09:28 2023 -0700
> > Commit:     Kees Cook <keescook@...omium.org>
> > CommitDate: Fri Oct 6 21:09:28 2023 -0700
> > 
> >      randstruct: Fix gcc-plugin performance mode to stay in group
> > 
> >      The performance mode of the gcc-plugin randstruct was shuffling struct
> >      members outside of the cache-line groups. Limit the range to the
> >      specified group indexes.
> > 
> >      Cc: linux-hardening@...r.kernel.org
> >      Reported-by: Lukas Loidolt <e1634039@...dent.tuwien.ac.at>
> >      Closes: https://lore.kernel.org/all/f3ca77f0-e414-4065-83a5-ae4c4d25545d@student.tuwien.ac.at
> >      Signed-off-by: Kees Cook <keescook@...omium.org>
> > 
> > diff --git a/scripts/gcc-plugins/randomize_layout_plugin.c b/scripts/gcc-plugins/randomize_layout_plugin.c
> > index 951b74ba1b24..178831917f01 100644
> > --- a/scripts/gcc-plugins/randomize_layout_plugin.c
> > +++ b/scripts/gcc-plugins/randomize_layout_plugin.c
> > @@ -191,7 +191,7 @@ static void partition_struct(tree *fields, unsigned long length, struct partitio
> > 
> >   static void performance_shuffle(tree *newtree, unsigned long length, ranctx *prng_state)
> >   {
> > -       unsigned long i, x;
> > +       unsigned long i, x, index;
> >          struct partition_group size_group[length];
> >          unsigned long num_groups = 0;
> >          unsigned long randnum;
> > @@ -206,11 +206,14 @@ static void performance_shuffle(tree *newtree, unsigned long length, ranctx *prn
> >          }
> > 
> >          for (x = 0; x < num_groups; x++) {
> > -               for (i = size_group[x].start + size_group[x].length - 1; i > size_group[x].start; i--) {
> > +               for (index = size_group[x].length - 1; index > 0; index--) {
> >                          tree tmp;
> > +
> > +                       i = size_group[x].start + index;
> >                          if (DECL_BIT_FIELD_TYPE(newtree[i]))
> >                                  continue;
> >                          randnum = ranval(prng_state) % (i + 1);
> > +                       randnum += size_group[x].start;
> >                          // we could handle this case differently if desired
> >                          if (DECL_BIT_FIELD_TYPE(newtree[randnum]))
> >                                  continue;
> > 
> > --
> > Kees Cook
> 
> I think this is still missing a change in the randnum calculation to use index instead of i.
> Without that, randnum can be larger than the length of newtree, which crashes kernel compilation for me.

Oops, yes, I missed that while refactoring my patch to reduce lines
changed.
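
For illustration, here is a minimal standalone C sketch (plain integers and
libc rand() in place of the plugin's tree nodes and PRNG, and without the
bit-field handling) of what the corrected in-group shuffle does. With the
quoted patch's "% (i + 1)", a group starting at index 8 with index == 3
gives i == 11, so randnum could be drawn from [0, 11] and then have 8 added,
landing past the end of the group and possibly past the end of newtree;
drawing from [0, index] and then adding the group start keeps every swap
inside the group:

/*
 * Standalone simulation, not the plugin code: Fisher-Yates shuffle each
 * group in place, always picking the swap target from within the group.
 */
#include <stdio.h>
#include <stdlib.h>

struct group { unsigned long start, length; };

static void shuffle_within_groups(unsigned long *members, struct group *groups,
				  unsigned long num_groups)
{
	unsigned long x, index, i, randnum, tmp;

	for (x = 0; x < num_groups; x++) {
		for (index = groups[x].length - 1; index > 0; index--) {
			i = groups[x].start + index;
			/* draw from [0, index], then rebase into this group */
			randnum = (unsigned long)rand() % (index + 1);
			randnum += groups[x].start;
			tmp = members[i];
			members[i] = members[randnum];
			members[randnum] = tmp;
		}
	}
}

int main(void)
{
	unsigned long members[10] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
	/* two "cache line" groups: members 0..7 and 8..9 */
	struct group groups[2] = { { 0, 8 }, { 8, 2 } };
	unsigned long i;

	srand(42);
	shuffle_within_groups(members, groups, 2);
	for (i = 0; i < 10; i++)
		printf("%lu ", members[i]);
	printf("\n");
	return 0;
}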

> 
> diff --git a/scripts/gcc-plugins/randomize_layout_plugin.c b/scripts/gcc-plugins/randomize_layout_plugin.c
> index 178831917f01..4b4627e3f2ce 100644
> --- a/scripts/gcc-plugins/randomize_layout_plugin.c
> +++ b/scripts/gcc-plugins/randomize_layout_plugin.c
> @@ -212,7 +212,7 @@ static void performance_shuffle(tree *newtree, unsigned long length, ranctx *prn
>                         i = size_group[x].start + index;
>                         if (DECL_BIT_FIELD_TYPE(newtree[i]))
>                                 continue;
> -                       randnum = ranval(prng_state) % (i + 1);
> +                       randnum = ranval(prng_state) % (index + 1);
>                         randnum += size_group[x].start;
>                         // we could handle this case differently if desired
>                         if (DECL_BIT_FIELD_TYPE(newtree[randnum]))
> 
> 
> The patch seems to work after that, though. For the previous example, I now get the following layout:
> 
> func1 (offset: 0)
> func3 (offset: 8)
> func4 (offset: 16)
> func6 (offset: 24)
> func7 (offset: 32)
> func8 (offset: 40)
> func5 (offset: 48)
> func2 (offset: 56)
> func10 (offset: 64)
> func9 (offset: 72)
> 
> Regarding the shuffling of groups/partitions (rather than just the randomization of structure members within each partition), I'm not sure if that was intended at some point, but it might be worth looking into.

Yeah, this is also clearly not working.
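
In the layout above, members only move within their own cache-line group
(func1..func8 stay in the first 64 bytes, func9/func10 in the second). For
reference, a rough standalone sketch (again plain C with libc rand(), not
the plugin code) of what shuffling the partitions themselves could look
like: Fisher-Yates the group order first, then lay the members back out
group by group so each group stays contiguous:

/*
 * Hypothetical standalone sketch, not the plugin code: randomize the order
 * of the groups themselves, then rebuild the member array group by group so
 * each group's members stay contiguous (and thus stay in one cache line).
 */
#include <stdlib.h>
#include <string.h>

struct group { unsigned long start, length; };

static void shuffle_group_order(unsigned long *members, unsigned long length,
				struct group *groups, unsigned long num_groups)
{
	unsigned long scratch[length];
	unsigned long x, randnum, pos = 0;
	struct group tmp;

	if (num_groups < 2)
		return;

	/* Fisher-Yates over the groups, not the members */
	for (x = num_groups - 1; x > 0; x--) {
		randnum = (unsigned long)rand() % (x + 1);
		tmp = groups[x];
		groups[x] = groups[randnum];
		groups[randnum] = tmp;
	}

	/* lay the members back out in the new group order */
	for (x = 0; x < num_groups; x++) {
		memcpy(&scratch[pos], &members[groups[x].start],
		       groups[x].length * sizeof(*members));
		pos += groups[x].length;
	}
	memcpy(members, scratch, length * sizeof(*members));
}

In the real plugin, groups of different sizes would presumably also need the
cache-line padding recomputed after reordering, so this is only the shape of
the idea.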

> I'd assume it would improve randomization without sacrificing performance, and it's also what the clang implementation of randstruct does.

Thanks for testing!

-Kees

-- 
Kees Cook
