Message-ID: <202002060353.A6A064A@keescook>
Date: Thu, 6 Feb 2020 03:56:36 -0800
From: Kees Cook <keescook@...omium.org>
To: Andy Lutomirski <luto@...nel.org>
Cc: Kristen Carlson Accardi <kristen@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>,
Arjan van de Ven <arjan@...ux.intel.com>,
Rick Edgecombe <rick.p.edgecombe@...el.com>,
X86 ML <x86@...nel.org>, LKML <linux-kernel@...r.kernel.org>,
Kernel Hardening <kernel-hardening@...ts.openwall.com>
Subject: Re: [RFC PATCH 08/11] x86: Add support for finer grained KASLR
On Wed, Feb 05, 2020 at 05:17:11PM -0800, Andy Lutomirski wrote:
> On Wed, Feb 5, 2020 at 2:39 PM Kristen Carlson Accardi
> <kristen@...ux.intel.com> wrote:
> >
> > At boot time, find all the function sections that have separate .text
> > sections, shuffle them, and then copy them to new locations. Adjust
> > any relocations accordingly.
> >
>
> > + sort(base, num_syms, sizeof(int), kallsyms_cmp, kallsyms_swp);
>
> Hah, here's a huge bottleneck. Unless you are severely
> memory-constrained, never do a sort with an expensive swap function
> like this. Instead allocate an array of indices that starts out as
> [0, 1, 2, ...]. Sort *that* where the swap function just swaps the
> indices. Then use the sorted list of indices to permute the actual
> data. The result is exactly one expensive swap per item instead of
> one expensive swap per swap.
I think there are a few places where the memory-vs-speed trade-off needs to be examined.
I remain surprised by how much memory the entire series already uses
(58MB in my local tests), but I suspect this is dominated by two
factors: a full copy of the decompressed kernel, and the fact that the
"allocator" in the image doesn't really implement free():
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/decompress/mm.h#n55
--
Kees Cook