Message-ID: <20200806153258.GB2131635@gmail.com>
Date: Thu, 6 Aug 2020 17:32:58 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Kristen Carlson Accardi <kristen@...ux.intel.com>
Cc: keescook@...omium.org, tglx@...utronix.de, mingo@...hat.com,
bp@...en8.de, arjan@...ux.intel.com, x86@...nel.org,
linux-kernel@...r.kernel.org, kernel-hardening@...ts.openwall.com,
rick.p.edgecombe@...el.com
Subject: Re: [PATCH v4 00/10] Function Granular KASLR
* Kristen Carlson Accardi <kristen@...ux.intel.com> wrote:
> Function Granular Kernel Address Space Layout Randomization (fgkaslr)
> ---------------------------------------------------------------------
>
> This patch set is an implementation of finer-grained kernel address space
> randomization. It rearranges your kernel code at load time
> at per-function granularity, with only around a second added to
> boot time.
This is a very nice feature IMO, and it should be far more effective
at randomizing the kernel, due to the sheer number of randomization
bits that function-granular randomization presents.
If this is a good approximation of fg-kaslr randomization depth:
thule:~/tip> grep ' [tT] ' /proc/kallsyms | wc -l
88488
... then that's roughly 88K randomly permuted function positions, instead
of the mere handful of kaslr bits we have today. Very nice!
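(To put a very rough number on it: the entropy of a full random permutation
of N functions is log2(N!). A quick user-space calculation - purely
illustrative, nothing from the series - using the symbol count above:)

        /* Illustrative user-space estimate of fg-kaslr permutation entropy. */
        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
                double n = 88488.0;  /* from: grep ' [tT] ' /proc/kallsyms | wc -l */
                double pi = acos(-1.0);

                /* Stirling: log2(n!) ~= n*log2(n) - n*log2(e) + 0.5*log2(2*pi*n) */
                double perm_bits = n * log2(n) - n * log2(exp(1.0))
                                   + 0.5 * log2(2.0 * pi * n);

                printf("whole-layout entropy: ~%.0f bits\n", perm_bits);
                printf("entropy per individual target: ~%.1f bits\n", log2(n));
                return 0;
        }

(i.e. still ~16 bits if an attacker only needs one function's address, and
vastly more if they need the layout as a whole.)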
> In order to hide our new layout, symbols reported through
> /proc/kallsyms will be displayed in a random order.
Neat. :-)
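Just to check I read the intent right - I'm assuming it's essentially
emitting the table through a shuffled index, something along the lines
of this sketch (made-up names, not the actual kernel/kallsyms.c change):

        /*
         * Sketch only: walk the symbol table through a Fisher-Yates-shuffled
         * index array, so the order /proc/kallsyms reports leaks nothing
         * about the randomized layout.
         */
        #include <linux/random.h>
        #include <linux/types.h>

        static void shuffle_symbol_order(unsigned int *order, unsigned int nsyms)
        {
                unsigned int i, j, tmp;

                for (i = 0; i < nsyms; i++)
                        order[i] = i;

                for (i = nsyms - 1; i > 0; i--) {
                        j = get_random_u32() % (i + 1);
                        tmp = order[i];
                        order[i] = order[j];
                        order[j] = tmp;
                }
        }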
> Performance Impact
> ------------------
> * Run time
> The performance impact at run-time of function reordering varies by workload.
> Using kcbench, a kernel compilation benchmark, the performance of a kernel
> build with finer grained KASLR was about 1% slower than a kernel with standard
> KASLR. Analysis with perf showed a slightly higher percentage of
> L1-icache-load-misses. Other workloads were examined as well, with varied
> results. Some workloads performed significantly worse under FGKASLR, while
> others stayed the same or were mysteriously better. In general, whether
> finer grained KASLR impacts your workload depends on the code flow and
> on how the underlying code was designed. Because the layout changes at
> each boot, workload performance may also vary from boot to boot.
I'd guess that the biggest performance impact comes from tearing apart
'groups' of functions that particular workloads are using.
In that sense it might be worthwhile to add a '__kaslr_group' function
tag to key functions, which would keep certain performance-critical
functions next to each other.
This shouldn't really be a problem, as even with generous amounts of
grouping the remaining randomization space is still incredibly large.
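Something like this is what I mean - hypothetical macro and function
names, purely to show the shape of the interface:

        /*
         * Hypothetical sketch: tag functions with a group name so they end
         * up in a shared section that -ffunction-sections won't split, and
         * the boot-time shuffle then moves the whole group as a single unit.
         */
        #define __kaslr_group(grp) \
                __attribute__((__section__(".text.kaslr_group." grp)))

        /* made-up examples - keep a hot path together across reboots: */
        static int __kaslr_group("hot_rx") rx_poll_budget(int budget);
        static int __kaslr_group("hot_rx") rx_process_one(void *pkt);

Because the group name is concatenated into the section name, every
function tagged with the same literal lands in the same section, which
is the whole trick.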
> Future work could identify hot areas that should not be randomized and either
> leave them in the .text section or group them together into a single section
> that is randomized as one unit. If grouping things together helps, another
> thing worth considering: if we could identify text blobs that should be
> grouped together to benefit a particular code flow, it could be interesting
> to explore whether this security feature could also be used as a performance
> feature if you are interested in optimizing your kernel layout for a
> particular workload at boot time. Optimizing function layout for a particular
> workload has been researched and proven effective - for more information
> read the Facebook paper "Optimizing Function Placement for Large-Scale
> Data-Center Applications" (see references section below).
I'm pretty sure the 'grouping' solution would address any real
slowdowns.
I'd also suggest allowing a boot-time seed to be passed to the
pseudo-random generator, which would allow creating a pseudo-randomized
but repeatable layout across reboots.
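E.g. a (hypothetical) fgkaslr_seed= option could feed a small
deterministic generator in the decompressor, so the same seed reproduces
the same layout - handy for benchmarking and debugging. A sketch, with
made-up names:

        /*
         * Sketch only: deterministic xorshift64 generator seeded from a
         * hypothetical fgkaslr_seed= boot parameter, so a given seed
         * reproduces the same per-function shuffle on every boot.
         */
        static u64 fgkaslr_prng_state;

        static void fgkaslr_seed_prng(u64 seed)
        {
                fgkaslr_prng_state = seed ? seed : 1;   /* state must be non-zero */
        }

        static u64 fgkaslr_prng_next(void)
        {
                u64 x = fgkaslr_prng_state;

                x ^= x << 13;
                x ^= x >> 7;
                x ^= x << 17;

                fgkaslr_prng_state = x;
                return x;
        }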
> Image Size
> ----------
> Adding additional section headers as a result of compiling with
> -ffunction-sections will increase the size of the vmlinux ELF file.
> With a standard distro config, the resulting vmlinux was increased by
> about 3%. The compressed image is also increased due to the additional
> section headers, as well as the extra relocations that must be added. You
> can expect fgkaslr
> to increase the size of the compressed image by about 15%.
What is the size increase of the resulting raw kernel image? Additional
relocations might increase its size (unless I'm missing something) -
it would be nice to measure this effect. I'd expect it to be really
low.
vmlinux or compressed kernel size doesn't really matter on x86-64;
it's a boot-time-only expense, well within typical system resource
limits.
> Disabling
> ---------
> Disabling normal KASLR using the nokaslr command line option also disables
> fgkaslr. It is also possible to disable fgkaslr separately by booting with
> fgkaslr=off on the command line.
I'd suggest also adding a 'nofgkaslr' boot option, if it doesn't exist
yet, to keep usage symmetric with kaslr.
Likewise, there should probably be a 'kaslr=off' option as well.
The less random our user interfaces are, the better ...
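For reference, this is roughly the option surface I'd expect in the
decompressor - a sketch assuming its existing cmdline_find_option() /
cmdline_find_option_bool() helpers, with 'nofgkaslr' being the suggested
addition (edge cases hand-waved):

        /*
         * Sketch: symmetric option handling in the decompressor.
         * 'nofgkaslr' is the suggested addition; the other two
         * spellings come from the cover letter.
         */
        static bool fgkaslr_disabled(void)
        {
                char arg[4];

                if (cmdline_find_option_bool("nokaslr"))
                        return true;    /* disabling kaslr disables fgkaslr too */

                if (cmdline_find_option_bool("nofgkaslr"))
                        return true;    /* suggested symmetric spelling */

                if (cmdline_find_option("fgkaslr", arg, sizeof(arg)) == 3 &&
                    !strncmp(arg, "off", 3))
                        return true;    /* existing fgkaslr=off form */

                return false;
        }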
> arch/x86/boot/compressed/Makefile | 9 +-
> arch/x86/boot/compressed/fgkaslr.c | 811 ++++++++++++++++++
> arch/x86/boot/compressed/kaslr.c | 4 -
> arch/x86/boot/compressed/misc.c | 157 +++-
> arch/x86/boot/compressed/misc.h | 30 +
> arch/x86/boot/compressed/utils.c | 11 +
> arch/x86/boot/compressed/vmlinux.symbols | 17 +
> arch/x86/include/asm/boot.h | 15 +-
> arch/x86/kernel/vmlinux.lds.S | 17 +-
> arch/x86/lib/kaslr.c | 18 +-
> arch/x86/tools/relocs.c | 143 ++-
> arch/x86/tools/relocs.h | 4 +-
> arch/x86/tools/relocs_common.c | 15 +-
> include/asm-generic/vmlinux.lds.h | 18 +-
> include/linux/decompress/mm.h | 12 +-
> include/uapi/linux/elf.h | 1 +
> init/Kconfig | 26 +
> kernel/kallsyms.c | 163 +++-
> kernel/module.c | 81 ++
> tools/objtool/elf.c | 8 +-
> 26 files changed, 1670 insertions(+), 85 deletions(-)
> create mode 100644 Documentation/security/fgkaslr.rst
> create mode 100644 arch/x86/boot/compressed/fgkaslr.c
> create mode 100644 arch/x86/boot/compressed/utils.c
> create mode 100644 arch/x86/boot/compressed/vmlinux.symbols
This looks surprisingly lean overall.
Thanks,
Ingo