Message-ID: <1d249326-e3dd-9c9d-7b53-2fffeb39bfb4@kernel.org>
Date: Fri, 16 Jun 2023 21:13:22 -0700
From: Andy Lutomirski <luto@...nel.org>
To: Kent Overstreet <kent.overstreet@...ux.dev>,
Kees Cook <keescook@...omium.org>
Cc: Johannes Thumshirn <Johannes.Thumshirn@....com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"linux-bcachefs@...r.kernel.org" <linux-bcachefs@...r.kernel.org>,
Kent Overstreet <kent.overstreet@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Uladzislau Rezki <urezki@...il.com>,
"hch@...radead.org" <hch@...radead.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-hardening@...r.kernel.org" <linux-hardening@...r.kernel.org>
Subject: Re: [PATCH 07/32] mm: Bring back vmalloc_exec
On 5/16/23 14:20, Kent Overstreet wrote:
> On Tue, May 16, 2023 at 02:02:11PM -0700, Kees Cook wrote:
>> For something that small, why not use the text_poke API?
>
> This looks like it's meant for patching existing kernel text, which
> isn't what I want - I'm generating new functions on the fly, one per
> btree node.
Dynamically generating code is a giant can of worms.
Kees touched on a basic security thing: a linear address mapped W+X is a big
no-no. And that's just scratching the surface -- ideally we would have a
strong protocol for generating code: the code is generated in some
extra-secure context, then it's made immutable and double-checked, then
it becomes live. (And we would offer this to userspace, some day.)
Just having a different address for the W and X aliases is pretty weak.
(When x86 modifies itself at boot or for static keys, it changes out the
page tables temporarily.)
And even beyond security, we have correctness. x86 is a fairly
forgiving architecture. If you go back in time about 20 years, modify
some code *at the same linear address at which you intend to execute
it*, and jump to it, it works. It may even work if you do it through
an alias (the manual is vague). But it's not 20 years ago, and you have
multiple cores. This does *not* work with multiple CPUs -- you need to
serialize on the CPU executing the modified code. On all but the
very newest CPUs, you need to kludge up the serialization, and that's
sloooooooooooooow. Very new CPUs have the SERIALIZE instruction, which
is merely sloooooow.
(The manual is terrible. It's clear that a way to do this without
serializing must exist, because that's what happens when code is paged
in from a user program.)
And remember that x86 is the forgiving architecture. Other
architectures have their own rules that may involve all kinds of
terrifying cache management. IIRC ARM (32-bit) is really quite nasty in
this regard. I've seen some references suggesting that RISC-V has a
broken design of its cache management and this is a real mess.
x86 low-level stuff on Linux gets away with it because the
implementation is conservative and very slow, and because it's very
rarely invoked.
eBPF gets away with it in ways that probably no one really likes, but
also no one expects eBPF to load programs particularly quickly.
You are proposing doing this when a btree node is loaded. You could
spend 20 *thousand* cycles, on *each CPU*, the first time you access
that node, not to mention the extra branch to decide whether you need to
spend those 20k cycles. Or you could use IPIs.
Or you could just not do this. I think you should just remove all this
dynamic codegen stuff, at least for now.
>
> I'm working up a new allocator - a (very simple) slab allocator where
> you pass a buffer, and it gives you a copy of that buffer mapped
> executable, but not writeable.
>
> It looks like we'll be able to convert bpf, kprobes, and ftrace
> trampolines to it; it'll consolidate a fair amount of code (particularly
> in bpf), and they won't have to burn a full page per allocation anymore.
>
> bpf has a neat trick where it maps the same page in two different
> locations, one is the executable location and the other is the writeable
> location - I'm stealing that.
>
> external api will be:
>
> void *jit_alloc(void *buf, size_t len, gfp_t gfp);
> void jit_free(void *buf);
> void jit_update(void *buf, void *new_code, size_t len); /* update an existing allocation */
Based on the above, I regret to inform you that jit_update() will either
need to sync all cores via IPI or all cores will need to check whether a
sync is needed and do it themselves.
That IPI could be, I dunno, 500k cycles? 1M cycles? Depends on what
cores are asleep at the time. (I have some old Sandy Bridge machines
where, if you tick all the boxes wrong, you might spend tens of
milliseconds doing this due to power savings gone wrong.) Or are you
planning to implement a fancy mostly-lockless thing to track which cores
actually need the IPI so you can avoid waking up sleeping cores?
Sorry to be a party pooper.
--Andy
P.S. I have given some thought to how to make a JIT API that was
actually (somewhat) performant. It's nontrivial, and it would involve
having at least phone calls and possibly actual meetings with people who
understand the microarchitecture of various CPUs to get all the details
hammered out and documented properly.
I don't think it would be efficient for teeny little functions like
bcachefs wants, but maybe? That would be even more complex and messy.