Message-ID: <8118eabf-de9c-41a7-9892-3a1a5bd2071c@redhat.com>
Date: Tue, 2 Apr 2024 12:43:21 +0200
From: David Hildenbrand <david@...hat.com>
To: York Jasper Niebuhr <yjnworkstation@...il.com>, akpm@...ux-foundation.org
Cc: linux-kernel@...r.kernel.org, willy@...radead.org, linux-mm@...ck.org
Subject: Re: [PATCH] mm: init_mlocked_on_free_v3
On 29.03.24 15:56, York Jasper Niebuhr wrote:
> Implements the "init_mlocked_on_free" boot option. When this boot option
> is enabled, any mlock'ed pages are zeroed on free. If
> the pages are munlock'ed beforehand, no initialization takes place.
> This boot option is meant to combat the performance hit of
> "init_on_free" as reported in commit 6471384af2a6 ("mm: security:
> introduce init_on_alloc=1 and init_on_free=1 boot options"). With
> "init_mlocked_on_free=1" only relevant data is freed while everything
> else is left untouched by the kernel. Correspondingly, this patch
> introduces no performance hit for unmapping non-mlock'ed memory. The
> unmapping overhead for purely mlocked memory was measured to be
> approximately 13%. Realistically, most systems mlock only a fraction of
> the total memory so the real-world system overhead should be close to
> zero.
>
> Optimally, userspace programs clear any key material or other
> confidential memory before exit and munlock the corresponding memory
> regions. If a program crashes, a userspace key manager cannot do this
> job; accordingly, no munlock operations are performed and the data is
> caught and zeroed by the kernel. Should the program not crash, all
> memory will ideally be munlocked, so no overhead is caused.
>
> CONFIG_INIT_MLOCKED_ON_FREE_DEFAULT_ON can be set to enable
> "init_mlocked_on_free" by default.
>
> Signed-off-by: York Jasper Niebuhr <yjnworkstation@...il.com>
I'm not convinced that this is the right approach.
You seem to be focused on "don't leak secrets stored in user space
somewhere else". That also assumes no other users on such systems
use mlock() for a different purpose, where the additional clearing
would just be overhead.
In general, I'm not a fan of any such kernel cmdline options. Really, we
want to handle memory that stores secrets always in a sane way.
Note that in the meantime, we do have secretmem for that purpose; its
primary use case -- in contrast to mlock -- is to store secrets.
I know that "teach user space to use secretmem" is not a good answer, but
further emphasizing "mlock means storing secrets" feels wrong.
Also note that your patch won't handle all cases: mlocked folios can be
migrated in memory, and there is no code that zeroes the source page
when it is freed during migration, so you could still leak data at
least there ...
(I was briefly thinking about a VMA option, independent of mlock, to
achieve that. But likely just using secretmem might be the better
long-term approach)
--
Cheers,
David / dhildenb