Message-ID: <5232c8a7-8a05-9d0f-69ff-3dba2b04e784@redhat.com>
Date: Fri, 7 May 2021 09:35:45 +0200
From: David Hildenbrand <david@...hat.com>
To: Nick Kossifidis <mick@....forth.gr>, jejb@...ux.ibm.com
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Mike Rapoport <rppt@...nel.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
Andy Lutomirski <luto@...nel.org>,
Arnd Bergmann <arnd@...db.de>, Borislav Petkov <bp@...en8.de>,
Catalin Marinas <catalin.marinas@....com>,
Christopher Lameter <cl@...ux.com>,
Dan Williams <dan.j.williams@...el.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Elena Reshetova <elena.reshetova@...el.com>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
"Kirill A. Shutemov" <kirill@...temov.name>,
Matthew Wilcox <willy@...radead.org>,
Matthew Garrett <mjg59@...f.ucam.org>,
Mark Rutland <mark.rutland@....com>,
Michal Hocko <mhocko@...e.com>,
Mike Rapoport <rppt@...ux.ibm.com>,
Michael Kerrisk <mtk.manpages@...il.com>,
Palmer Dabbelt <palmer@...belt.com>,
Paul Walmsley <paul.walmsley@...ive.com>,
Peter Zijlstra <peterz@...radead.org>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Rick Edgecombe <rick.p.edgecombe@...el.com>,
Roman Gushchin <guro@...com>,
Shakeel Butt <shakeelb@...gle.com>,
Shuah Khan <shuah@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Tycho Andersen <tycho@...ho.ws>, Will Deacon <will@...nel.org>,
linux-api@...r.kernel.org, linux-arch@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org,
linux-nvdimm@...ts.01.org, linux-riscv@...ts.infradead.org,
x86@...nel.org
Subject: Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create
"secret" memory areas
On 07.05.21 01:16, Nick Kossifidis wrote:
> On 2021-05-06 20:05, James Bottomley wrote:
>> On Thu, 2021-05-06 at 18:45 +0200, David Hildenbrand wrote:
>>>
>>> Also, there is a way to still read that memory when root by
>>>
>>> 1. Having kdump active (which would often be the case, but maybe
>>> not configured to dump user pages)
>>> 2. Triggering a kernel crash (easy via proc as root)
>>> 3. Waiting for the reboot after kdump created the dump and then
>>> reading the content from disk.
>>
>> Anything that can leave physical memory intact but boot to a kernel
>> where the missing direct map entry is restored could theoretically
>> extract the secret. However, it's not exactly going to be a stealthy
>> extraction ...
>>
>>> Or, as an attacker, load a custom kexec() kernel and read memory
>>> from the new environment. Of course, the latter two are advanced
>>> mechanisms, but they are possible when root. We might be able to
>>> mitigate, for example, by zeroing out secretmem pages before booting
>>> into the kexec kernel, if we care :)
>>
>> I think we could handle it by marking the region, yes, and a zero on
>> shutdown might be useful ... it would prevent all warm reboot type
>> attacks.
>>
>
> I had similar concerns about recovering secrets with kdump, and
> considered cleaning up keyrings before jumping to the new kernel. The
> problem is we can't provide guarantees in that case; once the kernel
> has crashed and we are on our way to run the crash kernel, we can't
> be sure we can reliably zero out anything, and the more code we add
> to that path the

Well, I think it depends. Assume we do the following:

1) Zero out any secretmem pages when handing them back to the buddy
(alternative: init_on_free=1) -- if that's not already done; I didn't
check the code.
2) On kdump, zero out all allocated secretmem. It'd be easier if we'd
just allocated from a fixed physical memory area; otherwise we have to
walk process page tables or use a PFN walker (rough sketch below). And
zeroing out secretmem pages without a direct mapping is a challenge of
its own.
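
Something like this, completely hypothetical: PageSecretmem() is made
up here as a stand-in for whatever marker we'd use to identify such
pages via the memmap, and the direct-map helpers are the ones the
series already uses:

/*
 * Hypothetical sketch: walk the memmap and zero all secretmem pages
 * before jumping into the kdump kernel. PageSecretmem() does not
 * exist; the series would have to provide some such marker.
 */
static void crash_wipe_secretmem(void)
{
	unsigned long pfn;

	for (pfn = 0; pfn < max_pfn; pfn++) {
		struct page *page;

		if (!pfn_valid(pfn))
			continue;
		page = pfn_to_page(pfn);
		if (!PageSecretmem(page))	/* hypothetical marker */
			continue;
		/*
		 * Secretmem pages are not in the direct map: restore
		 * the mapping, clear the page, remove the mapping
		 * again.
		 */
		set_direct_map_default_noflush(page);
		clear_highpage(page);
		set_direct_map_invalid_noflush(page);
	}
}
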
Now, during 2) it can happen that
a) We crash in our clearing code (e.g., something is seriously messed
up) and fail to start the kdump kernel. That's actually good: instead
of leaking data, we fail hard.
b) We don't find all secretmem pages, for example because the process
page tables are messed up, or because something corrupted the memmap
we'd use to identify secretmem pages via a PFN walker.

But for the simple cases (e.g., malicious root trying to crash the
kernel via /proc/sysrq-trigger), neither a) nor b) would apply.
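
As a reminder, that "easy via proc" crash is literally a single write
(trivial sketch; needs root, and sysrq must be enabled):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/proc/sysrq-trigger", O_WRONLY);

	if (fd < 0)
		return 1;
	/*
	 * 'c' triggers an immediate kernel crash and thereby kdump;
	 * on success this write never returns.
	 */
	write(fd, "c", 1);
	return 0;
}
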
Obviously, an admin who wanted to mitigate right now would disable
kdump completely, such that any attempt to load a crashkernel fails
and it cannot be re-enabled for that kernel (also not via a cmdline an
attacker could modify to reboot into a system with a crashkernel
configured). Disabling kdump in the kernel whenever secretmem pages
are allocated is one approach, although sub-optimal.
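
FWIW, there is already a one-way knob pointing in that direction: the
kernel.kexec_load_disabled sysctl can be set but never cleared again
for the running kernel, after which kexec_load()/kexec_file_load()
fail with -EPERM. Minimal sketch:

#include <stdio.h>

int main(void)
{
	/* One-way switch: once set, it stays set until reboot. */
	FILE *f = fopen("/proc/sys/kernel/kexec_load_disabled", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	fputs("1\n", f);
	return fclose(f) ? 1 : 0;
}

That only prevents loading new kexec/crash kernels, though; an already
loaded crashkernel image stays around.
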
> more risky it gets. However during reboot/normal kexec() we should do
> some cleanup, it makes sense and secretmem can indeed be useful in that
> case. Regarding loading custom kexec() kernels, we mitigate this with
> the kexec file-based API where we can verify the signature of the loaded
> kimage (assuming the system runs a kernel provided by a trusted 3rd
> party and we've maintained a chain of trust since booting).

For example, in VMs (like QEMU) we often don't clear physical memory
during a reboot. So if an attacker manages to load a kernel that can
be tricked into reading arbitrary physical memory areas, we can leak
secretmem data, I think.

And there might be ways to achieve that just using the cmdline, without
loading a different kernel. For example, if you limit the kernel
footprint ("mem=256M") and relax the strict iomem checks
("iomem=relaxed"), you can just extract the remaining memory via
/dev/mem, if I am not wrong.

So as an attacker, modify the (grub) cmdline to "mem=256M
iomem=relaxed", reboot, and read all memory above the limit via
/dev/mem. Or load a signed kexec kernel with that cmdline and boot
into it (sketch below).
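
Reading the leftover memory is then trivial; illustrative sketch only
(offsets made up; assumes "mem=256M", so everything from 256M upwards
is no longer treated as system RAM):

#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
	unsigned char buf[4096];
	off_t phys = 0x10000000;	/* 256M, just above the limit */
	int fd = open("/dev/mem", O_RDONLY);

	if (fd < 0) {
		perror("open /dev/mem");
		return 1;
	}
	if (pread(fd, buf, sizeof(buf), phys) != sizeof(buf)) {
		perror("pread");
		close(fd);
		return 1;
	}
	/* ... scan buf for secrets, advance phys, repeat ... */
	close(fd);
	return 0;
}
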
Interesting problem :)
--
Thanks,
David / dhildenb