Message-ID: <87h83s62mi.fsf@mid.deneb.enyo.de>
Date: Mon, 28 Oct 2019 21:23:17 +0100
From: Florian Weimer <fw@...eb.enyo.de>
To: Mike Rapoport <rppt@...nel.org>
Cc: linux-kernel@...r.kernel.org,
Alexey Dobriyan <adobriyan@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Andy Lutomirski <luto@...nel.org>,
Arnd Bergmann <arnd@...db.de>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
James Bottomley <jejb@...ux.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, linux-api@...r.kernel.org,
linux-mm@...ck.org, x86@...nel.org,
Mike Rapoport <rppt@...ux.ibm.com>
Subject: Re: [PATCH RFC] mm: add MAP_EXCLUSIVE to create exclusive user mappings

* Mike Rapoport:
> On October 27, 2019 12:30:21 PM GMT+02:00, Florian Weimer
> <fw@...eb.enyo.de> wrote:
>>* Mike Rapoport:
>>
>>> The patch below aims to allow applications to create mappings that
>>> have pages visible only to the owning process. Such mappings could
>>> be used to store secrets so that these secrets are visible neither
>>> to other processes nor to the kernel.
>>
>>How is this expected to interact with CRIU?
>
> CRIU dumps the memory contents using a parasite code from inside the
> dumpee address space, so it would work the same way as for the other
> mappings. Of course, at the restore time the exclusive mapping should
> be recreated with the appropriate flags.

Hmm, so it would use a bounce buffer to perform the extraction?
>>> I've only tested the basic functionality, the changes should be
>>> verified against THP/migration/compaction. Yet, I'd appreciate
>>> early feedback.
>>
>>What are the expected semantics for VM migration? Should it fail?
>
> I don't quite follow. If qemu would use such mappings it would be able
> to transfer them during live migration.

I was wondering if the special state is supposed to bubble up to the
host eventually.