Message-ID: <489d1494-626c-40d9-89ec-4afc4cd0624b@redhat.com>
Date: Wed, 19 Jun 2024 14:16:11 +0200
From: David Hildenbrand <david@...hat.com>
To: Fuad Tabba <tabba@...gle.com>
Cc: John Hubbard <jhubbard@...dia.com>,
 Elliot Berman <quic_eberman@...cinc.com>,
 Andrew Morton <akpm@...ux-foundation.org>, Shuah Khan <shuah@...nel.org>,
 Matthew Wilcox <willy@...radead.org>, maz@...nel.org, kvm@...r.kernel.org,
 linux-arm-msm@...r.kernel.org, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org,
 pbonzini@...hat.com, Jason Gunthorpe <jgg@...dia.com>
Subject: Re: [PATCH RFC 0/5] mm/gup: Introduce exclusive GUP pinning

On 19.06.24 11:11, Fuad Tabba wrote:
> Hi John and David,
> 
> Thank you for your comments.
> 
> On Wed, Jun 19, 2024 at 8:38 AM David Hildenbrand <david@...hat.com> wrote:
>>
>> Hi,
>>
>> On 19.06.24 04:44, John Hubbard wrote:
>>> On 6/18/24 5:05 PM, Elliot Berman wrote:
>>>> In arm64 pKVM and QuIC's Gunyah protected VM model, we want to support
>>>> grabbing shmem user pages instead of using KVM's guestmemfd. These
>>>> hypervisors provide a different isolation model than the CoCo
>>>> implementations from x86. KVM's guest_memfd is focused on providing
>>>> memory that is more isolated than AVF requires. Some specific examples
>>>> include the ability to pre-load data onto guest-private pages, dynamically
>>>> sharing/isolating guest pages without copying, and (in the future) migrating
>>>> guest-private pages. Given those differences, and after a discussion in
>>>> [1] and at PUCK, we want to try to stick with existing shmem and extend
>>>> GUP to support the isolation needs of arm64 pKVM and Gunyah.
>>
>> The main question really is in which direction we want to, and can,
>> develop guest_memfd. At this point (after talking to Jason at LSF/MM), I
>> wonder if guest_memfd should be our new target for guest memory, both
>> shared and private. There are a bunch of issues to be sorted out, though ...
>>
>> As there is interest from Red Hat in supporting hugetlb-style huge
>> pages in confidential VMs for real-time workloads, and wasting memory is
>> not really desired, I'm going to think some more about some of the
>> challenges (shared+private in guest_memfd, mmap support, migration of
>> !shared folios, hugetlb-like support, in-place shared<->private
>> conversion, interaction with page pinning). Tricky.
>>
>> Ideally, we'd have one way to back guest memory for confidential VMs in
>> the future.
> 
> As you know, initially we went down the route of guest memory and
> invested a lot of time on it, including presenting our proposal at LPC
> last year. But there was resistance to expanding it to support more
> than what was initially envisioned, e.g., sharing guest memory in
> place, migration, and maybe even huge pages, and the implications of
> that, such as being able to conditionally mmap guest memory.

Yes, and I think we might have to revive that discussion, unfortunately. 
I started thinking about this, but did not reach a conclusion. Sharing 
my thoughts.

The minimum we might need in order to use guest_memfd (v1 or v2 ;) ) not
just for private memory is probably the following (a rough userspace
sketch follows the list):

(1) Have private + shared parts backed by guest_memfd. Either the same,
     or a fd pair.
(2) Allow mmap() of only the "shared" parts.
(3) Allow in-place conversion between "shared" and "private" parts.
(4) Allow migration of the "shared" parts.
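
To make (1)-(4) a bit more concrete, here is a rough userspace sketch of
what I have in mind. KVM_CREATE_GUEST_MEMFD and KVM_SET_MEMORY_ATTRIBUTES
exist today; mmap() of a guest_memfd and the in-place conversion behavior
sketched here do not and are pure assumptions:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

int main(void)
{
        int kvm_fd = open("/dev/kvm", O_RDWR);
        int vm_fd  = ioctl(kvm_fd, KVM_CREATE_VM, 0);

        /* (1) one guest_memfd backs both shared and private guest memory */
        struct kvm_create_guest_memfd gmem = { .size = 1ULL << 30 };
        int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);

        /* (2) map only a range the guest currently shares
         *     (assumption: mmap() of guest_memfd is not supported today) */
        void *shared = mmap(NULL, 2UL << 20, PROT_READ | PROT_WRITE,
                            MAP_SHARED, gmem_fd, 0);
        (void)shared;

        /* (3) in-place conversion: flip the attribute without copying;
         *     KVM_SET_MEMORY_ATTRIBUTES exists, but today it does not go
         *     along with an mmap'able, shared-capable guest_memfd */
        struct kvm_memory_attributes attr = {
                .address    = 0,                /* guest physical address */
                .size       = 2UL << 20,
                .attributes = KVM_MEMORY_ATTRIBUTE_PRIVATE,
        };
        ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attr);

        /* (4) migration of the "shared" parts would be transparent here */
        return 0;
}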

A) Convert shared -> private?
* Must not be GUP-pinned
* Must not be mapped
* Must not reside on ZONE_MOVABLE/MIGRATE_CMA
* (must rule out any other problematic folio references that could
    read/write memory, might be feasible for guest_memfd)

B) Convert private -> shared?
* Nothing to consider

C) Map something?
* Must not be private

For ordinary (small) pages, that might be feasible; a rough sketch of
the A) checks follows below. (Handling ZONE_MOVABLE/MIGRATE_CMA might be
feasible too, but maybe we could just not support them initially.)
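
Something like this is what I have in mind for the A) checks -- only a
sketch, the helper name is made up, and locking as well as the handling
of other folio references are ignored:

static int gmem_folio_can_make_private(struct folio *folio)
{
        /* Any GUP pin (short- or long-term) could still read/write it. */
        if (folio_maybe_dma_pinned(folio))
                return -EBUSY;

        /* Must already be unmapped from all user page tables. */
        if (folio_mapped(folio))
                return -EBUSY;

        /* Rule out ZONE_MOVABLE (and, similarly, MIGRATE_CMA) for now. */
        if (folio_zonenum(folio) == ZONE_MOVABLE)
                return -EINVAL;

        /*
         * Other problematic folio references (page cache lookups in
         * flight, other mappings of the file, ...) would still have to
         * be ruled out, e.g., by freezing the refcount.
         */
        return 0;
}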

The real fun begins once we want to support huge pages/large folios and 
can end up having a mixture of "private" and "shared" per huge page. But 
really, that's what we want in the end I think.

Unless we can teach the VM to not convert arbitrary physical memory 
ranges on a 4k basis to a mixture of private/shared ... but I've been 
told we don't want that. Hm.


There are two big problems with that, as far as I can see:

1) References/GUP-pins are per folio

What if some shared part of the folio is pinned but another shared part 
that we want to convert to private is not? Core-mm will not provide the 
answer to that: the folio may be pinned, that's it. *Disallowing* at 
least long-term GUP-pins might be an option.

To get stuff into an IOMMU, maybe a per-fd interface could work, and 
guest_memfd would itself track which parts are currently "handed out", 
and with which "semantics" (shared vs. private).

[IOMMU + private parts might require that either way? Because, if we 
disallow mmap, how else should that ever work with an IOMMU?]
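
Something like the following is what I mean by per-fd tracking -- all
names are made up, nothing like this exists today:

enum gmem_handout_type {
        GMEM_HANDOUT_SHARED,    /* CPU-mappable / IOMMU for shared I/O */
        GMEM_HANDOUT_PRIVATE,   /* handed to the hypervisor / trusted device */
};

struct gmem_handout_range {
        pgoff_t start;                  /* first file offset (in pages) */
        pgoff_t nr_pages;
        enum gmem_handout_type type;
        refcount_t users;               /* consumers currently holding this range */
};

/* Kept per fd, e.g., in an interval tree hanging off the inode, and
 * consulted before any shared <-> private conversion. */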

2) Tracking of mappings will likely soon be per folio.

page_mapped() / folio_mapped() only tell us if any part of the folio is 
mapped. Of course, what always works is unmapping the whole thing, or 
walking the rmap to detect if a specific part is currently mapped.
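
The "unmap the whole thing" fallback would be something like this
(helper name made up, assuming the conversion code has the file's
address_space at hand):

/* Before converting any part of a large folio, unmap the entire folio
 * from every mapping of the file, since "is this 4k part mapped?" has
 * no cheap per-subpage answer. */
static void gmem_unmap_whole_folio(struct address_space *mapping,
                                   struct folio *folio)
{
        unmap_mapping_range(mapping, folio_pos(folio), folio_size(folio), 0);
}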


Then, there is the problem of getting huge pages into guest_memfd (using 
hugetlb reserves, but not using hugetlb), but that should be solvable.


As raised in previous discussions, I think we should then allow the 
whole guest_memfd to be mapped, but simply SIGBUS/... when trying to 
access a private part. We would track private/shared internally, and 
track "handed out" pages to IOMMUs internally. FOLL_LONGTERM would be 
disallowed.
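
Roughly like this, with hypothetical helpers (gmem_offset_is_private(),
gmem_get_folio()) standing in for whatever internal tracking we pick:

static vm_fault_t gmem_fault(struct vm_fault *vmf)
{
        struct inode *inode = file_inode(vmf->vma->vm_file);
        struct folio *folio;

        /* Host touched a currently-private part: SIGBUS instead of crashing. */
        if (gmem_offset_is_private(inode, vmf->pgoff))
                return VM_FAULT_SIGBUS;

        folio = gmem_get_folio(inode, vmf->pgoff);  /* assumed to return it locked */
        if (IS_ERR(folio))
                return VM_FAULT_SIGBUS;

        vmf->page = folio_file_page(folio, vmf->pgoff);
        return VM_FAULT_LOCKED;
}

static const struct vm_operations_struct gmem_vm_ops = {
        .fault = gmem_fault,
};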

But that's only the high-level idea I had so far ... it likely ignores way 
too many details.

Is there broader interest in discussing that? Would there be value in 
setting up a meeting to finally make progress on it?

I recall there were quite a few details around memory renting or so on 
pKVM ... and I have to refresh my memory on that.

> 
> To be honest, personally (speaking only for myself, not necessarily
> for Elliot and not for anyone else in the pKVM team), I still would
> prefer to use guest_memfd(). I think that having one solution for
> confidential computing that rules them all would be best. But we do
> need to be able to share memory in place, have a plan for supporting
> huge pages in the near future, and migration in the not-too-distant
> future.

Yes, huge pages are also of interest for RH. And memory over-consumption 
due to partially used huge pages in private/shared memory is not 
desired.

> 
> We are currently shipping pKVM in Android as it is, warts and all.
> We're also working on upstreaming the rest of it. Currently, this is
> the main blocker for us to be able to upstream the rest (same probably
> applies to Gunyah).
> 
>> Can you comment on the bigger design goal here? In particular:
> 
> At a high level: We want to prevent a misbehaving host process from
> crashing the system when attempting to access (deliberately or
> accidentally) protected guest memory. As it currently stands in pKVM
> and Gunyah, the hypervisor does prevent the host from accessing
> (private) guest memory. In certain cases though, if the host attempts
> to access that memory and is prevented by the hypervisor (either out
> of ignorance or out of malice), the host kernel wouldn't be able to
> recover, causing the whole system to crash.
> 
> guest_memfd() prevents such accesses by not allowing confidential
> memory to be mapped at the host to begin with. This works fine for us,
> but there's the issue of being able to share memory in place, which
> implies mapping it conditionally (among the other issues I've mentioned).
> 
> The approach we're taking with this proposal is to instead restrict
> the pinning of protected memory. If the host kernel can't pin the
> memory, then a misbehaving process can't trick the host into accessing
> it.

Got it, thanks. So once we've pinned it, nobody else can pin it. But we can 
still map it?

> 
>>
>> 1) Who would get the exclusive PIN and for which reason? When would we
>>      pin, when would we unpin?
> 
> The exclusive pin would be acquired for private guest pages, in
> addition to a normal pin. It would be released when the private memory
> is released, or if the guest shares that memory.

Understood.

> 
>> 2) What would happen if there is already another PIN? Can we deal with
>>      speculative short-term PINs from GUP-fast that could introduce
>>      errors?
> 
> The exclusive pin would be rejected if there's any other pin
> (exclusive or normal). Normal pins would be rejected if there's an
> exclusive pin.

Makes sense, thanks.
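
Restating those semantics as code, just to make sure I get it --
illustration only, not what the RFC actually implements (it encodes
exclusivity in the folio refcount); the flag helpers are made up and the
obvious races are ignored:

static bool gmem_try_pin(struct folio *folio, unsigned int gup_flags)
{
        if (gup_flags & FOLL_EXCLUSIVE) {
                /* Exclusive pin: refuse if anybody already holds any pin. */
                if (folio_maybe_dma_pinned(folio))
                        return false;
                folio_set_exclusive_pinned(folio);      /* made-up flag helper */
        } else {
                /* Normal pin: refuse if an exclusive pin exists. */
                if (folio_test_exclusive_pinned(folio)) /* made-up flag helper */
                        return false;
        }
        return true;
}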

> 
>> 3) How can we be sure we don't need other long-term pins (IOMMUs?) in
>>      the future?
> 
> I can't :)

:)

> 
>> 4) Why are GUP pins special? How would one deal with other folio
>>      references (e.g., simply mmap'ing the shmem file into a different
>>      process)?
> 
> Other references would crash the userspace process, but the host
> kernel can handle them, and they shouldn't cause the system to crash.
> The way things are now in Android/pKVM, a userspace process can crash
> the system as a whole.

Okay, so very Android/pKVM specific :/

> 
>> 5) Why do you have to bother about anonymous pages at all (skimming over
>>      some patches), when you really only want to handle shmem differently?
> 
> I'm not sure I understand the question. We use anonymous memory for pKVM.
> 

"we want to support grabbing shmem user pages instead of using KVM's 
guestmemfd" indicated to me that you primarily care about shmem with 
FOLL_EXCLUSIVE?

>>>> To that
>>>> end, we introduce the concept of "exclusive GUP pinning", which enforces
>>>> that only one pin of any kind is allowed when the FOLL_EXCLUSIVE
>>>> flag is set. This behavior doesn't affect FOLL_GET or any other folio
>>>> refcount operations that don't go through the FOLL_PIN path.
>>
>> So, FOLL_EXCLUSIVE would fail if there already is a PIN, but
>> !FOLL_EXCLUSIVE would succeed even if there is a single PIN via
>> FOLL_EXCLUSIVE? Or would the single FOLL_EXCLUSIVE pin make other pins
>> that don't have FOLL_EXCLUSIVE set fail as well?
> 
> A FOLL_EXCLUSIVE would fail if there's any other pin. A normal pin
> (!FOLL_EXCLUSIVE) would fail if there's a FOLL_EXCLUSIVE pin. It's the
> PIN to end all pins!
> 
>>>>
>>>> [1]: https://lore.kernel.org/all/20240319143119.GA2736@willie-the-truck/
>>>>
>>>
>>> Hi!
>>>
>>> Looking through this, I feel that some intangible threshold of "this is
>>> too much overloading of page->_refcount" has been crossed. This is a very
>>> specific feature, and it is using approximately one more bit than is
>>> really actually "available"...
>>
>> Agreed.
> 
> We are gating it behind a CONFIG flag :)

;)

> 
> Also, since pinning already overloads the refcount, having the
> exclusive pin there helps in ensuring atomic accesses and avoiding
> races.
> 
>>>
>>> If we need a bit in struct page/folio, is this really the only way? Willy
>>> is working towards getting us an entirely separate folio->pincount, I
>>> suppose that might take too long? Or not?
>>
>> Before talking about how to implement it, I think we first have to learn
>> whether that approach is what we want at all, and how it fits into the
>> bigger picture of that use case.
>>
>>>
>>> This feels like force-fitting a very specific feature (KVM/CoCo handling
>>> of shmem pages) into a more general mechanism that is running low on
>>> bits (gup/pup).
>>
>> Agreed.
>>
>>>
>>> Maybe a good topic for LPC!
>>
>> The KVM track has plenty of guest_memfd topics; it might be a good fit
>> there (or in the MM track, of course).
> 
> We are planning on submitting a proposal for LPC (see you in Vienna!) :)

Great!

-- 
Cheers,

David / dhildenb

