Message-ID: <bf8e96be-c6e7-40c9-a914-cd022d1fd056@redhat.com>
Date: Thu, 20 Jun 2024 20:56:20 +0200
From: David Hildenbrand <david@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Jason Gunthorpe <jgg@...dia.com>, Fuad Tabba <tabba@...gle.com>,
 Christoph Hellwig <hch@...radead.org>, John Hubbard <jhubbard@...dia.com>,
 Elliot Berman <quic_eberman@...cinc.com>,
 Andrew Morton <akpm@...ux-foundation.org>, Shuah Khan <shuah@...nel.org>,
 Matthew Wilcox <willy@...radead.org>, maz@...nel.org, kvm@...r.kernel.org,
 linux-arm-msm@...r.kernel.org, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org,
 pbonzini@...hat.com
Subject: Re: [PATCH RFC 0/5] mm/gup: Introduce exclusive GUP pinning

On 20.06.24 18:04, Sean Christopherson wrote:
> On Thu, Jun 20, 2024, David Hildenbrand wrote:
>> On 20.06.24 16:29, Jason Gunthorpe wrote:
>>> On Thu, Jun 20, 2024 at 04:01:08PM +0200, David Hildenbrand wrote:
>>>> On 20.06.24 15:55, Jason Gunthorpe wrote:
>>>>> On Thu, Jun 20, 2024 at 09:32:11AM +0100, Fuad Tabba wrote:
>>>> Regarding huge pages: assume the huge page (e.g., 1 GiB hugetlb) is shared;
>>>> now the VM requests to make one subpage private.
>>>
>>> I think the general CC model has the shared/private setup early in
>>> the VM lifecycle with large runs of contiguous pages. It would only
>>> become a problem if you intend to do high-rate, fine-granular
>>> shared/private switching. Which is why I am asking what the actual
>>> "why" is here.
>>
>> I am not an expert on that, but I remember that the way memory
>> shared<->private conversion happens can heavily depend on the VM use case,
> 
> Yeah, I forget the details, but there are scenarios where the guest will share
> (and unshare) memory at 4KiB (give or take) granularity, at runtime.  There's an
> RFC[*] for making SWIOTLB operate at 2MiB that is driven by the same
> underlying problems.
> 
> But even if Linux-as-a-guest were better behaved, we (the host) can't prevent the
> guest from doing suboptimal conversions.  In practice, killing the guest or
> refusing to convert memory isn't an option, i.e. we can't completely push the
> problem into the guest.

Agreed!

> 
> https://lore.kernel.org/all/20240112055251.36101-1-vannapurve@google.com
> 
>> and that under pKVM we might see more frequent conversion, without even
>> going to user space.
>>
>>>
>>>> How to handle that without eventually running into a double
>>>> memory-allocation? (in the worst case, allocating a 1GiB huge page
>>>> for shared and for private memory).
>>>
>>> I expect you'd take the linear range of 1G of PFNs and fragment it
>>> into three ranges private/shared/private that span the same 1G.
>>>
>>> When you construct a page table (i.e. an S2) that holds these three
>>> ranges and has permission to access all the memory, you want the page
>>> table to automatically join them back together into a 1GB entry.
>>>
>>> When you construct a page table that has access only to the shared
>>> parts, then you'd only install the shared hole at its natural best size.
>>>
>>> So, I think there are two challenges: how to build an allocator and
>>> uAPI to manage this sort of stuff so you can keep track of any
>>> fractured PFNs and ensure things remain in physical order.
>>>
>>> Then how to re-consolidate this for the KVM side of the world.
>>
>> Exactly!
>>
>>>
>>> guest_memfd, or something like it, is just really a good answer. You
>>> have it obtain the huge folio, and keep track on its own of which
>>> subpages can be mapped to a VMA because they are shared. KVM will obtain
>>> the PFNs directly from the fd and will not see the shared
>>> holes. This means your S2s can be trivially constructed correctly.
>>>
>>> No need to double allocate..
>>
>> Yes, that's why my thinking so far was:
>>
>> Let guest_memfd (or something like that) consume huge pages (somehow, let it
>> access the hugetlb reserves). Preallocate that memory once, as the VM starts
>> up: just like we do with hugetlb in VMs.
>>
>> Let KVM track which parts are shared/private, and if required, let it map
>> only the shared parts to user space. KVM has all information to make these
>> decisions.
>>
>> If we could disallow pinning any shared pages, that would make life a lot
>> easier, but I think there were reasons why we might require it. To
>> convert shared->private, simply unmap that folio (only the shared parts
>> could possibly be mapped) from all user page tables.
>>
>> Of course, there might be alternatives, and I'll be happy to learn about
>> them. The allocator part would be fairly easy, and the uAPI part would
>> be comparably easy. So much for the theory :)
>>
>>>
>>> I'm kind of surprised the CC folks don't want the same thing for
>>> exactly the same reason. It is much easier to recover the huge
>>> mappings for the S2 in the presence of shared holes if you track it
>>> this way. Even CC will have this problem, to some degree, too.
>>
>> Precisely! RH (and therefore I) is primarily interested in existing
>> guest_memfd users at this point ("CC"), and I don't see an easy way to get
>> that running reasonably well with huge pages in the existing model ...
> 
> This is the general direction guest_memfd is headed, but getting there is easier
> said than done.  E.g. as alluded to above, "simply unmap that folio" is quite
> difficult, bordering on infeasible if the kernel is allowed to gup() shared
> guest_memfd memory.

Right. I think the ways forward are the ones stated in my mail to Jason:
disallow long-term GUP, or expose the huge page as unmovable small folios
to core-mm.

Maybe there are other alternatives, but it all feels like we want the MM
to track at small-page granularity, but map into the KVM/IOMMU page
tables with large pages.
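
To make that "track small, map large" idea a bit more concrete, here is a
quick userspace-only sketch (a toy model, not kernel code; struct and
function names are all made up): one bit per 4 KiB subpage of a 1 GiB
folio, a conversion helper that flips a range between shared and private
(in the real thing, shared->private would additionally require that the
affected subpages are unmapped from all user page tables and are not
pinned), and a query that reports the largest uniformly shared/private
block around a subpage, i.e. the largest entry a stage-2 or IOMMU page
table could install there.

/*
 * Toy sketch, not kernel code -- all names are invented.
 *
 * A 1 GiB "folio" is tracked as 262144 4 KiB subpages in a bitmap
 * (bit set = shared, bit clear = private).  mapping_order() reports
 * the largest naturally aligned block (4 KiB / 2 MiB / 1 GiB) around
 * a given subpage whose state is uniform, i.e. the largest mapping a
 * stage-2/IOMMU page table could install for it.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define PMD_ORDER       9                       /* 512 * 4 KiB = 2 MiB    */
#define PUD_ORDER       18                      /* 262144 * 4 KiB = 1 GiB */
#define NR_SUBPAGES     (1UL << PUD_ORDER)
#define BITS_PER_LONG   (8 * sizeof(unsigned long))

struct gmem_folio {
        unsigned long shared[NR_SUBPAGES / BITS_PER_LONG];
};

static bool subpage_shared(const struct gmem_folio *f, unsigned long idx)
{
        return (f->shared[idx / BITS_PER_LONG] >> (idx % BITS_PER_LONG)) & 1;
}

/* Mark @nr subpages starting at @first as shared or private.  In the real
 * thing, shared->private would additionally require that the affected
 * subpages are unmapped from all user page tables and are not pinned. */
static void convert_range(struct gmem_folio *f, unsigned long first,
                          unsigned long nr, bool shared)
{
        for (unsigned long i = first; i < first + nr; i++) {
                unsigned long bit = 1UL << (i % BITS_PER_LONG);

                if (shared)
                        f->shared[i / BITS_PER_LONG] |= bit;
                else
                        f->shared[i / BITS_PER_LONG] &= ~bit;
        }
}

/* Largest aligned block around @idx with a uniform shared/private state,
 * returned as an order: 0 (4 KiB), PMD_ORDER (2 MiB) or PUD_ORDER (1 GiB). */
static unsigned int mapping_order(const struct gmem_folio *f,
                                  unsigned long idx)
{
        const unsigned int orders[] = { PUD_ORDER, PMD_ORDER };
        bool state = subpage_shared(f, idx);

        for (unsigned int k = 0; k < 2; k++) {
                unsigned long start = idx & ~((1UL << orders[k]) - 1);
                unsigned long i;

                for (i = start; i < start + (1UL << orders[k]); i++)
                        if (subpage_shared(f, i) != state)
                                break;
                if (i == start + (1UL << orders[k]))
                        return orders[k];
        }
        return 0;
}

int main(void)
{
        static struct gmem_folio f;

        /* The whole 1 GiB folio starts out shared ... */
        memset(&f, 0xff, sizeof(f));
        printf("order at 0:    %u\n", mapping_order(&f, 0));    /* 18 -> 1 GiB */

        /* ... then the guest converts one 4 KiB subpage to private. */
        convert_range(&f, 5000, 1, false);
        printf("order at 5000: %u\n", mapping_order(&f, 5000)); /* 0 -> 4 KiB  */
        printf("order at 0:    %u\n", mapping_order(&f, 0));    /* 9 -> 2 MiB  */

        return 0;
}

The real tracking would of course have to live in guest_memfd and be
serialized against unmap/unpin; the sketch only shows the bookkeeping.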

-- 
Cheers,

David / dhildenb

