Message-ID: <CAGtprH_+bF4VZg2ps6CM8vjJVvShsvSGAvaLfTedts4cKqhSUw@mail.gmail.com>
Date: Wed, 10 May 2023 10:26:32 -0700
From: Vishal Annapurve <vannapurve@...gle.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: David Hildenbrand <david@...hat.com>,
Chao Peng <chao.p.peng@...ux.intel.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
"Maciej S . Szmigiero" <mail@...iej.szmigiero.name>,
Vlastimil Babka <vbabka@...e.cz>,
Yu Zhang <yu.c.zhang@...ux.intel.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
dhildenb@...hat.com, Quentin Perret <qperret@...gle.com>,
tabba@...gle.com, Michael Roth <michael.roth@....com>,
wei.w.wang@...el.com, Mike Rapoport <rppt@...nel.org>,
Liam Merwick <liam.merwick@...cle.com>,
Isaku Yamahata <isaku.yamahata@...il.com>,
Jarkko Sakkinen <jarkko@...nel.org>,
Ackerley Tng <ackerleytng@...gle.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Hugh Dickins <hughd@...gle.com>,
Christian Brauner <brauner@...nel.org>
Subject: Re: Rename restrictedmem => guardedmem? (was: Re: [PATCH v10 0/9]
KVM: mm: fd-based approach for supporting KVM)
On Fri, Apr 21, 2023 at 6:33 PM Sean Christopherson <seanjc@...gle.com> wrote:
>
> ...
> cold. I poked around a bit to see how we could avoid reinventing all of that
> infrastructure for fd-only memory, and the best idea I could come up with is
> basically a rehash of Kirill's very original "KVM protected memory" RFC[3], i.e.
> allow "mapping" fd-only memory, but ensure that memory is never actually present
> from hardware's perspective.
>
I am most likely missing a lot of context, and possibly venturing
into a direction that is infeasible or has already been shot down. But
I would still like to get this discussed before we move on.
I am wondering if it would make sense to have the
restricted_mem/guest_mem file expose both private and shared memory
regions, in line with Kirill's original proposal, now that the file
implementation is controlled by KVM.
Thinking from a userspace perspective (a rough sketch in code follows
the list below):
1) Userspace creates guest mem files and is able to mmap them, but all
accesses to these files result in faults, as no memory is allowed to
be mapped into userspace VMM pagetables.
2) Userspace registers the mmapped HVA ranges with KVM using the
additional KVM_MEM_PRIVATE flag.
3) Userspace converts memory attributes, and this conversion allows
userspace to access shared ranges of the file because those are
allowed to be faulted in from guest_mem. A shared-to-private
conversion unmaps the file ranges from userspace VMM pagetables.
4) The granularity of userspace pagetable mappings for shared ranges
will have to be dictated by the KVM guest_mem file implementation.
A caveat here concerns what happens once private pages are mapped into
the userspace view.
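
To make the flow above concrete, here is a rough sketch of what the
VMM side could look like. This assumes the UAPI from the series under
discussion (KVM_MEM_PRIVATE, KVM_SET_MEMORY_ATTRIBUTES,
KVM_MEMORY_ATTRIBUTE_PRIVATE, which are all still subject to change),
guest_mem_create() is a made-up stand-in for however the file ends up
being created, and error handling is omitted, so treat this as
illustrative only:

  #include <stdint.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <linux/kvm.h>

  #define GMEM_SIZE (1UL << 30)

  /* Hypothetical creation API; the actual interface is TBD. */
  extern int guest_mem_create(size_t size, unsigned int flags);

  static void *setup_guest_mem(int vm_fd)
  {
          /*
           * 1) Create the guest_mem file and mmap it.  The mmap()
           * itself succeeds, but no pages are installed in the VMM
           * pagetables yet, so any access to the range faults.
           */
          int gmem_fd = guest_mem_create(GMEM_SIZE, 0);
          void *hva = mmap(NULL, GMEM_SIZE, PROT_READ | PROT_WRITE,
                           MAP_SHARED, gmem_fd, 0);

          /* 2) Register the mmapped HVA range with KVM_MEM_PRIVATE. */
          struct kvm_userspace_memory_region region = {
                  .slot = 0,
                  .flags = KVM_MEM_PRIVATE,
                  .guest_phys_addr = 0,
                  .memory_size = GMEM_SIZE,
                  .userspace_addr = (uint64_t)hva,
          };
          ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);

          /*
           * 3) Convert the first half to shared by clearing the
           * PRIVATE attribute; only after this can the VMM fault
           * those pages in through the HVA.  Converting back to
           * private would unmap the range from the VMM pagetables.
           */
          struct kvm_memory_attributes attrs = {
                  .address = 0,
                  .size = GMEM_SIZE / 2,
                  .attributes = 0, /* !KVM_MEMORY_ATTRIBUTE_PRIVATE */
          };
          ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs);

          /*
           * The shared half is now accessible; the private half
           * still faults on access.
           */
          memset(hva, 0, GMEM_SIZE / 2);
          return hva;
  }
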
Benefits here:
1) The userspace view remains consistent while userspace is still able
to use HVA ranges.
2) It would be possible to use HVA-based APIs from userspace to do
things like binding (see the sketch after this list).
3) Double allocation wouldn't be a concern, since the HVA ranges and
GPA ranges can map to the same HPA ranges.
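
As an aside on (2), since the ranges stay HVA-backed, standard mm
syscalls should just work on the mmapped region. Assuming "binding"
above means NUMA policy, it could look like the following (hva and
GMEM_SIZE are from the sketch above; error handling again omitted):

  #include <numaif.h>  /* mbind(); provided by libnuma's headers */

  /* Bind the guest_mem-backed HVA range to NUMA node 0. */
  unsigned long nodemask = 1UL << 0;
  mbind(hva, GMEM_SIZE, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0);
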
>
> Code is available here if folks want to take a look before any kind of formal
> posting:
>
> https://github.com/sean-jc/linux.git x86/kvm_gmem_solo
>
> [1] https://lore.kernel.org/all/ff5c5b97-acdf-9745-ebe5-c6609dd6322e@google.com
> [2] https://lore.kernel.org/all/20230418-anfallen-irdisch-6993a61be10b@brauner
> [3] https://lore.kernel.org/linux-mm/20200522125214.31348-1-kirill.shutemov@linux.intel.com