Message-ID: <253965df-6d80-bbfd-ab01-f9e69b274bf3@quicinc.com>
Date:   Mon, 28 Aug 2023 19:53:26 -0700
From:   Elliot Berman <quic_eberman@...cinc.com>
To:     Ackerley Tng <ackerleytng@...gle.com>,
        Sean Christopherson <seanjc@...gle.com>
CC:     <pbonzini@...hat.com>, <maz@...nel.org>, <oliver.upton@...ux.dev>,
        <chenhuacai@...nel.org>, <mpe@...erman.id.au>,
        <anup@...infault.org>, <paul.walmsley@...ive.com>,
        <palmer@...belt.com>, <aou@...s.berkeley.edu>,
        <willy@...radead.org>, <akpm@...ux-foundation.org>,
        <paul@...l-moore.com>, <jmorris@...ei.org>, <serge@...lyn.com>,
        <kvm@...r.kernel.org>, <linux-arm-kernel@...ts.infradead.org>,
        <kvmarm@...ts.linux.dev>, <linux-mips@...r.kernel.org>,
        <linuxppc-dev@...ts.ozlabs.org>, <kvm-riscv@...ts.infradead.org>,
        <linux-riscv@...ts.infradead.org>, <linux-fsdevel@...r.kernel.org>,
        <linux-mm@...ck.org>, <linux-security-module@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>, <chao.p.peng@...ux.intel.com>,
        <tabba@...gle.com>, <jarkko@...nel.org>,
        <yu.c.zhang@...ux.intel.com>, <vannapurve@...gle.com>,
        <mail@...iej.szmigiero.name>, <vbabka@...e.cz>, <david@...hat.com>,
        <qperret@...gle.com>, <michael.roth@....com>,
        <wei.w.wang@...el.com>, <liam.merwick@...cle.com>,
        <isaku.yamahata@...il.com>, <kirill.shutemov@...ux.intel.com>
Subject: Re: [RFC PATCH v11 12/29] KVM: Add KVM_CREATE_GUEST_MEMFD ioctl() for
 guest-specific backing memory



On 8/28/2023 3:56 PM, Ackerley Tng wrote:
 > 1. Since the physical memory's representation is the inode and should be
 >     coupled to the virtual machine (as a concept, not struct kvm), should
 >     the binding/coupling be with the file, or the inode?
 >

I've been working on Gunyah's implementation in parallel (not yet posted 
anywhere). Thus far, I've coupled the virtual machine struct to the 
struct file so that I can increment the file refcount when mapping the 
gmem to the virtual machine.
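
Roughly, as a sketch (the struct, field, and function names below are
placeholders for illustration, not what I'll actually post):

#include <linux/file.h>
#include <linux/fs.h>

struct gunyah_vm {
	struct file *gmem_file;		/* guest memory backing this VM */
	/* ... */
};

/* Bind a gmem file to the VM by taking a reference on the struct file,
 * so the file (and thus the inode and its pages) stays alive while the
 * VM has the memory mapped.
 */
static int gunyah_gmem_bind(struct gunyah_vm *ghvm, struct file *file)
{
	get_file(file);			/* hold the file for the VM's lifetime */
	ghvm->gmem_file = file;
	return 0;
}

static void gunyah_gmem_unbind(struct gunyah_vm *ghvm)
{
	if (ghvm->gmem_file) {
		fput(ghvm->gmem_file);	/* drop our reference at teardown */
		ghvm->gmem_file = NULL;
	}
}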

 > 2. Should struct kvm still be bound to the file/inode at gmem file
 >     creation time, since
 >
 >     + struct kvm isn't a good representation of a "virtual machine"
 >     + we currently don't have anything that really represents a "virtual
 >       machine" without hardware support
 >
 >
 > I'd also like to bring up another userspace use case that Google has:
 > re-use of gmem files for rebooting guests when the KVM instance is
 > destroyed and rebuilt.
 >
 > When rebooting a VM there are some steps relating to gmem that are
 > performance-sensitive:
 >
 > a. Zeroing pages from the old VM when we close a gmem file/inode
 > b. Deallocating pages from the old VM when we close a gmem file/inode
 > c. Allocating pages for the new VM from the new gmem file/inode
 > d. Zeroing pages on page allocation
 >
 > We want to reuse the gmem file to save re-allocating pages (b. and c.),
 > and one of the two page zeroing steps (a. or d.).
 >
 > Binding the gmem file to a struct kvm at creation time means the gmem
 > file can't be reused with another VM on reboot. Also, host userspace is
 > forced to close the gmem file to allow the old VM to be freed.
 >
 > For other places where files pin KVM, like the stats fd pinning vCPUs, I
 > guess that matters less since there isn't much of a penalty to close and
 > re-open the stats fd.
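
For concreteness, the reuse flow being asked for would look roughly like
this from userspace (sketch only; struct and field names only approximate
the series, and the memslot binding step is elided):

/* Boot #1: create a VM and its guest-private memory */
int vm1_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);
struct kvm_create_guest_memfd args = { .size = guest_mem_size };
int gmem_fd = ioctl(vm1_fd, KVM_CREATE_GUEST_MEMFD, &args);
/* ... bind gmem_fd into vm1's memslots and run the guest ... */

/* Reboot: close the old VM fd.  With the binding done at gmem creation
 * time, the gmem file pins the old VM, so it can't be freed until
 * gmem_fd is closed as well -- which defeats the reuse below.
 */
close(vm1_fd);

/* Boot #2: ideally the same gmem_fd, with its pages still allocated,
 * could be bound into a new VM, skipping the dealloc/realloc pair and
 * one of the two zeroing passes.
 */
int vm2_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);
/* ... bind the existing gmem_fd into vm2's memslots ... */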

I had a 3rd question that's related to how to wire the gmem up to a 
virtual machine:

I learned of a use case for implementing copy-on-write for gmem. The premise 
would be to have a "golden copy" of the memory that multiple virtual 
machines can map in read-only. If a virtual machine tries to write to one of 
those pages, the page gets copied to a VM-specific page that isn't shared 
with other VMs. How do we track those pages?

Thanks,
Elliot
