Date:   Thu, 9 Jun 2022 17:11:21 -0700
From:   Marc Orr <marcorr@...gle.com>
To:     Chao Peng <chao.p.peng@...ux.intel.com>
Cc:     Vishal Annapurve <vannapurve@...gle.com>,
        kvm list <kvm@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org,
        linux-fsdevel@...r.kernel.org, linux-api@...r.kernel.org,
        linux-doc@...r.kernel.org, qemu-devel@...gnu.org,
        Paolo Bonzini <pbonzini@...hat.com>,
        Jonathan Corbet <corbet@....net>,
        Sean Christopherson <seanjc@...gle.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        x86 <x86@...nel.org>, "H . Peter Anvin" <hpa@...or.com>,
        Hugh Dickins <hughd@...gle.com>,
        Jeff Layton <jlayton@...nel.org>,
        "J . Bruce Fields" <bfields@...ldses.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Mike Rapoport <rppt@...nel.org>,
        Steven Price <steven.price@....com>,
        "Maciej S . Szmigiero" <mail@...iej.szmigiero.name>,
        Vlastimil Babka <vbabka@...e.cz>,
        Yu Zhang <yu.c.zhang@...ux.intel.com>,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
        Andy Lutomirski <luto@...nel.org>,
        Jun Nakajima <jun.nakajima@...el.com>,
        Dave Hansen <dave.hansen@...el.com>,
        Andi Kleen <ak@...ux.intel.com>,
        David Hildenbrand <david@...hat.com>, aarcange@...hat.com,
        ddutile@...hat.com, dhildenb@...hat.com,
        Quentin Perret <qperret@...gle.com>,
        Michael Roth <michael.roth@....com>, mhocko@...e.com
Subject: Re: [PATCH v6 0/8] KVM: mm: fd-based approach for supporting KVM
 guest private memory

On Tue, Jun 7, 2022 at 7:22 PM Chao Peng <chao.p.peng@...ux.intel.com> wrote:
>
> On Tue, Jun 07, 2022 at 05:55:46PM -0700, Marc Orr wrote:
> > On Tue, Jun 7, 2022 at 12:01 AM Chao Peng <chao.p.peng@...ux.intel.com> wrote:
> > >
> > > On Mon, Jun 06, 2022 at 01:09:50PM -0700, Vishal Annapurve wrote:
> > > > >
> > > > > Private memory map/unmap and conversion
> > > > > ---------------------------------------
> > > > > Userspace's map/unmap operations are done via the fallocate()
> > > > > syscall on the backing store fd.
> > > > >   - map: default fallocate() with mode=0.
> > > > >   - unmap: fallocate() with FALLOC_FL_PUNCH_HOLE.
> > > > > The map/unmap will trigger the memfile_notifier_ops above to let
> > > > > KVM map/unmap the secondary MMU page tables.
> > > > >
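
(For concreteness, here is a minimal userspace sketch of the map/unmap
calls described above. It assumes the private-memory fd has already been
created through the backing store; the helper names are illustrative and
not part of the series.)

#define _GNU_SOURCE
#include <fcntl.h>      /* fallocate(), FALLOC_FL_* */
#include <sys/types.h>

/* "map": a plain fallocate() with mode = 0 backs the range with pages. */
static int private_mem_map(int priv_fd, off_t offset, off_t len)
{
	return fallocate(priv_fd, 0, offset, len);
}

/*
 * "unmap": punching a hole releases the backing pages again
 * (FALLOC_FL_PUNCH_HOLE must be combined with FALLOC_FL_KEEP_SIZE).
 */
static int private_mem_unmap(int priv_fd, off_t offset, off_t len)
{
	return fallocate(priv_fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			 offset, len);
}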
> > > > ....
> > > > >    QEMU: https://github.com/chao-p/qemu/tree/privmem-v6
> > > > >
> > > > > An example QEMU command line for TDX test:
> > > > > -object tdx-guest,id=tdx \
> > > > > -object memory-backend-memfd-private,id=ram1,size=2G \
> > > > > -machine q35,kvm-type=tdx,pic=no,kernel_irqchip=split,memory-encryption=tdx,memory-backend=ram1
> > > > >
> > > >
> > > > There should be more discussion around double-allocation scenarios
> > > > when using the private-fd approach. A malicious guest or a buggy
> > > > userspace VMM can cause physical memory to be allocated for both the
> > > > shared (host-accessible) and private fds backing the guest memory.
> > > > The userspace VMM will need to unback the shared guest memory while
> > > > handling the conversion from shared to private in order to prevent
> > > > double allocation, even with malicious guests or bugs in the
> > > > userspace VMM.
> > >
> > > I don't know how a malicious guest can cause that. The initial design
> > > of this series is to put the private/shared memory into two different
> > > address spaces and give the userspace VMM the flexibility to convert
> > > between the two. It can choose to respect the guest conversion request
> > > or not.
> >
> > For example, the guest could maliciously give a device driver a
> > private page, so that a host-side virtual device blindly writes to
> > the private page.
>
> With this patch series, it's actually not even possible for the userspace
> VMM to allocate a private page by a direct write; the memory is basically
> unmapped from userspace. If it really wants to, it has to do something
> special, intentionally; that is basically the conversion, which we should
> allow.

I think Vishal did a better job of explaining this scenario in his last
reply than I did.
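
(To make the double-allocation concern concrete: a rough sketch of a
shared->private conversion path in the VMM that unbacks the shared memory,
assuming the shared portion of guest memory is a regular memfd and the
private portion is the new fd. The function and its arguments are
hypothetical, not code from this series.)

#define _GNU_SOURCE
#include <fcntl.h>      /* fallocate(), FALLOC_FL_* */
#include <sys/types.h>

/*
 * Hypothetical conversion handler: dropping the shared backing before
 * populating the private fd is what keeps a malicious guest or a buggy
 * VMM from ending up with the same range backed twice.
 */
static int convert_to_private(int shared_memfd, int priv_fd,
			      off_t offset, off_t len)
{
	/* Unback the shared view of this range. */
	if (fallocate(shared_memfd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		      offset, len))
		return -1;

	/* Back the same range through the private fd. */
	return fallocate(priv_fd, 0, offset, len);
}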

> > > It's possible for a userspace VMM to cause double allocation if it
> > > fails to call the unback operation during the conversion; this may or
> > > may not be a bug. Double allocation is not necessarily wrong, even
> > > conceptually. At least TDX allows you to use half shared, half private
> > > memory in a guest, meaning both shared and private can be in effect at
> > > the same time. Unbacking the memory is just the current QEMU
> > > implementation choice.
> >
> > Right. But the idea is that this patch series should accommodate all
> > of the CVM architectures. Or at least that's what I know was
> > envisioned last time we discussed this topic for SNP [*].
>
> AFAICS, this series should work for both TDX and SNP, as well as other
> CVM architectures. I don't see where TDX can work but SNP cannot; or did
> I miss something here?

Agreed. I was just responding to the "At least TDX..." bit. Sorry for
any confusion.

> >
> > Regardless, it's important to ensure that the VM respects its memory
> > budget. For example, within Google, we run VMs inside containers. So
> > if we double-allocate, we're going to OOM. This seems acceptable for
> > an early version of CVMs. But ultimately, I think we need a more
> > robust way to ensure that the VM operates within its memory container.
> > Otherwise, the OOM is going to be hard to diagnose and to distinguish
> > from a real OOM.
>
> Thanks for bringing this up. But in my mind I still think the userspace
> VMM can do this, and it is its responsibility to guarantee it, if that is
> a hard requirement. By design, the userspace VMM is the decision-maker for
> page conversion and has all the necessary information to know which pages
> are shared/private. It also has the necessary knobs to allocate/free the
> physical pages for guest memory. Definitely, we should make the userspace
> VMM more robust.

Vishal and Sean did a better job of articulating the concern in their
most recent replies.
