Message-ID: <CA+EHjTyGyGL+ox81=jdtoHERtHPV=P7wJub=3j7chdijyq-AgA@mail.gmail.com>
Date:   Mon, 17 Oct 2022 11:15:02 +0100
From:   Fuad Tabba <tabba@...gle.com>
To:     Chao Peng <chao.p.peng@...ux.intel.com>
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
        linux-api@...r.kernel.org, linux-doc@...r.kernel.org,
        qemu-devel@...gnu.org, Paolo Bonzini <pbonzini@...hat.com>,
        Jonathan Corbet <corbet@....net>,
        Sean Christopherson <seanjc@...gle.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        x86@...nel.org, "H . Peter Anvin" <hpa@...or.com>,
        Hugh Dickins <hughd@...gle.com>,
        Jeff Layton <jlayton@...nel.org>,
        "J . Bruce Fields" <bfields@...ldses.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Shuah Khan <shuah@...nel.org>, Mike Rapoport <rppt@...nel.org>,
        Steven Price <steven.price@....com>,
        "Maciej S . Szmigiero" <mail@...iej.szmigiero.name>,
        Vlastimil Babka <vbabka@...e.cz>,
        Vishal Annapurve <vannapurve@...gle.com>,
        Yu Zhang <yu.c.zhang@...ux.intel.com>,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
        luto@...nel.org, jun.nakajima@...el.com, dave.hansen@...el.com,
        ak@...ux.intel.com, david@...hat.com, aarcange@...hat.com,
        ddutile@...hat.com, dhildenb@...hat.com,
        Quentin Perret <qperret@...gle.com>,
        Michael Roth <michael.roth@....com>, mhocko@...e.com,
        Muchun Song <songmuchun@...edance.com>, wei.w.wang@...el.com
Subject: Re: [PATCH v8 5/8] KVM: Register/unregister the guest private memory regions

Hi,

> > > +#ifdef CONFIG_HAVE_KVM_PRIVATE_MEM
> > > +#define KVM_MEM_ATTR_SHARED    0x0001
> > > +static int kvm_vm_ioctl_set_mem_attr(struct kvm *kvm, gpa_t gpa, gpa_t size,
> > > +                                    bool is_private)
> > > +{
> >
> > I wonder if this ioctl should be implemented as an arch-specific
> > ioctl. In this patch it performs some actions that pKVM might not need
> > or might want to do differently.
>
> I think it's doable. We can provide the mem_attr_array machinery in
> common code and let arch code decide whether to use it. Currently
> mem_attr_array is defined in struct kvm; if those bytes are
> unnecessary for pKVM, it could even be moved into the arch definition,
> but that also loses potential code sharing for confidential usages on
> other architectures, e.g. if ARM also supports such a usage.
> Alternatively, it could be gated by a different CONFIG_ option instead
> of CONFIG_HAVE_KVM_PRIVATE_MEM.

This sounds good. Thank you.


/fuad

> Thanks,
> Chao
> >
> > pKVM tracks the sharing status in the stage-2 page table's software
> > bits, so it can avoid the overhead of using mem_attr_array.
> >
> > Also, this ioctl calls kvm_zap_gfn_range(), as does the invalidation
> > notifier (introduced in patch 8). For pKVM, the kind of zapping (or
> > the information conveyed to the hypervisor) might need to be different
> > depending on the cause; whether it's invalidation or change of sharing
> > status.
>
> >
> > Thanks,
> > /fuad
> >
> >
> > > +       gfn_t start, end;
> > > +       unsigned long index;
> > > +       void *entry;
> > > +       int r;
> > > +
> > > +       if (size == 0 || gpa + size < gpa)
> > > +               return -EINVAL;
> > > +       if (gpa & (PAGE_SIZE - 1) || size & (PAGE_SIZE - 1))
> > > +               return -EINVAL;
> > > +
> > > +       start = gpa >> PAGE_SHIFT;
> > > +       end = (gpa + size - 1 + PAGE_SIZE) >> PAGE_SHIFT;
> > > +
> > > +       /*
> > > +        * Guest memory defaults to private, kvm->mem_attr_array only stores
> > > +        * shared memory.
> > > +        */
> > > +       entry = is_private ? NULL : xa_mk_value(KVM_MEM_ATTR_SHARED);
> > > +
> > > +       for (index = start; index < end; index++) {
> > > +               r = xa_err(xa_store(&kvm->mem_attr_array, index, entry,
> > > +                                   GFP_KERNEL_ACCOUNT));
> > > +               if (r)
> > > +                       goto err;
> > > +       }
> > > +
> > > +       kvm_zap_gfn_range(kvm, start, end);
> > > +
> > > +       return r;
> > > +err:
> > > +       for (; index > start; index--)
> > > +               xa_erase(&kvm->mem_attr_array, index);
> > > +       return r;
> > > +}
> > > +#endif /* CONFIG_HAVE_KVM_PRIVATE_MEM */
> > > +
> > >  #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER
> > >  static int kvm_pm_notifier_call(struct notifier_block *bl,
> > >                                 unsigned long state,
> > > @@ -1165,6 +1206,9 @@ static struct kvm *kvm_create_vm(unsigned long type, const char *fdname)
> > >         spin_lock_init(&kvm->mn_invalidate_lock);
> > >         rcuwait_init(&kvm->mn_memslots_update_rcuwait);
> > >         xa_init(&kvm->vcpu_array);
> > > +#ifdef CONFIG_HAVE_KVM_PRIVATE_MEM
> > > +       xa_init(&kvm->mem_attr_array);
> > > +#endif
> > >
> > >         INIT_LIST_HEAD(&kvm->gpc_list);
> > >         spin_lock_init(&kvm->gpc_lock);
> > > @@ -1338,6 +1382,9 @@ static void kvm_destroy_vm(struct kvm *kvm)
> > >                 kvm_free_memslots(kvm, &kvm->__memslots[i][0]);
> > >                 kvm_free_memslots(kvm, &kvm->__memslots[i][1]);
> > >         }
> > > +#ifdef CONFIG_HAVE_KVM_PRIVATE_MEM
> > > +       xa_destroy(&kvm->mem_attr_array);
> > > +#endif
> > >         cleanup_srcu_struct(&kvm->irq_srcu);
> > >         cleanup_srcu_struct(&kvm->srcu);
> > >         kvm_arch_free_vm(kvm);
> > > @@ -1541,6 +1588,11 @@ static void kvm_replace_memslot(struct kvm *kvm,
> > >         }
> > >  }
> > >
> > > +bool __weak kvm_arch_has_private_mem(struct kvm *kvm)
> > > +{
> > > +       return false;
> > > +}
> > > +
> > >  static int check_memory_region_flags(const struct kvm_user_mem_region *mem)
> > >  {
> > >         u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES;
> > > @@ -4703,6 +4755,24 @@ static long kvm_vm_ioctl(struct file *filp,
> > >                 r = kvm_vm_ioctl_set_memory_region(kvm, &mem);
> > >                 break;
> > >         }
> > > +#ifdef CONFIG_HAVE_KVM_PRIVATE_MEM
> > > +       case KVM_MEMORY_ENCRYPT_REG_REGION:
> > > +       case KVM_MEMORY_ENCRYPT_UNREG_REGION: {
> > > +               struct kvm_enc_region region;
> > > +               bool set = ioctl == KVM_MEMORY_ENCRYPT_REG_REGION;
> > > +
> > > +               if (!kvm_arch_has_private_mem(kvm))
> > > +                       goto arch_vm_ioctl;
> > > +
> > > +               r = -EFAULT;
> > > +               if (copy_from_user(&region, argp, sizeof(region)))
> > > +                       goto out;
> > > +
> > > +               r = kvm_vm_ioctl_set_mem_attr(kvm, region.addr,
> > > +                                             region.size, set);
> > > +               break;
> > > +       }
> > > +#endif
> > >         case KVM_GET_DIRTY_LOG: {
> > >                 struct kvm_dirty_log log;
> > >
> > > @@ -4856,6 +4926,9 @@ static long kvm_vm_ioctl(struct file *filp,
> > >                 r = kvm_vm_ioctl_get_stats_fd(kvm);
> > >                 break;
> > >         default:
> > > +#ifdef CONFIG_HAVE_KVM_PRIVATE_MEM
> > > +arch_vm_ioctl:
> > > +#endif
> > >                 r = kvm_arch_vm_ioctl(filp, ioctl, arg);
> > >         }
> > >  out:
> > > --
> > > 2.25.1
> > >
