Message-ID: <a6102420-cbda-700a-b049-31db96d357b1@oracle.com>
Date:   Tue, 1 Dec 2020 21:17:47 -0800
From:   Ankur Arora <ankur.a.arora@...cle.com>
To:     David Woodhouse <dwmw2@...radead.org>,
        Joao Martins <joao.m.martins@...cle.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Cc:     Boris Ostrovsky <boris.ostrovsky@...cle.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        "H. Peter Anvin" <hpa@...or.com>, x86@...nel.org
Subject: Re: [PATCH RFC 03/39] KVM: x86/xen: register shared_info page

On 2020-12-01 5:26 p.m., David Woodhouse wrote:
> On Tue, 2020-12-01 at 16:40 -0800, Ankur Arora wrote:
>> On 2020-12-01 5:07 a.m., David Woodhouse wrote:
>>> On Wed, 2019-02-20 at 20:15 +0000, Joao Martins wrote:
>>>> +static int kvm_xen_shared_info_init(struct kvm *kvm, gfn_t gfn)
>>>> +{
>>>> +       struct shared_info *shared_info;
>>>> +       struct page *page;
>>>> +
>>>> +       page = gfn_to_page(kvm, gfn);
>>>> +       if (is_error_page(page))
>>>> +               return -EINVAL;
>>>> +
>>>> +       kvm->arch.xen.shinfo_addr = gfn;
>>>> +
>>>> +       shared_info = page_to_virt(page);
>>>> +       memset(shared_info, 0, sizeof(struct shared_info));
>>>> +       kvm->arch.xen.shinfo = shared_info;
>>>> +       return 0;
>>>> +}
>>>> +
>>>
>>> Hm.
>>>
>>> How come we get to pin the page and directly dereference it every time,
>>> while kvm_setup_pvclock_page() has to use kvm_write_guest_cached()
>>> instead?
>>
>> So, looking at my WIP trees from the time, this is something we went
>> back and forth on as well: whether to use just a pinned page or a
>> persistent kvm_vcpu_map().
> 
> OK, thanks.
> 
>> I remember distinguishing shared_info/vcpu_info from kvm_setup_pvclock_page()
>> because shared_info is created early and is not expected to change during the
>> lifetime of the guest, which didn't seem true for MSR_KVM_SYSTEM_TIME (or
>> MSR_KVM_STEAL_TIME); those would either need a kvm_vcpu_map()/
>> kvm_vcpu_unmap() dance or some kind of synchronization.
>>
>> That said, I don't think this code explicitly disallows any updates
>> to shared_info.
> 
> It needs to allow updates as well as disabling the shared_info pages.
> We're going to need that to implement SHUTDOWN_soft_reset for kexec.
True.
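
Roughly, something like the below, perhaps (untested sketch; locking against
concurrent users is omitted, and KVM_XEN_INVALID_GFN is just a placeholder
sentinel for "disable", not an existing constant):

static int kvm_xen_shared_info_init(struct kvm *kvm, gfn_t gfn)
{
        struct page *page;

        /* Drop the reference on any previously registered page. */
        if (kvm->arch.xen.shinfo) {
                put_page(virt_to_page(kvm->arch.xen.shinfo));
                kvm->arch.xen.shinfo = NULL;
        }

        /* Let userspace (e.g. on SHUTDOWN_soft_reset) disable shared_info. */
        if (gfn == KVM_XEN_INVALID_GFN)
                return 0;

        page = gfn_to_page(kvm, gfn);
        if (is_error_page(page))
                return -EINVAL;

        kvm->arch.xen.shinfo_addr = gfn;
        kvm->arch.xen.shinfo = page_to_virt(page);
        memset(kvm->arch.xen.shinfo, 0, sizeof(struct shared_info));
        return 0;
}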

> 
>>>
>>> If that was allowed, wouldn't it have been a much simpler fix for
>>> CVE-2019-3016? What am I missing?
>>
>> Agreed.
>>
>> Perhaps Paolo can chime in on why KVM never uses a pinned page
>> and always prefers cached mappings instead?
>>
>>>
>>> Should I rework these to use kvm_write_guest_cached()?
>>
>> kvm_vcpu_map() would be better. The event channel logic does RMW operations
>> on shared_info->vcpu_info.
> 
> I've ported the shared_info/vcpu_info parts and made a test case, and
> was going back through to make it use kvm_write_guest_cached(). But I
> should probably plug on through the evtchn bits before I do that.
> 
> I also don't see much locking on the cache, and can't convince myself
> that accessing the shared_info page from multiple CPUs with
> kvm_write_guest_cached() or kvm_map_gfn() is sane unless they each have
> their own cache.

I think you could get a vCPU-specific cache with kvm_vcpu_map().
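
For the RMW on the event channel bits, a rough (untested) sketch of what that
could look like, assuming the current kvm_vcpu_map()/kvm_vcpu_unmap() API; the
function name and the evtchn_pending update are just illustrative:

static int kvm_xen_evtchn_set_pending(struct kvm_vcpu *vcpu, int port)
{
        struct kvm_host_map map;
        struct shared_info *shinfo;

        /* Per-vCPU mapping: each vCPU uses its own kvm_host_map, so
         * there is no shared cache to serialize on. */
        if (kvm_vcpu_map(vcpu, vcpu->kvm->arch.xen.shinfo_addr, &map))
                return -EFAULT;

        shinfo = map.hva;
        /* RMW on the pending bitmap in shared_info. */
        set_bit(port, (unsigned long *)shinfo->evtchn_pending);

        /* 'true' marks the page dirty on unmap. */
        kvm_vcpu_unmap(vcpu, &map, true);
        return 0;
}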

> 
> What I have so far is at
> 
> https://git.infradead.org/users/dwmw2/linux.git/shortlog/refs/heads/xenpv

Thanks. Will take a look there.

Ankur

> 
> I'll do the event channel support tomorrow and hook it up to my actual
> VMM to give it some more serious testing.
> 
