Message-ID: <097eb5f5-2cd9-8b08-32c5-d90c8e0cbb6d@amd.com>
Date:   Mon, 10 Sep 2018 10:10:09 -0500
From:   Brijesh Singh <brijesh.singh@....com>
To:     Sean Christopherson <sean.j.christopherson@...el.com>,
        Borislav Petkov <bp@...e.de>
Cc:     brijesh.singh@....com, x86@...nel.org,
        linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
        Tom Lendacky <thomas.lendacky@....com>,
        Thomas Gleixner <tglx@...utronix.de>,
        "H. Peter Anvin" <hpa@...or.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>
Subject: Re: [PATCH v6 5/5] x86/kvm: Avoid dynamic allocation of pvclock data
 when SEV is active



On 09/10/2018 08:29 AM, Sean Christopherson wrote:
...

>>>> + */
>>>> +static struct pvclock_vsyscall_time_info
>>>> +			hv_clock_aux[NR_CPUS] __decrypted_aux;
>>> Hmm, so worst case that's 64 4K pages:
>>>
>>> (8192*32)/4096 = 64 4K pages.
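
(As a sanity check of that arithmetic, here is a tiny standalone
program; the 8192 and 32 are the NR_CPUS and per-entry size assumed
in the calculation above, not values read from a kernel tree:)

#include <stdio.h>

#define NR_CPUS          8192 /* assumed, matches the numbers above */
#define PVCLOCK_ENTRY_SZ 32   /* assumed sizeof(struct pvclock_vsyscall_time_info) */
#define PAGE_SZ          4096

int main(void)
{
	unsigned long bytes = (unsigned long)NR_CPUS * PVCLOCK_ENTRY_SZ;

	/* Prints: 262144 bytes = 64 4K pages */
	printf("%lu bytes = %lu 4K pages\n", bytes, bytes / PAGE_SZ);
	return 0;
}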
>> We can minimize the worst-case memory usage. The number of VCPUs
>> supported by KVM may be less than NR_CPUS, e.g., currently
>> KVM_MAX_VCPUS is set to 288.
> 
> KVM_MAX_VCPUS is a property of the host, whereas this code runs in the
> guest, e.g. KVM_MAX_VCPUS could be 2048 in the host for all we know.
> 


IIRC, at guest creation time QEMU checks the host-supported vCPU
count. If the requested count is greater than KVM_MAX_VCPUS, it will
fail to launch the guest (or fail to hot-plug vCPUs). In other words,
the number of vCPUs in a KVM guest will never be > KVM_MAX_VCPUS.
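
(For reference, userspace can query that host limit through the
standard KVM_CHECK_EXTENSION ioctl; a minimal sketch, independent of
this patch:)

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);

	if (kvm < 0)
		return 1;

	/*
	 * KVM_CAP_MAX_VCPUS reports the maximum vCPU count the host
	 * KVM will accept; this is what lets a VMM reject an
	 * over-sized guest up front instead of failing later.
	 */
	printf("host max vcpus = %d\n",
	       ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPUS));
	return 0;
}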

Am I missing something?
