Date:   Mon, 25 Sep 2017 21:02:38 +0800
From:   Wei Wang <wei.w.wang@...el.com>
To:     Paolo Bonzini <pbonzini@...hat.com>,
        virtualization@...ts.linux-foundation.org, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, mst@...hat.com, rkrcmar@...hat.com,
        ak@...ux.intel.com, mingo@...hat.com
Subject: Re: [PATCH v1 1/4] KVM/vmx: re-write the msr auto switch feature

On 09/25/2017 07:54 PM, Paolo Bonzini wrote:
> On 25/09/2017 06:44, Wei Wang wrote:
>>   
>> +static void update_msr_autoload_count_max(void)
>> +{
>> +	u64 vmx_msr;
>> +	int n;
>> +
>> +	/*
>> +	 * According to the Intel SDM, if Bits 27:25 of MSR_IA32_VMX_MISC is
>> +	 * n, then (n + 1) * 512 is the recommended max number of MSRs to be
>> +	 * included in the VMExit and VMEntry MSR auto switch list.
>> +	 */
>> +	rdmsrl(MSR_IA32_VMX_MISC, vmx_msr);
>> +	n = ((vmx_msr & 0xe000000) >> 25) + 1;
>> +	msr_autoload_count_max = n * KVM_VMX_DEFAULT_MSR_AUTO_LOAD_COUNT;
>> +}
>> +
>
> Any reasons to do this if it's unlikely that we'll ever update more than
> 512 MSRs?
>
> Paolo

It isn't a must to allocate memory for 512 MSRs, but I think it would be
good to allocate memory for at least 128 MSRs, because on Skylake we
already have 32 triplets of MSRs (FROM/TO/INFO), which is 96 in total.

The disadvantage is that developers would need to manually calculate and
carefully change the number whenever they add new MSRs for auto
switching in the future. So, if consuming a little more memory isn't a
concern, I think we can directly allocate memory based on what the
hardware recommends.


Best,
Wei
