Date:   Fri, 14 Aug 2020 10:33:45 +0800
From:   Chenyi Qiang <chenyi.qiang@...el.com>
To:     Jim Mattson <jmattson@...gle.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Sean Christopherson <sean.j.christopherson@...el.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Joerg Roedel <joro@...tes.org>,
        Xiaoyao Li <xiaoyao.li@...el.com>,
        kvm list <kvm@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC 6/7] KVM: X86: Expose PKS to guest and userspace



On 8/14/2020 3:04 AM, Jim Mattson wrote:
> On Fri, Aug 7, 2020 at 1:47 AM Chenyi Qiang <chenyi.qiang@...el.com> wrote:
>>
>> Existence of PKS is enumerated via CPUID.(EAX=7H,ECX=0):ECX[31]. It is
>> enabled by setting CR4.PKS when long mode is active. PKS is only
>> implemented when EPT is enabled and requires the support of VM_{ENTRY,
>> EXIT}_LOAD_IA32_PKRS currently.
>>
>> Signed-off-by: Chenyi Qiang <chenyi.qiang@...el.com>
> 
>> @@ -967,7 +969,8 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
>>   {
>>          unsigned long old_cr4 = kvm_read_cr4(vcpu);
>>          unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE |
>> -                                  X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE;
>> +                                  X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE |
>> +                                  X86_CR4_PKS;
> 
> This list already seems overly long, but I don't think CR4.PKS belongs
> here. In volume 3 of the SDM, section 4.4.1, it says:
> 
> - If PAE paging would be in use following an execution of MOV to CR0
> or MOV to CR4 (see Section 4.1.1) and the instruction is modifying any
> of CR0.CD, CR0.NW, CR0.PG, CR4.PAE, CR4.PGE, CR4.PSE, or CR4.SMEP;
> then the PDPTEs are loaded from the address in CR3.
> 
> CR4.PKS is not in the list of CR4 bits that result in a PDPTE load.
> Since it has no effect on PAE paging, I would be surprised if it did
> result in a PDPTE load.
> 

Oh, my mistake. I'll drop X86_CR4_PKS from pdptr_bits.

>>          if (kvm_valid_cr4(vcpu, cr4))
>>                  return 1;
>> @@ -1202,7 +1205,7 @@ static const u32 msrs_to_save_all[] = {
>>          MSR_IA32_RTIT_ADDR1_A, MSR_IA32_RTIT_ADDR1_B,
>>          MSR_IA32_RTIT_ADDR2_A, MSR_IA32_RTIT_ADDR2_B,
>>          MSR_IA32_RTIT_ADDR3_A, MSR_IA32_RTIT_ADDR3_B,
>> -       MSR_IA32_UMWAIT_CONTROL,
>> +       MSR_IA32_UMWAIT_CONTROL, MSR_IA32_PKRS,
> 
> Should MSR_IA32_PKRS be added to the switch statement in
> kvm_init_msr_list()? Something like...
> 
> case MSR_IA32_PKRS:
>          if (!kvm_cpu_cap_has(X86_FEATURE_PKRS))
>                  continue;
>          break;
> 

Yes, this should be added.
