Message-ID: <e350fd1c-f8e4-5e4a-cc0a-baa735277083@linux.alibaba.com>
Date:   Wed, 8 Dec 2021 07:30:32 +0800
From:   Lai Jiangshan <laijs@...ux.alibaba.com>
To:     Sean Christopherson <seanjc@...gle.com>,
        Lai Jiangshan <jiangshanlai@...il.com>
Cc:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
        Paolo Bonzini <pbonzini@...hat.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
        "H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH 3/4] KVM: X86: Handle implicit supervisor access with SMAP



On 2021/12/8 05:52, Sean Christopherson wrote:

> 
>> +	 *
>> +	 * This computes explicit_access && (rflags & X86_EFLAGS_AC), leaving
> 
> Too many &&, the logic below is a bitwise &, not a logical &&.

The intended logic is "explicit_access &&" (logical AND). The code ensures that
explicit_access carries the X86_EFLAGS_AC bit and then morphs the operation into
"&" (bitwise AND) below to stay branchless.

The original comment was "(cpl < 3) &&", and it is morphed into "(cpl - 3) &" in
the code to stay branchless.

The comment in this patch is poor; it should have stated that the logical
operations on the conditions are converted to bitwise AND to keep the code
branchless.

> 
>>   	 * the result in X86_EFLAGS_AC. We then insert it in place of
>>   	 * the PFERR_RSVD_MASK bit; this bit will always be zero in pfec,
>>   	 * but it will be one in index if SMAP checks are being overridden.
>>   	 * It is important to keep this branchless.
> 
> Heh, so important that it incurs multiple branches and possible VMREADs in
> vmx_get_cpl() and vmx_get_rflags().  And before static_call, multiple retpolines
> to boot.  Probably a net win now as only the first permission_fault() check for
> a given VM-Exit be penalized, but the comment is amusing nonetheless.
> 
>>   	 */
>> -	unsigned long not_smap = (cpl - 3) & (rflags & X86_EFLAGS_AC);
>> +	u32 not_smap = (rflags & X86_EFLAGS_AC) & vcpu->arch.explicit_access;
> 
> I really, really dislike shoving this into vcpu->arch.  I'd much prefer to make
> this a property of the access, even if that means adding another param or doing
> something gross with @access (@pfec here).

Indeed, it tastes bad to add a variable to vcpu->arch for a scope-like
condition. I will do as you suggested.

> 
>>   	int index = (pfec >> 1) +
>>   		    (not_smap >> (X86_EFLAGS_AC_BIT - PFERR_RSVD_BIT + 1));
>>   	bool fault = (mmu->permissions[index] >> pte_access) & 1;
