Message-ID: <e5d0e772-ec9e-049a-da85-960c14520f8c@suse.com>
Date:   Tue, 4 Oct 2022 17:22:34 +0200
From:   Juergen Gross <jgross@...e.com>
To:     Jan Beulich <jbeulich@...e.com>
Cc:     Boris Ostrovsky <boris.ostrovsky@...cle.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        "H. Peter Anvin" <hpa@...or.com>, xen-devel@...ts.xenproject.org,
        linux-kernel@...r.kernel.org, x86@...nel.org
Subject: Re: [PATCH v2 1/3] xen/pv: allow pmu msr accesses to cause GP

On 04.10.22 12:58, Jan Beulich wrote:
> On 04.10.2022 10:43, Juergen Gross wrote:
>> Today pmu_msr_read() and pmu_msr_write() fall back to the safe variants
>> of read/write MSR in case the MSR access isn't emulated via Xen. Allow
>> the caller to select the potentially faulting variant by passing NULL
>> for the error pointer.
>>
>> Restructure the code to make it more readable.
>>
>> Signed-off-by: Juergen Gross <jgross@...e.com>
> 
> I think the title (and to some degree also the description) is misleading:
> The property we care about here isn't whether an MSR access would raise
> #GP (we can't control that), but whether that #GP would be recovered from.

Would you be fine with adding "fatal" or "visible"?
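[Editorial note, for readers following the discussion: the fallback pattern the commit message describes looks roughly like the sketch below. This is an illustrative outline only, not the actual arch/x86/xen/pmu.c code; pmu_msr_emulated() and xen_emulated_read() are made-up helper names standing in for Xen's emulation path, while rdmsrl_safe()/rdmsrl() are the usual kernel MSR accessors.]

	/*
	 * Illustrative sketch only (not the real pmu.c code): if Xen emulates
	 * the MSR, return the emulated value; otherwise either use the safe
	 * accessor and report the outcome through *err, or, when the caller
	 * passed err == NULL, use the plain accessor and let any #GP fault
	 * surface unrecovered.
	 */
	static bool pmu_msr_read_sketch(unsigned int msr, uint64_t *val, int *err)
	{
		if (pmu_msr_emulated(msr))			/* hypothetical helper */
			return xen_emulated_read(msr, val);	/* hypothetical helper */

		if (err)
			*err = rdmsrl_safe(msr, val);	/* recoverable: fault is caught */
		else
			rdmsrl(msr, *val);		/* potentially faulting variant */

		return false;
	}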

> 
>> --- a/arch/x86/xen/pmu.c
>> +++ b/arch/x86/xen/pmu.c
>> @@ -131,6 +131,9 @@ static inline uint32_t get_fam15h_addr(u32 addr)
>>   
>>   static inline bool is_amd_pmu_msr(unsigned int msr)
>>   {
>> +	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
>> +		return false;
> 
> I understand this and ...
> 
>> @@ -144,6 +147,9 @@ static int is_intel_pmu_msr(u32 msr_index, int *type, int *index)
>>   {
>>   	u32 msr_index_pmc;
>>   
>> +	if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
>> +		return false;
> 
> ... this matches prior behavior, but may I suggest that while moving
> these here you at least accompany them by a comment clarifying that
> these aren't really correct? We'd come closer if is_amd_pmu_msr()
> accepted AMD and Hygon, while is_intel_pmu_msr() may want to accept
> Intel and Centaur (but I understand this would be largely orthogonal,
> hence the suggestion towards comments). In the hypervisor we kind of
> also support Shanghai, but I wonder whether we wouldn't better rip
> out that code as unmaintained.

Maybe the correct thing to do would be to add another patch to fix
is_*_pmu_msr() along the lines you are suggesting.
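(Purely as an illustration of the direction discussed above, not an actual patch: such a follow-up could gate the two helpers on the vendor families that share the respective PMU MSR layout. The helper names below are invented; the X86_VENDOR_* constants are the usual ones from <asm/processor.h>. Whether Zhaoxin/Shanghai should also be accepted is the open question Jan raises.)

	/* Sketch: Hygon follows the AMD PMU MSR layout, Centaur the Intel one. */
	static inline bool vendor_uses_amd_pmu(void)
	{
		return boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
		       boot_cpu_data.x86_vendor == X86_VENDOR_HYGON;
	}

	static inline bool vendor_uses_intel_pmu(void)
	{
		return boot_cpu_data.x86_vendor == X86_VENDOR_INTEL ||
		       boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR;
	}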


Juergen

