Date: Thu, 7 Mar 2024 19:54:23 +0800
From: "Mi, Dapeng" <dapeng1.mi@...ux.intel.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, Kan Liang
 <kan.liang@...ux.intel.com>, Like Xu <likexu@...cent.com>,
 kvm@...r.kernel.org, linux-perf-users@...r.kernel.org,
 linux-kernel@...r.kernel.org, Zhenyu Wang <zhenyuw@...ux.intel.com>,
 Zhang Xiong <xiong.y.zhang@...el.com>, Lv Zhiyuan <zhiyuan.lv@...el.com>,
 Dapeng Mi <dapeng1.mi@...el.com>, Mingwei Zhang <mizhang@...gle.com>
Subject: Re: [Patch v3] KVM: x86/pmu: Manipulate FIXED_CTR_CTRL MSR with
 macros


On 3/6/2024 3:55 PM, Mi, Dapeng wrote:
>
> On 3/6/2024 7:22 AM, Sean Christopherson wrote:
>> +Mingwei
>>
>> On Thu, Aug 24, 2023, Dapeng Mi wrote:
>>   diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
>>> index 7d9ba301c090..ffda2ecc3a22 100644
>>> --- a/arch/x86/kvm/pmu.h
>>> +++ b/arch/x86/kvm/pmu.h
>>> @@ -12,7 +12,8 @@
>>>                         MSR_IA32_MISC_ENABLE_BTS_UNAVAIL)
>>>     /* retrieve the 4 bits for EN and PMI out of IA32_FIXED_CTR_CTRL */
>>> -#define fixed_ctrl_field(ctrl_reg, idx) (((ctrl_reg) >> ((idx)*4)) & 0xf)
>>> +#define fixed_ctrl_field(ctrl_reg, idx) \
>>> +    (((ctrl_reg) >> ((idx) * INTEL_FIXED_BITS_STRIDE)) & INTEL_FIXED_BITS_MASK)
>>>     #define VMWARE_BACKDOOR_PMC_HOST_TSC        0x10000
>>>   #define VMWARE_BACKDOOR_PMC_REAL_TIME        0x10001
>>> @@ -165,7 +166,8 @@ static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc)
>>>         if (pmc_is_fixed(pmc))
>>>           return fixed_ctrl_field(pmu->fixed_ctr_ctrl,
>>> -                    pmc->idx - INTEL_PMC_IDX_FIXED) & 0x3;
>>> +                    pmc->idx - INTEL_PMC_IDX_FIXED) &
>>> +                    (INTEL_FIXED_0_KERNEL | INTEL_FIXED_0_USER);
>>>         return pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE;
>>>   }
>>> diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
>>> index f2efa0bf7ae8..b0ac55891cb7 100644
>>> --- a/arch/x86/kvm/vmx/pmu_intel.c
>>> +++ b/arch/x86/kvm/vmx/pmu_intel.c
>>> @@ -548,8 +548,13 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
>>>           setup_fixed_pmc_eventsel(pmu);
>>>       }
>>>   -    for (i = 0; i < pmu->nr_arch_fixed_counters; i++)
>>> -        pmu->fixed_ctr_ctrl_mask &= ~(0xbull << (i * 4));
>>> +    for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
>>> +        pmu->fixed_ctr_ctrl_mask &=
>>> +             ~intel_fixed_bits_by_idx(i,
>>> +                          INTEL_FIXED_0_KERNEL |
>>> +                          INTEL_FIXED_0_USER |
>>> +                          INTEL_FIXED_0_ENABLE_PMI);
>>> +    }
>>>       counter_mask = ~(((1ull << pmu->nr_arch_gp_counters) - 1) |
>>>           (((1ull << pmu->nr_arch_fixed_counters) - 1) << INTEL_PMC_IDX_FIXED));
>>>       pmu->global_ctrl_mask = counter_mask;
>>> @@ -595,7 +600,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
>>>               pmu->reserved_bits &= ~ICL_EVENTSEL_ADAPTIVE;
>>>               for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
>>>                   pmu->fixed_ctr_ctrl_mask &=
>>> -                    ~(1ULL << (INTEL_PMC_IDX_FIXED + i * 4));
>> OMG, this might just win the award for most obfuscated PMU code in 
>> KVM, which is
>> saying something.  The fact that INTEL_PMC_IDX_FIXED happens to be 
>> 32, the same
>> bit number as ICL_FIXED_0_ADAPTIVE, is 100% coincidence.  Good riddance.
>>
>> Argh, and this goofy code helped introduce a real bug. 
>> reprogram_fixed_counters()
>> doesn't account for the upper 32 bits of IA32_FIXED_CTR_CTRL.
>>
>> Wait, WTF?  Nothing in KVM accounts for the upper bits.  This can't 
>> possibly work.
>>
>> IIUC, because KVM _always_ sets precise_ip to a non-zero bit for PEBS 
>> events,
>> perf will _always_ generate an adaptive record, even if the guest 
>> requested a
>> basic record.  Ugh, and KVM will always generate adaptive records 
>> even if the
>> guest doesn't support them.  This is all completely broken.  It 
>> probably kinda
>> sorta works because the Basic info is always stored in the record, 
>> and generating
>> more info requires a non-zero MSR_PEBS_DATA_CFG, but ugh.
>>
>> Oh great, and it gets worse.  intel_pmu_disable_fixed() doesn't clear 
>> the upper
>> bits either, i.e. leaves ICL_FIXED_0_ADAPTIVE set.  Unless I'm 
>> misreading the code,
>> intel_pmu_enable_fixed() effectively doesn't clear 
>> ICL_FIXED_0_ADAPTIVE either,
>> as it only modifies the bit when it wants to set ICL_FIXED_0_ADAPTIVE.
>
>
> Currently the host PMU driver always sets the "Adaptive_Record" bit in
> the PERFEVTSELx and FIXED_CTR_CTRL MSRs as long as the HW supports the
> adaptive PEBS feature (see the helpers intel_pmu_pebs_enable() and
> intel_pmu_enable_fixed() for details).
>
> It looks like the perf subsystem doesn't export an interface to
> enable/disable adaptive PEBS. I suppose that's why KVM doesn't handle
> the "Adaptive_Record" bit in the PERFEVTSELx and FIXED_CTR_CTRL MSRs.
>
>
>>
>> *sigh*
>>
>> I'm _very_ tempted to disable KVM PEBS support for the current PMU, 
>> and make it
>> available only when the so-called passthrough PMU is available[*].  
>> Because I
>> don't see how this is can possibly be functionally correct, nor do I 
>> see a way
>> to make it functionally correct without a rather large and invasive 
>> series.
>>
>> Ouch.  And after chatting with Mingwei, who asked the very good 
>> question of
>> "can this leak host state?", I am pretty sure that yes, this can leak 
>> host state.
>>
>> When PERF_CAP_PEBS_BASELINE is enabled for the guest, i.e. when the 
>> guest has
>> access to adaptive records, KVM gives the guest full access to 
>> MSR_PEBS_DATA_CFG
>>
>>     pmu->pebs_data_cfg_mask = ~0xff00000full;
>>
>> which makes sense in a vacuum, because AFAICT the architecture 
>> doesn't allow
>> exposing a subset of the four adaptive controls.
>>
>> GPRs and XMMs are always context switched and thus benign, but IIUC, 
>> Memory Info
>> provides data that might now otherwise be available to the guest, 
>> e.g. if host
>> userspace has disallowed equivalent events via KVM_SET_PMU_EVENT_FILTER.
>>
>> And unless I'm missing something, LBRs are a full leak of host 
>> state.  Nothing
>> in the SDM suggests that PEBS records honor MSR intercepts, so unless 
>> KVM is
>> also passing through LBRs, i.e. is context switching all LBR MSRs, 
>> the guest can
>> use PEBS to read host LBRs at will.
>
> Not sure if I missed something, but I don't see a leak of host state.
> All perf events created by KVM set the "exclude_host" attribute, which
> leads to all guest counters, including the PEBS-enabled counters, being
> disabled immediately by VMX once the VM exits, and so the PEBS engine
> stops as well. I don't see how a PEBS record containing host state
> could be written into the guest DS area.


On second thought, it looks like the host LBR stack could indeed leak
into the guest, since the LBR stack is not cleared or switched on
VM-entry, even though captured guest LBR records may gradually
overwrite the LBR stack after VM-entry.


>
>
>>
>> Unless someone chimes in to point out how PEBS virtualization isn't a 
>> broken mess,
>> I will post a patch to effectively disable PEBS virtualization.
>>
>> diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
>> index 41a4533f9989..a2f827fa0ca1 100644
>> --- a/arch/x86/kvm/vmx/capabilities.h
>> +++ b/arch/x86/kvm/vmx/capabilities.h
>> @@ -392,7 +392,7 @@ static inline bool vmx_pt_mode_is_host_guest(void)
>>     static inline bool vmx_pebs_supported(void)
>>   {
>> -       return boot_cpu_has(X86_FEATURE_PEBS) && kvm_pmu_cap.pebs_ept;
>> +       return false;
>>   }
>>     static inline bool cpu_has_notify_vmexit(void)
