Message-ID: <4d60384a-11e0-2f2b-a568-517b40c91b25@loongson.cn>
Date: Mon, 22 Apr 2024 10:14:18 +0800
From: maobibo <maobibo@...ngson.cn>
To: Sean Christopherson <seanjc@...gle.com>,
Mingwei Zhang <mizhang@...gle.com>
Cc: Dapeng Mi <dapeng1.mi@...ux.intel.com>,
Xiong Zhang <xiong.y.zhang@...ux.intel.com>, pbonzini@...hat.com,
peterz@...radead.org, kan.liang@...el.com, zhenyuw@...ux.intel.com,
jmattson@...gle.com, kvm@...r.kernel.org, linux-perf-users@...r.kernel.org,
linux-kernel@...r.kernel.org, zhiyuan.lv@...el.com, eranian@...gle.com,
irogers@...gle.com, samantha.alt@...el.com, like.xu.linux@...il.com,
chao.gao@...el.com
Subject: Re: [RFC PATCH 23/41] KVM: x86/pmu: Implement the save/restore of PMU
state for Intel CPU
On 2024/4/16 6:45 AM, Sean Christopherson wrote:
> On Mon, Apr 15, 2024, Mingwei Zhang wrote:
>> On Mon, Apr 15, 2024 at 10:38 AM Sean Christopherson <seanjc@...gle.com> wrote:
>>> One of my biggest complaints with the current vPMU code is that the roles and
>>> responsibilities between KVM and perf are poorly defined, which leads to suboptimal
>>> and hard to maintain code.
>>>
>>> Case in point, I'm pretty sure leaving guest values in PMCs _would_ leak guest
>>> state to userspace processes that have RDPMC permissions, as the PMCs might not
>>> be dirty from perf's perspective (see perf_clear_dirty_counters()).
>>>
>>> Blindly clearing PMCs in KVM "solves" that problem, but in doing so makes the
>>> overall code brittle because it's not clear whether KVM _needs_ to clear PMCs,
>>> or if KVM is just being paranoid.
>>
>> So once this rolls out, perf and vPMU are clients directly to PMU HW.
>
> I don't think this is a statement we want to make, as it opens a discussion
> that we won't win. Nor do I think it's one we *need* to make. KVM doesn't need
> to be on equal footing with perf in terms of owning/managing PMU hardware, KVM
> just needs a few APIs to allow faithfully and accurately virtualizing a guest PMU.
>
>> Faithful cleaning (blind cleaning) has to be the baseline
>> implementation, until both clients agree to a "deal" between them.
>> Currently, there is no such deal, but I believe we could have one via
>> future discussion.
>
> What I am saying is that there needs to be a "deal" in place before this code
> is merged. It doesn't need to be anything fancy, e.g. perf can still pave over
> PMCs it doesn't immediately load, as opposed to using cpu_hw_events.dirty to lazily
> do the clearing. But perf and KVM need to work together from the get go, ie. I
> don't want KVM doing something without regard to what perf does, and vice versa.
>
There is a similar issue with the LoongArch vPMU, where the VM can access
the PMU hardware directly and the PMU hardware is shared between guest and
host. Besides the context switch, there are other places where the perf
core accesses the PMU hardware, such as the tick timer, hrtimers and IPI
function calls, and KVM can only intercept the context switch. Can we add
a callback handler to struct kvm_guest_cbs? Something like this:
@@ -6403,6 +6403,7 @@ static struct perf_guest_info_callbacks kvm_guest_cbs = {
 	.state			= kvm_guest_state,
 	.get_ip			= kvm_guest_get_ip,
 	.handle_intel_pt_intr	= NULL,
+	.lose_pmu		= kvm_guest_lose_pmu,
 };
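
Roughly, what I have in mind on the KVM side is something like the sketch
below. kvm_guest_lose_pmu() and kvm_pmu_put_guest_context() are only
placeholders to show the idea (kvm_get_running_vcpu() is the existing
helper); this is not an implementation:

/*
 * Placeholder only: neither .lose_pmu nor kvm_pmu_put_guest_context()
 * exists today.  The perf core (or the pmu hw driver) would invoke this
 * callback before touching counters that may still hold guest state, so
 * KVM can save the guest PMU context and hand the hardware back to the
 * host.
 */
static void kvm_guest_lose_pmu(void)
{
	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();

	if (!vcpu)
		return;

	/* Save guest counters and restore host ownership of the PMU. */
	kvm_pmu_put_guest_context(vcpu);	/* placeholder helper */
}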
By the way, I do not know whether the callback handler should be triggered
in the perf core or in the individual pmu hw driver. In the ARM pmu hw
driver it is triggered in the driver itself, e.g. via
kvm_vcpu_pmu_resync_el0(), but I think it would be better if it were done
in the perf core, something like the sketch below.
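
A very rough sketch of the perf core option. The lose_pmu member and the
perf_guest_lose_pmu() wrapper are purely illustrative, and the existing
callbacks are wired up differently in mainline (e.g. via static calls on
x86), so treat this only as a picture of where the hook could live:

/* kernel/events/core.c -- hypothetical wrapper, not mainline code. */
void perf_guest_lose_pmu(void)
{
	struct perf_guest_info_callbacks *cbs;

	rcu_read_lock();
	cbs = rcu_dereference(perf_guest_cbs);
	if (cbs && cbs->lose_pmu)
		cbs->lose_pmu();
	rcu_read_unlock();
}

/*
 * The perf core would then call perf_guest_lose_pmu() right before it
 * programs the counter hardware from paths KVM cannot intercept, e.g.
 * the tick timer, hrtimer or IPI function call paths mentioned above.
 */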
Regards
Bibo Mao