Message-ID: <0b21f3b7-9946-4f65-71c7-260ef490928d@gmail.com>
Date:   Wed, 28 Jun 2023 14:07:25 +0800
From:   Like Xu <like.xu.linux@...il.com>
To:     Xiong Y Zhang <xiong.y.zhang@...el.com>,
        Sean Christopherson <seanjc@...gle.com>
Cc:     "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "pbonzini@...hat.com" <pbonzini@...hat.com>,
        "peterz@...radead.org" <peterz@...radead.org>,
        "kan.liang@...ux.intel.com" <kan.liang@...ux.intel.com>,
        "zhenyuw@...ux.intel.com" <zhenyuw@...ux.intel.com>,
        Zhiyuan Lv <zhiyuan.lv@...el.com>,
        Weijiang Yang <weijiang.yang@...el.com>
Subject: Re: [PATCH 3/4] KVM: VMX/pmu: Enable inactive vLBR event in guest LBR
 MSR emulation

On 27/6/2023 11:07 pm, Sean Christopherson wrote:
> On Tue, Jun 27, 2023, Xiong Y Zhang wrote:
>>> On Sun, Jun 25, 2023, Xiong Y Zhang wrote:
>>>>> On Fri, Jun 16, 2023, Xiong Zhang wrote:
>>>>> 	/*
>>>>> 	 * Attempt to re-enable the vLBR event if it was disabled due to
>>>>> 	 * contention with host LBR usage, i.e. was put into an error state.
>>>>> 	 * Perf doesn't notify KVM if the host stops using LBRs, i.e. KVM needs
>>>>> 	 * to manually re-enable the event.
>>>>> 	 */
>>>>>
>>>>> Which begs the question, why can't there be a notification of some
>>>>> form that the LBRs are once again available?
>>>> This is the perf scheduler's rule.  If a pinned event cannot get the
>>>> hardware resource because of resource contention, perf puts it into an
>>>> error state and excludes it from scheduling; even if the resource
>>>> becomes available later, perf will not schedule the event again while
>>>> it is in the error state, and the only way to reschedule it is to
>>>> enable it again.  If a non-pinned event cannot get the resource, perf
>>>> puts it into an inactive state and reschedules it automatically once
>>>> the resource is available.  The vLBR event is a per-process pinned
>>>> event.
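
The exchange above describes both halves of the contract: a pinned event is
demoted to an error state on contention, and the owner must re-enable it by
hand because perf sends no notification.  A minimal sketch of what that looks
like with the in-kernel perf API (the attr values and function names here are
illustrative, not the exact ones KVM uses for the vLBR event):

	#include <linux/perf_event.h>

	/*
	 * Illustrative only: create a per-task pinned event.  pinned = 1 is
	 * what makes perf demote the event to PERF_EVENT_STATE_ERROR, rather
	 * than merely deactivating it, when it loses the hardware to a
	 * competing host user.
	 */
	static struct perf_event *create_pinned_event(struct task_struct *task)
	{
		struct perf_event_attr attr = {
			.type		= PERF_TYPE_RAW,
			.size		= sizeof(attr),
			.pinned		= 1,	/* error state on contention */
			.exclude_host	= 1,	/* count only while the guest runs */
			/* LBR branch-sampling config omitted; it is model specific */
		};

		return perf_event_create_kernel_counter(&attr, -1, task, NULL, NULL);
	}

	/*
	 * Perf never tells the owner when the contended resource is free
	 * again, so the owner has to notice the error state itself and
	 * re-enable the event; perf_event_enable() clears the error state
	 * and lets the perf scheduler try to activate the event again.
	 */
	static void reenable_if_errored(struct perf_event *event)
	{
		if (READ_ONCE(event->state) == PERF_EVENT_STATE_ERROR)
			perf_event_enable(event);
	}
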
>>>
>>> That doesn't answer my question.  I get that all of this is subject to perf
>>> scheduling, I'm asking why perf doesn't communicate directly with KVM to
>>> coordinate access to LBRs instead of pulling the rug out from under KVM.
>> Perf doesn't need such a notification interface currently: a non-pinned
>> event becomes active again automatically once the resource is available,
>> and only a pinned event stays unscheduled, in the error state, even when
>> the resource is available.  Perf may refuse to add such an interface for
>> KVM's use alone.
> 
> Or maybe perf will be overjoyed that someone is finally proposing a coherent
> interface.  Until we actually try/ask, we'll never know.

For the perf subsystem, KVM, like any other perf_event user in kernel space,
is just a user, and any external logic that deeply interferes with perf's
management of host PMU hardware resources will be defeated without question.
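
For concreteness, the interface being debated above might look roughly like
the sketch below.  This is purely hypothetical: perf has no such notifier
today, and every identifier here is invented for illustration only.

	/*
	 * Hypothetical: let perf call back into the event's owner (e.g. KVM)
	 * when a pinned event that was demoted to PERF_EVENT_STATE_ERROR
	 * could be scheduled again, instead of the owner polling the state
	 * and calling perf_event_enable() by hand.
	 */
	struct perf_sched_notifier {
		/* invoked by perf once @event's lost resource is free again */
		void (*resource_available)(struct perf_event *event, void *data);
		void *data;
	};

	/* hypothetical registration call, one notifier per event */
	int perf_event_register_sched_notifier(struct perf_event *event,
					       struct perf_sched_notifier *n);
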

> 
>>> Your other response[1] mostly answered that question, but I want explicit
>>> documentation on the contract between perf and KVM with respect to LBRs.  In
>>> short, please work with Weijiang to fulfill my request/demand[*] that someone
>>> document KVM's LBR support, and justify the "design".  I am simply not willing to
>>> take KVM LBR patches until that documentation is provided.
>> Sure, I will work with Weijiang to supply such documentation. Will this
>> document be put in Documentation/virt/kvm/x86/ ?
> 
> Ya, Documentation/virt/kvm/x86/pmu.rst please.

Please pay extra attention in the documentation to the current status of,
and expectations for, host/guest LBR co-existence. I'm really looking
forward to this document written from the perspective of a new vPMU
developer. Thanks.
