Message-ID: <d65fbd73-7612-8348-2fd8-8da0f5e2a3c0@bytedance.com>
Date:   Tue, 16 Nov 2021 10:56:25 +0800
From:   zhenwei pi <pizhenwei@...edance.com>
To:     Wanpeng Li <kernellwp@...il.com>,
        Maxim Levitsky <mlevitsk@...hat.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Kele Huang <huangkele@...edance.com>, chaiwen.cc@...edance.com,
        xieyongji@...edance.com, dengliang.1214@...edance.com,
        Wanpeng Li <wanpengli@...cent.com>,
        Sean Christopherson <seanjc@...gle.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        the arch/x86 maintainers <x86@...nel.org>,
        "H. Peter Anvin" <hpa@...or.com>, kvm <kvm@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: Re: [RFC] KVM: x86: SVM: don't expose PV_SEND_IPI feature with
 AVIC



On 11/16/21 10:48 AM, Wanpeng Li wrote:
> On Mon, 8 Nov 2021 at 22:09, Maxim Levitsky <mlevitsk@...hat.com> wrote:
>>
>> On Mon, 2021-11-08 at 11:30 +0100, Paolo Bonzini wrote:
>>> On 11/8/21 10:59, Kele Huang wrote:
>>>> Currently, AVIC is disabled if the x2apic feature is exposed to the
>>>> guest or the in-kernel PIT is in re-injection mode.
>>>>
>>>> We can enable AVIC with options:
>>>>
>>>>     Kmod args:
>>>>     modprobe kvm_amd avic=1 nested=0 npt=1
>>>>     QEMU args:
>>>>     ... -cpu host,-x2apic -global kvm-pit.lost_tick_policy=discard ...
>>>>
>>>> When the LAPIC works in xAPIC mode, both AVIC and the PV_SEND_IPI
>>>> feature can accelerate IPI operations for the guest. However, the
>>>> relationship between AVIC and the PV_SEND_IPI feature has not been
>>>> sorted out.
>>>>
>>>> Logically, AVIC accelerates most frequent IPI operations without
>>>> VMM intervention, while the re-hooking of apic->send_IPI_xxx by the
>>>> PV_SEND_IPI feature masks it out. People can get confused when AVIC
>>>> is enabled but they still see lots of hypercall kvm_exits from IPIs.
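
(For context, the re-hooking mentioned above is the guest-side PV IPI setup
in arch/x86/kernel/kvm.c. A simplified sketch of what that hook does, with
APIC-ID packing, cluster handling and the >128-destination case omitted:)

#include <asm/apic.h>
#include <asm/kvm_para.h>
#include <asm/smp.h>

static void kvm_send_ipi_mask(const struct cpumask *mask, int vector)
{
	unsigned long ipi_bitmap[2] = { 0, 0 };
	unsigned int cpu;

	for_each_cpu(cpu, mask)
		__set_bit(per_cpu(x86_cpu_to_apicid, cpu), ipi_bitmap);

	/* One hypercall covers up to 128 destination APIC IDs. */
	kvm_hypercall4(KVM_HC_SEND_IPI, ipi_bitmap[0], ipi_bitmap[1],
		       0 /* min APIC ID */, APIC_DM_FIXED | vector);
}

static void kvm_setup_pv_ipi(void)
{
	/* This re-hooking is what bypasses AVIC's hardware IPI path. */
	apic->send_IPI_mask = kvm_send_ipi_mask;
}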
>>>>
>>>> For performance, the benchmark tool at
>>>> https://lore.kernel.org/kvm/20171219085010.4081-1-ynorov@caviumnetworks.com/
>>>> shows the results below:
>>>>
>>>>     Test env:
>>>>     CPU: AMD EPYC 7742 64-Core Processor
>>>>     2 vCPUs pinned 1:1
>>>>     idle=poll
>>>>
>>>>     Test result (average ns per IPI over many runs):
>>>>     PV_SEND_IPI      : 1860
>>>>     AVIC             : 1390
>>>>
>>>> Besides, the discussion at https://lkml.org/lkml/2021/10/20/423
>>>> has some solid performance test results on this.
>>>>
>>>> This patch fixes this by masking out the PV_SEND_IPI feature when
>>>> AVIC is enabled, during setup of the guest vCPUs' CPUID.
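
(As a rough illustration of the masking described above; the helper name and
the exact hook point in arch/x86/kvm/cpuid.c's KVM_CPUID_FEATURES handling
are assumptions, not the literal patch:)

/*
 * Hypothetical helper, called from the KVM_CPUID_FEATURES (0x40000001)
 * leaf handling in arch/x86/kvm/cpuid.c.  The name and the condition
 * (enable_apicv vs. an AVIC-specific check) are assumptions.
 */
static void kvm_mask_pv_send_ipi(struct kvm_cpuid_entry2 *entry)
{
	/* Hide PV_SEND_IPI when hardware APIC virtualization would
	 * accelerate IPIs anyway, so the guest keeps the native path. */
	if (enable_apicv)
		entry->eax &= ~(1 << KVM_FEATURE_PV_SEND_IPI);
}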
>>>>
>>>> Signed-off-by: Kele Huang <huangkele@...edance.com>
>>>
>>> AVIC can change across migration.  I think we should instead use a new
>>> KVM_HINTS_* bit (KVM_HINTS_ACCELERATED_LAPIC or something like that).
>>> The KVM_HINTS_* bits are intended to be changeable across migration,
>>> even though we don't have for now anything equivalent to the Hyper-V
>>> reenlightenment interrupt.
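
(A guest-side consumer of such a hint could look roughly like this; sketch
only, where KVM_HINTS_ACCELERATED_LAPIC and its bit number are assumptions
based on the suggestion above, and kvm_send_ipi_mask is the PV hook sketched
earlier:)

/* Hypothetical hint bit; KVM_HINTS_REALTIME already occupies bit 0. */
#define KVM_HINTS_ACCELERATED_LAPIC	1

static void kvm_setup_pv_ipi(void)
{
	/*
	 * If the host advertises an accelerated LAPIC (AVIC/APICv, IPI
	 * virtualization), keep the native IPI path and skip the PV
	 * hooks.  Since KVM_HINTS_* bits may change across migration,
	 * a complete solution would also re-evaluate this after
	 * migration, which is the open problem noted above.
	 */
	if (kvm_para_has_hint(KVM_HINTS_ACCELERATED_LAPIC))
		return;

	apic->send_IPI_mask = kvm_send_ipi_mask;
}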
>>
>> Note that the same issue exists with Hyper-V. It also has a PV APIC,
>> which is harmful when AVIC is enabled (that is, the guest uses it
>> instead of AVIC, negating AVIC's benefits).
>>
>> Also note that Intel recently posted IPI virtualization, which will
>> soon make this issue relevant to APICv as well.
> 
> The recently posted Intel IPI virtualization will accelerate unicast
> IPIs but not broadcast IPIs, while AMD AVIC accelerates unicast IPIs
> well but handles broadcast IPIs worse than PV IPIs. Could we just
> handle unicast IPIs here?
> 
>      Wanpeng
> 
Depending on the number of target vCPUs, broadcast IPIs get unstable 
performance on AVIC, usually worse than PV send-IPI.
So I agree with Wanpeng's point: is it possible to separate single IPIs 
from broadcast IPIs on a hardware-accelerated platform?
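
One possible shape of such a split on the guest side, as a rough sketch
(the lapic_accelerated flag is hypothetical and would come from something
like the hint bit discussed above; __send_ipi_mask stands for the existing
hypercall helper in arch/x86/kernel/kvm.c):

static bool lapic_accelerated;	/* hypothetical, set from a host hint */

static void kvm_send_ipi_mask(const struct cpumask *mask, int vector)
{
	if (lapic_accelerated && cpumask_weight(mask) == 1) {
		/* Unicast: let the hardware-accelerated native path
		 * (AVIC / Intel IPI virtualization) handle it. */
		apic->send_IPI(cpumask_first(mask), vector);
		return;
	}

	/* Multicast/broadcast: the PV hypercall path usually wins. */
	__send_ipi_mask(mask, vector);
}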

-- 
zhenwei pi
