Message-ID: <2a7a78b3-cc4e-de17-ee5d-6f6582683d34@redhat.com>
Date:   Fri, 20 Jul 2018 10:06:52 +0200
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     Wanpeng Li <kernellwp@...il.com>
Cc:     Radim Krcmar <rkrcmar@...hat.com>,
        LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>,
        Vitaly Kuznetsov <vkuznets@...hat.com>
Subject: Re: [PATCH v3 2/6] KVM: X86: Implement PV IPIs in linux guest

On 20/07/2018 07:58, Wanpeng Li wrote:
>>
>> We could keep the cluster size of 128, but it would be more complicated
>> to do the left shift in the first "else if".  If the limit is 64, you
>> can keep the two arguments in the hypercall, and just pass 0 as the
>> "high" bitmap on 64-bit kernels.
> As David pointed out, we need to scale to higher APIC IDs.

The offset is enough to scale to higher APIC IDs.

It's just an optimization to allow 128 CPUs per hypercall instead of 64
CPUs.  But actually you can use __uint128_t on 64-bit machines, I forgot
about that.  With u64 on 32-bit and __uint128_t on 64-bit, you can do 64
CPUs per hypercall on 32-bit and 128 CPUs per hypercall on 64-bit.
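To illustrate the idea, here is a minimal sketch of building such a 128-bit IPI bitmap with `__uint128_t` on a 64-bit build. The helper names (`send_ipi_hypercall`, `pv_send_ipi`) and the recording stub are hypothetical stand-ins for the real hypercall path, not the patch's actual code; the point is only how the lowest APIC ID becomes the offset and the remaining IDs become relative bit positions.

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical stand-in for the real hypercall; records its arguments
 * so the batching logic can be exercised in userspace. */
static __uint128_t sent_bitmap;
static unsigned long sent_offset;
static int hypercalls;

static void send_ipi_hypercall(__uint128_t bitmap, unsigned long offset)
{
	sent_bitmap = bitmap;
	sent_offset = offset;
	hypercalls++;
}

/* Build a 128-bit bitmap of APIC IDs relative to the lowest ID in the
 * batch.  IDs that fall outside offset+128 would need a second
 * hypercall; this sketch assumes one batch is enough. */
static void pv_send_ipi(const unsigned int *apic_ids, int n)
{
	__uint128_t bitmap = 0;
	unsigned long offset = apic_ids[0];
	int i;

	for (i = 1; i < n; i++)
		if (apic_ids[i] < offset)
			offset = apic_ids[i];

	for (i = 0; i < n; i++) {
		unsigned int rel = apic_ids[i] - offset;

		assert(rel < 128);	/* sketch: single-batch assumption */
		bitmap |= (__uint128_t)1 << rel;
	}
	send_ipi_hypercall(bitmap, offset);
}
```

With the offset, sparse or high APIC IDs still fit as long as the batch spans fewer than 128 IDs; a 32-bit kernel would use the same logic with a u64 bitmap and a 64-ID window.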

> I will add the cpu-id to apic-id translation in the for loop. How about
> calling kvm_hypercall2(KVM_HC_SEND_IPI, ipi_bitmap, vector); directly?
> In addition, why do we need to pass 0 as the "high" bitmap even in the
> 128-vCPU case?
