Message-ID: <20180719172202.GD11749@flask>
Date:   Thu, 19 Jul 2018 19:22:02 +0200
From:   Radim Krčmář <rkrcmar@...hat.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     Wanpeng Li <kernellwp@...il.com>, linux-kernel@...r.kernel.org,
        kvm@...r.kernel.org, Vitaly Kuznetsov <vkuznets@...hat.com>
Subject: Re: [PATCH v3 2/6] KVM: X86: Implement PV IPIs in linux guest

2018-07-19 18:47+0200, Paolo Bonzini:
> On 19/07/2018 18:28, Radim Krčmář wrote:
> >> +
> >> +	kvm_hypercall3(KVM_HC_SEND_IPI, ipi_bitmap_low, ipi_bitmap_high, vector);
> > and
> > 
> > 	kvm_hypercall3(KVM_HC_SEND_IPI, ipi_bitmap[0], ipi_bitmap[1], vector);
> > 
> > Still, the main problem is that we can only address 128 APICs.
> > 
> > A simple improvement would reuse the vector field (as we need only 8
> > bits) and put an 'offset' in the rest.  The offset would say which
> > cluster of 128 we are addressing.  24 bits of offset results in 2^31
> > total addressable CPUs (we probably won't even need that many bits).
> > The downside of this is that we can only address 128 APICs at a time.
> > 
> > It's basically the same as x2apic cluster mode, only with a cluster
> > size of 128 instead of 16, so the code should be a straightforward port.
> > And because x2apic code doesn't seem to use any division by the cluster
> > size, we could even try to use kvm_hypercall4, add ipi_bitmap[2], and
> > make the cluster size 192. :)
> 
> I did suggest an offset earlier in the discussion.
> 
> The main problem is that consecutive CPU ids do not map to consecutive
> APIC ids.  But still, we could do a hypercall whenever the total range
> exceeds 64.  Something like

Right, the cluster x2apic implementation came with a second mapping to do
this in linear time and send as few IPIs as possible:

        /* Collapse cpus in a cluster so a single IPI per cluster is sent */
        for_each_cpu(cpu, tmpmsk) {
                struct cluster_mask *cmsk = per_cpu(cluster_masks, cpu);

                dest = 0;
                for_each_cpu_and(clustercpu, tmpmsk, &cmsk->mask)
                        dest |= per_cpu(x86_cpu_to_logical_apicid, clustercpu);

                if (!dest)
                        continue;

                __x2apic_send_IPI_dest(dest, vector, apic->dest_logical);
                /* Remove cluster CPUs from tmpmask */
                cpumask_andnot(tmpmsk, tmpmsk, &cmsk->mask);
        }

I think that the extra memory consumption would be excusable.
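
For reference, a rough (untested) sketch of how the offset idea could look
on the guest side.  It assumes KVM_HC_SEND_IPI carries the 128-wide window
index in the bits above the 8-bit vector, and uses x86_cpu_to_apicid for
the CPU -> APIC id mapping; both details are assumptions, not something the
patch implements:

        static void kvm_send_ipi_mask(const struct cpumask *mask, int vector)
        {
                unsigned long ipi_bitmap[2];    /* 128 bits on 64-bit kernels */
                cpumask_t tmpmsk;               /* on-stack copy, sketch only */
                u32 apicid, offset;
                int cpu;

                cpumask_copy(&tmpmsk, mask);
                while (!cpumask_empty(&tmpmsk)) {
                        /* Anchor the window at the first remaining APIC id */
                        cpu = cpumask_first(&tmpmsk);
                        offset = per_cpu(x86_cpu_to_apicid, cpu) / 128;

                        memset(ipi_bitmap, 0, sizeof(ipi_bitmap));
                        for_each_cpu(cpu, &tmpmsk) {
                                apicid = per_cpu(x86_cpu_to_apicid, cpu);
                                if (apicid / 128 != offset)
                                        continue;
                                __set_bit(apicid % 128, ipi_bitmap);
                                cpumask_clear_cpu(cpu, &tmpmsk);
                        }

                        /* vector in bits 0-7, window index above (assumed ABI) */
                        kvm_hypercall3(KVM_HC_SEND_IPI, ipi_bitmap[0],
                                       ipi_bitmap[1], offset << 8 | vector);
                }
        }

One hypercall goes out per populated window of 128 APIC ids, so the cost
stays linear in the number of CPUs even when the APIC ids are scattered.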
