Date:   Tue, 15 Aug 2017 10:10:47 -0400
From:   Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To:     Lan Tianyu <tianyu.lan@...el.com>
Cc:     Radim Krčmář <rkrcmar@...hat.com>,
        David Hildenbrand <david@...hat.com>, pbonzini@...hat.com,
        tglx@...utronix.de, mingo@...hat.com, hpa@...or.com,
        x86@...nel.org, kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM/x86: Increase max vcpu number to 352

On Tue, Aug 15, 2017 at 11:00:04AM +0800, Lan Tianyu wrote:
> On 2017-08-12 03:35, Konrad Rzeszutek Wilk wrote:
> > On Fri, Aug 11, 2017 at 03:00:20PM +0200, Radim Krčmář wrote:
> >> 2017-08-11 10:11+0200, David Hildenbrand:
> >>> On 11.08.2017 09:49, Lan Tianyu wrote:
> >>>> Hi Konrad:
> >>>> 	Thanks for your review.
> >>>>
> >>>> On 2017-08-11 01:50, Konrad Rzeszutek Wilk wrote:
> >>>>> On Thu, Aug 10, 2017 at 06:00:59PM +0800, Lan Tianyu wrote:
> >>>>>> The Intel Xeon Phi chip will support 352 logical threads. For the HPC use
> >>>>>> case, it will create a huge VM with as many vCPUs as the host has CPUs.
> >>>>>> This patch increases the max vCPU number to 352.
> >>>>>
> >>>>> Why not 1024 or 4096?
> >>>>
> >>>> The number is based on current demand. We could set a higher one later,
> >>>> since KVM already has x2APIC and vIOMMU interrupt remapping support.
> >>>>
> >>>>>
> >>>>> Are there any issues with increasing the value from 288 to 352 right now?
> >>>>
> >>>> None found.
> >>
> >> Yeah, the only issue until around 2^20 (when we reach the maximum of
> >> logical x2APIC addressing) should be the size of per-VM arrays when only
> >> a few VCPUs are going to be used.
> > 
> > Migration with 352 CPUs all being busy dirtying memory and also poking
> > at various I/O ports (say all of them dirtying the VGA) is no problem?
> 
> This depends on what kind of workload is running during migration. I
> think this may affect service downtime, since there may be a lot of dirty
> memory data to transfer after stopping the vCPUs. It also depends on how
> the user sets "migrate_set_downtime" for QEMU. But I don't think increasing
> the vCPU count will break the migration function itself.

OK, so let me take a step back.

I see this nice 'supported' CPU count that is exposed in the kvm module.

Then there is QEMU throwing out a warning if you crank up the CPU count
above that number.

Red Hat's web pages talk about the CPU count as well.

And I am assuming all of those are around what has been tested and
what has been shown to work. And one of those test cases surely must
be migration.

Ergo, if the vCPU count increase breaks migration, then it is
a regression.

Or does a fix/work need to be done to support migration at a
higher CPU count?


Is my understanding incorrect?
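
For reference, the two counts the kvm module exposes can be queried from
user space with KVM_CHECK_EXTENSION; a minimal sketch (error handling
mostly elided):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    int main(void)
    {
            int kvm = open("/dev/kvm", O_RDWR);
            if (kvm < 0) {
                    perror("/dev/kvm");
                    return 1;
            }
            /* Recommended (i.e. well-tested) vCPU count: */
            int nr  = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_NR_VCPUS);
            /* Hard limit, KVM_MAX_VCPUS on this kernel: */
            int max = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPUS);
            printf("recommended vCPUs: %d, maximum vCPUs: %d\n", nr, max);
            return 0;
    }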

> 
> > 
> > 
> >>
> >>>>> Also, perhaps this should be made into a Kconfig entry?
> >>>>
> >>>> That would be another option, but I find that different platforms define
> >>>> different MAX_VCPU values. If we introduce a generic Kconfig entry,
> >>>> different platforms would need different ranges.
> > 
> > 
> > By different platforms you mean q35 vs the older one, and such?
> 
> I meant that x86, arm, sparc, and the other architectures' code defines
> different max vcpu numbers.

Right, and?
> 
> > Not whether the underlying accelerator is TCG, Xen, KVM, or bhyve?
> > 
> > What I was trying to understand is whether it even makes sense for
> > the platforms to have such limits in the first place - and whether the
> > accelerators should instead be the ones setting them?
> > 
> > 
> >>>>
> >>>> Radim & Paolo, could you give some input? In the QEMU thread, we will set
> >>>> the max vCPU count to 8192 for x86 VMs. In KVM, the lengths of the vcpu
> >>>> pointer array in struct kvm and of dest_vcpu_bitmap in
> >>>> kvm_irq_delivery_to_apic() are specified by KVM_MAX_VCPUS. Should we keep
> >>>> KVM aligned with QEMU?
> >>
> >> That would be great.
> >>
> >>> commit 682f732ecf7396e9d6fe24d44738966699fae6c0
> >>> Author: Radim Krčmář <rkrcmar@...hat.com>
> >>> Date:   Tue Jul 12 22:09:29 2016 +0200
> >>>
> >>>     KVM: x86: bump MAX_VCPUS to 288
> >>>
> >>>     288 is in high demand because of Knights Landing CPU.
> >>>     We cannot set the limit to 640k, because that would be wasting space.
> >>>
> >>> I think we want to keep it small as long as possible. I remember a patch
> >>> series from Radim which would dynamically allocate memory for these
> >>> arrays (using a new VM creation ioctl, specifying the max # of vcpus).
> >>> Wonder what happened to that (I remember requesting a simple realloc
> >>> instead of a new VM creation ioctl :] ).
> >>
> >> Eh, I forgot about them ...  I didn't like the dynamic allocation as we
> >> would need to protect the memory, which would result in a much bigger
> >> changeset, or fragile macros.
> >>
> >> I can't recall the disgust now, so I'll send an RFC with the dynamic
> >> version to see how it turned out.
> >>
> >> Thanks.
> 
> 
> -- 
> Best regards
> Tianyu Lan
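
For what it's worth, my rough reading of the dynamic-allocation idea
David and Radim are describing, as a sketch only (this is not the actual
patch series; the new names are illustrative):

	/* Today (fixed at compile time, include/linux/kvm_host.h): */
	struct kvm {
		/* ... */
		struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
	};

	/* and kvm_irq_delivery_to_apic() puts a same-sized bitmap on the
	 * stack -- with KVM_MAX_VCPUS = 8192 that is already 1 KiB:
	 *
	 *	DECLARE_BITMAP(dest_vcpu_bitmap, KVM_MAX_VCPUS);
	 *
	 * The dynamic version would look something like:
	 */
	struct kvm_dyn {
		/* ... */
		u32 max_vcpus;		 /* from a new VM-creation ioctl  */
		struct kvm_vcpu **vcpus; /* kcalloc(max_vcpus,
					  * sizeof(*vcpus), GFP_KERNEL)
					  * at VM creation */
	};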
