Message-ID: <20170815142005.GA5975@flask>
Date:   Tue, 15 Aug 2017 16:20:05 +0200
From:   Radim Krčmář <rkrcmar@...hat.com>
To:     Lan Tianyu <tianyu.lan@...el.com>
Cc:     Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
        David Hildenbrand <david@...hat.com>, pbonzini@...hat.com,
        tglx@...utronix.de, mingo@...hat.com, hpa@...or.com,
        x86@...nel.org, kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM/x86: Increase max vcpu number to 352

2017-08-15 11:00+0800, Lan Tianyu:
> On 2017-08-12 03:35, Konrad Rzeszutek Wilk wrote:
>> On Fri, Aug 11, 2017 at 03:00:20PM +0200, Radim Krčmář wrote:
>>> 2017-08-11 10:11+0200, David Hildenbrand:
>>>> On 11.08.2017 09:49, Lan Tianyu wrote:
>>>>> On 2017-08-11 01:50, Konrad Rzeszutek Wilk wrote:
>>>>>> Are there any issues with increasing the value from 288 to 352 right now?
>>>>>
>>>>> None found.
>>>
>>> Yeah, the only issue until around 2^20 (when we reach the maximum of
>>> logical x2APIC addressing) should be the size of per-VM arrays when
>>> only a few VCPUs are going to be used.

(I was talking only about the KVM side.)
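For anyone skimming the thread: the per-VM cost mentioned above comes from
arrays dimensioned by the compile-time vcpu limit. A minimal C sketch of the
pattern (illustrative only; this is not the actual kvm_host.h layout):

    /* Per-VM arrays are sized for the maximum, not for the number of
     * vcpus a guest actually has. */
    #define KVM_MAX_VCPUS 352       /* raised from 288 by this patch */

    struct kvm_vcpu;                /* opaque here */

    struct kvm_sketch {
            struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
    };

On a 64-bit host the pointer array above grows by 64 * 8 = 512 bytes per VM
when going from 288 to 352, so the waste for small guests stays modest.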

>> Migration with 352 CPUs all being busy dirtying memory and also poking
>> at various I/O ports (say all of them dirtying the VGA) is no problem?
> 
> This depends on what kind of workload is running during migration. I
> think this may affect service downtime, since there may be a lot of
> dirty memory to transfer after stopping the vcpus. This also depends on
> how the user sets "migrate_set_downtime" for qemu. But I think
> increasing the number of vcpus will not break the migration function.
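For reference, the downtime knob mentioned above is set from the QEMU
monitor; a short example (the value and the destination URI below are
made up):

    (qemu) migrate_set_downtime 0.3
    (qemu) migrate -d tcp:dest-host:4444

migrate_set_downtime takes the maximum tolerated downtime in seconds, and
migrate -d starts the migration in the background.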

Utilizing post-copy in the last migration phase should make migration of
busy big guests possible.  (I agree that pre-copy is not going to be
feasible.)
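
A sketch of that post-copy flow with the QEMU monitor (the capability and
commands are from QEMU's post-copy support; the destination URI is a
placeholder, and postcopy-ram has to be enabled on both source and
destination before starting):

    (qemu) migrate_set_capability postcopy-ram on
    (qemu) migrate -d tcp:dest-host:4444
    (qemu) migrate_start_postcopy

migrate_start_postcopy flips a running pre-copy migration into post-copy,
so the remaining dirty pages are faulted in on the destination instead of
being re-sent over and over.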
