Message-ID: <7eba0113-5188-883c-307d-9e0f8222b913@oracle.com>
Date:   Thu, 8 Jun 2017 17:05:30 -0700
From:   Ankur Arora <ankur.a.arora@...cle.com>
To:     Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
        Juergen Gross <jgross@...e.com>
Cc:     linux-kernel@...r.kernel.org, xen-devel@...ts.xenproject.org,
        boris.ostrovsky@...cle.com
Subject: Re: [Xen-devel] [PATCH 0/5] xen/pvh*: Support > 32 VCPUs at restore

On 2017-06-08 03:53 PM, Konrad Rzeszutek Wilk wrote:
> On Thu, Jun 08, 2017 at 10:28:15AM +0200, Juergen Gross wrote:
>> On 03/06/17 02:05, Ankur Arora wrote:
>>> This patch series fixes a bunch of issues in the xen_vcpu setup
>>> logic.
>>>
>>> Simplify xen_vcpu related code: code refactoring in advance of the
>>> rest of the patch series.
>>>
>>> Support > 32 VCPUs at restore: unify all vcpu restore logic in
>>> xen_vcpu_restore() and support > 32 VCPUs for PVH*.
>>>
>>> Remove vcpu info placement from restore (!SMP): some pv_ops are
>>> marked RO after init so let's not redo xen_setup_vcpu_info_placement
>>> at restore.
>>>
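
A minimal sketch of the constraint above, with hypothetical example_* names rather than code from the series: data placed in __ro_after_init is writable during boot-time init but write-protected afterwards, so redoing the placement decision on the restore path would mean writing to read-only memory.

    #include <linux/cache.h>        /* __ro_after_init */
    #include <linux/init.h>

    /* Hypothetical flag standing in for a pv_ops-style setting. */
    static bool example_use_direct_ops __ro_after_init;

    static int __init example_choose_ops(void)
    {
            /* Writable here: the section is made read-only only once init completes. */
            example_use_direct_ops = true;
            return 0;
    }
    early_initcall(example_choose_ops);

    /* A later write to example_use_direct_ops -- e.g. from the restore path --
     * would hit a read-only page, hence the decision is made once at boot. */
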
>>> Handle xen_vcpu_setup() failure in hotplug: handle vcpu_info
>>> registration failures by propagating them from the cpuhp-prepare
>>> callback back up to the cpuhp logic.
>>>
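
A minimal sketch of that propagation path, using hypothetical example_* names rather than the series' actual callbacks: the cpuhp core treats a negative return from a prepare-stage callback as a failure and aborts bringing that CPU up.

    #include <linux/cpuhotplug.h>
    #include <linux/init.h>

    /* Stand-in for the real vcpu_info registration (hypothetical). */
    static int example_register_vcpu_info(unsigned int cpu)
    {
            return 0;       /* a negative errno here means registration failed */
    }

    static int example_cpu_up_prepare(unsigned int cpu)
    {
            /* Propagate any failure: a negative return makes the cpuhp core
             * abort onlining this CPU instead of ignoring the error. */
            return example_register_vcpu_info(cpu);
    }

    static int __init example_hotplug_init(void)
    {
            int rc = cpuhp_setup_state_nocalls(CPUHP_BP_PREPARE_DYN,
                                               "x86/example:prepare",
                                               example_cpu_up_prepare, NULL);
            return rc < 0 ? rc : 0;
    }
    early_initcall(example_hotplug_init);
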
>>> Handle xen_vcpu_setup() failure at boot: pull CPUs (> MAX_VIRT_CPUS)
>>> down if we fall back to xen_have_vcpu_info_placement = 0.
>>>
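
Sketch of the fallback idea, again hypothetical rather than the series' exact code: without per-cpu vcpu_info placement only the MAX_VIRT_CPUS (32 on x86) vcpu_info slots embedded in the shared_info page are usable, so CPUs that cannot be backed by one of those slots are dropped from the possible mask before they are ever brought up.

    #include <linux/cpumask.h>
    #include <xen/interface/xen.h>          /* MAX_VIRT_CPUS */

    static void example_limit_possible_cpus(void)
    {
            unsigned int cpu;

            /* Drop CPUs with no shared_info vcpu_info slot; the exact boundary
             * handling is left to the series itself. */
            for_each_possible_cpu(cpu) {
                    if (cpu >= MAX_VIRT_CPUS)
                            set_cpu_possible(cpu, false);
            }
    }
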
>>> Tested with various combinations of PV/PVHv2/PVHVM save/restore
>>> and cpu-hotadd-hotremove. Also tested by simulating failure in
>>> VCPUOP_register_vcpu_info.
>>>
>>> Please review.
>>
>> Just a question regarding the sequence of tags (Reviewed-by: and
>> Signed-off-by:) in the patches:
>>
>> It seems a little bit odd to have the Reviewed-by: tag before the
>> S-o-b: tag. This suggests the review was done before you wrote the
>> patches, which is hard to believe. :-)
Heh :). As Konrad surmises, I was unsure of the order and manually
ordered them to comport with Linux style. (Now that I see arch/x86/xen/,
I see that Xen puts them in time-order.)

Happy to reorder in case of V2.

Ankur

> 
> That is how Linux orders the tags; just do 'git log' and you
> will see that pattern.
>>
>> So please reorder the tags in future patches to be in their logical
>> sequence.
> 
> While Xen uses the other order (SoB first, then Reviewed-by).
> 
>>
>> I can fix this up in this series in case there is no need for V2.
>>
>>
>> Juergen
>>
>>>
>>> Ankur Arora (5):
>>>    xen/vcpu: Simplify xen_vcpu related code
>>>    xen/pvh*: Support > 32 VCPUs at domain restore
>>>    xen/pv: Fix OOPS on restore for a PV, !SMP domain
>>>    xen/vcpu: Handle xen_vcpu_setup() failure in hotplug
>>>    xen/vcpu: Handle xen_vcpu_setup() failure at boot
>>>
>>>   arch/x86/xen/enlighten.c     | 154 +++++++++++++++++++++++++++++++------------
>>>   arch/x86/xen/enlighten_hvm.c |  33 ++++------
>>>   arch/x86/xen/enlighten_pv.c  |  87 +++++++++++-------------
>>>   arch/x86/xen/smp.c           |  31 +++++++++
>>>   arch/x86/xen/smp.h           |   2 +
>>>   arch/x86/xen/smp_hvm.c       |  14 +++-
>>>   arch/x86/xen/smp_pv.c        |   6 +-
>>>   arch/x86/xen/suspend_hvm.c   |  11 +---
>>>   arch/x86/xen/xen-ops.h       |   3 +-
>>>   include/xen/xen-ops.h        |   2 +
>>>   10 files changed, 218 insertions(+), 125 deletions(-)
>>>
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@...ts.xen.org
>> https://lists.xen.org/xen-devel
