Message-ID: <BN6PR03MB2481DC4ECFDCB687018CF1BFA0F90@BN6PR03MB2481.namprd03.prod.outlook.com>
Date:   Tue, 23 May 2017 17:50:03 +0000
From:   KY Srinivasan <kys@...rosoft.com>
To:     Vitaly Kuznetsov <vkuznets@...hat.com>,
        Andy Lutomirski <luto@...nel.org>
CC:     Stephen Hemminger <sthemmin@...rosoft.com>,
        Jork Loeser <Jork.Loeser@...rosoft.com>,
        Haiyang Zhang <haiyangz@...rosoft.com>,
        X86 ML <x86@...nel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        "Ingo Molnar" <mingo@...hat.com>, "H. Peter Anvin" <hpa@...or.com>,
        "devel@...uxdriverproject.org" <devel@...uxdriverproject.org>,
        "Thomas Gleixner" <tglx@...utronix.de>
Subject: RE: [PATCH v3 08/10] x86/hyper-v: use hypercall for remote TLB flush



> -----Original Message-----
> From: devel [mailto:driverdev-devel-bounces@...uxdriverproject.org] On
> Behalf Of Vitaly Kuznetsov
> Sent: Tuesday, May 23, 2017 5:37 AM
> To: Andy Lutomirski <luto@...nel.org>
> Cc: Stephen Hemminger <sthemmin@...rosoft.com>; Jork Loeser
> <Jork.Loeser@...rosoft.com>; Haiyang Zhang <haiyangz@...rosoft.com>;
> X86 ML <x86@...nel.org>; linux-kernel@...r.kernel.org; Steven Rostedt
> <rostedt@...dmis.org>; Ingo Molnar <mingo@...hat.com>; H. Peter Anvin
> <hpa@...or.com>; devel@...uxdriverproject.org; Thomas Gleixner
> <tglx@...utronix.de>
> Subject: Re: [PATCH v3 08/10] x86/hyper-v: use hypercall for remote TLB
> flush
> 
> Andy Lutomirski <luto@...nel.org> writes:
> 
> > On Mon, May 22, 2017 at 3:43 AM, Vitaly Kuznetsov
> <vkuznets@...hat.com> wrote:
> >> Andy Lutomirski <luto@...nel.org> writes:
> >>
> >>> On 05/19/2017 07:09 AM, Vitaly Kuznetsov wrote:
> >>>> The Hyper-V host can suggest that the guest use a hypercall for doing
> >>>> remote TLB flushes; this is supposed to be faster than IPIs.
> >>>>
> >>>> Implementation details: to issue HvFlushVirtualAddress{Space,List}
> >>>> hypercalls we need to put the input somewhere in memory, and we don't
> >>>> really want a memory allocation on each call, so we pre-allocate per-cpu
> >>>> memory areas on boot. These areas are of fixed size; we limit them to an
> >>>> arbitrary 16 entries (16 GVAs can specify 16 * 4096 pages).
> >>>>
> >>>> pv_ops patching happens very early, so we need to separate
> >>>> hyperv_setup_mmu_ops() and hyper_alloc_mmu().
> >>>>
> >>>> It is possible and easy to implement local TLB flushing too, and there
> >>>> is even a hint for that. However, I don't see room for optimization on
> >>>> the host side, as both a hypercall and a native TLB flush result in a
> >>>> VM exit. The hint is also not set on modern Hyper-V versions.
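
As a rough illustration of the fixed-size per-cpu input area described in
the commit message above (the layout, names and allocation helper here are
hypothetical, not taken from the actual patch):

#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/init.h>
#include <linux/percpu.h>
#include <linux/types.h>

#define HV_FLUSH_MAX_GVAS 16	/* the arbitrary limit mentioned above */

/* Hypothetical layout of the pre-allocated per-cpu hypercall input. */
struct hv_flush_input {
	u64 address_space;		/* address space id (CR3)           */
	u64 flags;			/* flush flags / hints              */
	u64 processor_mask;		/* vCPUs to be flushed              */
	u64 gva_list[HV_FLUSH_MAX_GVAS];/* GVA page + extra-page count each */
};

static DEFINE_PER_CPU(struct hv_flush_input *, hv_flush_area);

/* Allocate once at boot so the flush path never has to allocate memory. */
static int __init hv_alloc_flush_areas(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		per_cpu(hv_flush_area, cpu) =
			(void *)get_zeroed_page(GFP_KERNEL);

	return 0;
}
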
> >>>
> >>> Why do local flushes exit?
> >>
> >> "exist"? I don't know, to be honest. To me it makes no difference from
> >> hypervisor's point of view as intercepting tlb flushing instructions is
> >> not any different from implmenting a hypercall.
> >>
> >> Hyper-V gives its guests 'hints' to indicate if they need to use
> >> hypercalls for remote/locat TLB flush and I don't remember seeing
> >> 'local' bit set.
> >
> > What I meant was: why aren't local flushes handled directly in the
> > guest without exiting to the host?  Or are they?  In principle,
> > INVPCID should just work, right?  Even reading and writing CR3 back
> > should work if the hypervisor sets up the magic list of allowed CR3
> > values, right?
> >
> > I guess on older CPUs there might not be any way to flush the local
> > TLB without exiting, but I'm not *that* familiar with the details of
> > the virtualization extensions.
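
(To illustrate the point above with a sketch, not code from any of these
patches: a guest-local flush of non-global TLB entries can be done entirely
inside the guest by rewriting CR3, which is essentially what a native local
flush does; INVPCID is an alternative where available.)

static inline void guest_local_flush_sketch(void)
{
	unsigned long cr3;

	/* Re-writing CR3 with its current value flushes all non-global
	 * TLB entries for the current address space; no hypercall needed. */
	asm volatile("mov %%cr3, %0" : "=r" (cr3));
	asm volatile("mov %0, %%cr3" : : "r" (cr3) : "memory");
}
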
> >
> 
> Right, local flushes should 'just work'. If for whatever reason the
> hypervisor decides to trap us, there is nothing we can do about it.
> 
> >>
> >>>
> >>>> +static void hyperv_flush_tlb_others(const struct cpumask *cpus,
> >>>> +                                struct mm_struct *mm, unsigned long start,
> >>>> +                                unsigned long end)
> >>>> +{
> >>>
> >>> What tree will this go through?  I'm about to send a signature change
> >>> for this function for tip:x86/mm.
> >>
> >> I think this was going to go through Greg's char-misc tree, but if we
> >> need to synchronize I think we can push this through x86.
> >
> > Works for me.  Linus can probably resolve the trivial conflict.  But
> > going through the x86 tree might make sense here if that's okay with
> > you.
> >
> 
> Definitely fine with me; I'll leave this decision up to the x86 maintainers,
> Hyper-V maintainers, and Greg.
> 
> >>
> >>>
> >>> Also, how would this interact with PCID?  I have PCID patches that I'm
> >>> pretty happy with now, and I'm hoping to support PCID in 4.13.
> >>>
> >>
> >> Sorry, I wasn't following this work closely. The .flush_tlb_others() hook
> >> is not going away from pv_mmu_ops, right? In that case we can have both
> >> in 4.13. Or do you see any other clashes?
> >>
> >
> > The issue is that I'm changing the whole flush algorithm.  The main
> > patch that affects this is here:
> >
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/commit/?h=x86/pcid&id=a67bff42e1e55666fdbaddf233a484a8773688c1
> >
> > The interactions between that patch and paravirt flush helpers may be
> > complex, and it'll need some thought.  PCID makes everything even more
> > subtle, so just turning off PCID when paravirt flush is involved seems
> > the safest for now.  Ideally we'd eventually support PCID and paravirt
> > flushes together (and even eventual native remote flushes assuming
> > they ever get added).
> 
> I see. On Hyper-V, the HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST hypercall's
> interface is:
> 1) A list of entries to flush. Each entry is a PFN, with the lower 12 bits
> used to encode the number of additional pages after this one (defined by
> the PFN) that we'd like to flush. We can flush up to 509 entries with one
> hypercall (this can be extended, but it requires a pre-allocated memory
> region).
> 
> 2) Processor mask
> 
> 3) Address space id (all 64 bits of CR3. Not sure how it's used within
> the hypervisor).
> 
> HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX is more or less the same, but we
> need more space to specify > 64 vCPUs, so we'll be able to pass fewer than
> 509 entries.
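
For illustration only, the entry encoding described in (1) above could be
packed roughly like this (the helper name and details are hypothetical, not
the actual patch code; it assumes nr_pages >= 1):

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/types.h>

/*
 * Hypothetical packing of one gva_list entry: the page-aligned GVA sits in
 * the upper bits and the lower 12 bits hold the number of *additional*
 * consecutive pages to flush, so a single entry can cover up to 4096 pages.
 */
static inline u64 hv_pack_gva_entry(unsigned long start_gva,
				    unsigned long nr_pages)
{
	u64 entry = start_gva & PAGE_MASK;

	/* nr_pages - 1 additional pages, capped at the 12-bit maximum. */
	entry |= min(nr_pages, 4096UL) - 1;

	return entry;
}
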
> 
> The main advantage compared to sending IPIs, as far as I understand, is
> that virtual CPUs which are not currently scheduled don't need flushing,
> and we can't know which ones those are from within the guest.

There are other potential advantages as well:
1. When we need to flush with a large CPU mask, the hypercall mechanism can obviously
minimize the number of intercepts.
2. There is no instruction emulation in the hypercall path. 
> 
> I agree that disabling PCID for paravirt flush users is a good option for
> now; let's get this merged and tested without the additional complexity
> and do another round afterwards.
> 
> >
> > Also, can you share the benchmark you used for these patches?
> 
> I didn't do much benchmarking while writing the patchset; mostly I was
> running the attached dumb thrasher (32 pthreads doing mmap/munmap). On a
> 16-vCPU Hyper-V 2016 guest I get the following (I just re-did the test
> with 4.12-rc1):
> 
> Before the patchset:
> # time ./pthread_mmap ./randfile
> 
> real	3m33.118s
> user	0m3.698s
> sys	3m16.624s
> 
> After the patchset:
> # time ./pthread_mmap ./randfile
> 
> real	2m19.920s
> user	0m2.662s
> sys	2m9.948s
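
For reference, a minimal sketch of the kind of thrasher described above
(this is not the attached program; thread count, iteration count and the
access pattern are made up):

#include <fcntl.h>
#include <pthread.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#define NTHREADS   32
#define ITERATIONS 10000

static int fd;
static size_t len;

/* Each thread repeatedly maps the file, touches a few pages and unmaps it,
 * generating plenty of remote TLB-flush traffic. */
static void *thrash(void *arg)
{
	(void)arg;
	for (int i = 0; i < ITERATIONS; i++) {
		char *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);

		if (p == MAP_FAILED)
			continue;
		for (size_t off = 0; off < len; off += 64 * 4096)
			(void)*(volatile char *)(p + off);
		munmap(p, len);
	}
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t th[NTHREADS];
	struct stat st;

	if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0 || fstat(fd, &st))
		return 1;
	len = st.st_size;

	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&th[i], NULL, thrash, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(th[i], NULL);
	return 0;
}
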
> 
> K. Y.'s guys at Microsoft did additional testing of the patchset on
> different Hyper-V deployments, including Azure; they may share their
> findings too.

Our testing was mostly focused on stability and correctness. For the benchmarks we ran
(microbenchmarks for storage and networking), we did see improvements across the board.

Regards,

K. Y

> 
> --
>   Vitaly
