Date:   Tue, 26 Sep 2017 20:59:19 -0300
From:   Marcelo Tosatti <mtosatti@...hat.com>
To:     Jan Kiszka <jan.kiszka@...mens.com>
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [patch 0/3] KVM KVM_HC_RT_PRIO hypercall support

On Fri, Sep 22, 2017 at 08:23:02AM +0200, Jan Kiszka wrote:
> On 2017-09-22 03:19, Marcelo Tosatti wrote:
> > On Thu, Sep 21, 2017 at 07:45:32PM +0200, Jan Kiszka wrote:
> >> On 2017-09-21 13:38, Marcelo Tosatti wrote:
> >>> Executing guest vcpu-0 with FIFO:1 priority is necessary to deal
> >>> with the following situation:
> >>>
> >>> VCPU-0 (housekeeping VCPU)              VCPU-1 (realtime VCPU)
> >>>
> >>> raw_spin_lock(A)
> >>> interrupted, schedule task T-1          raw_spin_lock(A) (spin)
> >>>
> >>> raw_spin_unlock(A)
> >>>
> >>> Certain operations must interrupt guest vcpu-0 (see trace below).
> >>>
> >>> To fix this issue, raise guest vcpu-0 to FIFO priority only during
> >>> spinlock critical sections (see patch).
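> >>>
> >>> For illustration, the guest side is shaped roughly as follows (a
> >>> sketch only: the wrapper names and the hypercall number below are
> >>> placeholders, not the actual patch):
> >>>
> >>> #include <linux/kvm_para.h>   /* kvm_hypercall1() */
> >>> #include <linux/spinlock.h>
> >>>
> >>> #define KVM_HC_RT_PRIO 13     /* placeholder hypercall number */
> >>>
> >>> static inline void rt_prio_spin_lock(raw_spinlock_t *lock)
> >>> {
> >>>         kvm_hypercall1(KVM_HC_RT_PRIO, 1); /* request FIFO boost */
> >>>         raw_spin_lock(lock);
> >>> }
> >>>
> >>> static inline void rt_prio_spin_unlock(raw_spinlock_t *lock)
> >>> {
> >>>         raw_spin_unlock(lock);
> >>>         kvm_hypercall1(KVM_HC_RT_PRIO, 0); /* drop the boost */
> >>> }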
> >>>
> >>> Hang trace
> >>> ==========
> >>>
> >>> Without FIFO priority:
> >>>
> >>> qemu-kvm-6705  [002] ....1.. 767785.648964: kvm_exit: reason IO_INSTRUCTION rip 0xe8fe info 1f00039 0
> >>> qemu-kvm-6705  [002] ....1.. 767785.648965: kvm_exit: reason IO_INSTRUCTION rip 0xe911 info 3f60008 0
> >>> qemu-kvm-6705  [002] ....1.. 767785.648968: kvm_exit: reason IO_INSTRUCTION rip 0x8984 info 608000b 0
> >>> qemu-kvm-6705  [002] ....1.. 767785.648971: kvm_exit: reason IO_INSTRUCTION rip 0xb313 info 1f70008 0
> >>> qemu-kvm-6705  [002] ....1.. 767785.648974: kvm_exit: reason IO_INSTRUCTION rip 0xb514 info 3f60000 0
> >>> qemu-kvm-6705  [002] ....1.. 767785.648977: kvm_exit: reason PENDING_INTERRUPT rip 0x8052 info 0 0
> >>> qemu-kvm-6705  [002] ....1.. 767785.648980: kvm_exit: reason IO_INSTRUCTION rip 0xeee6 info 200040 0
> >>> qemu-kvm-6705  [002] ....1.. 767785.648999: kvm_exit: reason EPT_MISCONFIG rip 0x2120 info 0 0
> >>>
> >>> With FIFO priority:
> >>>
> >>> qemu-kvm-7636  [002] ....1.. 768218.205065: kvm_exit: reason IO_INSTRUCTION rip 0xb313 info 1f70008 0
> >>> qemu-kvm-7636  [002] ....1.. 768218.205068: kvm_exit: reason IO_INSTRUCTION rip 0x8984 info 608000b 0
> >>> qemu-kvm-7636  [002] ....1.. 768218.205071: kvm_exit: reason IO_INSTRUCTION rip 0xb313 info 1f70008 0
> >>> qemu-kvm-7636  [002] ....1.. 768218.205074: kvm_exit: reason IO_INSTRUCTION rip 0x8984 info 608000b 0
> >>> qemu-kvm-7636  [002] ....1.. 768218.205077: kvm_exit: reason IO_INSTRUCTION rip 0xb313 info 1f70008 0
> >>> ..
> >>>
> >>> Performance numbers (kernel compilation with make -j2)
> >>> ======================================================
> >>>
> >>> With hypercall:    4:40
> >>> Without hypercall: 3:38
> >>>
> >>> Note that for NFV workloads spinlock performance is not relevant,
> >>> since DPDK should not enter the kernel (and housekeeping vcpu
> >>> performance is far from a key factor).
> >>>
> >>> Signed-off-by: Marcelo Tosatti <mtosatti@...hat.com>
> >>>
> >>
> >> That sounds familiar, though not quite the same. :)
> >>
> >> http://git.kiszka.org/?p=linux-kvm.git;a=shortlog;h=refs/heads/queues/paravirt-sched
> >> (paper: http://lwn.net/images/conf/rtlws11/papers/proc/p18.pdf)
> >>
> >> I suppose your goal is not to have the host follow the guest
> >> scheduler priority completely, but only to provide priority ceiling
> >> for such short critical sections. Still, it may be useful to think
> >> ahead about future extensions when actually introducing such an
> >> interface.
> > 
> > Hi Jan!
> > 
> > Hum... I'll take a look at your interface/paper and get back to you.
> > 
> >> But shouldn't there be some limit on the maximum prio the guest can
> >> select?
> > 
> > The SCHED_FIFO prio is fixed, selected when QEMU starts. Do you
> > envision any use case other than a fixed priority value selected at
> > QEMU initialization?
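> >
> > For reference, setting a fixed FIFO:1 on a vcpu thread from the host
> > side is a one-liner; a minimal sketch (the tid lookup from QEMU is
> > omitted and the function name is made up):
> >
> > #include <sched.h>
> > #include <stdio.h>
> > #include <sys/types.h>
> >
> > /* Give an already-known vcpu thread SCHED_FIFO priority 1. */
> > static int set_vcpu_fifo(pid_t vcpu_tid)
> > {
> >         struct sched_param p = { .sched_priority = 1 };
> >
> >         if (sched_setscheduler(vcpu_tid, SCHED_FIFO, &p) < 0) {
> >                 perror("sched_setscheduler");
> >                 return -1;
> >         }
> >         return 0;
> > }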
> 
> Oh, indeed, this is a pure prio-ceiling variant with host-defined
> ceiling value.
> 
> But it's very inefficient to use a hypercall for entering and leaving
> each and every section. I would strongly recommend using a lazy scheme
> where the guest writes the desired state into a shared memory page, and
> the host only evaluates that prior to taking a scheduling decision, or
> at least only on real vmexits. We're using such a scheme successfully to
> accelerate the fast path of prio-ceiling for pthread mutexes in the
> Xenomai real-time extension.
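>
> For illustration, such a scheme could look roughly like this (a sketch
> only; the shared-page layout and the host helpers are made-up names,
> not an existing interface):
>
> #include <linux/compiler.h>   /* READ_ONCE()/WRITE_ONCE() */
> #include <linux/kvm_host.h>   /* struct kvm_vcpu */
>
> /* Layout of the shared page; a single flag is enough for a ceiling. */
> struct rt_prio_shared {
>         unsigned int want_fifo; /* guest sets 1 in critical sections */
> };
>
> /* Hypothetical host helpers that switch the vcpu thread's policy. */
> void boost_vcpu_to_fifo(struct kvm_vcpu *vcpu);
> void unboost_vcpu(struct kvm_vcpu *vcpu);
>
> /* Guest side: plain stores to the shared page, no vmexit. */
> static inline void rt_prio_enter(struct rt_prio_shared *sp)
> {
>         WRITE_ONCE(sp->want_fifo, 1);
> }
>
> static inline void rt_prio_leave(struct rt_prio_shared *sp)
> {
>         WRITE_ONCE(sp->want_fifo, 0);
> }
>
> /* Host side, evaluated only on a real vmexit, before scheduling. */
> static void rt_prio_sync(struct kvm_vcpu *vcpu, struct rt_prio_shared *sp)
> {
>         if (READ_ONCE(sp->want_fifo))
>                 boost_vcpu_to_fifo(vcpu);
>         else
>                 unboost_vcpu(vcpu);
> }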

Yes, a faster scheme was envisioned, but not developed.
