Message-ID: <20170922121640.GA29589@amt.cnet>
Date:   Fri, 22 Sep 2017 09:16:40 -0300
From:   Marcelo Tosatti <mtosatti@...hat.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>, mingo@...hat.com,
        kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [patch 3/3] x86: kvm guest side support for KVM_HC_RT_PRIO
 hypercall

On Fri, Sep 22, 2017 at 12:00:05PM +0200, Peter Zijlstra wrote:
> On Thu, Sep 21, 2017 at 10:10:41PM -0300, Marcelo Tosatti wrote:
> > When executing guest vcpu-0 with FIFO:1 priority, which is necessary
> > to deal with the following situation:
> > 
> > VCPU-0 (housekeeping VCPU)              VCPU-1 (realtime VCPU)
> > 
> > raw_spin_lock(A)
> > interrupted, schedule task T-1          raw_spin_lock(A) (spin)
> > 
> > raw_spin_unlock(A)
> > 
> > Certain operations must interrupt guest vcpu-0 (see trace below).
> 
> Those traces don't make any sense. All they include is kvm_exit and you
> can't tell anything from that.

Hi Peter,

OK, let's describe what's happening:

With the QEMU emulator thread and vcpu-0 sharing a physical CPU
(a configuration requested by several NFV customers to improve
guest packing), the following occurs when the guest executes
this pattern:

		1. submit IO.
		2. busy spin.

Hang trace
==========

Without FIFO priority:

qemu-kvm-6705  [002] ....1.. 767785.648964: kvm_exit: reason
IO_INSTRUCTION rip 0xe8fe info 1f00039 0
qemu-kvm-6705  [002] ....1.. 767785.648965: kvm_exit: reason
IO_INSTRUCTION rip 0xe911 info 3f60008 0
qemu-kvm-6705  [002] ....1.. 767785.648968: kvm_exit: reason
IO_INSTRUCTION rip 0x8984 info 608000b 0
qemu-kvm-6705  [002] ....1.. 767785.648971: kvm_exit: reason
IO_INSTRUCTION rip 0xb313 info 1f70008 0
qemu-kvm-6705  [002] ....1.. 767785.648974: kvm_exit: reason
IO_INSTRUCTION rip 0xb514 info 3f60000 0
qemu-kvm-6705  [002] ....1.. 767785.648977: kvm_exit: reason
PENDING_INTERRUPT rip 0x8052 info 0 0
qemu-kvm-6705  [002] ....1.. 767785.648980: kvm_exit: reason
IO_INSTRUCTION rip 0xeee6 info 200040 0
qemu-kvm-6705  [002] ....1.. 767785.648999: kvm_exit: reason
EPT_MISCONFIG rip 0x2120 info 0 0

The emulator thread is able to preempt the qemu vcpu0 thread,
since both run at SCHED_NORMAL priority.

With FIFO priority:

Now run qemu vcpu0 at SCHED_FIFO priority, which is necessary to
avoid the following scenario:

(*)
VCPU-0 (housekeeping VCPU)              VCPU-1 (realtime VCPU)
 
raw_spin_lock(A)
interrupted, schedule task T-1          raw_spin_lock(A) (spin)
 
raw_spin_unlock(A)

And the following code pattern by vcpu0:

		1. submit IO.
		2. busy spin.

The emulator thread is unable to preempt the vcpu0 thread
(vcpu0 busy-spinning at SCHED_FIFO, emulator thread at SCHED_NORMAL),
and you get a hang at boot as follows:

qemu-kvm-7636  [002] ....1.. 768218.205065: kvm_exit: reason
IO_INSTRUCTION rip 0xb313 info 1f70008 0
qemu-kvm-7636  [002] ....1.. 768218.205068: kvm_exit: reason
IO_INSTRUCTION rip 0x8984 info 608000b 0
qemu-kvm-7636  [002] ....1.. 768218.205071: kvm_exit: reason
IO_INSTRUCTION rip 0xb313 info 1f70008 0
qemu-kvm-7636  [002] ....1.. 768218.205074: kvm_exit: reason
IO_INSTRUCTION rip 0x8984 info 608000b 0
qemu-kvm-7636  [002] ....1.. 768218.205077: kvm_exit: reason
IO_INSTRUCTION rip 0xb313 info 1f70008 0

So to fix this problem, the patchset raises the priority of the
vcpu0 thread (addressing (*)) only while the guest holds spinlocks,
dropping it back afterwards so the emulator thread can run.

Does that make sense now?

> 
> > To fix this issue, only change guest vcpu-0 to FIFO priority
> > on spinlock critical sections (see patch).
> 
> This doesn't make sense. So you're saying that if you run all VCPUs as
> FIFO things come apart? Why?

Please see above.

> And why can't they still come apart when the guest holds a spinlock?

Hopefully the above makes sense.
