Message-ID: <43b663ae-2b6b-d6ca-b41f-a35247551953@redhat.com>
Date: Wed, 9 Aug 2017 22:24:32 +0200
From: David Hildenbrand <david@...hat.com>
To: Dmitry Vyukov <dvyukov@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>,
KVM list <kvm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>, P J P <ppandit@...hat.com>,
James Mattson <jmattson@...gle.com>,
Xiao Guangrong <guangrong.xiao@...ux.intel.com>,
Haozhong Zhang <haozhong.zhang@...el.com>,
Wanpeng Li <kernellwp@...il.com>,
Steve Rutherford <srutherford@...gle.com>
Cc: syzkaller <syzkaller@...glegroups.com>
Subject: Re: kvm: WARNING in kvm_arch_vcpu_ioctl_run
On 09.08.2017 19:07, Dmitry Vyukov wrote:
> Hello,
>
> syzkaller fuzzer has hit the following WARNING in kvm_arch_vcpu_ioctl_run.
> This is easily reproducible, and a reproducer is attached at the bottom.
> The report is on upstream commit
> 26c5cebfdb6ca799186f1e56be7d6f2480c5012c. This requires setting
> kvm-intel.unrestricted_guest=0 on the machine, with
> unrestricted_guest=1 the WARNING does not happen. Output of the
> program is:
>
> ret1=0 exit_reason=17 suberror=1
> ret2=0 exit_reason=8 suberror=65530
>
>
I wonder if the following is possible (the excerpt below shows where the
steps happen in the code):
1) we set vcpu->arch.pio.count != 0
2) we set vcpu->arch.complete_userspace_io = complete_emulated_pio;
   (in x86_emulate_instruction)
3) in kvm_arch_vcpu_ioctl_run(), we execute
   vcpu->arch.complete_userspace_io and set it to NULL.
4) complete_emulated_pio() calls complete_emulated_io(), which fails
   (in some different way) and leaves vcpu->arch.pio.count set.
5) we now have vcpu->arch.pio.count set but
   vcpu->arch.complete_userspace_io cleared.
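For reference, this is the relevant upstream code in
kvm_arch_vcpu_ioctl_run() (the same hunk the first patch below touches);
the step numbers in the comments map to the scenario above:

	if (unlikely(vcpu->arch.complete_userspace_io)) {
		int (*cui)(struct kvm_vcpu *) = vcpu->arch.complete_userspace_io;
		vcpu->arch.complete_userspace_io = NULL;	/* 3) handler cleared */
		r = cui(vcpu);		/* 4) may fail with pio.count still set */
		if (r <= 0)
			goto out;
	} else
		WARN_ON(vcpu->arch.pio.count || vcpu->mmio_needed);	/* 5) fires */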
The same should not apply to vcpu->mmio_needed, as we clear it before
calling complete_emulated_io() in complete_emulated_mmio().
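(Quoting complete_emulated_mmio() from memory, so roughly; the ordering
is the point:)

	if (vcpu->mmio_cur_fragment >= vcpu->mmio_nr_fragments) {
		/* cleared _before_ re-entering the emulator */
		vcpu->mmio_needed = 0;
		if (vcpu->mmio_is_write)
			return 1;
		vcpu->mmio_read_completed = 1;
		return complete_emulated_io(vcpu);
	}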
AFAIK, we cannot simply clear vcpu->arch.pio.count before calling
complete_emulated_io().
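That is because the emulator re-run consumes it; again from memory (so
roughly), emulator_pio_in_emulated() treats a non-zero pio.count as
"userspace has already filled in pio_data":

	if (vcpu->arch.pio.count)
		goto data_avail;
	...
data_avail:
		memcpy(val, vcpu->arch.pio_data, size * count);
		vcpu->arch.pio.count = 0;
		return 1;

Clearing pio.count up front would therefore make the re-entered
emulator issue the PIO again instead of picking up the completed data.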
Maybe something like this could help (untested, wild guess):
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 86242ee..60296fef 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7227,10 +7227,14 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
int (*cui)(struct kvm_vcpu *) = vcpu->arch.complete_userspace_io;
vcpu->arch.complete_userspace_io = NULL;
r = cui(vcpu);
+ if (!vcpu->arch.complete_userspace_io) {
+ /* no new handler -> everything handled */
+ vcpu->arch.pio.count = 0;
+ vcpu->mmio_needed = 0;
+ }
if (r <= 0)
goto out;
- } else
- WARN_ON(vcpu->arch.pio.count || vcpu->mmio_needed);
+ }
if (kvm_run->immediate_exit)
r = -EINTR;
Alternatively, something like this:
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 86242ee..58be388 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7127,9 +7127,15 @@ static inline int complete_emulated_io(struct kvm_vcpu *vcpu)
static int complete_emulated_pio(struct kvm_vcpu *vcpu)
{
+ int ret;
+
BUG_ON(!vcpu->arch.pio.count);
- return complete_emulated_io(vcpu);
+ ret = complete_emulated_io(vcpu);
+ if (ret && !vcpu->arch.complete_userspace_io)
+ vcpu->arch.pio.count = 0;
+
+ return ret;
}
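Of the two, the first variant catches both the pio and the mmio case in
one central place, while the second keeps the fix local to the pio
completion path.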
/*
I haven't tried to reproduce this on upstream yet (at least on my
Fedora 25 machine I can't trigger it), so these are only wild guesses.
--
Thanks,
David