Message-ID: <cfa024924eb3be66f94a2c59e164b9a1fa16653e.camel@redhat.com>
Date: Thu, 28 Apr 2022 20:21:56 +0300
From: Maxim Levitsky <mlevitsk@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: syzbot <syzbot+a8ad3ee1525a0c4b40ec@...kaller.appspotmail.com>,
bp@...en8.de, dave.hansen@...ux.intel.com, hpa@...or.com,
jmattson@...gle.com, joro@...tes.org, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, mingo@...hat.com,
pbonzini@...hat.com, syzkaller-bugs@...glegroups.com,
tglx@...utronix.de, vkuznets@...hat.com, wanpengli@...cent.com,
x86@...nel.org
Subject: Re: [syzbot] WARNING in kvm_mmu_uninit_tdp_mmu (2)
On Thu, 2022-04-28 at 20:16 +0300, Maxim Levitsky wrote:
> On Thu, 2022-04-28 at 15:32 +0000, Sean Christopherson wrote:
> > On Tue, Apr 26, 2022, Maxim Levitsky wrote:
> > > I can reproduce this in a VM, by running and CTRL+C'in my ipi_stress test,
> >
> > Can you post your ipi_stress test? I'm curious to see if I can repro, and also
> > very curious as to what might be unique about your test. I haven't been able to
> > repro the syzbot test, nor have I been able to repro by killing VMs/tests.
> >
>
> This is the patch series (mostly an attempt to turn svm into a mini
> library, but I don't know if this is worth it).
> It was done so that ipi_stress could use nesting itself to wait for an IPI
> from within a nested guest. I usually don't use it.
>
> This is more or less how I was running it lately (I have a wrapper script):
>
>
> ./x86/run x86/ipi_stress.flat \
> -global kvm-pit.lost_tick_policy=discard \
> -machine kernel-irqchip=on -name debug-threads=on \
> \
> -smp 8 \
> -cpu host,x2apic=off,svm=off,-hypervisor \
> -overcommit cpu-pm=on \
> -m 4g -append "0 10000"
I forgot to mention: this should be run in a loop.
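A minimal sketch of such a loop (assuming a built kvm-unit-tests tree; the helper function name is my own, and the invocation is the one quoted above):

```shell
#!/bin/sh
# Repeat the ipi_stress invocation until one run fails, counting how
# many iterations passed before the problem reproduced.
run_once() {
    ./x86/run x86/ipi_stress.flat \
        -global kvm-pit.lost_tick_policy=discard \
        -machine kernel-irqchip=on -name debug-threads=on \
        -smp 8 \
        -cpu host,x2apic=off,svm=off,-hypervisor \
        -overcommit cpu-pm=on \
        -m 4g -append "0 10000"
}

i=0
while run_once; do
    i=$((i+1))
done
echo "reproduced after $i successful runs"
```

On a machine without the test binary the loop exits on the first iteration, so the counter simply reports how many clean runs preceded the failure.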
Best regards,
Maxim Levitsky
>
>
> It's not fully finished for upstream; I will get to it soon.
>
> 'cpu-pm=on' won't work for you, as it fails due to a non-atomic memslot
> update bug for which I have a small hack in QEMU; it is on my
> backlog to fix it correctly.
>
> Most likely cpu-pm=off will also reproduce it.
>
>
> The test was run in a guest; natively this doesn't seem to reproduce.
> The TDP MMU was used in both L0 and L1.
>
> Best regards,
> Maxim Levitsky