Message-ID: <20210324160326.555b4464@alex-virtual-machine>
Date: Wed, 24 Mar 2021 16:03:26 +0800
From: Aili Yao <yaoaili@...gsoft.com>
To: <qemu-devel@...gnu.org>
CC: "Luck, Tony" <tony.luck@...el.com>, Borislav Petkov <bp@...en8.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"hpa@...or.com" <hpa@...or.com>, "x86@...nel.org" <x86@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"yangfeng1@...gsoft.com" <yangfeng1@...gsoft.com>,
<sunhao2@...gsoft.com>, <yaoaili@...gsoft.com>
Subject: Re: [PATCH v2] x86/mce: fix wrong no-return-ip logic in
do_machine_check()
On Wed, 24 Mar 2021 10:59:50 +0800
Aili Yao <yaoaili@...gsoft.com> wrote:
> On Wed, 24 Feb 2021 10:39:21 +0800
> Aili Yao <yaoaili@...gsoft.com> wrote:
>
> > On Tue, 23 Feb 2021 16:12:43 +0000
> > "Luck, Tony" <tony.luck@...el.com> wrote:
> >
> > > > What I think is that qemu has no easy way to get the MCE signature from the host, and currently no method
> > > > exists for this, so qemu treats every AR case as having no RIPV; doing more is better than doing less.
> > >
> > > RIPV would be important in the guest in the case where the guest can fix the problem that caused
> > > the machine check and return to the failed instruction to continue.
> > >
> > > I think the only case where this happens is a fault in a read-only page mapped from a file (typically
> > > a code page, but it could be a data page). In this case memory_failure() unmaps the page with the poison,
> > > but Linux can recover by reading data from the file into a new page.
> > >
> > > Other cases we send SIGBUS (so go to the signal handler instead of to the faulting instruction).
> > >
> > > So it would be good if the state of RIPV could be added to the signal state sent to qemu. If that
> > > isn't possible, then this full recovery case turns into another SIGBUS case.
> >
> > This KVM/VM case of failed recovery for SRAR is just one scenario, I think.
> > If Intel guarantees that RIPV will always be set when a memory SRAR is triggered, then it's qemu's job to
> > set RIPV instead.
> > And if an SRAR can be triggered with RIPV cleared, then the same issue will also affect the host.
> >
> > Either way, I think it's better for the VM to know the real RIPV value; that needs more work in qemu and the kernel, if possible.
> >
> > Thanks
> > Aili Yao
>
> Adding the qemu list to this topic; this is a really bad issue.
>
> Issue report:
> When the VM receives an SRAR memory failure from the host, RIPV is always cleared; the VM then processes it and triggers a panic!
>
> Can any qemu maintainer fix this?
>
> Suggestion:
> qemu should get the true value of RIPV from the host, then inject it into the VM accordingly.
Sorry for my previous description; I may not have described the issue clearly.
I found this issue while doing memory SRAR tests for a KVM virtual machine. The steps are:
1. Inject one uncorrectable error at one specific memory address A.
2. A user process in the VM then accesses address A and triggers an MCE exception in the host.
3. In do_machine_check() the host kernel checks the related registers and does the recovery job via memory_failure();
4. Normally a BUS_MCEERR_AR SIGBUS is sent to the specific core that triggered this error.
5. Qemu then takes control and injects this event into the VM. All the information qemu can currently get is the
   error code BUS_MCEERR_AR and the virtual address; in qemu's inject function:
    if (code == BUS_MCEERR_AR) {
        status |= MCI_STATUS_AR | 0x134;
        mcg_status |= MCG_STATUS_EIPV;
    } else {
        status |= 0xc0;
        mcg_status |= MCG_STATUS_RIPV;
    }
So for the BUS_MCEERR_AR case, MCG_STATUS_RIPV is always left cleared.
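(To illustrate the suggestion above: if qemu had some way to learn the host's
RIPV value, which it has none of today, so the host_ripv flag below is purely
hypothetical, the injection could roughly become:

    if (code == BUS_MCEERR_AR) {
        status |= MCI_STATUS_AR | 0x134;
        mcg_status |= MCG_STATUS_EIPV;
        if (host_ripv)                      /* hypothetical new information */
            mcg_status |= MCG_STATUS_RIPV;  /* guest may return and recover */
    } else {
        status |= 0xc0;
        mcg_status |= MCG_STATUS_RIPV;
    }
)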
6. Then in the VM kernel, do_machine_check() hits this:

       if (!(m.mcgstatus & MCG_STATUS_RIPV))
           kill_current_task = 1;

   and goes to force_sig(SIGBUS) without calling memory_failure(),
   so the page is not marked hwpoison.
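   (For comparison, when RIPV is set the guest kernel takes the recovery
   path instead; roughly, simplified from queue_task_work() in
   arch/x86/kernel/cpu/mce/core.c, details vary by kernel version:

       if (kill_current_task)
           current->mce_kill_me.func = kill_me_now;    /* just force_sig(SIGBUS) */
       else
           current->mce_kill_me.func = kill_me_maybe;  /* calls memory_failure(),
                                                          marking the page
                                                          hwpoison */
   )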
7. The VM kernel wants to return to user mode and then process the SIGBUS signal.
   As SIGBUS is a fatal signal, the coredump-related work will be done.
8. The coredump then dumps the user-space mapped memory, including the error page.
9. The UE is triggered again, qemu takes control again and injects this UE event into the VM;
   this time the error is triggered from kernel code, so the VM panics.
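(Step 8 hits the poison because the ELF coredump walks and reads every user
page; roughly, simplified from dump_user_range() in fs/coredump.c on recent
kernels:

    for (addr = start; addr < start + len; addr += PAGE_SIZE) {
        struct page *page = get_dump_page(addr);

        if (page) {
            void *kaddr = kmap(page);

            stop = !dump_emit(cprm, kaddr, PAGE_SIZE);  /* reads the poisoned
                                                           page from kernel
                                                           context */
            kunmap(page);
            put_page(page);
        } else {
            stop = !dump_skip(cprm, PAGE_SIZE);
        }
    }

Since the guest never called memory_failure(), the page is still mapped and
get_dump_page() happily returns it.)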
I don't know how this issue can be fixed cleanly; maybe the qemu developers can help with this.
If qemu can fix this, that would be great!
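
In case anyone wants to reproduce step 1: on hosts with ACPI EINJ support
(CONFIG_ACPI_APEI_EINJ), an uncorrectable error can be injected through
debugfs. A rough sketch; the physical address is a placeholder and the
available error types differ by platform:

    /* Inject a "Memory Uncorrectable non-fatal" error via ACPI EINJ.
     * Run as root on the host; adjust param1 to a physical address that
     * actually backs the guest's memory. */
    #include <stdio.h>
    #include <stdlib.h>

    static void einj_write(const char *file, const char *val)
    {
        char path[128];
        FILE *f;

        snprintf(path, sizeof(path), "/sys/kernel/debug/apei/einj/%s", file);
        f = fopen(path, "w");
        if (!f) {
            perror(path);
            exit(1);
        }
        fputs(val, f);
        fclose(f);
    }

    int main(void)
    {
        einj_write("error_type", "0x10");            /* memory UC, non-fatal */
        einj_write("param1", "0x12345000");          /* physical address A   */
        einj_write("param2", "0xfffffffffffff000");  /* address mask         */
        einj_write("error_inject", "1");             /* trigger              */
        return 0;
    }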
--
Thanks!
Aili Yao