Message-ID: <YpTngZ5Qr0KIvL0H@xz-m1.local>
Date: Mon, 30 May 2022 11:49:21 -0400
From: Peter Xu <peterx@...hat.com>
To: zhenwei pi <pizhenwei@...edance.com>
Cc: David Hildenbrand <david@...hat.com>, Jue Wang <juew@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>, jasowang@...hat.com,
LKML <linux-kernel@...r.kernel.org>,
Linux MM <linux-mm@...ck.org>, mst@...hat.com,
HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@....com>, qemu-devel@...gnu.org,
virtualization@...ts.linux-foundation.org
Subject: Re: Re: [PATCH 0/3] recover hardware corrupted page by virtio balloon
On Mon, May 30, 2022 at 07:33:35PM +0800, zhenwei pi wrote:
> A VM uses RAM backed by 2M huge pages. Once an MCE (@HVAy in [HVAx,HVAz))
> occurs, the whole 2M ([HVAx,HVAz)) on the hypervisor side becomes
> inaccessible, but the guest poisons only the 4K page (@GPAy in
> [GPAx,GPAz)), so it may hit another 511 MCEs ([GPAx,GPAz) except GPAy).
> This is the worst case, so I want to add '__le32 corrupted_pages' to
> struct virtio_balloon_config; it is used in the next step: reporting
> 512 * 4K 'corrupted_pages' to the guest, so the guest has a chance to
> isolate the other 511 pages ahead of time. Since the guest actually loses
> the whole 2M, fixing up 512*4K seems to help significantly.
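(For context, the proposal above amounts to a config-space change roughly
along the lines of the sketch below. The existing field names follow
include/uapi/linux/virtio_balloon.h (legacy union member omitted for
brevity); the placement of the new field is only my guess, not what the
series actually does.)

struct virtio_balloon_config {
	/* Number of pages host wants Guest to give up. */
	__le32 num_pages;
	/* Number of pages we've actually got in balloon. */
	__le32 actual;
	/* Free page hint command id, readonly by guest. */
	__le32 free_page_hint_cmd_id;
	/* Stores PAGE_POISON if page poisoning is in use. */
	__le32 poison_val;
	/* Proposed: number of corrupted 4K pages the host wants the
	 * guest to isolate, e.g. 512 for one poisoned 2M huge page. */
	__le32 corrupted_pages;
};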
It sounds hackish to teach a virtio device to assume one page will always
be poisoned at huge page granularity. That's a limitation of the host
kernel, not of virtio itself.
E.g. there are ongoing upstream efforts to enable double mapping of
hugetlbfs pages, so that hugetlb pages can also be mapped in 4k. That
opens up the possibility of doing page poisoning on huge pages at 4k
granularity too. Once that is ready the assumption can go away, and that
does sound like a better approach to this problem.
>
> >
> > I assume when talking about "the performance memory drops a lot", you
> > imply that this patch set can mitigate that performance drop?
> >
> > But why do you see a performance drop? Because we might lose some
> > possible THP candidates (in the host or the guest) and you want to plug
> > those holes? I assume you'll see a performance drop simply because
> > poisoning memory is expensive, including migrating pages around on CE.
> >
> > If you have some numbers to share, especially before/after this change,
> > that would be great.
> >
>
> The CE storm leads to 2 problems I have seen:
> 1. The memory bandwidth drops to 10%~20% of normal, and the cycles per
> instruction of the CPU increase a lot.
> 2. The THR interrupt (see /proc/interrupts) fires frequently, and the CPU
> has to spend a lot of time handling the IRQs.
I have no real knowledge of CMCI, but if 2) is true then I'm wondering
whether it's necessary to handle the interrupts that frequently. When I
was reading the Intel CMCI vector handler I stumbled over this comment:
/*
 * The interrupt handler. This is called on every event.
 * Just call the poller directly to log any events.
 * This could in theory increase the threshold under high load,
 * but doesn't for now.
 */
static void intel_threshold_interrupt(void)
I think that matches what I was thinking... I mean, for 2) I'm not sure
whether it should be seen as a CMCI problem; potentially it could be
mitigated by adjusting the CMCI threshold dynamically.
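To make that concrete, here is a rough sketch of the idea (mine, not code
that exists in the kernel; the function name and the policy for picking a
new threshold are made up, only the MSR layout follows the SDM): raise the
corrected-error count threshold in IA32_MCi_CTL2 (bits 14:0) for a bank
that is generating a CE storm, so the threshold interrupt fires less often.

#include <asm/msr.h>
#include <asm/mce.h>

/* Hypothetical helper: bump the CMCI threshold of one bank.  Must run
 * on the CPU that owns the bank, e.g. from the CMCI handler itself. */
static void cmci_raise_threshold(int bank, u64 new_threshold)
{
	u64 val;

	/* Bits 14:0 of IA32_MCi_CTL2 hold the error-count threshold. */
	if (new_threshold > MCI_CTL2_CMCI_THRESHOLD_MASK)
		new_threshold = MCI_CTL2_CMCI_THRESHOLD_MASK;

	rdmsrl(MSR_IA32_MCx_CTL2(bank), val);
	val &= ~MCI_CTL2_CMCI_THRESHOLD_MASK;	/* clear old threshold */
	val |= new_threshold;			/* interrupt after N errors */
	wrmsrl(MSR_IA32_MCx_CTL2(bank), val);
}

The handler could call something like this when it notices a storm, and
lower the threshold again once the storm subsides.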
--
Peter Xu