Message-ID: <27062739-74af-7deb-2486-45bb84888433@bytedance.com>
Date: Fri, 27 May 2022 10:22:23 +0800
From: zhenwei pi <pizhenwei@...edance.com>
To: "Michael S. Tsirkin" <mst@...hat.com>, akpm@...ux-foundation.org,
naoya.horiguchi@....com
Cc: david@...hat.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
jasowang@...hat.com, virtualization@...ts.linux-foundation.org,
pbonzini@...hat.com, peterx@...hat.com, qemu-devel@...gnu.org
Subject: Re: Re: [PATCH 3/3] virtio_balloon: Introduce memory recover
On 5/27/22 03:18, Michael S. Tsirkin wrote:
> On Fri, May 20, 2022 at 03:06:48PM +0800, zhenwei pi wrote:
>> Introduce a new queue, 'recover VQ'; this allows the guest to recover
>> hardware-corrupted pages:
>>
>> Guest               5.MF -> 6.RVQ FE          10.Unpoison page
>>                     /           \            /
>> -------------------+-------------+----------+-----------
>>                    |             |          |
>>                  4.MCE        7.RVQ BE  9.RVQ Event
>> QEMU            /                  \      /
>>           3.SIGBUS                  8.Remap
>>                 /
>> ----------------+------------------------------------
>>                 |
>>                 +--2.MF
>> Host           /
>>            1.HW error
>>
>> The workflow:
>> 1, A hardware page error occurs randomly.
>> 2, The host side handles the corrupted page via the memory-failure
>> mechanism and sends SIGBUS to the user process if early-kill is enabled.
>> 3, QEMU handles the SIGBUS; if the address belongs to guest RAM, then:
>> 4, QEMU tries to inject an MCE into the guest.
>> 5, The guest handles the memory failure again.
>>
>> Steps 1-5 have been supported for a long time; the next steps are
>> added by this patch (and the related driver patch):
>> 6, The guest balloon driver gets notified of the corrupted PFN and
>> sends a request to the host side via the Recover VQ frontend.
>> 7, QEMU handles the request from the Recover VQ backend, then:
>> 8, QEMU remaps the corrupted HVA to fix the memory failure, then:
>> 9, QEMU acks the result to the guest side via the Recover VQ.
>> 10, The guest unpoisons the page if the corrupted page was recovered
>> successfully.
>>
>> Then the guest fixes the HW page error dynamically without rebooting.
>>
>> Emulating an MCE with QEMU, the guest works fine:
>> mce: [Hardware Error]: Machine check events logged
>> Memory failure: 0x61646: recovery action for dirty LRU page: Recovered
>> virtio_balloon virtio5: recovered pfn 0x61646
>> Unpoison: Unpoisoned page 0x61646 by virtio-balloon
>> MCE: Killing stress:24502 due to hardware memory corruption fault at 7f5be2e5a010
>>
>> The 'HardwareCorrupted' counter in /proc/meminfo also shows 0 kB.
>>
>> Signed-off-by: zhenwei pi <pizhenwei@...edance.com>
>> ---
>>  drivers/virtio/virtio_balloon.c     | 243 ++++++++++++++++++++++++++++
>>  include/uapi/linux/virtio_balloon.h |  16 ++
>>  2 files changed, 259 insertions(+)
>>
>> diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
>> index f4c34a2a6b8e..f9d95d1d8a4d 100644
>> --- a/drivers/virtio/virtio_balloon.c
>> +++ b/drivers/virtio/virtio_balloon.c
>> @@ -52,6 +52,7 @@ enum virtio_balloon_vq {
>>  	VIRTIO_BALLOON_VQ_STATS,
>>  	VIRTIO_BALLOON_VQ_FREE_PAGE,
>>  	VIRTIO_BALLOON_VQ_REPORTING,
>> +	VIRTIO_BALLOON_VQ_RECOVER,
>>  	VIRTIO_BALLOON_VQ_MAX
>>  };
>>
>> @@ -59,6 +60,12 @@ enum virtio_balloon_config_read {
>>  	VIRTIO_BALLOON_CONFIG_READ_CMD_ID = 0,
>>  };
>>
>> +/* the request body to communicate with the host side */
>> +struct __virtio_balloon_recover {
>> +	struct virtio_balloon_recover vbr;
>> +	__virtio32 pfns[VIRTIO_BALLOON_PAGES_PER_PAGE];
>> +};
>> +
>
>
> I don't think this idea of passing 32 bit pfns is going to fly.
> What is wrong with just passing the pages normally as an s/g list?
> This is what is done for the hints at the moment.
>
> Neither should you use __virtio types for new functionality
> (they should all be __le), nor use __virtio for the struct name.
>
>
Sending the GPA/PFN from the guest to the host by passing the pages
normally as an s/g list is fine. But in this scenario the guest also
needs to get the status of each page back from the host side
(recovered? corrupted? failed to recover?).
For a normal page (e.g. 4K), the host can return the status almost
immediately. But when the guest RAM is a 2M hugetlb page, the host has
to keep requests pending until the guest has asked to recover all 512
4K sub-pages; only once the 2M hugepage is recovered (or fails to
recover) does the host return the 512 PFNs with their status to the
guest. So there can be up to 512 recover requests outstanding for a
single 2M huge page.
For example, the guest tries to recover a corrupted page:
struct scatterlist status_sg, page_sg, *sgs[2];
sg_init_one(&status_sg, status, sizeof(*status));
sgs[0] = &status_sg;
p = page_address(page);
sg_init_one(&page_sg, p, PAGE_SIZE);
sgs[1] = &page_sg;
virtqueue_add_sgs(recover_vq, sgs, 1, 1, status, GFP_ATOMIC); /* token must be non-NULL */
The host handles each 4K recover request on a 2M hugepage, and such a
request stays pending until the full 2M huge page is recovered (or
fails). To avoid too many pending requests in the virtqueue, I designed
it as in this patch (which should indeed use __le types): the PFN is
passed in the request body, and only a single IN request is used.
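
Roughly, the guest driver side would then look something like this
(only a sketch of the idea; the helper names, the __le32 field, the GFP
flags and the out/in split are illustrative, not the exact patch code):

/* request body as in the patch, but with __le32 per your comment */
struct __virtio_balloon_recover {
	struct virtio_balloon_recover vbr;
	__le32 pfns[VIRTIO_BALLOON_PAGES_PER_PAGE];
};

/* ask the host to recover a corrupted page, PFN carried in the body */
static int send_recover_request(struct virtqueue *recover_vq,
				struct __virtio_balloon_recover *out_vbr,
				unsigned long pfn)
{
	struct scatterlist sg;
	int err;

	out_vbr->pfns[0] = cpu_to_le32(pfn);
	sg_init_one(&sg, out_vbr, sizeof(*out_vbr));
	/* device-readable buffer: no page s/g entry, only the request body */
	err = virtqueue_add_outbuf(recover_vq, &sg, 1, out_vbr, GFP_KERNEL);
	if (!err)
		virtqueue_kick(recover_vq);
	return err;
}

/*
 * keep a single device-writable buffer posted; the host fills it with
 * PFNs + status once the whole (possibly 2M) page is resolved, so at
 * most one request stays pending instead of 512
 */
static int post_recover_response(struct virtqueue *recover_vq,
				 struct __virtio_balloon_recover *in_vbr)
{
	struct scatterlist sg;

	sg_init_one(&sg, in_vbr, sizeof(*in_vbr));
	return virtqueue_add_inbuf(recover_vq, &sg, 1, in_vbr, GFP_KERNEL);
}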
...
>> --- a/include/uapi/linux/virtio_balloon.h
>> +++ b/include/uapi/linux/virtio_balloon.h
>> @@ -37,6 +37,7 @@
>>  #define VIRTIO_BALLOON_F_FREE_PAGE_HINT	3 /* VQ to report free pages */
>>  #define VIRTIO_BALLOON_F_PAGE_POISON	4 /* Guest is using page poisoning */
>>  #define VIRTIO_BALLOON_F_REPORTING	5 /* Page reporting virtqueue */
>> +#define VIRTIO_BALLOON_F_RECOVER	6 /* Memory recover virtqueue */
>>
>>  /* Size of a PFN in the balloon interface. */
>>  #define VIRTIO_BALLOON_PFN_SHIFT 12
>
> Please get this feature recorded in the spec with the virtio TC.
> They will also ask you to supply minimal documentation.
>
Sure!
By the way, this feature depends on the memory and memory-failure
mechanisms; how about sending the spec change to the virtio TC after
Andrew and Naoya ack?
--
zhenwei pi