Message-ID: <ac3fc5b9-d09c-5fb6-998d-f7c655d7fa00@redhat.com>
Date:   Fri, 6 May 2022 18:28:11 +0200
From:   David Hildenbrand <david@...hat.com>
To:     zhenwei pi <pizhenwei@...edance.com>,
        Naoya Horiguchi <naoya.horiguchi@...ux.dev>
Cc:     akpm@...ux-foundation.org, naoya.horiguchi@....com,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Wu Fengguang <fengguang.wu@...el.com>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Jason Wang <jasowang@...hat.com>,
        "virtualization@...ts.linux-foundation.org" 
        <virtualization@...ts.linux-foundation.org>
Subject: Re: [PATCH 3/4] mm/memory-failure.c: optimize hwpoison_filter

On 06.05.22 15:38, zhenwei pi wrote:
> 
> 
> On 5/6/22 16:59, Naoya Horiguchi wrote:
>> On Fri, Apr 29, 2022 at 10:22:05PM +0800, zhenwei pi wrote:
>>> In the memory failure procedure, hwpoison_filter has higher priority: if
>>> hwpoison_filter() filters out the error event, there is no need to do any
>>> further work.
>>
>> Could you clarify what problem you are trying to solve (what does
>> "optimize" mean in this context or what is the benefit)?
>>
> 
> OK. The background of this work:
> As is well known, the memory failure mechanism handles memory corruption 
> events and tries to send SIGBUS to the user process that is using the 
> corrupted page.
> 
> For the virtualization case, QEMU catches the SIGBUS and tries to inject an 
> MCE into the guest, and the guest handles the memory failure again. Thus the 
> guest sees only minimal impact from the hardware memory corruption.
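
For context, a rough (non-QEMU) sketch of that SIGBUS hand-off on the host
side; BUS_MCEERR_AO/BUS_MCEERR_AR and si_addr are the real kernel ABI, while
inject_mce_into_guest() is only a placeholder:

#define _GNU_SOURCE
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

static void inject_mce_into_guest(void *hva)
{
    /* placeholder: translate the HVA to a GPA and raise an MCE in the guest */
}

static void sigbus_handler(int sig, siginfo_t *si, void *ctx)
{
    /* The kernel also reports the affected range via si_addr_lsb. */
    if (si->si_code == BUS_MCEERR_AO || si->si_code == BUS_MCEERR_AR)
        inject_mce_into_guest(si->si_addr);
    else
        abort();    /* some other, unexpected SIGBUS */
}

int main(void)
{
    struct sigaction sa = {
        .sa_sigaction = sigbus_handler,
        .sa_flags = SA_SIGINFO,
    };

    sigaction(SIGBUS, &sa, NULL);
    pause();    /* in a real VMM this would be the main loop */
    return 0;
}
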
> 
> The further steps I'm working on:
> 1, try to modify the code so that the poisoned-page count is decreased in a 
> single place (mm/memory-failure.c: simplify num_poisoned_pages_dec in this 
> series).
> 
> 2, try to use page_handle_poison() to handle SetPageHWPoison() and 
> num_poisoned_pages_inc() together. It would be best to call 
> num_poisoned_pages_inc() in a single place too. I'm not sure whether this is 
> possible; please correct me if I have misunderstood.
> 
> 3, introduce a memory failure notifier list in memory-failure.c: notify the 
> corrupted PFN to whoever registers on this list (see the sketch after this 
> list).
> If I can complete parts [1] and [2], [3] will be quite easy (just call the 
> notifier list after increasing the poisoned-page count).
> 
> 4, introduce a memory recover VQ for the memory balloon device, and register 
> on the memory failure notifier list. While the guest kernel handles a memory 
> failure, the balloon device gets notified through the notifier list and tells 
> the host to recover the corrupted PFN (GPA) via the new VQ.
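
For item 3, a minimal sketch of what such a notifier list could look like on
top of the existing notifier-chain infrastructure; the names
memory_failure_notify_list and memory_failure_register_notifier(), as well as
the exact call site, are made up:

/* mm/memory-failure.c (sketch only) */
#include <linux/notifier.h>
#include <linux/export.h>

static BLOCKING_NOTIFIER_HEAD(memory_failure_notify_list);

int memory_failure_register_notifier(struct notifier_block *nb)
{
    return blocking_notifier_chain_register(&memory_failure_notify_list, nb);
}
EXPORT_SYMBOL_GPL(memory_failure_register_notifier);

int memory_failure_unregister_notifier(struct notifier_block *nb)
{
    return blocking_notifier_chain_unregister(&memory_failure_notify_list, nb);
}
EXPORT_SYMBOL_GPL(memory_failure_unregister_notifier);

/*
 * Would be called from memory_failure(), e.g. right after
 * num_poisoned_pages_inc(), passing the corrupted PFN to all listeners
 * (such as a balloon driver that forwards it to the host via a new VQ).
 */
static void memory_failure_notify(unsigned long pfn)
{
    blocking_notifier_call_chain(&memory_failure_notify_list, pfn, NULL);
}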

Most probably you'd want to do that asynchronously, and un-poison the page
once the callback succeeds.

> 
> 5, the host side remaps the corrupted page (HVA) and tells the guest side to 
> unpoison the PFN (GPA). Then the guest fixes up the corrupted page (GPA) 
> dynamically.

I think QEMU already does that during reboots. Now it would be triggered
by the guest for individual pages.
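
For reference, for anonymous guest memory the per-page remap boils down to
roughly the following sketch (this is not QEMU's actual qemu_ram_remap();
file-backed and hugetlb memory need more care):

#include <sys/mman.h>
#include <stdio.h>
#include <stdlib.h>

/* hva is assumed to be page-aligned. */
static void remap_poisoned_hva(void *hva, size_t page_size)
{
    /* Map a fresh zeroed anonymous page over the poisoned one. */
    void *p = mmap(hva, page_size, PROT_READ | PROT_WRITE,
                   MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (p != hva) {
        perror("mmap");
        abort();
    }
    /* The old poisoned page is discarded; the guest can now unpoison the GPA. */
}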

> 
> Because [4] and [5] are related to the balloon device, I also CC Michael, 
> David and Jason.

Doesn't sound too crazy to me, although it's a shame that we always
have to use virtio-balloon for such fairly balloon-unrelated things.

-- 
Thanks,

David / dhildenb
