Message-ID: <6986a8dd-7211-fb4d-1d66-5b203cad1aab@redhat.com>
Date: Wed, 11 May 2022 18:22:41 +0200
From: David Hildenbrand <david@...hat.com>
To: HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@....com>
Cc: Miaohe Lin <linmiaohe@...wei.com>,
Oscar Salvador <osalvador@...e.de>,
Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Yang Shi <shy828301@...il.com>,
Muchun Song <songmuchun@...edance.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH v1 0/4] mm, hwpoison: improve handling workload
related to hugetlb and memory_hotplug
On 11.05.22 18:10, HORIGUCHI NAOYA(堀口 直也) wrote:
> On Wed, May 11, 2022 at 05:11:17PM +0200, David Hildenbrand wrote:
>> On 09.05.22 12:53, Miaohe Lin wrote:
>>> On 2022/5/9 17:58, Oscar Salvador wrote:
>>>> On Mon, May 09, 2022 at 05:04:54PM +0800, Miaohe Lin wrote:
>>>>>>> So that leaves us with either
>>>>>>>
>>>>>>> 1) Fail offlining -> no need to care about reonlining
>>>>>
>>>>> Maybe failing offlining would be the better alternative, as we can get rid of many
>>>>> races between memory failure and memory offlining? But no strong opinion. :)
>>>>
>>>> If taking care of those races is not a herculean effort, I'd go with
>>>> allowing offlining + disallowing re-onlining,
>>>> mainly because of memory RAS stuff.
>>>
>>> This does make sense to me. Thanks. We can try to solve those races if
>>> offlining + disallowing re-onlining is applied. :)
>>>
>>>>
>>>> Now, as to the re-onlining thing, we'll have to come up with a way to check
>>>> whether a section contains hwpoisoned pages, so we do not have to go
>>>> and check every single page, as that would be really suboptimal.
>>>
>>> Yes, we need a stable and cheap way to do that.
>>
>> My simplistic approach would be a flag/indicator in the memory block devices
>> that indicates whether any page in the memory block was hwpoisoned. It's easy to
>> check during memory onlining and fail the onlining.
>>
>> diff --git a/drivers/base/memory.c b/drivers/base/memory.c
>> index 084d67fd55cc..3d0ef812e901 100644
>> --- a/drivers/base/memory.c
>> +++ b/drivers/base/memory.c
>> @@ -183,6 +183,9 @@ static int memory_block_online(struct memory_block *mem)
>>  	struct zone *zone;
>>  	int ret;
>> 
>> +	if (mem->hwpoisoned)
>> +		return -EHWPOISON;
>> +
>>  	zone = zone_for_pfn_range(mem->online_type, mem->nid, mem->group,
>>  				  start_pfn, nr_pages);
>
> Thanks for the idea, a simple flag could work if we don't have to consider
> unpoison. If we need to consider unpoison, we need to remember the last
> hwpoisoned page in the memory block, so mem->hwpoisoned should be a
> counter of hwpoisoned pages.
Right, but unpoisoning + memory offlining + memory onlining is an even
more extreme use case that I don't think we have to bother about.
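
If we ever do want the counter variant, it could look something like
this (a completely untested sketch; the nr_hwpoison field and the helper
names are made up here, and the pfn -> memory block lookup would live in
drivers/base/memory.c):

/*
 * Sketch only: assumes a new "atomic_long_t nr_hwpoison" in struct
 * memory_block. memory_failure() would call the inc variant when
 * poisoning a page, unpoison_memory() the dec variant.
 */
void memblk_nr_poison_inc(unsigned long pfn)
{
	struct memory_block *mem = find_memory_block_by_id(pfn_to_block_id(pfn));

	if (mem)
		atomic_long_inc(&mem->nr_hwpoison);
}

void memblk_nr_poison_dec(unsigned long pfn)
{
	struct memory_block *mem = find_memory_block_by_id(pfn_to_block_id(pfn));

	if (mem)
		atomic_long_dec(&mem->nr_hwpoison);
}

memory_block_online() would then check
atomic_long_read(&mem->nr_hwpoison) instead of the simple flag.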
>
>>
>>
>> Once the problematic DIMM actually gets unplugged, the memory block devices
>> get removed as well. So when hotplugging a new DIMM into the same
>> location, we could online that memory again.
>
> What about PG_hwpoison flags? Are struct pages also freed and reallocated
> during the actual DIMM replacement?
Once memory is offline, the memmap is stale and no longer
trustworthy. It gets reinitialized during memory onlining -- so any
previous PG_hwpoison is overwritten at least there. In some setups, we
even poison the whole memmap via page_init_poison() during memory offlining.

Apart from that, we should be freeing the memmap in all relevant cases
when removing memory. I remember there are a couple of corner cases, but
we don't really have to care about those.
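
For completeness, the offlining-side poisoning I mean is (from memory,
so take the exact details with a grain of salt) roughly what
remove_pfn_range_from_zone() in mm/memory_hotplug.c does today when page
init poisoning (CONFIG_DEBUG_VM setups) is active:

	/* Poison struct pages because they are now uninitialized again. */
	for (pfn = start_pfn; pfn < end_pfn; pfn += cur_nr_pages) {
		cur_nr_pages = min(end_pfn - pfn,
				   SECTION_ALIGN_UP(pfn + 1) - pfn);
		page_init_poison(pfn_to_page(pfn),
				 sizeof(struct page) * cur_nr_pages);
	}

So even a stale PG_hwpoison would be shredded there, and at the latest
when the memmap is reinitialized during the next onlining.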
--
Thanks,
David / dhildenb