Message-ID: <070697f1-83f5-ff8e-dfc0-2f99c98c448c@huawei.com>
Date: Tue, 2 Jul 2024 16:04:02 +0800
From: Miaohe Lin <linmiaohe@...wei.com>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: Rui Qi <qirui.001@...edance.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, <nao.horiguchi@...il.com>
Subject: Re: [PATCH] mm/memory-failure: allow memory allocation from emergency
reserves
On 2024/7/2 15:19, Andrew Morton wrote:
> On Sat, 29 Jun 2024 10:09:46 +0800 Miaohe Lin <linmiaohe@...wei.com> wrote:
>
>> On 2024/6/25 10:23, Rui Qi wrote:
>>> From: Rui Qi <qirui.001@...edance.com>
>>>
>>> We hope that memory errors can be handled quickly and successfully; using
>>> __GFP_MEMALLOC can help us improve the success rate of processing
>>
>> The comment for __GFP_MEMALLOC says:
>>
>> * Users of this flag have to be extremely careful to not deplete the reserve
>> * completely and implement a throttling mechanism which controls the
>> * consumption of the reserve based on the amount of freed memory.
>>
>> It seems there's no such throttling mechanism in memory_failure.
>>
>>> under memory pressure. Because the to_kill struct is freed very quickly,
>>> using __GFP_MEMALLOC will not exacerbate memory pressure for long,
>>> and more memory will be freed after the killed tasks exit, which will also
>>
>> Tasks might not be killed even if the to_kill struct is allocated.
>>
>> ...
>>
>>> - raw_hwp = kmalloc(sizeof(struct raw_hwp_page), GFP_ATOMIC);
>>> + raw_hwp = kmalloc(sizeof(struct raw_hwp_page), GFP_ATOMIC | __GFP_MEMALLOC);
>>
>> In the already-hardware-poisoned code path, raw_hwp can be allocated to store raw page info
>> without killing anything, so __GFP_MEMALLOC might not be suitable to use there.
>> Or am I missing something?
>
> Yes, I'm doubtful about this patch. I think that rather than poking at a
> particular implementation, it would be helpful for us to see a complete
> description of the issues which were observed, please. Let's see the
> bug report and we can discuss fixes later.
I agree with you, Andrew. Thanks. :)
.
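
As a rough illustration of the throttling that the __GFP_MEMALLOC comment
quoted above asks for, one could imagine something like the sketch below.
The helper names, the cap, and the counter are invented here, and the
throttle tracks outstanding allocations rather than freed memory, so this
is only a sketch of the general shape, not of memory_failure() itself:

#include <linux/atomic.h>
#include <linux/gfp.h>
#include <linux/slab.h>

/* Invented cap on objects allocated through this helper. */
#define HWPOISON_RESERVE_CAP	16

static atomic_t hwpoison_outstanding = ATOMIC_INIT(0);

/*
 * Crude throttle: add __GFP_MEMALLOC only while fewer than
 * HWPOISON_RESERVE_CAP objects from this helper are outstanding;
 * otherwise fall back to plain GFP_ATOMIC.
 */
static void *hwpoison_kmalloc_throttled(size_t size)
{
	gfp_t gfp = GFP_ATOMIC;
	void *p;

	if (atomic_inc_return(&hwpoison_outstanding) <= HWPOISON_RESERVE_CAP)
		gfp |= __GFP_MEMALLOC;

	p = kmalloc(size, gfp);
	if (!p)
		atomic_dec(&hwpoison_outstanding);
	return p;
}

/* Counterpart: release the slot when the object is freed. */
static void hwpoison_kfree_throttled(void *p)
{
	if (!p)
		return;
	kfree(p);
	atomic_dec(&hwpoison_outstanding);
}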
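Separately, Miaohe's point above, that the already-hwpoisoned path allocates
raw_hwp only to record raw page info without killing anything, could in
principle be met by letting the caller say whether it is on a path that will
actually send SIGBUS, and dipping into the reserves only then. A hypothetical
sketch, with an invented will_kill parameter and a stand-in for the structure
defined in mm/memory-failure.c:

#include <linux/gfp.h>
#include <linux/llist.h>
#include <linux/slab.h>
#include <linux/types.h>

/* Stand-in for the real definition in mm/memory-failure.c. */
struct raw_hwp_page {
	struct llist_node node;
	struct page *page;
};

/*
 * Hypothetical helper: only a caller that will actually kill tasks
 * (and thus free memory as they exit) is allowed to consume the
 * emergency reserves; the bookkeeping-only path stays on GFP_ATOMIC.
 */
static struct raw_hwp_page *alloc_raw_hwp(bool will_kill)
{
	gfp_t gfp = GFP_ATOMIC;

	if (will_kill)
		gfp |= __GFP_MEMALLOC;

	return kmalloc(sizeof(struct raw_hwp_page), gfp);
}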