Message-ID: <34a694c8-eb12-757a-05e3-f87f3ba1347a@huawei.com>
Date: Tue, 6 Dec 2022 09:40:02 +0800
From: Kefeng Wang <wangkefeng.wang@...wei.com>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
<xialonglong1@...wei.com>
Subject: Re: [PATCH] mm: add cond_resched() in swapin_walk_pmd_entry()
On 2022/12/6 5:03, Andrew Morton wrote:
> On Mon, 5 Dec 2022 22:03:27 +0800 Kefeng Wang <wangkefeng.wang@...wei.com> wrote:
>
>> When handling MADV_WILLNEED in madvise(), a soft lockup may occur
>> in swapin_walk_pmd_entry() when swapping in lots of memory from a slow
>> device. Add a cond_resched() there to avoid the possible softlockup.
>>
>> ...
>>
>> --- a/mm/madvise.c
>> +++ b/mm/madvise.c
>> @@ -226,6 +226,7 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
>> put_page(page);
>> }
>> swap_read_unplug(splug);
>> + cond_resched();
>>
>> return 0;
>> }
> I wonder if this would be better in walk_pmd_range(), to address other
> very large walk attempts.
mm/madvise.c:287: walk_page_range(vma->vm_mm, start, end,
&swapin_walk_ops, vma);
mm/madvise.c:514: walk_page_range(vma->vm_mm, addr, end,
&cold_walk_ops, &walk_private);
mm/madvise.c:762: walk_page_range(vma->vm_mm, range.start, range.end,
mm/madvise.c-763- &madvise_free_walk_ops, &tlb);
The cold_walk_ops and madvise_free_walk_ops already call cond_resched()
in their pmd_entry walks, so there may be no need to add a precautionary
cond_resched() in walk_pmd_range() for now.
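
For reference, the alternative Andrew suggests would look roughly like the
sketch below, a cond_resched() at the bottom of the pmd loop in
mm/pagewalk.c::walk_pmd_range(). This is only an illustration of the idea,
not a tested patch; the actual loop body and error handling in
walk_pmd_range() differ across kernel versions and are elided here.

```c
/* Sketch only: placement of a cond_resched() inside walk_pmd_range()
 * in mm/pagewalk.c, so that every page-table walker (not just
 * swapin_walk_pmd_entry()) yields during very large walks. The loop
 * body is elided; real code handles pmd_none(), splits, etc. */
static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
			  struct mm_walk *walk)
{
	pmd_t *pmd;
	unsigned long next;
	int err = 0;

	pmd = pmd_offset(pud, addr);
	do {
		next = pmd_addr_end(addr, end);
		/* ... invoke ops->pmd_entry / descend to PTEs ... */

		/* yield once per pmd so huge walks stay preemptible */
		cond_resched();
	} while (pmd++, addr = next, addr != end);

	return err;
}
```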
>
> .