Message-ID: <d9284139-e32e-493c-86ea-77130b503a77@oracle.com>
Date: Wed, 13 Dec 2023 17:03:09 -0800
From: Jianfeng Wang <jianfeng.w.wang@...cle.com>
To: Tim Chen <tim.c.chen@...ux.intel.com>, akpm@...ux-foundation.org
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: remove redundant lru_add_drain() prior to unmapping pages
On 12/13/23 2:57 PM, Tim Chen wrote:
> On Tue, 2023-12-12 at 23:28 -0800, Jianfeng Wang wrote:
>> When unmapping VMA pages, pages will be gathered in batch and released by
>> tlb_finish_mmu() if CONFIG_MMU_GATHER_NO_GATHER is not set. The function
>> tlb_finish_mmu() is responsible for calling free_pages_and_swap_cache(),
>> which calls lru_add_drain() to drain cached pages in folio_batch before
>> releasing gathered pages. Thus, it is redundant to call lru_add_drain()
>> before gathering pages, if CONFIG_MMU_GATHER_NO_GATHER is not set.
>>
>> Remove lru_add_drain() prior to gathering and unmapping pages in
>> exit_mmap() and unmap_region() if CONFIG_MMU_GATHER_NO_GATHER is not set.
>>
>> Note that the page unmapping process in oom_killer (e.g., in
>> __oom_reap_task_mm()) also uses tlb_finish_mmu() and does not have
>> redundant lru_add_drain(). So, this commit makes the code more consistent.
>>
>> Signed-off-by: Jianfeng Wang <jianfeng.w.wang@...cle.com>
>> ---
>> mm/mmap.c | 4 ++++
>> 1 file changed, 4 insertions(+)
>>
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index 1971bfffcc03..0451285dee4f 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -2330,7 +2330,9 @@ static void unmap_region(struct mm_struct *mm, struct ma_state *mas,
>> struct mmu_gather tlb;
>> unsigned long mt_start = mas->index;
>>
>> +#ifdef CONFIG_MMU_GATHER_NO_GATHER
>
> In your comment you say skip lru_add_drain() when CONFIG_MMU_GATHER_NO_GATHER
> is *not* set. So shouldn't this be
>
> #ifndef CONFIG_MMU_GATHER_NO_GATHER ?
>
Hi Tim,

The mmu_gather feature gathers the pages produced by unmap_vmas() and
releases them in batch in tlb_finish_mmu(). The feature is *on* when
CONFIG_MMU_GATHER_NO_GATHER is *not* set. Note that tlb_finish_mmu()
calls free_pages_and_swap_cache(), and therefore lru_add_drain(), only
when the feature is on.

Yes, this commit aims to skip lru_add_drain() when
CONFIG_MMU_GATHER_NO_GATHER is *not* set (i.e. when the mmu_gather
feature is on), because the call is redundant in that case.

If CONFIG_MMU_GATHER_NO_GATHER *is* set, pages are released one by one
in unmap_vmas() and tlb_finish_mmu() never calls lru_add_drain(). In
that case the early lru_add_drain() call is still needed to clear
cached pages before unmap_vmas(), since folio_batches hold a reference
count on the pages they contain. So the #ifdef (rather than #ifndef)
is intentional: lru_add_drain() is compiled in only when the gather
feature is off.

The same reasoning applies to the other hunk.
Thanks,
- Jianfeng
>> lru_add_drain();
>> +#endif
>> tlb_gather_mmu(&tlb, mm);
>> update_hiwater_rss(mm);
>> unmap_vmas(&tlb, mas, vma, start, end, tree_end, mm_wr_locked);
>> @@ -3300,7 +3302,9 @@ void exit_mmap(struct mm_struct *mm)
>> return;
>> }
>>
>> +#ifdef CONFIG_MMU_GATHER_NO_GATHER
>
> same question as above.
>
>> lru_add_drain();
>> +#endif
>> flush_cache_mm(mm);
>> tlb_gather_mmu_fullmm(&tlb, mm);
>> /* update_hiwater_rss(mm) here? but nobody should be looking */
>