Message-ID: <xxfhcjaq2xxcl5adastz5omkytenq7izo2e5f4q7e3ns4z6lko@odigjjc7hqrg>
Date: Fri, 20 Dec 2024 17:10:31 -0800
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: Rik van Riel <riel@...riel.com>
Cc: David Hildenbrand <david@...hat.com>, 
	Andrew Morton <akpm@...ux-foundation.org>, Chris Li <chrisl@...nel.org>, 
	Ryan Roberts <ryan.roberts@....com>, "Matthew Wilcox (Oracle)" <willy@...radead.org>, 
	linux-mm@...ck.org, kernel-team@...a.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: remove unnecessary calls to lru_add_drain

I am trying to go back beyond the first git commit to find the actual
motivation for adding these lru_add_drain calls.

On Thu, Dec 19, 2024 at 03:32:53PM -0500, Rik van Riel wrote:
> There seem to be several categories of calls to lru_add_drain
> and lru_add_drain_all.
> 
> The first are code paths that recently allocated, swapped in,
> or otherwise processed a batch of pages, and want them all on
> the LRU. These drain pages that were recently allocated,
> probably on the local CPU.
> 
> A second category are code paths that are actively trying to
> reclaim, migrate, or offline memory. These often use lru_add_drain_all,
> to drain the caches on all CPUs.
> 
> However, there also seem to be some other callers where we
> aren't really doing either. They are calling lru_add_drain(),
> despite operating on pages that may have been allocated
> long ago, and quite possibly on different CPUs.
> 
> Those calls are not likely to be effective at anything but
> creating lock contention on the LRU locks.
> 
> Remove the lru_add_drain calls in the latter category.
> 
> Signed-off-by: Rik van Riel <riel@...riel.com>
> Suggested-by: David Hildenbrand <david@...hat.com>
> ---
>  mm/memory.c     | 1 -
>  mm/mmap.c       | 2 --
>  mm/swap_state.c | 1 -
>  mm/vma.c        | 2 --
>  4 files changed, 6 deletions(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index 75c2dfd04f72..95ce298dc254 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1935,7 +1935,6 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
>  	struct mmu_notifier_range range;
>  	struct mmu_gather tlb;
>  
> -	lru_add_drain();

The above was added in [1]. It seems the motivation was that the
lru_add cache was holding on to some freed pages for a long period of
time, and some workload (AIM9) was having to go into reclaim to flush
those pages before it could use them. By draining here, such a workload
would go into reclaim less often. (I am extrapolating the reasoning
somewhat.)

I think it is now OK to remove this draining: the ratio of pages stuck
in such a cache to total RAM has shrunk drastically, and the chance of
these pages being the main cause of a slowdown is close to zero.

[1] https://github.com/mpe/linux-fullhistory/commit/15317018be190db05f7420f27afd3d053aad48b5
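
For context, the behavior being drained is the per-CPU LRU batching
done in mm/swap.c. A much-simplified sketch of the idea (not the real
code; sketch_flush_to_lru() is a made-up helper standing in for the
actual flush path, and locking is elided):

	/*
	 * New folios are parked in a small per-CPU batch and only
	 * moved onto the LRU lists, under the LRU lock, once the
	 * batch fills up.
	 */
	static DEFINE_PER_CPU(struct folio_batch, lru_add_sketch);

	void sketch_folio_add_lru(struct folio *folio)
	{
		struct folio_batch *fbatch = this_cpu_ptr(&lru_add_sketch);

		/* folio_batch_add() returns 0 once the batch is full. */
		if (!folio_batch_add(fbatch, folio))
			sketch_flush_to_lru(fbatch);
	}

	void sketch_lru_add_drain(void)
	{
		/*
		 * Flush only this CPU's pending batch;
		 * lru_add_drain_all() instead queues this flush on
		 * every CPU with pending pages.
		 */
		sketch_flush_to_lru(this_cpu_ptr(&lru_add_sketch));
	}

So only a handful of batched pages can linger per CPU, which is what
the old AIM9 workload was hitting, and which matters far less at
today's RAM sizes.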

>  	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
>  				address, end);
>  	hugetlb_zap_begin(vma, &range.start, &range.end);
> diff --git a/mm/mmap.c b/mm/mmap.c
> index d32b7e701058..ef57488f1020 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -1660,7 +1660,6 @@ void exit_mmap(struct mm_struct *mm)
>  		goto destroy;
>  	}
>  
> -	lru_add_drain();

The above was added in [2]. I think it was just a move from the callee
to its multiple callers, i.e. from unmap_page_range() to unmap_region()
and exit_mmap(). For unmap_page_range(), lru_add_drain() was added in
[1], so the same reasoning applies and we can remove this one now.

[2] https://github.com/mpe/linux-fullhistory/commit/5b0aee25a3c09b7c4fbb52a737fc9f8ec6761079

>  	flush_cache_mm(mm);
>  	tlb_gather_mmu_fullmm(&tlb, mm);
>  	/* update_hiwater_rss(mm) here? but nobody should be looking */
> @@ -2103,7 +2102,6 @@ int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift)
>  				       vma, new_start, length, false, true))
>  		return -ENOMEM;
>  
> -	lru_add_drain();

The above was added by commit b6a2fea39318e ("mm: variable length
argument support"). From what I can see, no reason was given, and I
couldn't find any discussion of it on lkml. I think it was just
following the pattern, seen elsewhere, of calling lru_add_drain() along
with tlb_gather_mmu().

I think we can remove this one as well.
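
For reference, the pattern in question is the usual unmap/teardown
sequence; schematically (not actual tree code, arguments elided):

	lru_add_drain();		/* the call being removed */
	tlb_gather_mmu(&tlb, mm);
	update_hiwater_rss(mm);
	unmap_vmas(&tlb, ...);
	free_pgtables(&tlb, ...);
	tlb_finish_mmu(&tlb);

Nothing in the tlb_gather_mmu() sequence itself depends on the drain,
which is consistent with it being safe to drop here.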

>  	tlb_gather_mmu(&tlb, mm);
>  	next = vma_next(&vmi);
>  	if (new_end > old_start) {
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index e0c0321b8ff7..ca42b2be64d9 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -317,7 +317,6 @@ void free_pages_and_swap_cache(struct encoded_page **pages, int nr)
>  	struct folio_batch folios;
>  	unsigned int refs[PAGEVEC_SIZE];
>  
> -	lru_add_drain();

This one was added in [1] as well, and I think the same reasoning
applies.

>  	folio_batch_init(&folios);
>  	for (int i = 0; i < nr; i++) {
>  		struct folio *folio = page_folio(encoded_page_ptr(pages[i]));
> diff --git a/mm/vma.c b/mm/vma.c
> index 8e31b7e25aeb..d84e5ef6d15b 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -398,7 +398,6 @@ void unmap_region(struct ma_state *mas, struct vm_area_struct *vma,
>  	struct mm_struct *mm = vma->vm_mm;
>  	struct mmu_gather tlb;
>  
> -	lru_add_drain();

Same reason as for exit_mmap().

>  	tlb_gather_mmu(&tlb, mm);
>  	update_hiwater_rss(mm);
>  	unmap_vmas(&tlb, mas, vma, vma->vm_start, vma->vm_end, vma->vm_end,
> @@ -1130,7 +1129,6 @@ static inline void vms_clear_ptes(struct vma_munmap_struct *vms,
>  	 * were isolated before we downgraded mmap_lock.
>  	 */
>  	mas_set(mas_detach, 1);
> -	lru_add_drain();

This one is from 9c3ebeda8fb5a ("mm/vma: track start and end for munmap
in vma_munmap_struct"), and I think it was also just following the
pattern. I think we can remove it as well.

>  	tlb_gather_mmu(&tlb, vms->vma->vm_mm);
>  	update_hiwater_rss(vms->vma->vm_mm);
>  	unmap_vmas(&tlb, mas_detach, vms->vma, vms->start, vms->end,

I hope this much history is enough to convince Andrew. With that,
please add:

Acked-by: Shakeel Butt <shakeel.butt@...ux.dev>
