Message-ID: <58d69446edb0e2b3b4edec442043cd0a9748f15f.camel@surriel.com>
Date: Thu, 19 Dec 2024 09:11:07 -0500
From: Rik van Riel <riel@...riel.com>
To: David Hildenbrand <david@...hat.com>, Andrew Morton
	 <akpm@...ux-foundation.org>
Cc: Chris Li <chrisl@...nel.org>, Ryan Roberts <ryan.roberts@....com>, 
 "Matthew Wilcox (Oracle)"
	 <willy@...radead.org>, linux-kernel@...r.kernel.org, linux-mm@...ck.org, 
	kernel-team@...a.com
Subject: Re: [PATCH] mm: add maybe_lru_add_drain() that only drains when
 threshold is exceeded

On Thu, 2024-12-19 at 14:47 +0100, David Hildenbrand wrote:
> 
> > +++ b/mm/swap_state.c
> > @@ -317,7 +317,7 @@ void free_pages_and_swap_cache(struct
> > encoded_page **pages, int nr)
> >   	struct folio_batch folios;
> >   	unsigned int refs[PAGEVEC_SIZE];
> >   
> > -	lru_add_drain();
> > +	maybe_lru_add_drain();
> 
> I'm wondering about the reason+effect of this existing call.
> 
> Seems to date back to the beginning of git.
> 
> Likely it doesn't make sense to have effectively-free pages in the 
> LRU+mlock cache. But then, this only considers the local CPU
> LRU/mlock caches ... hmmm
> 
> So .... do we need this at all? :)
> 
That is a very good question.

I think we need to free those pending pages at
some point; they cannot sit in the per-CPU
batches forever. However, I am not sure where
those drain points should be.

I can think of a few considerations:
1) We should consider approximate LRU ordering,
   and move pages onto the LRU every once in a
   while.
2) When we are trying to free memory, we may want
   to ensure that not too many pages are sitting
   in these temporary buffers.
3) For lock batching reasons, we do not want to
   drain these buffers too frequently.

My patch takes a small step in the direction of
more batching, but maybe we can take a larger one?
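
Roughly, the threshold idea could look something like the sketch
below. This is just an illustration, not my actual patch; the
lru_cache_pending_count() helper and the threshold value are made
up for the example.

#include <linux/swap.h>
#include <linux/pagevec.h>

/* Made-up threshold: only drain once the batch is about half full. */
#define LRU_DRAIN_THRESHOLD	(PAGEVEC_SIZE / 2)

void maybe_lru_add_drain(void)
{
	/*
	 * lru_cache_pending_count() stands in for "how many folios
	 * are sitting in this CPU's LRU folio batches" -- it is a
	 * hypothetical helper, named only for this sketch.
	 */
	if (lru_cache_pending_count() >= LRU_DRAIN_THRESHOLD)
		lru_add_drain();
}

That keeps free_pages_and_swap_cache() on the cheap path while the
per-CPU batches are mostly empty, and only pays for the drain (and
its locking) once enough folios have piled up.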

-- 
All Rights Reversed.
