Message-ID: <acb8c972fe569bf0fcfe65a1d806b135d007ef88.camel@surriel.com>
Date: Wed, 18 Dec 2024 22:13:30 -0500
From: Rik van Riel <riel@...riel.com>
To: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
 "Huang, Ying" <ying.huang@...el.com>, Chris Li <chrisl@...nel.org>,
 Ryan Roberts <ryan.roberts@....com>, David Hildenbrand <david@...hat.com>,
 "Matthew Wilcox (Oracle)" <willy@...radead.org>, linux-kernel@...r.kernel.org,
 linux-mm@...ck.org, kernel-team@...a.com
Subject: Re: [PATCH] mm: add maybe_lru_add_drain() that only drains when
 threshold is exceeded

On Wed, 2024-12-18 at 12:20 -0800, Shakeel Butt wrote:
> On Wed, Dec 18, 2024 at 11:56:04AM -0500, Rik van Riel wrote:
> [...]
> >  
> > +static bool should_lru_add_drain(void)
> > +{
> > +	struct cpu_fbatches *fbatches = this_cpu_ptr(&cpu_fbatches);
> 
> You will need either a local_lock or preempt_disable to access the
> per-CPU batches.
> 
Why is that? Can the per-cpu batches disappear on us
while we're trying to access them, without that
local_lock or preempt_disable?

I'm not trying to protect against accidentally reading
the wrong CPU's numbers, since we could be preempted
and migrated to another CPU immediately after returning
from should_lru_add_drain(), but I do want to keep things
safe against other potential issues.
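
If it turns out the guard really is needed, the version you're
suggesting would look something like this (untested sketch; the
preempt_disable() pins us to one CPU so all four counts are read
from the same cpu_fbatches instance):

	static bool should_lru_add_drain(void)
	{
		struct cpu_fbatches *fbatches;
		int pending;

		preempt_disable();
		fbatches = this_cpu_ptr(&cpu_fbatches);
		pending  = folio_batch_count(&fbatches->lru_add);
		pending += folio_batch_count(&fbatches->lru_deactivate);
		pending += folio_batch_count(&fbatches->lru_deactivate_file);
		pending += folio_batch_count(&fbatches->lru_lazyfree);
		preempt_enable();

		/* Don't bother draining unless we have several pages pending. */
		return pending > SWAP_CLUSTER_MAX;
	}

IIRC the fbatches already carry a local_lock for the paths that
modify the batches, so taking that instead would also work.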


> > +	int pending = folio_batch_count(&fbatches->lru_add);
> > +	pending += folio_batch_count(&fbatches->lru_deactivate);
> > +	pending += folio_batch_count(&fbatches->lru_deactivate_file);
> > +	pending += folio_batch_count(&fbatches->lru_lazyfree);
> > +
> > +	/* Don't bother draining unless we have several pages pending. */
> > +	return pending > SWAP_CLUSTER_MAX;
> > +}
> > +
> > +void maybe_lru_add_drain(void)
> 
> Later it might also make sense to see if other users of lru_add_drain()
> should be fine with maybe_lru_add_drain() as well.

Agreed. I think there are a few other users where this
could make sense, including munmap.
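
For context, the rest of the hunk (trimmed in the quote above) is
presumably just the threshold check wrapped around the existing
drain, roughly:

	void maybe_lru_add_drain(void)
	{
		if (should_lru_add_drain())
			lru_add_drain();
	}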

-- 
All Rights Reversed.
