Date:   Mon, 21 Nov 2022 16:03:24 +0000
From:   Mel Gorman <mgorman@...hsingularity.net>
To:     Vlastimil Babka <vbabka@...e.cz>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Hugh Dickins <hughd@...gle.com>, Yu Zhao <yuzhao@...gle.com>,
        Marcelo Tosatti <mtosatti@...hat.com>,
        Michal Hocko <mhocko@...nel.org>,
        Marek Szyprowski <m.szyprowski@...sung.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>
Subject: Re: [PATCH 2/2] mm/page_alloc: Leave IRQs enabled for per-cpu page
 allocations

On Mon, Nov 21, 2022 at 12:01:23PM +0000, Mel Gorman wrote:
> On Fri, Nov 18, 2022 at 03:30:57PM +0100, Vlastimil Babka wrote:
> > On 11/18/22 11:17, Mel Gorman wrote:
> > AFAICS if this block was just "locked_zone = NULL;" then the existing code
> > would do the right thing.
> > Or maybe to have simpler code, just do batch_count++ here and
> > make the relocking check do
> > if (zone != locked_zone || batch_count == SWAP_CLUSTER_MAX)
> > 
> 
> While I think you're right, it's a bit subtle: the batch reset would
> need to move and be rechecked within the "Different zone, different pcp
> lock." block, and it would be easy to forget exactly why it's structured
> that way in the future. Rather than being a fix, it could be a standalone
> patch so it would be obvious in git blame, but I don't feel particularly
> strongly about it.
> 

Ok, less subtle than I initially thought, but still deserving of a separate
patch instead of being a fix. This?

--8<--
mm/page_alloc: Simplify locking during free_unref_page_list

While freeing a large list, the zone lock will be released and reacquired
to avoid long hold times since commit c24ad77d962c ("mm/page_alloc.c: avoid
excessive IRQ disabled times in free_unref_page_list()"). As suggested
by Vlastimil Babka, the lock release/reacquire logic can be simplified by
reusing the logic that acquires a different lock when changing zones.

Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>
---
 mm/page_alloc.c | 25 +++++++++----------------
 1 file changed, 9 insertions(+), 16 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 445066617204..08e32daf0918 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3518,13 +3518,19 @@ void free_unref_page_list(struct list_head *list)
 		list_del(&page->lru);
 		migratetype = get_pcppage_migratetype(page);
 
-		/* Different zone, different pcp lock. */
-		if (zone != locked_zone) {
+		/*
+		 * Either different zone requiring a different pcp lock or
+		 * excessive lock hold times when freeing a large list of
+		 * pages.
+		 */
+		if (zone != locked_zone || batch_count == SWAP_CLUSTER_MAX) {
 			if (pcp) {
 				pcp_spin_unlock(pcp);
 				pcp_trylock_finish(UP_flags);
 			}
 
+			batch_count = 0;
+
 			/*
 			 * trylock is necessary as pages may be getting freed
 			 * from IRQ or SoftIRQ context after an IO completion.
@@ -3539,7 +3545,6 @@ void free_unref_page_list(struct list_head *list)
 				continue;
 			}
 			locked_zone = zone;
-			batch_count = 0;
 		}
 
 		/*
@@ -3551,19 +3556,7 @@ void free_unref_page_list(struct list_head *list)
 
 		trace_mm_page_free_batched(page);
 		free_unref_page_commit(zone, pcp, page, migratetype, 0);
-
-		/*
-		 * Guard against excessive lock hold times when freeing
-		 * a large list of pages. Lock will be reacquired if
-		 * necessary on the next iteration.
-		 */
-		if (++batch_count == SWAP_CLUSTER_MAX) {
-			pcp_spin_unlock(pcp);
-			pcp_trylock_finish(UP_flags);
-			batch_count = 0;
-			pcp = NULL;
-			locked_zone = NULL;
-		}
+		batch_count++;
 	}
 
 	if (pcp) {
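
For anyone following the control flow outside the kernel tree, below is a
minimal userspace sketch of the loop after this patch. It is not kernel
code: the zone, pcp and trylock/finish machinery are stubbed out with mock
types and printfs, and the per-page free itself is elided; only the single
relock condition is faithful to the patch.

/*
 * Userspace sketch of free_unref_page_list() locking after this patch.
 * Mock types stand in for zones and pcp locks; the point is that the
 * periodic lock drop for long lists now reuses the same path as the
 * "different zone" relock, instead of a second unlock site at the
 * bottom of the loop.
 */
#include <stdio.h>
#include <stddef.h>

#define SWAP_CLUSTER_MAX 32	/* as in the kernel's swap.h */

struct zone { int id; };

static void pcp_lock(struct zone *z)   { printf("lock   zone %d\n", z->id); }
static void pcp_unlock(struct zone *z) { printf("unlock zone %d\n", z->id); }

static void free_list_sketch(struct zone **pages, size_t n)
{
	struct zone *locked_zone = NULL;
	int batch_count = 0;

	for (size_t i = 0; i < n; i++) {
		struct zone *zone = pages[i];

		/*
		 * Single relock point: either the page belongs to a
		 * different zone, or the batch limit was hit while
		 * freeing a large list.
		 */
		if (zone != locked_zone || batch_count == SWAP_CLUSTER_MAX) {
			if (locked_zone)
				pcp_unlock(locked_zone);
			batch_count = 0;
			pcp_lock(zone);
			locked_zone = zone;
		}

		/* free the page to the per-cpu list (elided) */
		batch_count++;
	}

	if (locked_zone)
		pcp_unlock(locked_zone);
}

int main(void)
{
	struct zone a = { 0 }, b = { 1 };
	struct zone *pages[70];

	/* 40 pages in zone 0, then 30 in zone 1 */
	for (int i = 0; i < 70; i++)
		pages[i] = i < 40 ? &a : &b;

	free_list_sketch(pages, 70);
	return 0;
}

Running it shows the lock dropped and retaken once after 32 frees within
zone 0 (the SWAP_CLUSTER_MAX trigger) and again at the zone 0 -> zone 1
boundary, i.e. both triggers now share the one unlock/lock site.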
