Message-Id: <20200520232525.798933-11-hannes@cmpxchg.org>
Date:   Wed, 20 May 2020 19:25:21 -0400
From:   Johannes Weiner <hannes@...xchg.org>
To:     linux-mm@...ck.org
Cc:     Rik van Riel <riel@...riel.com>,
        Minchan Kim <minchan.kim@...il.com>,
        Michal Hocko <mhocko@...e.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        linux-kernel@...r.kernel.org, kernel-team@...com
Subject: [PATCH 10/14] mm: only count actual rotations as LRU reclaim cost

When shrinking the active file list, we rotate referenced pages only
when they're in an executable mapping. The others get deactivated.
When it comes to balancing scan pressure, though, we count all
referenced pages as rotated, even the deactivated ones. Yet they do
not carry the same cost to the system: the deactivated page *might*
refault later on, but the deactivation is tangible progress toward
freeing pages; rotations, on the other hand, cost time and effort
without getting any closer to freeing memory.
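
For illustration, here is a condensed sketch of the reference handling
in shrink_active_list() after this change (simplified: locking, the
unevictable path and writeback handling are omitted, so read it as a
sketch rather than the verbatim kernel code):

	if (page_referenced(page, 0, sc->target_mem_cgroup, &vm_flags)) {
		if ((vm_flags & VM_EXEC) && page_is_file_lru(page)) {
			/* Rotation: pure CPU cost, page stays active */
			nr_rotated += hpage_nr_pages(page);
			list_add(&page->lru, &l_active);
			continue;
		}
	}
	/* Deactivation: tangible reclaim progress, no cost recorded */
	ClearPageActive(page);
	SetPageWorkingset(page);
	list_add(&page->lru, &l_inactive);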

Don't treat both events as equal. The following patch will hook up LRU
balancing to cache and anon refaults, which are a much more concrete
cost signal for reclaiming one list over the other. Thus, remove the
maybe-IO cost bias from page references, and only note the CPU cost
for actual rotations that prevent the pages from getting reclaimed.
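
For context, the consumer of this cost signal is get_scan_count(),
which splits the scan target between the anon and file lists in
inverse proportion to their accumulated cost. A rough sketch of that
balancing (a simplification of where the series is heading once
refaults are hooked up, not the verbatim upstream code; anon_cost,
file_cost and swappiness stand in for the real bookkeeping):

	unsigned long total_cost = anon_cost + file_cost;
	unsigned long ap, fp;

	/*
	 * The costlier list gets proportionally less scan pressure:
	 * ap and fp become the anon/file shares of the scan target.
	 */
	ap = swappiness * (total_cost + 1);
	ap /= anon_cost + 1;

	fp = (200 - swappiness) * (total_cost + 1);
	fp /= file_cost + 1;

	/* nr_to_scan is then split in the ratio ap : fp */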

v2: readable changelog (Michal Hocko)

Signed-off-by: Johannes Weiner <hannes@...xchg.org>
Acked-by: Minchan Kim <minchan@...nel.org>
Acked-by: Michal Hocko <mhocko@...e.com>
---
 mm/vmscan.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6ff63906a288..2c3fb8dd1159 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2054,7 +2054,6 @@ static void shrink_active_list(unsigned long nr_to_scan,
 
 		if (page_referenced(page, 0, sc->target_mem_cgroup,
 				    &vm_flags)) {
-			nr_rotated += hpage_nr_pages(page);
 			/*
 			 * Identify referenced, file-backed active pages and
 			 * give them one more trip around the active list. So
@@ -2065,6 +2064,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 			 * so we ignore them here.
 			 */
 			if ((vm_flags & VM_EXEC) && page_is_file_lru(page)) {
+				nr_rotated += hpage_nr_pages(page);
 				list_add(&page->lru, &l_active);
 				continue;
 			}
@@ -2080,10 +2080,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	 */
 	spin_lock_irq(&pgdat->lru_lock);
 	/*
-	 * Count referenced pages from currently used mappings as rotated,
-	 * even though only some of them are actually re-activated.  This
-	 * helps balance scan pressure between file and anonymous pages in
-	 * get_scan_count.
+	 * Rotating pages costs CPU without actually
+	 * progressing toward the reclaim goal.
 	 */
 	lru_note_cost(lruvec, file, nr_rotated);
 
-- 
2.26.2
