Message-Id: <e0009cd7362f64da08aea5883e753192e137da39.1716285099.git.baolin.wang@linux.alibaba.com>
Date: Tue, 21 May 2024 19:03:15 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: akpm@...ux-foundation.org,
hughd@...gle.com
Cc: willy@...radead.org,
david@...hat.com,
ioworker0@...il.com,
chrisl@...nel.org,
p.raghav@...sung.com,
da.gomez@...sung.com,
wangkefeng.wang@...wei.com,
ying.huang@...el.com,
21cnbao@...il.com,
ryan.roberts@....com,
shy828301@...il.com,
ziy@...dia.com,
baolin.wang@...ux.alibaba.com,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: [RFC PATCH 5/8] mm: shmem: extend shmem_partial_swap_usage() to support large folio swap
To support shmem large folio swapout in the following patches, use
xa_get_order() to get the order of the swap entry, and use that order
to calculate the swap usage of shmem.
Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
---
mm/shmem.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 74821a7031b8..bc099e8b9952 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -865,13 +865,16 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
struct page *page;
unsigned long swapped = 0;
unsigned long max = end - 1;
+ int order;
rcu_read_lock();
xas_for_each(&xas, page, max) {
if (xas_retry(&xas, page))
continue;
- if (xa_is_value(page))
- swapped++;
+ if (xa_is_value(page)) {
+ order = xa_get_order(xas.xa, xas.xa_index);
+ swapped += 1 << order;
+ }
if (xas.xa_index == max)
break;
if (need_resched()) {
--
2.39.3