Message-ID: <20240614221525.19170-3-shivankg@amd.com>
Date: Sat, 15 Jun 2024 03:45:22 +0530
From: Shivank Garg <shivankg@....com>
To: <akpm@...ux-foundation.org>, <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>
CC: <bharata@....com>, <raghavendra.kodsarathimmappa@....com>,
<Michael.Day@....com>, <dmaengine@...r.kernel.org>, <vkoul@...nel.org>,
<shivankg@....com>
Subject: [RFC PATCH 2/5] mm: add folios_copy() for copying pages in batch during migration
Introduce folios_copy() to copy folio contents from a list of source
folios to a corresponding list of destination folios. This is a
preparatory patch for batch page migration offloading.
Signed-off-by: Shivank Garg <shivankg@....com>
---
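For reference, a rough usage sketch of the intended calling convention (not
part of this patch; copy_folio_batch() is a hypothetical caller and error
unwinding is omitted; it only illustrates that the two lists must be
populated pairwise before the call):

	#include <linux/gfp.h>
	#include <linux/mm.h>

	/* Hypothetical caller, e.g. a batch page migration path. */
	static int copy_folio_batch(struct list_head *src_folios,
				    struct list_head *dst_folios)
	{
		struct folio *src, *dst;

		/*
		 * Pair every source folio with a freshly allocated
		 * destination folio of the same order, appended at the
		 * matching position on dst_folios.
		 */
		list_for_each_entry(src, src_folios, lru) {
			dst = folio_alloc(GFP_KERNEL, folio_order(src));
			if (!dst)
				return -ENOMEM;	/* real code would unwind */
			list_add_tail(&dst->lru, dst_folios);
		}

		/*
		 * Both lists are now non-empty and hold the same number of
		 * folios, as folios_copy() expects; it walks them in
		 * lockstep and may sleep, so do not call it from atomic
		 * context.
		 */
		folios_copy(dst_folios, src_folios);
		return 0;
	}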
include/linux/mm.h | 1 +
mm/util.c | 22 ++++++++++++++++++++++
2 files changed, 23 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f5a97dec5169..cd5f37ec72f0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1300,6 +1300,7 @@ void put_pages_list(struct list_head *pages);
void split_page(struct page *page, unsigned int order);
void folio_copy(struct folio *dst, struct folio *src);
+void folios_copy(struct list_head *dst_list, struct list_head *src_list);
unsigned long nr_free_buffer_pages(void);
diff --git a/mm/util.c b/mm/util.c
index 5a6a9802583b..3a278db28429 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -811,6 +811,28 @@ void folio_copy(struct folio *dst, struct folio *src)
}
EXPORT_SYMBOL(folio_copy);
+/**
+ * folios_copy - Copy the contents of a list of folios.
+ * @dst_list: Folios to copy to.
+ * @src_list: Folios to copy from.
+ *
+ * The folio contents are copied from @src_list to @dst_list.
+ * The caller must ensure that both lists are non-empty and contain the
+ * same number of folios. This may sleep.
+ */
+void folios_copy(struct list_head *dst_list,
+ struct list_head *src_list)
+{
+ struct folio *src, *dst;
+
+ dst = list_first_entry(dst_list, struct folio, lru);
+ list_for_each_entry(src, src_list, lru) {
+ cond_resched();
+ folio_copy(dst, src);
+ dst = list_next_entry(dst, lru);
+ }
+}
+
int sysctl_overcommit_memory __read_mostly = OVERCOMMIT_GUESS;
int sysctl_overcommit_ratio __read_mostly = 50;
unsigned long sysctl_overcommit_kbytes __read_mostly;
--
2.34.1