Message-ID: <20250625174841.1094510-3-vishal.moola@gmail.com>
Date: Wed, 25 Jun 2025 10:48:40 -0700
From: "Vishal Moola (Oracle)" <vishal.moola@...il.com>
To: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
David Hildenbrand <david@...hat.com>,
Jordan Rome <linux@...danrome.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"Vishal Moola (Oracle)" <vishal.moola@...il.com>
Subject: [RFC PATCH 2/3] mm/memory.c: convert __access_remote_vm() to folios
Use kmap_local_folio() instead of kmap_local_page(). This replaces
two calls to compound_head() with one.
This prepares for the removal of unmap_and_put_page() and for the
eventual gup folio conversions, since this function now handles
individual subpages of large folios.
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@...il.com>
---
mm/memory.c | 27 ++++++++++++++++-----------
1 file changed, 16 insertions(+), 11 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 747866060658..5eeca95b9c61 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6696,8 +6696,9 @@ static int __access_remote_vm(struct mm_struct *mm, unsigned long addr,
/* ignore errors, just check how much was successfully transferred */
while (len) {
- int bytes, offset;
+ int bytes, folio_offset;
void *maddr;
+ struct folio *folio;
struct vm_area_struct *vma = NULL;
struct page *page = get_user_page_vma_remote(mm, addr,
gup_flags, &vma);
@@ -6729,21 +6730,25 @@ static int __access_remote_vm(struct mm_struct *mm, unsigned long addr,
if (bytes <= 0)
break;
} else {
+ folio = page_folio(page);
bytes = len;
- offset = addr & (PAGE_SIZE-1);
- if (bytes > PAGE_SIZE-offset)
- bytes = PAGE_SIZE-offset;
+ folio_offset = offset_in_folio(folio, addr);
+
+ if (bytes > PAGE_SIZE - offset_in_page(folio_offset))
+ bytes = PAGE_SIZE - offset_in_page(folio_offset);
- maddr = kmap_local_page(page);
+ maddr = kmap_local_folio(folio, folio_offset);
if (write) {
- copy_to_user_page(vma, page, addr,
- maddr + offset, buf, bytes);
- set_page_dirty_lock(page);
+ copy_to_user_page(vma,
+ folio_page(folio, folio_offset / PAGE_SIZE),
+ addr, maddr, buf, bytes);
+ folio_mark_dirty_lock(folio);
} else {
- copy_from_user_page(vma, page, addr,
- buf, maddr + offset, bytes);
+ copy_from_user_page(vma,
+ folio_page(folio, folio_offset / PAGE_SIZE),
+ addr, buf, maddr, bytes);
}
- unmap_and_put_page(page, maddr);
+ folio_release_kmap(folio, maddr);
}
len -= bytes;
buf += bytes;
--
2.49.0