Message-ID: <20230227173632.3292573-27-surenb@google.com>
Date: Mon, 27 Feb 2023 09:36:25 -0800
From: Suren Baghdasaryan <surenb@...gle.com>
To: akpm@...ux-foundation.org
Cc: michel@...pinasse.org, jglisse@...gle.com, mhocko@...e.com,
vbabka@...e.cz, hannes@...xchg.org, mgorman@...hsingularity.net,
dave@...olabs.net, willy@...radead.org, liam.howlett@...cle.com,
peterz@...radead.org, ldufour@...ux.ibm.com, paulmck@...nel.org,
mingo@...hat.com, will@...nel.org, luto@...nel.org,
songliubraving@...com, peterx@...hat.com, david@...hat.com,
dhowells@...hat.com, hughd@...gle.com, bigeasy@...utronix.de,
kent.overstreet@...ux.dev, punit.agrawal@...edance.com,
lstoakes@...il.com, peterjung1337@...il.com, rientjes@...gle.com,
chriscli@...gle.com, axelrasmussen@...gle.com, joelaf@...gle.com,
minchan@...gle.com, rppt@...nel.org, jannh@...gle.com,
shakeelb@...gle.com, tatashin@...gle.com, edumazet@...gle.com,
gthelen@...gle.com, gurua@...gle.com, arjunroy@...gle.com,
soheil@...gle.com, leewalsh@...gle.com, posk@...gle.com,
michalechner92@...glemail.com, linux-mm@...ck.org,
linux-arm-kernel@...ts.infradead.org,
linuxppc-dev@...ts.ozlabs.org, x86@...nel.org,
linux-kernel@...r.kernel.org, kernel-team@...roid.com,
Suren Baghdasaryan <surenb@...gle.com>,
Laurent Dufour <laurent.dufour@...ibm.com>
Subject: [PATCH v4 26/33] mm: prevent do_swap_page from handling page faults
under VMA lock

Due to the possibility of do_swap_page dropping mmap_lock, abort fault
handling under VMA lock and retry holding mmap_lock. This can be handled
more gracefully in the future.

Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
Reviewed-by: Laurent Dufour <laurent.dufour@...ibm.com>
---
 mm/memory.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index 8855846a361b..af3c2c59cd11 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3689,6 +3689,11 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (!pte_unmap_same(vmf))
 		goto out;
 
+	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
+		ret = VM_FAULT_RETRY;
+		goto out;
+	}
+
 	entry = pte_to_swp_entry(vmf->orig_pte);
 	if (unlikely(non_swap_entry(entry))) {
 		if (is_migration_entry(entry)) {
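
For context on the retry path: the sketch below is an illustrative, condensed
version of the per-architecture fault handler changes made earlier in this
series, not code from this patch (the function name example_fault_path and
the simplified fallback path are hypothetical). It shows how a VM_FAULT_RETRY
returned by do_swap_page under FAULT_FLAG_VMA_LOCK causes the VMA read lock
to be dropped and the fault to be repeated under mmap_lock.

	/*
	 * Illustrative caller-side sketch, condensed from the per-arch fault
	 * handlers touched earlier in this series; retries, error handling
	 * and accounting are omitted.
	 */
	static void example_fault_path(struct mm_struct *mm, unsigned long address,
				       unsigned int flags, struct pt_regs *regs)
	{
		struct vm_area_struct *vma;
		vm_fault_t fault;

		/* Try the fault under the per-VMA read lock first. */
		vma = lock_vma_under_rcu(mm, address);
		if (!vma)
			goto lock_mmap;

		fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
		vma_end_read(vma);

		if (!(fault & VM_FAULT_RETRY))
			return;	/* handled without taking mmap_lock */

	lock_mmap:
		/*
		 * Swap faults (after this patch) and other unsupported cases
		 * end up here and are handled again under mmap_lock.
		 */
		mmap_read_lock(mm);
		vma = find_vma(mm, address);
		if (vma)
			fault = handle_mm_fault(vma, address, flags, regs);
		mmap_read_unlock(mm);
	}
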
--
2.39.2.722.g9855ee24e9-goog