Message-Id: <20191127152118.6314b99074b0626d4c5a8835@gmail.com>
Date: Wed, 27 Nov 2019 15:21:18 +0100
From: Vitaly Wool <vitalywool@...il.com>
To: <linux-mm@...ck.org>, linux-kernel@...r.kernel.org
Cc: Andrew Morton <akpm@...ux-foundation.org>
Subject: [PATCH 1/3] z3fold: avoid subtle race when freeing slots
There is a subtle race between freeing slots and setting the last
slot to zero, because the HANDLES_ORPHANED flag was set only after
the rwlock had been released. Fix that to avoid the rare memory
leaks this race can cause.
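The problematic interleaving could look roughly like this (an
illustrative sketch of the pre-patch code paths, not an observed
trace):

CPU0 (__release_z3fold_page)       CPU1 (free_handle)
read_lock(&slots->lock)
  one slot still in use
  -> is_free = false
read_unlock(&slots->lock)
                                   zero the last used slot
                                   read_lock(&slots->lock)
                                     all slots zero -> is_free = true
                                   read_unlock(&slots->lock)
                                   test_and_clear_bit(HANDLES_ORPHANED)
                                     -> bit not yet set, skip free
set_bit(HANDLES_ORPHANED)
  -> nobody will ever free slots

Setting HANDLES_ORPHANED under the read lock in
__release_z3fold_page(), and having free_handle() check it under the
same lock, closes this window.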
Signed-off-by: Vitaly Wool <vitaly.vul@...y.com>
---
mm/z3fold.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index d48d0ec3bcdd..36bd2612f609 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -327,6 +327,10 @@ static inline void free_handle(unsigned long handle)
zhdr->foreign_handles--;
is_free = true;
read_lock(&slots->lock);
+ if (!test_bit(HANDLES_ORPHANED, &slots->pool)) {
+ read_unlock(&slots->lock);
+ return;
+ }
for (i = 0; i <= BUDDY_MASK; i++) {
if (slots->slot[i]) {
is_free = false;
@@ -335,7 +339,7 @@ static inline void free_handle(unsigned long handle)
}
read_unlock(&slots->lock);
- if (is_free && test_and_clear_bit(HANDLES_ORPHANED, &slots->pool)) {
+ if (is_free) {
struct z3fold_pool *pool = slots_to_pool(slots);
kmem_cache_free(pool->c_handle, slots);
@@ -531,12 +535,12 @@ static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
break;
}
}
+ if (!is_free)
+ set_bit(HANDLES_ORPHANED, &zhdr->slots->pool);
read_unlock(&zhdr->slots->lock);
if (is_free)
kmem_cache_free(pool->c_handle, zhdr->slots);
- else
- set_bit(HANDLES_ORPHANED, &zhdr->slots->pool);
if (locked)
z3fold_page_unlock(zhdr);
--
2.17.1