Message-Id: <20211115185909.3949505-1-minchan@kernel.org>
Date: Mon, 15 Nov 2021 10:59:00 -0800
From: Minchan Kim <minchan@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Minchan Kim <minchan@...nel.org>
Subject: [PATCH v2 0/9] zsmalloc: remove bit_spin_lock
zsmalloc has used a bit_spin_lock to minimize space overhead, since it
is a per-zspage lock. However, this makes zsmalloc unusable under
PREEMPT_RT and adds considerable complication.
This patchset replaces the bit_spin_lock with a per-pool rwlock. It
also removes the unnecessary zspage isolation logic from the size
class, which was another source of excessive complication in zsmalloc.
The last patch changes get_cpu_var to local_lock so that the code
works under PREEMPT_RT.
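A rough sketch of what that conversion looks like, assuming a per-CPU
mapping area like the one zs_map_object uses (the structure and field
names here are illustrative):

/*
 * Illustrative sketch only: replacing get_cpu_var() with a local_lock
 * so the per-CPU critical section remains valid under PREEMPT_RT.
 * Structure and field names are assumptions for illustration.
 */
#include <linux/local_lock.h>
#include <linux/percpu.h>

struct mapping_area {
	local_lock_t lock;
	char *vm_addr;			/* per-CPU scratch mapping */
};

static DEFINE_PER_CPU(struct mapping_area, zs_map_area) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static void map_area_example(void)
{
	struct mapping_area *area;

	/* Before: area = &get_cpu_var(zs_map_area); disables preemption. */
	local_lock(&zs_map_area.lock);
	area = this_cpu_ptr(&zs_map_area);

	/* ... copy the object through area->vm_addr ... */

	/* Before: put_cpu_var(zs_map_area); */
	local_unlock(&zs_map_area.lock);
}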
Mike Galbraith (1):
zsmalloc: replace get_cpu_var with local_lock
Minchan Kim (8):
zsmalloc: introduce some helper functions
zsmalloc: rename zs_stat_type to class_stat_type
zsmalloc: decouple class actions from zspage works
zsmalloc: introduce obj_allocated
zsmalloc: move huge compressed obj from page to zspage
zsmalloc: remove zspage isolation for migration
locking/rwlocks: introduce write_lock_nested
zsmalloc: replace per zpage lock with pool->migrate_lock
include/linux/rwlock.h | 6 +
include/linux/rwlock_api_smp.h | 9 +
include/linux/rwlock_rt.h | 6 +
include/linux/spinlock_api_up.h | 1 +
kernel/locking/spinlock.c | 6 +
kernel/locking/spinlock_rt.c | 12 +
mm/zsmalloc.c | 529 ++++++++++++--------------------
7 files changed, 228 insertions(+), 341 deletions(-)
--
* from v1 - https://lore.kernel.org/linux-mm/20211110185433.1981097-1-minchan@kernel.org/
* add write_lock_nested for rwlock
* change the From: line to "Mike Galbraith" - bigeasy@
2.34.0.rc1.387.gb447b232ab-goog