Message-ID: <ZkJ1PTW7V25ePbLF@redhat.com>
Date: Mon, 13 May 2024 15:17:01 -0500
From: David Teigland <teigland@...hat.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org, gfs2@...ts.linux.dev
Subject: [GIT PULL] dlm updates for 6.10
Hi Linus,
Please pull dlm updates from tag:
git://git.kernel.org/pub/scm/linux/kernel/git/teigland/linux-dlm.git dlm-6.10
This set includes some small fixes, and some big internal changes:
- Fix a long-standing race between the unlock callback for the last lkb
struct and removing the rsb that became unused after the final unlock.
This could leave different nodes with inconsistent info about the rsb
master node.
- Remove unnecessary refcounting on callback structs, returning to the way
things were done in the past.
- Do message processing in softirq context. This allows dlm messages to
be cleared more quickly and efficiently, reducing long lists of incomplete
requests. A future change to run callbacks directly from this context
will make this more effective.
- The softirq message processing involved a number of patches changing
mutexes to spinlocks and rwlocks, and a fair amount of code re-org in
preparation (a rough sketch of the locking pattern follows the list below).
- Use an rhashtable for rsb structs, rather than our old internal hash
table implementation. This also required some re-org of lists and locks
in preparation for the change (see the rhashtable sketch below).
- Drop the dlm_scand kthread, and use timers to clear unused rsb structs.
Scanning all rsbs periodically was a lot of wasted work (see the timer
sketch below).
- Fix recent regression in logic for copying LVB data in user space lock
requests.
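
To illustrate the mutex-to-spinlock conversions mentioned above, here is a
minimal, hypothetical sketch of the locking pattern that softirq message
processing requires (none of the names below are actual dlm code): data
shared with the receive path can no longer sit under a sleeping mutex, and
the process-context side has to disable bottom halves while holding the lock.

/* Hypothetical example only: process context vs. softirq context locking.
 * A mutex cannot be taken from softirq context, so data shared with the
 * receive path needs a spinlock, taken with _bh from process context. */
#include <linux/spinlock.h>
#include <linux/list.h>

static DEFINE_SPINLOCK(example_lock);
static LIST_HEAD(example_queue);

struct example_msg {
	struct list_head list;
};

/* Process context: block bottom halves while holding the lock, otherwise
 * a softirq interrupting this CPU could deadlock trying to take it. */
void example_queue_msg(struct example_msg *msg)
{
	spin_lock_bh(&example_lock);
	list_add_tail(&msg->list, &example_queue);
	spin_unlock_bh(&example_lock);
}

/* Softirq context: bottom halves are already disabled here, so a plain
 * spin_lock is sufficient. */
struct example_msg *example_dequeue_msg(void)
{
	struct example_msg *msg;

	spin_lock(&example_lock);
	msg = list_first_entry_or_null(&example_queue, struct example_msg, list);
	if (msg)
		list_del(&msg->list);
	spin_unlock(&example_lock);
	return msg;
}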
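
Similarly, a rough sketch of the generic rhashtable API that replaces the
old fixed-size hash table; the struct and field names are made up for
illustration, only the rhashtable calls themselves are real kernel API.

/* Hypothetical, simplified resource struct keyed by a fixed-length name
 * and stored in an rhashtable instead of an open-coded hash table. */
#include <linux/rhashtable.h>
#include <linux/stddef.h>

#define EXAMPLE_NAME_LEN 64

struct example_rsb {
	char name[EXAMPLE_NAME_LEN];
	struct rhash_head node;
};

static const struct rhashtable_params example_params = {
	.key_len	= EXAMPLE_NAME_LEN,
	.key_offset	= offsetof(struct example_rsb, name),
	.head_offset	= offsetof(struct example_rsb, node),
	.automatic_shrinking = true,
};

static struct rhashtable example_ht;

int example_table_init(void)
{
	return rhashtable_init(&example_ht, &example_params);
}

int example_insert(struct example_rsb *r)
{
	return rhashtable_insert_fast(&example_ht, &r->node, example_params);
}

/* name must point to EXAMPLE_NAME_LEN bytes of key material */
struct example_rsb *example_lookup(const char *name)
{
	return rhashtable_lookup_fast(&example_ht, name, example_params);
}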
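
And a sketch of the timer pattern that replaces a periodic scanning kthread;
again, the names are hypothetical. Instead of a thread waking up to walk
every rsb, a timer can be armed for when the next unused entry should
expire, with the actual freeing deferred to process context.

/* Hypothetical replacement of a periodic scanning kthread with a timer
 * that is armed only while something is waiting to expire. */
#include <linux/timer.h>
#include <linux/workqueue.h>

struct example_ls {
	struct timer_list scan_timer;
	struct work_struct scan_work;
};

static void example_scan_work(struct work_struct *work)
{
	/* free unused entries here; runs in process context */
}

/* The timer callback runs in softirq context, so defer the real work. */
static void example_scan_timer_fn(struct timer_list *t)
{
	struct example_ls *ls = from_timer(ls, t, scan_timer);

	schedule_work(&ls->scan_work);
}

void example_ls_init(struct example_ls *ls)
{
	INIT_WORK(&ls->scan_work, example_scan_work);
	timer_setup(&ls->scan_timer, example_scan_timer_fn, 0);
}

/* Arm (or re-arm) the timer for when the next entry becomes reclaimable. */
void example_schedule_scan(struct example_ls *ls, unsigned long expire_jiffies)
{
	mod_timer(&ls->scan_timer, expire_jiffies);
}
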
Thanks,
Dave
Alexander Aring (32):
dlm: fix user space lock decision to copy lvb
dlm: remove lkb from callback tracepoints
dlm: remove callback queue debugfs functionality
dlm: save callback debug info earlier
dlm: combine switch case fail and default statements
dlm: fix race between final callback and remove
dlm: remove callback reference counting
dlm: remove allocation parameter in msg allocation
dlm: switch to GFP_ATOMIC in dlm allocations
dlm: move root_list functionality to recover.c
dlm: use a new list for recovery of master rsb names
dlm: move rsb root_list to ls_recover() stack
dlm: add new struct to save position in dlm_copy_master_names
dlm: drop mutex use in waiters recovery
dlm: convert ls_waiters_mutex to spinlock
dlm: convert res_lock to spinlock
dlm: avoid blocking receive at the end of recovery
dlm: convert ls_recv_active from rw_semaphore to rwlock
dlm: remove schedule in receive path
dlm: use spin_lock_bh for message processing
dlm: do message processing in softirq context
dlm: increment ls_count for dlm_scand
dlm: change to single hashtable lock
dlm: merge toss and keep hash table lists into one list
dlm: add rsb lists for iteration
dlm: switch to use rhashtable for rsbs
dlm: do not use ref counts for rsb in the toss state
dlm: drop dlm_scand kthread and use timers
dlm: use rwlock for rsb hash table
dlm: use rwlock for lkbidr
dlm: fix sleep in atomic context
dlm: return -ENOMEM if ls_recover_buf fails
Kunwu Chan (2):
dlm: Simplify the allocation of slab caches in dlm_midcomms_cache_create
dlm: Simplify the allocation of slab caches in dlm_lowcomms_msg_cache_create
fs/dlm/ast.c | 216 ++++-----
fs/dlm/ast.h | 13 +-
fs/dlm/config.c | 8 +
fs/dlm/config.h | 2 +
fs/dlm/debug_fs.c | 323 +++-----------
fs/dlm/dir.c | 157 +++++--
fs/dlm/dir.h | 3 +-
fs/dlm/dlm_internal.h | 129 +++---
fs/dlm/lock.c | 1068 +++++++++++++++++++++++++-------------------
fs/dlm/lock.h | 12 +-
fs/dlm/lockspace.c | 212 +++------
fs/dlm/lowcomms.c | 62 +--
fs/dlm/lowcomms.h | 5 +-
fs/dlm/member.c | 25 +-
fs/dlm/memory.c | 18 +-
fs/dlm/memory.h | 4 +-
fs/dlm/midcomms.c | 67 ++-
fs/dlm/midcomms.h | 3 +-
fs/dlm/rcom.c | 33 +-
fs/dlm/recover.c | 149 ++----
fs/dlm/recover.h | 10 +-
fs/dlm/recoverd.c | 142 ++++--
fs/dlm/requestqueue.c | 43 +-
fs/dlm/user.c | 135 ++----
include/trace/events/dlm.h | 46 +-
25 files changed, 1379 insertions(+), 1506 deletions(-)