Message-ID: <aK-iSiXtuaDj_fyW@dread.disaster.area>
Date: Thu, 28 Aug 2025 10:26:50 +1000
From: Dave Chinner <david@...morbit.com>
To: Josef Bacik <josef@...icpanda.com>
Cc: linux-fsdevel@...r.kernel.org, linux-btrfs@...r.kernel.org,
kernel-team@...com, linux-ext4@...r.kernel.org,
linux-xfs@...r.kernel.org, brauner@...nel.org,
viro@...iv.linux.org.uk, amir73il@...il.com
Subject: Re: [PATCH v2 53/54] fs: remove I_LRU_ISOLATING flag
On Tue, Aug 26, 2025 at 11:39:53AM -0400, Josef Bacik wrote:
> If the inode is on the LRU it has a full reference and thus no longer
> needs to be pinned while it is being isolated.
>
> Remove the I_LRU_ISOLATING flag and associated helper functions
> (inode_pin_lru_isolating, inode_unpin_lru_isolating, and
> inode_wait_for_lru_isolating) as they are no longer needed.
>
> Signed-off-by: Josef Bacik <josef@...icpanda.com>
....
> @@ -745,34 +742,32 @@ is_uncached_acl(struct posix_acl *acl)
> * I_CACHED_LRU Inode is cached because it is dirty or isn't shrinkable,
> * and thus is on the s_cached_inode_lru list.
> *
> - * __I_{SYNC,NEW,LRU_ISOLATING} are used to derive unique addresses to wait
> - * upon. There's one free address left.
> + * __I_{SYNC,NEW} are used to derive unique addresses to wait upon. There are
> + * two free addresses left.
> */
>
> enum inode_state_bits {
>         __I_NEW            = 0U,
> -       __I_SYNC           = 1U,
> -       __I_LRU_ISOLATING  = 2U
> +       __I_SYNC           = 1U
> };
>
> enum inode_state_flags_t {
>         I_NEW                = (1U << __I_NEW),
>         I_SYNC               = (1U << __I_SYNC),
> -       I_LRU_ISOLATING      = (1U << __I_LRU_ISOLATING),
> -       I_DIRTY_SYNC         = (1U << 3),
> -       I_DIRTY_DATASYNC     = (1U << 4),
> -       I_DIRTY_PAGES        = (1U << 5),
> -       I_CLEAR              = (1U << 6),
> -       I_LINKABLE           = (1U << 7),
> -       I_DIRTY_TIME         = (1U << 8),
> -       I_WB_SWITCH          = (1U << 9),
> -       I_OVL_INUSE          = (1U << 10),
> -       I_CREATING           = (1U << 11),
> -       I_DONTCACHE          = (1U << 12),
> -       I_SYNC_QUEUED        = (1U << 13),
> -       I_PINNING_NETFS_WB   = (1U << 14),
> -       I_LRU                = (1U << 15),
> -       I_CACHED_LRU         = (1U << 16)
> +       I_DIRTY_SYNC         = (1U << 2),
> +       I_DIRTY_DATASYNC     = (1U << 3),
> +       I_DIRTY_PAGES        = (1U << 4),
> +       I_CLEAR              = (1U << 5),
> +       I_LINKABLE           = (1U << 6),
> +       I_DIRTY_TIME         = (1U << 7),
> +       I_WB_SWITCH          = (1U << 8),
> +       I_OVL_INUSE          = (1U << 9),
> +       I_CREATING           = (1U << 10),
> +       I_DONTCACHE          = (1U << 11),
> +       I_SYNC_QUEUED        = (1U << 12),
> +       I_PINNING_NETFS_WB   = (1U << 13),
> +       I_LRU                = (1U << 14),
> +       I_CACHED_LRU         = (1U << 15)
> };
This is a bit of a mess - we should reserve the first 4 bits for the
waitable inode_state_bits right from the start and not renumber the
other flag bits into that range. i.e. start the first non-waitable
bit at bit 4. That way every time we add/remove a waitable bit, we
don't have to rewrite the entire set of flags, i.e. something like:
enum inode_state_flags_t {
        I_NEW                = (1U << __I_NEW),
        I_SYNC               = (1U << __I_SYNC),
        // waitable bit 2 unused
        // waitable bit 3 unused
        I_DIRTY_SYNC         = (1U << 4),
        ....
This will be much more blame-friendly if we do it this way from the
start of this patch set.
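
For concreteness, a full sketch of that layout (based purely on the
flag set in the quoted patch - illustrative only, not a tested
definition) would look something like:

/*
 * Bits 0-3 reserved for waitable state; everything else starts at
 * bit 4, so adding or removing a waitable bit never renumbers the
 * rest of the flags.
 */
enum inode_state_flags_t {
        I_NEW                = (1U << __I_NEW),
        I_SYNC               = (1U << __I_SYNC),
        /* waitable bits 2 and 3 currently unused */
        I_DIRTY_SYNC         = (1U << 4),
        I_DIRTY_DATASYNC     = (1U << 5),
        I_DIRTY_PAGES        = (1U << 6),
        I_CLEAR              = (1U << 7),
        I_LINKABLE           = (1U << 8),
        I_DIRTY_TIME         = (1U << 9),
        I_WB_SWITCH          = (1U << 10),
        I_OVL_INUSE          = (1U << 11),
        I_CREATING           = (1U << 12),
        I_DONTCACHE          = (1U << 13),
        I_SYNC_QUEUED        = (1U << 14),
        I_PINNING_NETFS_WB   = (1U << 15),
        I_LRU                = (1U << 16),
        I_CACHED_LRU         = (1U << 17)
};

That way a future waitable flag only consumes one of the reserved bits
and nothing below it needs to be renumbered.
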
-Dave.
--
Dave Chinner
david@...morbit.com