Message-ID: <CAGudoHH-asEyPj7CUNF+ApVhRoG1C4tmQYuko1SLNQ0o-LXaaw@mail.gmail.com>
Date: Wed, 20 Nov 2024 20:47:50 +0100
From: Mateusz Guzik <mjguzik@...il.com>
To: Christian Brauner <brauner@...nel.org>
Cc: viro@...iv.linux.org.uk, jack@...e.cz, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, hughd@...gle.com, linux-ext4@...r.kernel.org,
tytso@....edu, linux-mm@...ck.org
Subject: Re: [PATCH v2 0/3] symlink length caching
On Wed, Nov 20, 2024 at 12:13 PM Christian Brauner <brauner@...nel.org> wrote:
>
> On Wed, Nov 20, 2024 at 11:42:33AM +0100, Mateusz Guzik wrote:
> > Interestingly, even __read_seqcount_begin (used *twice* in path_init())
> > is missing one. I sent a patch to fix it a long time ago, but the
> > recipient did not respond.
>
> I snatched it.
Thanks.
But I have to say that having *two* counters to check on each lookup is
bothering me, and it makes me wonder whether they could be unified (or
another counter added to cover both of them). No clue about feasibility;
is there a known showstopper?
Both are defined like so:
__cacheline_aligned_in_smp DEFINE_SEQLOCK(mount_lock);
__cacheline_aligned_in_smp DEFINE_SEQLOCK(rename_lock);
Suppose nothing can be done to check only one counter on lookup.
In that case, how about at least combining the suckers into one
cacheline? Sure, this will result in new bounces for threads modifying
them, but modifications are relatively infrequent compared to how often
lookups are performed, and with the two slapped together only one line
is spent on them instead of two.
Just RFC'ing it here.
--
Mateusz Guzik <mjguzik gmail.com>