Message-ID: <CAHpGcMLWgiJ9-9onrYPowoMMgDHbsyLSyZ2wXp4RSRtmUeTgEw@mail.gmail.com>
Date: Tue, 22 Dec 2015 13:20:58 +0100
From: Andreas Grünbacher <andreas.gruenbacher@...il.com>
To: Jan Kara <jack@...e.cz>
Cc: Ted Tso <tytso@....edu>, linux-ext4@...r.kernel.org,
Laurent GUERBY <laurent@...rby.net>,
Andreas Dilger <adilger@...ger.ca>
Subject: Re: [PATCH 1/6] mbcache2: Reimplement mbcache
2015-12-16 16:52 GMT+01:00 Jan Kara <jack@...e.cz>:
> On Tue 15-12-15 12:08:09, Jan Kara wrote:
>> > > +/*
>> > > + * mb2_cache_entry_delete - delete entry from cache
>> > > + * @cache - cache where the entry is
>> > > + * @entry - entry to delete
>> > > + *
>> > > + * Delete entry from cache. The entry is unhashed and deleted from the lru list
>> > > + * so it cannot be found. We also drop the reference to @entry that the caller
>> > > + * gave us. However, the entry need not be freed if someone else is still
>> > > + * holding a reference to it. Freeing happens when the last reference is dropped.
>> > > + */
>> > > +void mb2_cache_entry_delete(struct mb2_cache *cache,
>> > > + struct mb2_cache_entry *entry)
>> >
>> > This function should become static; there are no external users.
>>
>> It's actually completely unused. But if we end up removing entries for
>> blocks whose refcount has hit the maximum, then it will be used by the fs.
>> Thinking about removal of entries with max refcount, the slight complication
>> is that when the refcount decreases again, we won't reinsert the entry into
>> the cache unless someone calls listxattr or getxattr for an inode using that
>> block. So we'll probably need some more complex logic to avoid this.
>>
>> I'll first gather some statistics on the lengths of hash chains and hash
>> chain scanning when there are few unique xattrs to see whether the
>> complexity is worth it.
>
> So I did some experiments observing the length of hash chains when there are
> lots of identical xattr blocks. Indeed, hash chains get rather long in that
> case, as you expected - for F files having V different xattr blocks, the hash
> chain length is around F/V/1024.
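
(For a sense of scale, and purely as an illustration of that formula: with
F = 1,000,000 files spread over V = 10 distinct xattr blocks, that is
1,000,000 / 10 / 1024, i.e. chains of roughly a hundred entries.)
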
>
> I've also implemented logic that removes an entry from the cache when the
> refcount of its xattr block reaches the maximum and adds it back when the
> refcount drops. But this doesn't make hash chains significantly shorter,
> because most xattr blocks end up close to the maximum refcount but not quite
> at it (the benchmark ends up adding & removing references to blocks mostly
> randomly).
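
To make sure we're talking about the same logic, here is roughly how I read
it - just a sketch, not your patch: mb2_cache_entry_delete() is the function
quoted above, but the hook points, the use of ext4_xattr_cache_insert(), and
where exactly the refcount gets checked are my assumptions:

/*
 * Sketch: after ext4 takes another reference on a shared xattr block,
 * drop the cache entry once the block cannot be reused any more, so it
 * stops lengthening the hash chain.  @entry is a reference the caller
 * already holds.
 */
static void xattr_block_ref_taken(struct mb2_cache *cache,
				  struct mb2_cache_entry *entry,
				  struct buffer_head *bh)
{
	if (le32_to_cpu(BHDR(bh)->h_refcount) >= EXT4_XATTR_REFCOUNT_MAX)
		mb2_cache_entry_delete(cache, entry);
}

/*
 * Sketch: after ext4 drops a reference, the block is reusable again, so
 * put it back into the cache right here.  Re-inserting on this
 * transition is what avoids having to wait for a later getxattr or
 * listxattr on one of the inodes - the complication mentioned above.
 */
static void xattr_block_ref_dropped(struct mb2_cache *cache,
				    struct buffer_head *bh)
{
	if (le32_to_cpu(BHDR(bh)->h_refcount) == EXT4_XATTR_REFCOUNT_MAX - 1)
		ext4_xattr_cache_insert(cache, bh);
}

The names are made up; the point is only that both refcount transitions would
need a hook in the xattr block update path.
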
>
> That made me realize that any strategy based solely on xattr block refcount
> isn't going to significantly improve the situation.
That test scenario probably isn't very realistic: xattrs are mostly
initialized at or immediately after file create time, and they are rarely
removed. Hash chains should shrink significantly for that scenario.
In addition, if the hash table is sized reasonably, long hash chains
won't hurt that much because we can stop searching them as soon as we
find the first reusable block. This won't help when there are hash
conflicts, but those should be unlikely.
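
To illustrate what I mean by stopping early - a sketch only; the
mb2_cache_entry_find_first()/_next() iterators and the e_block field are what
I assume this series provides, and dropping the reference on the cache entry
is omitted for brevity:

static struct buffer_head *
find_reusable_xattr_block(struct super_block *sb, struct mb2_cache *cache,
			  u32 hash)
{
	struct mb2_cache_entry *ce;

	for (ce = mb2_cache_entry_find_first(cache, hash); ce != NULL;
	     ce = mb2_cache_entry_find_next(cache, ce)) {
		struct buffer_head *bh = sb_bread(sb, ce->e_block);

		if (!bh)
			continue;
		/*
		 * The first block with room left in its refcount wins;
		 * there is no reason to walk the rest of the chain.
		 * (Comparing block contents, to deal with the hash
		 * conflicts mentioned above, is left out here.)
		 */
		if (le32_to_cpu(BHDR(bh)->h_refcount) < EXT4_XATTR_REFCOUNT_MAX)
			return bh;
		brelse(bh);
	}
	return NULL;
}
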
We are currently using a predictable hash algorithm, so attacks on the
hash table are possible; it's probably not worth protecting against
that, though.
> What we'd have to do is something like making sure that we cache only
> one xattr block with given contents.
No, when that one cached block reaches its maximum refcount, we would
have to allocate another block because we didn't cache the other
identical, reusable blocks; this would hurt significantly.
> However that would make insertions more costly as we'd have to
> compare full xattr blocks for duplicates instead of just hashes.
I don't understand - why would we turn to comparing blocks?
Thanks,
Andreas