Date:	Wed, 16 Dec 2015 16:52:09 +0100
From:	Jan Kara <jack@...e.cz>
To:	Andreas Grünbacher 
	<andreas.gruenbacher@...il.com>
Cc:	Jan Kara <jack@...e.cz>, Ted Tso <tytso@....edu>,
	linux-ext4@...r.kernel.org, Laurent GUERBY <laurent@...rby.net>,
	Andreas Dilger <adilger@...ger.ca>
Subject: Re: [PATCH 1/6] mbcache2: Reimplement mbcache

On Tue 15-12-15 12:08:09, Jan Kara wrote:
> > > +/*
> > > + * mb2_cache_entry_delete - delete entry from cache
> > > + * @cache - cache where the entry is
> > > + * @entry - entry to delete
> > > + *
> > > + * Delete entry from cache. The entry is unhashed and deleted from the lru list
> > > + * so it cannot be found. We also drop the reference to @entry caller gave us.
> > > + * However entry need not be freed if there's someone else still holding a
> > > + * reference to it. Freeing happens when the last reference is dropped.
> > > + */
> > > +void mb2_cache_entry_delete(struct mb2_cache *cache,
> > > +                           struct mb2_cache_entry *entry)
> > 
> > This function should become static; there are no external users.
> 
> It's actually completely unused. But if we end up removing entries for
> blocks where refcount hit maximum, then it will be used by the fs. Thinking
> about removal of entries with max refcount, the slight complication is that
> when refcount decreases again, we won't insert the entry in cache unless
> someone calls listattr or getattr for inode with that block. So we'll
> probably need some more complex logic to avoid this.
> 
> I'll first gather some statistics on the lengths of hash chains and hash
> chain scanning when there are few unique xattrs to see whether the
> complexity is worth it.

So I did some experiments observing the length of hash chains when there are
lots of identical xattr blocks. Indeed hash chains get rather long in that
case, as you expected - for F files sharing V distinct xattr blocks, the hash
chain length is around F/V/1024 as expected (each physical xattr block can be
shared by at most 1024 files, and the entries for identical blocks all land
in the same chain).

I've also implemented logic that removes an entry from the cache when the
refcount of its xattr block reaches the maximum and adds it back when the
refcount drops. But this doesn't make hash chains significantly shorter,
because most xattr blocks end up close to the maximum refcount without quite
reaching it (the benchmark adds & removes references to blocks mostly at
random).

That made me realize that any strategy based solely on the xattr block
refcount isn't going to significantly improve the situation. What we'd have
to do is something like making sure that we cache only one xattr block with
given contents. However, that would make insertions more costly, as we'd
have to compare full xattr blocks for duplicates instead of just their
hashes.

So overall I don't think optimizing this case is really worth it for now.
If we see some real world situation where this matters, we can reconsider
the decision.

								Honza
-- 
Jan Kara <jack@...e.com>
SUSE Labs, CR