Date:   Thu, 16 Jun 2022 19:28:10 +0200
From:   Jan Kara <jack@...e.cz>
To:     Ritesh Harjani <ritesh.list@...il.com>
Cc:     Jan Kara <jack@...e.cz>, Ted Tso <tytso@....edu>,
        linux-ext4@...r.kernel.org, stable@...r.kernel.org
Subject: Re: [PATCH 02/10] mbcache: Add functions to delete entry if unused

On Thu 16-06-22 20:17:22, Ritesh Harjani wrote:
> On 22/06/14 06:05PM, Jan Kara wrote:
> > Add function mb_cache_entry_try_delete() to delete an mbcache entry if it
> > is unused, and also add a function to wait for an entry to become unused:
> > mb_cache_entry_wait_unused(). We do not share code between the two
> > deleting functions as one of them will go away soon.
> >
> > CC: stable@...r.kernel.org
> > Fixes: 82939d7999df ("ext4: convert to mbcache2")
> > Signed-off-by: Jan Kara <jack@...e.cz>

...

> > +/* mb_cache_entry_try_delete - try to remove a cache entry
> > + * @cache - cache we work with
> > + * @key - key
> > + * @value - value
> > + *
> > + * Remove entry from cache @cache with key @key and value @value. The removal
> > + * happens only if the entry is unused. The function returns NULL in case the
> > + * entry was successfully removed or there's no entry in cache. Otherwise the
> > + * function returns the entry that we failed to delete because it has users.
> 
> "...Also increment its refcnt if the entry is returned, so the caller is
> responsible for calling mb_cache_entry_put() later."

Definitely, I'll expand the comment.
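
For illustration, the expected caller pattern is roughly the following (just a
sketch, not code from this series; the cache/hash/value variables are made up
for the example):

	entry = mb_cache_entry_try_delete(cache, hash, value);
	if (entry) {
		/*
		 * Somebody still uses the entry: its refcnt was bumped for
		 * us, so wait until the last user is gone and then drop the
		 * reference we were handed back.
		 */
		mb_cache_entry_wait_unused(entry);
		mb_cache_entry_put(cache, entry);
	}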

> Do you think such a comment should be added too? And should the API name be
> mb_cache_entry_try_delete_or_get()?
> 
> Looks like the e_refcnt increment is done quietly when the entry is found,
> whereas the API name just says mb_cache_entry_try_delete().

It's a bit of a long name, but I agree it describes the function better. OK,
let's rename.
 
> Other than that, the code logic looks right to me.

Thanks for review!

								Honza


> > + */
> > +struct mb_cache_entry *mb_cache_entry_try_delete(struct mb_cache *cache,
> > +						 u32 key, u64 value)
> > +{
> > +	struct hlist_bl_node *node;
> > +	struct hlist_bl_head *head;
> > +	struct mb_cache_entry *entry;
> > +
> > +	head = mb_cache_entry_head(cache, key);
> > +	hlist_bl_lock(head);
> > +	hlist_bl_for_each_entry(entry, node, head, e_hash_list) {
> > +		if (entry->e_key == key && entry->e_value == value) {
> > +			if (atomic_read(&entry->e_refcnt) > 2) {
> > +				atomic_inc(&entry->e_refcnt);
> > +				hlist_bl_unlock(head);
> > +				return entry;
> > +			}
> > +			/* We keep hash list reference to keep entry alive */
> > +			hlist_bl_del_init(&entry->e_hash_list);
> > +			hlist_bl_unlock(head);
> > +			spin_lock(&cache->c_list_lock);
> > +			if (!list_empty(&entry->e_list)) {
> > +				list_del_init(&entry->e_list);
> > +				if (!WARN_ONCE(cache->c_entry_count == 0,
> > +		"mbcache: attempt to decrement c_entry_count past zero"))
> > +					cache->c_entry_count--;
> > +				atomic_dec(&entry->e_refcnt);
> > +			}
> > +			spin_unlock(&cache->c_list_lock);
> > +			mb_cache_entry_put(cache, entry);
> > +			return NULL;
> > +		}
> > +	}
> > +	hlist_bl_unlock(head);
> > +
> > +	return NULL;
> > +}
> > +EXPORT_SYMBOL(mb_cache_entry_try_delete);
> > +
> >  /* mb_cache_entry_touch - cache entry got used
> >   * @cache - cache the entry belongs to
> >   * @entry - entry that got used
> > diff --git a/include/linux/mbcache.h b/include/linux/mbcache.h
> > index 20f1e3ff6013..1176fdfb8d53 100644
> > --- a/include/linux/mbcache.h
> > +++ b/include/linux/mbcache.h
> > @@ -30,15 +30,23 @@ void mb_cache_destroy(struct mb_cache *cache);
> >  int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
> >  			  u64 value, bool reusable);
> >  void __mb_cache_entry_free(struct mb_cache_entry *entry);
> > +void mb_cache_entry_wait_unused(struct mb_cache_entry *entry);
> >  static inline int mb_cache_entry_put(struct mb_cache *cache,
> >  				     struct mb_cache_entry *entry)
> >  {
> > -	if (!atomic_dec_and_test(&entry->e_refcnt))
> > +	unsigned int cnt = atomic_dec_return(&entry->e_refcnt);
> > +
> > +	if (cnt > 0) {
> > +		if (cnt <= 3)
> > +			wake_up_var(&entry->e_refcnt);
> >  		return 0;
> > +	}
> >  	__mb_cache_entry_free(entry);
> >  	return 1;
> >  }
> >
> > +struct mb_cache_entry *mb_cache_entry_try_delete(struct mb_cache *cache,
> > +						 u32 key, u64 value);
> >  void mb_cache_entry_delete(struct mb_cache *cache, u32 key, u64 value);
> >  struct mb_cache_entry *mb_cache_entry_get(struct mb_cache *cache, u32 key,
> >  					  u64 value);
> > --
> > 2.35.3
> >
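
As an aside on the mb_cache_entry_put() change quoted above: the wake_up_var()
once cnt drops to 3 or below is the wake-up side for
mb_cache_entry_wait_unused(), whose body is not shown in this excerpt. A
minimal sketch of that waiter, assuming it is built on wait_var_event() keyed
on e_refcnt, would look like:

	/*
	 * Sketch only, not the patch code: wait until nobody but the hash
	 * list, the LRU list and the waiter itself holds a reference, i.e.
	 * e_refcnt has dropped to 3 or below.
	 */
	void mb_cache_entry_wait_unused(struct mb_cache_entry *entry)
	{
		wait_var_event(&entry->e_refcnt,
			       atomic_read(&entry->e_refcnt) <= 3);
	}
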
-- 
Jan Kara <jack@...e.com>
SUSE Labs, CR
