Message-ID: <20190724040118.GA31214@sultan-box.localdomain>
Date: Tue, 23 Jul 2019 22:01:18 -0600
From: Sultan Alsawaf <sultan@...neltoast.com>
To: Andreas Dilger <adilger@...ger.ca>
Cc: Alexander Viro <viro@...iv.linux.org.uk>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mbcache: Speed up cache entry creation
On Tue, Jul 23, 2019 at 10:56:05AM -0600, Andreas Dilger wrote:
> Do you have any kind of performance metrics that show this is an actual
> improvement in performance? This would be either macro-level benchmarks
> (e.g. fio, but this seems unlikely to show any benefit), or micro-level
> measurements (e.g. flame graph) that show a net reduction in CPU cycles,
> lock contention, etc. in this part of the code.

Hi Andreas,

Here are some basic micro-benchmark results:

Before:
[ 3.162896] mb_cache_entry_create: AVG cycles: 75
[ 3.054701] mb_cache_entry_create: AVG cycles: 78
[ 3.152321] mb_cache_entry_create: AVG cycles: 77

After:
[ 3.043380] mb_cache_entry_create: AVG cycles: 68
[ 3.194321] mb_cache_entry_create: AVG cycles: 71
[ 3.038100] mb_cache_entry_create: AVG cycles: 69

The performance difference is probably more drastic when free memory is low,
since an unnecessary call to kmem_cache_alloc() can result in a long wait for
pages to be freed.

The micro-benchmark code is attached.

Thanks,
Sultan
---
diff --git a/fs/mbcache.c b/fs/mbcache.c
index 289f3664061e..e0f22ff8fab8 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -82,7 +82,7 @@ static inline struct mb_bucket *mb_cache_entry_bucket(struct mb_cache *cache,
  * -EBUSY if entry with the same key and value already exists in cache.
  * Otherwise 0 is returned.
  */
-int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
+static int __mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
 			  u64 value, bool reusable)
 {
 	struct mb_cache_entry *entry, *dup;
@@ -148,6 +148,29 @@ int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
 	return 0;
 }
+
+int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
+			  u64 value, bool reusable)
+{
+	static unsigned long count, sum;
+	static DEFINE_MUTEX(lock);
+	volatile cycles_t start, delta;
+	int ret;
+
+	mutex_lock(&lock);
+	local_irq_disable();
+	start = get_cycles();
+	ret = __mb_cache_entry_create(cache, mask, key, value, reusable);
+	delta = get_cycles() - start;
+	local_irq_enable();
+
+	sum += delta;
+	if (++count == 1000)
+		printk("%s: AVG cycles: %lu\n", __func__, sum / count);
+	mutex_unlock(&lock);
+
+	return ret;
+}
 EXPORT_SYMBOL(mb_cache_entry_create);
 
 void __mb_cache_entry_free(struct mb_cache_entry *entry)