Message-ID: <20160503042902.GA25545@swordfish>
Date: Tue, 3 May 2016 13:29:02 +0900
From: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To: Minchan Kim <minchan@...nel.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] zram: use per-cpu compression streams
On (05/03/16 11:30), Sergey Senozhatsky wrote:
> > We are concerned about returning to the no-per-cpu option, but actually
> > I don't want that. If duplicate compression really turns out to be a
> > problem (which is really unlikely), we should first try to solve that
> > problem in a different way rather than roll back to the old code.
> >
> > I hope we can. So let's not worry too much about adding a new dup stat. :)
>
> ok, no prob. do you want it as a separate sysfs node or a column in mm_stat?
> I'd prefer an mm_stat column, or somewhere in those cumulative files; not a
> dedicated node: we decided to get rid of those some time ago.
>
will the io_stat node work for you?
I'll submit a formal patch later today. when you have time, can you
take a look at http://marc.info/?l=linux-kernel&m=146217628030970 ?
I think we can fold this one into 0002. it will make 0002 slightly
bigger, but there's nothing complicated in there, just cleanup.
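for reference, a minimal userspace sketch (not part of the patch) of how the
new counter could be read back; it assumes device zram0 and the column order
produced by io_stat_show() below (failed_reads failed_writes invalid_io
notify_free num_recompress):

#include <stdio.h>

int main(void)
{
	unsigned long long failed_reads, failed_writes, invalid_io;
	unsigned long long notify_free, num_recompress;
	/* io_stat is a whitespace-separated list of cumulative counters */
	FILE *f = fopen("/sys/block/zram0/io_stat", "r");

	if (!f) {
		perror("io_stat");
		return 1;
	}
	if (fscanf(f, "%llu %llu %llu %llu %llu",
		   &failed_reads, &failed_writes, &invalid_io,
		   &notify_free, &num_recompress) != 5) {
		fprintf(stderr, "unexpected io_stat format\n");
		fclose(f);
		return 1;
	}
	fclose(f);
	printf("num_recompress: %llu\n", num_recompress);
	return 0;
}

a num_recompress that keeps growing under heavy write load would tell us the
fast allocation path fails often and pages get re-compressed via the slow path.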
====
From: Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Subject: [PATCH] zram: export the number of re-compressions
Make the number of re-compressions visible via the io_stat node,
so we will be able to track down any issues caused by per-cpu
compression streams.
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Suggested-by: Minchan Kim <minchan@...nel.org>
---
Documentation/blockdev/zram.txt | 3 +++
drivers/block/zram/zram_drv.c | 7 +++++--
drivers/block/zram/zram_drv.h | 1 +
3 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/Documentation/blockdev/zram.txt b/Documentation/blockdev/zram.txt
index 5bda503..386d260 100644
--- a/Documentation/blockdev/zram.txt
+++ b/Documentation/blockdev/zram.txt
@@ -183,6 +183,8 @@ mem_limit RW the maximum amount of memory ZRAM can use to store
pages_compacted RO the number of pages freed during compaction
(available only via zram<id>/mm_stat node)
compact WO trigger memory compaction
+num_recompress RO the number of times fast compression paths failed
+ and zram performed re-compression via a slow path
WARNING
=======
@@ -215,6 +217,7 @@ whitespace:
failed_writes
invalid_io
notify_free
+ num_recompress
File /sys/block/zram<id>/mm_stat
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 817e511..11b19c9 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -395,7 +395,8 @@ static ssize_t io_stat_show(struct device *dev,
(u64)atomic64_read(&zram->stats.failed_reads),
(u64)atomic64_read(&zram->stats.failed_writes),
(u64)atomic64_read(&zram->stats.invalid_io),
- (u64)atomic64_read(&zram->stats.notify_free));
+ (u64)atomic64_read(&zram->stats.notify_free),
+ (u64)atomic64_read(&zram->stats.num_recompress));
up_read(&zram->init_lock);
return ret;
@@ -721,8 +722,10 @@ compress_again:
handle = zs_malloc(meta->mem_pool, clen,
GFP_NOIO | __GFP_HIGHMEM);
- if (handle)
+ if (handle) {
+ atomic64_inc(&zram->stats.num_recompress);
goto compress_again;
+ }
pr_err("Error allocating memory for compressed page: %u, size=%zu\n",
index, clen);
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index 06b1636..78d7e8f 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -85,6 +85,7 @@ struct zram_stats {
atomic64_t zero_pages; /* no. of zero filled pages */
atomic64_t pages_stored; /* no. of pages currently stored */
atomic_long_t max_used_pages; /* no. of maximum pages stored */
+ atomic64_t num_recompress; /* no. of failed compression fast paths */
};
struct zram_meta {
--
2.8.2