Message-ID: <20160503050348.GA17316@bbox>
Date: Tue, 3 May 2016 14:03:48 +0900
From: Minchan Kim <minchan@...nel.org>
To: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] zram: use per-cpu compression streams
On Tue, May 03, 2016 at 01:29:02PM +0900, Sergey Senozhatsky wrote:
> On (05/03/16 11:30), Sergey Senozhatsky wrote:
> > > We are concerned about going back to no per-cpu streams, but actually
> > > I don't want that. If duplicate compression really is a problem (but it's
> > > really unlikely), we should first try to solve that problem itself in a
> > > different way rather than roll back to the old code.
> > >
> > > I hope we can. So let's not worry too much about adding the new dup stat. :)
> >
> > ok, no prob. do you want it as a separate sysfs node or a column in mm_stat?
> > I'd prefer an mm_stat column, or somewhere in those cumulative files; not a
> > dedicated node: we decided to get rid of those some time ago.
> >
>
> will the io_stat node work for you?
At first I thought io_stat would be the better place, but on second
thought I'd like to withdraw that. I think io_stat should go away.
failed_reads
failed_writes
invalid_io

I think the above are really unneeded. If something fails, the layer on
top of zram, for example a filesystem or swap, should emit the warning,
so I don't think we need to maintain these counters in the zram layer.
notify_free

From the block device's point of view this is a kind of discard command,
so I think the generic block layer should account for it just like reads
and writes. If the block layer does that, the only thing left for
notify_free in zram is zram_slot_free_notify, so I think we can move it
from io_stat to mm_stat, because it is about memory, not block I/O.
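For reference, this is roughly what zram_slot_free_notify() looks like
(quoting from memory, so details may be off): it only frees a compressed
page, there is nothing block-I/O specific about it:

static void zram_slot_free_notify(struct block_device *bdev,
				unsigned long index)
{
	struct zram *zram = bdev->bd_disk->private_data;
	struct zram_meta *meta = zram->meta;

	bit_spin_lock(ZRAM_ACCESS, &meta->table[index].value);
	zram_free_page(zram, index);		/* pure memory management */
	bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
	atomic64_inc(&zram->stats.notify_free);	/* fits with the mm stats */
}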
With the above in mind, I suggest we don't add anything more to io_stat
from now on and remove it at some point. Instead, let's add the new dup
stat. What do you think?
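To show the direction I have in mind (a completely untested sketch,
written from memory; assuming mm_stat is the place, and the column
names/positions are only a suggestion), mm_stat_show() would grow two
columns: the notify_free counter moved over from io_stat and the new
dup stat from your patch below:

static ssize_t mm_stat_show(struct device *dev,
		struct device_attribute *attr, char *buf)
{
	struct zram *zram = dev_to_zram(dev);
	struct zs_pool_stats pool_stats;
	u64 orig_size, mem_used = 0;
	long max_used;
	ssize_t ret;

	memset(&pool_stats, 0x00, sizeof(struct zs_pool_stats));

	down_read(&zram->init_lock);
	if (init_done(zram)) {
		mem_used = zs_get_total_pages(zram->meta->mem_pool);
		zs_pool_stats(zram->meta->mem_pool, &pool_stats);
	}

	orig_size = atomic64_read(&zram->stats.pages_stored);
	max_used = atomic_long_read(&zram->stats.max_used_pages);

	ret = scnprintf(buf, PAGE_SIZE,
			"%8llu %8llu %8llu %8lu %8ld %8llu %8lu %8llu %8llu\n",
			orig_size << PAGE_SHIFT,
			(u64)atomic64_read(&zram->stats.compr_data_size),
			mem_used << PAGE_SHIFT,
			zram->limit_pages << PAGE_SHIFT,
			max_used << PAGE_SHIFT,
			(u64)atomic64_read(&zram->stats.zero_pages),
			pool_stats.pages_compacted,
			/* moved from io_stat: slot-free notifications */
			(u64)atomic64_read(&zram->stats.notify_free),
			/* new dup stat: re-compressions after a fast-path
			 * allocation failure */
			(u64)atomic64_read(&zram->stats.num_recompress));
	up_read(&zram->init_lock);

	return ret;
}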
> I'll submit a formal patch later today. When you have time, can you
> take a look at http://marc.info/?l=linux-kernel&m=146217628030970 ?
Oops, sorry, I missed that. I will take a look.
> I think we can fold this one into 0002. it will make 0002 slightly
> bigger, but there's nothing complicated in there, just cleanup.
>
> ====
>
> From: Sergey Senozhatsky <sergey.senozhatsky@...il.com>
> Subject: [PATCH] zram: export the number of re-compressions
>
> Make the number of re-compressions visible via the io_stat node,
> so we will be able to track down any issues caused by per-cpu
> compression streams.
>
> Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@...il.com>
> Suggested-by: Minchan Kim <minchan@...nel.org>
> ---
> Documentation/blockdev/zram.txt | 3 +++
> drivers/block/zram/zram_drv.c | 7 +++++--
> drivers/block/zram/zram_drv.h | 1 +
> 3 files changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/Documentation/blockdev/zram.txt b/Documentation/blockdev/zram.txt
> index 5bda503..386d260 100644
> --- a/Documentation/blockdev/zram.txt
> +++ b/Documentation/blockdev/zram.txt
> @@ -183,6 +183,8 @@ mem_limit RW the maximum amount of memory ZRAM can use to store
> pages_compacted RO the number of pages freed during compaction
> (available only via zram<id>/mm_stat node)
> compact WO trigger memory compaction
> +num_recompress RO the number of times fast compression paths failed
> + and zram performed re-compression via a slow path
>
> WARNING
> =======
> @@ -215,6 +217,7 @@ whitespace:
> failed_writes
> invalid_io
> notify_free
> + num_recompress
>
> File /sys/block/zram<id>/mm_stat
>
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index 817e511..11b19c9 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -395,7 +395,8 @@ static ssize_t io_stat_show(struct device *dev,
> (u64)atomic64_read(&zram->stats.failed_reads),
> (u64)atomic64_read(&zram->stats.failed_writes),
> (u64)atomic64_read(&zram->stats.invalid_io),
> - (u64)atomic64_read(&zram->stats.notify_free));
> + (u64)atomic64_read(&zram->stats.notify_free),
> + (u64)atomic64_read(&zram->stats.num_recompress));
> up_read(&zram->init_lock);
>
> return ret;
> @@ -721,8 +722,10 @@ compress_again:
>
> handle = zs_malloc(meta->mem_pool, clen,
> GFP_NOIO | __GFP_HIGHMEM);
> - if (handle)
> + if (handle) {
> + atomic64_inc(&zram->stats.num_recompress);
> goto compress_again;
> + }
>
> pr_err("Error allocating memory for compressed page: %u, size=%zu\n",
> index, clen);
> diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
> index 06b1636..78d7e8f 100644
> --- a/drivers/block/zram/zram_drv.h
> +++ b/drivers/block/zram/zram_drv.h
> @@ -85,6 +85,7 @@ struct zram_stats {
> atomic64_t zero_pages; /* no. of zero filled pages */
> atomic64_t pages_stored; /* no. of pages currently stored */
> atomic_long_t max_used_pages; /* no. of maximum pages stored */
> + atomic64_t num_recompress; /* no. of failed compression fast paths */
> };
>
> struct zram_meta {
> --
> 2.8.2
>