Message-ID: <20160503022046.GB3642@bbox>
Date:	Tue, 3 May 2016 11:20:46 +0900
From:	Minchan Kim <minchan@...nel.org>
To:	Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc:	Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] zram: use per-cpu compression streams

Hi Sergey!

On Tue, May 03, 2016 at 10:53:33AM +0900, Sergey Senozhatsky wrote:
> Hello Minchan,
> 
> On (05/03/16 10:40), Minchan Kim wrote:
> > > 
> > > ...hm...  inc ->failed_writes?
> [..]
> > Okay, let's add the knob to the existing sysfs (there is no difference
> > between sysfs and debugfs from userspace's point of view once people
> > start to use it), because there is no need to add new code to avoid
> > such a mess.
> > 
> > Any thoughts?
> 
> so you don't want to account failed fast-path writes in failed_writes?

Right, I don't think we can reuse the field.

> it sort of fits, to some extent. re-compression is, basically, a new
> write operation -- allocate a handle, map the page again, compress,
> and so on. so in a sense a failed fast-path write is _almost_ a failed
> write, except that we took extra care of it and retried the op inside
> zram, not from the bio or fs layer.
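
Just so we are looking at the same thing, the flow is roughly the
below. This is only a sketch, not the literal zram code; helper names,
field names and gfp flags are approximate:

	unsigned long handle = 0;
	unsigned int comp_len;
	struct zcomp_strm *zstrm;
	void *src;
	int ret;

compress_again:
	zstrm = zcomp_stream_get(zram->comp);	/* grab this CPU's stream */
	src = kmap_atomic(page);
	ret = zcomp_compress(zstrm, src, &comp_len);
	kunmap_atomic(src);
	if (ret) {
		zcomp_stream_put(zram->comp);
		return ret;			/* a genuine failed write */
	}

	/* fast path: we hold the per-cpu stream, so no sleeping here */
	if (!handle)
		handle = zs_malloc(zram->mem_pool, comp_len, __GFP_NOWARN);
	if (!handle) {
		/*
		 * slow path: release the stream, allocate with a
		 * sleepable gfp mask, then map and compress the page
		 * again -- this retry is the "dup" in question
		 */
		zcomp_stream_put(zram->comp);
		handle = zs_malloc(zram->mem_pool, comp_len, GFP_NOIO);
		if (!handle)
			return -ENOMEM;		/* only this is a failed write */
		goto compress_again;
	}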

Maybe I don't understand you, but my point is simple.
Documentation/ABI/testing/sysfs-block-zram says:

What:           /sys/block/zram<id>/failed_writes
Date:           February 2014
Contact:        Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Description:
                The failed_writes file is read-only and specifies the number of
                failed writes that happened on this device.
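
And behind that file there is nothing but a plain error counter exposed
as a read-only device attribute -- roughly like this (a sketch; the
driver actually stamps these out with a ZRAM_ATTR_RO() macro):

static ssize_t failed_writes_show(struct device *dev,
		struct device_attribute *attr, char *buf)
{
	struct zram *zram = dev_to_zram(dev);

	return scnprintf(buf, PAGE_SIZE, "%llu\n",
			(u64)atomic64_read(&zram->stats.failed_writes));
}
static DEVICE_ATTR_RO(failed_writes);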

If users read it, they will think failed_writes means a write error on
the device. And until now, that has been true. :)

If we piggyback on it to count duplicate compressions instead, users
will see the counter increase, wonder "Hmm, the fs or swap on zram
might be corrupted", and stop using zram. Or they will report it to
us. :) Then we would have to explain "Hey, it's not a failed write,
just duplicated I/O, blah blah. Please read the documentation; we
already corrected the wording there to reflect the current behavior",
which would be painful for both users and us.

Rather than piggybacking on failed_writes, I actually want to remove
failed_[read|write] altogether; I think they are pointless stats.

There is a concern about going back to the old, non-per-cpu behavior,
but I don't want that. If duplicate compression ever becomes a real
problem (which is really unlikely), we should first try to solve that
problem itself in some other way rather than roll back to the old code.

I hope we can. So let's not worry too much about adding a new dup
stat. :)
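
If we do add one, the change itself is tiny anyway -- something along
these lines (again just a sketch; num_recompress is a placeholder
name, not a decided one):

	/* in struct zram_stats: a dedicated counter for the slow path */
	atomic64_t num_recompress;

	/* in the write path, right where we drop the per-cpu stream
	 * and fall back to the sleepable allocation + re-compression */
	atomic64_inc(&zram->stats.num_recompress);

	/* and, if we export it at all, a read-only attribute */
	ZRAM_ATTR_RO(num_recompress);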
