Message-ID: <20200727075822.GA5355@lst.de>
Date:   Mon, 27 Jul 2020 09:58:22 +0200
From:   Christoph Hellwig <hch@....de>
To:     Minchan Kim <minchan@...nel.org>
Cc:     Christoph Hellwig <hch@....de>, Jens Axboe <axboe@...nel.dk>,
        Song Liu <song@...nel.org>,
        Hans de Goede <hdegoede@...hat.com>,
        Richard Weinberger <richard@....at>,
        linux-mtd@...ts.infradead.org, dm-devel@...hat.com,
        linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
        drbd-dev@...ts.linbit.com, linux-raid@...r.kernel.org,
        linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
        cgroups@...r.kernel.org
Subject: Re: [PATCH 10/14] bdi: remove BDI_CAP_SYNCHRONOUS_IO

On Sun, Jul 26, 2020 at 12:06:39PM -0700, Minchan Kim wrote:
> > @@ -528,8 +530,7 @@ static ssize_t backing_dev_store(struct device *dev,
> >  	 * freely but in fact, IO is going on so finally could cause
> >  	 * use-after-free when the IO is really done.
> >  	 */
> > -	zram->disk->queue->backing_dev_info->capabilities &=
> > -			~BDI_CAP_SYNCHRONOUS_IO;
> > +	zram->disk->fops = &zram_wb_devops;
> >  	up_write(&zram->init_lock);
> 
> For zram, regardless of BDI_CAP_SYNCHRONOUS_IO, rw_page has been used
> on every read/write path. This patch, together with the next one,
> makes zram use a bio instead of rw_page when it is declared
> !BDI_CAP_SYNCHRONOUS_IO, which introduces a performance regression.
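
For context, the hunk above works because the series adds a second
block_device_operations table that simply omits ->rw_page, so swapping
zram->disk->fops routes all I/O through the bio path.  A rough sketch,
assuming the 5.9-era fops layout (the actual zram tables may differ):

	static const struct block_device_operations zram_devops = {
		.open			= zram_open,
		.submit_bio		= zram_submit_bio,
		.swap_slot_free_notify	= zram_slot_free_notify,
		.rw_page		= zram_rw_page,	/* synchronous page I/O */
		.owner			= THIS_MODULE,
	};

	static const struct block_device_operations zram_wb_devops = {
		.open			= zram_open,
		.submit_bio		= zram_submit_bio,
		.swap_slot_free_notify	= zram_slot_free_notify,
		/* no ->rw_page: callers fall back to submitting a bio */
		.owner			= THIS_MODULE,
	};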

It really should not matter, as the overhead of setting up a bio is
minimal.  Outside of the swap code, rw_page is only used in the legacy
mpage buffered I/O path, which has so many performance issues of its
own that even if this change made a difference there, it wouldn't
matter.
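
To put that overhead in perspective, the bio fallback amounts to
roughly the following synchronous one-page read.  This is a minimal
sketch, assuming the 5.8-era bio API; the real fallback lives in
bdev_read_page()/swap_readpage(), and the helper name here is made up:

	#include <linux/bio.h>
	#include <linux/blkdev.h>

	/* Hypothetical helper, for illustration only. */
	static int read_page_via_bio(struct block_device *bdev, sector_t sector,
				     struct page *page)
	{
		struct bio *bio;
		int ret;

		/* One bio with a single bvec: this is all the setup there is. */
		bio = bio_alloc(GFP_KERNEL, 1);
		bio_set_dev(bio, bdev);
		bio->bi_iter.bi_sector = sector;
		bio->bi_opf = REQ_OP_READ;
		bio_add_page(bio, page, PAGE_SIZE, 0);

		ret = submit_bio_wait(bio);	/* blocks until completion */
		bio_put(bio);
		return ret;
	}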

If you want magic treatment for your zram swap code, you really need
to integrate it with the swap code instead of burdening the block
layer with all this mess.
