Date:   Wed, 20 Oct 2021 09:15:20 +0800
From:   Ming Lei <ming.lei@...hat.com>
To:     Luis Chamberlain <mcgrof@...nel.org>
Cc:     Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Paul Mackerras <paulus@...ba.org>, tj@...nel.org,
        gregkh@...uxfoundation.org, akpm@...ux-foundation.org,
        minchan@...nel.org, jeyu@...nel.org, shuah@...nel.org,
        bvanassche@....org, dan.j.williams@...el.com, joe@...ches.com,
        tglx@...utronix.de, keescook@...omium.org, rostedt@...dmis.org,
        linux-spdx@...r.kernel.org, linux-doc@...r.kernel.org,
        linux-block@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-kselftest@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate

On Tue, Oct 19, 2021 at 12:36:42PM -0700, Luis Chamberlain wrote:
> On Wed, Oct 20, 2021 at 12:29:53AM +0800, Ming Lei wrote:
> > On Tue, Oct 19, 2021 at 08:28:21AM -0700, Luis Chamberlain wrote:
> > > On Tue, Oct 19, 2021 at 10:34:41AM +0800, Ming Lei wrote:
> > > > Please try the following patch against the upstream tree (Linus's or
> > > > -next). It basically folds the revised patches 2 and 3 of V1 and covers
> > > > two issues: not failing zram_remove() in module_exit(), and the race
> > > > between zram_remove() and disksize_store(). Please see if everything
> > > > is fine for you:
> > > 
> > > Page fault ...
> > > 
> > > [   18.284256] zram: Removed device: zram0
> > > [   18.312974] BUG: unable to handle page fault for address:
> > > ffffad86de903008
> > > [   18.313707] #PF: supervisor read access in kernel mode
> > > [   18.314248] #PF: error_code(0x0000) - not-present page
> > > [   18.314797] PGD 100000067 P4D 100000067 PUD 10031e067 PMD 136a28067
> > 
> > That is another race, between zram_reset_device() and disksize_store(),
> > which is supposed to be covered by ->init_lock. The delta fix against
> > the last patch I posted follows, and the whole patch can be found at
> > the github link:
> > 
> > https://github.com/ming1/linux/commit/fa6045b1371eb301f392ac84adaf3ad53bb16894
> > 
> > 
> > diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> > index d0cae7a42f4d..a14ba3d350ea 100644
> > --- a/drivers/block/zram/zram_drv.c
> > +++ b/drivers/block/zram/zram_drv.c
> > @@ -1704,12 +1704,12 @@ static void zram_reset_device(struct zram *zram)
> >  	set_capacity_and_notify(zram->disk, 0);
> >  	part_stat_set_all(zram->disk->part0, 0);
> >  
> > -	up_write(&zram->init_lock);
> >  	/* I/O operation under all of CPU are done so let's free */
> >  	zram_meta_free(zram, disksize);
> >  	memset(&zram->stats, 0, sizeof(zram->stats));
> >  	zcomp_destroy(comp);
> >  	reset_bdev(zram);
> > +	up_write(&zram->init_lock);
> >  }
> >  
> >  static ssize_t disksize_store(struct device *dev,
> 
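
(As an aside, here is a minimal userspace sketch of the race pattern that
delta closes. It is an illustration only, built on pthreads with made-up
names (struct dev, store(), reset_*()), not the actual zram code: if the
write lock is dropped before the metadata is freed, a concurrent
disksize_store()-style writer can still be using the buffer that the reset
path frees.)

	#include <pthread.h>
	#include <stdlib.h>
	#include <string.h>

	struct dev {
		pthread_rwlock_t init_lock;
		void *meta;		/* stands in for zram->table */
	};

	/* disksize_store()-like path: allocate/use meta under init_lock */
	static void store(struct dev *d, size_t size)
	{
		pthread_rwlock_wrlock(&d->init_lock);
		if (!d->meta)
			d->meta = malloc(size);
		if (d->meta)
			memset(d->meta, 0, size); /* may hit freed memory */
		pthread_rwlock_unlock(&d->init_lock);
	}

	/* reset before the fix: the free happens after the unlock, so it
	 * can run concurrently with the memset() above */
	static void reset_buggy(struct dev *d)
	{
		pthread_rwlock_wrlock(&d->init_lock);
		/* capacity/stat updates would happen here */
		pthread_rwlock_unlock(&d->init_lock); /* "up_write()" too early */
		free(d->meta);			/* races with store() */
		d->meta = NULL;
	}

	/* reset after the fix: free while still holding init_lock, which is
	 * exactly what moving up_write() below zram_meta_free() achieves */
	static void reset_fixed(struct dev *d)
	{
		pthread_rwlock_wrlock(&d->init_lock);
		free(d->meta);
		d->meta = NULL;
		pthread_rwlock_unlock(&d->init_lock);
	}

	static void *store_loop(void *arg)
	{
		for (int i = 0; i < 100000; i++)
			store(arg, 4096);
		return NULL;
	}

	int main(void)
	{
		struct dev d = { .init_lock = PTHREAD_RWLOCK_INITIALIZER };
		pthread_t t;

		pthread_create(&t, NULL, store_loop, &d);
		for (int i = 0; i < 100000; i++)
			reset_buggy(&d); /* use-after-free is possible here */
		pthread_join(t, NULL);
		reset_fixed(&d);	 /* safe variant: no window */
		return 0;
	}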
> With this, it still ends up in a state where we loop and can't get out:
> 
> zram: Can't change algorithm for initialized device

Again, you are running two instances of zram02.sh[1] on /dev/zram0, so that
isn't unexpected behavior. Here the difference is just timing. In my test VM,
this message shows up on one task for a while, then it may switch to
another task.

Just run your patches for a while: nothing is really different here, and the
following message can be dumped from one task for a long time:

	can't set '107374182400' to /sys/block/zram0/disksize

Also, you did not answer my question about the expected result of your test
when running the following script from two terminals concurrently:

	while true; do
		PATH=$PATH:$PWD:$PWD/../../../lib/ ./zram02.sh;
	done

Thanks,
Ming
