Message-ID: <20210405190023.GX4332@42.do-not-panic.com>
Date:   Mon, 5 Apr 2021 19:00:23 +0000
From:   Luis Chamberlain <mcgrof@...nel.org>
To:     Minchan Kim <minchan@...nel.org>
Cc:     keescook@...omium.org, dhowells@...hat.com, hch@...radead.org,
        mbenes@...e.com, gregkh@...uxfoundation.org, ngupta@...are.org,
        sergey.senozhatsky.work@...il.com, axboe@...nel.dk,
        linux-block@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] zram: fix crashes due to use of cpu hotplug
 multistate

On Mon, Apr 05, 2021 at 10:07:24AM -0700, Minchan Kim wrote:
> On Thu, Apr 01, 2021 at 11:59:25PM +0000, Luis Chamberlain wrote:
> > And come to think of it the last patch I had sent with a new
> > DECLARE_RWSEM(zram_unload) also has this same issue making most
> > sysfs attributes rather fragile.
> 
> Thanks for looking into it. I agree the single zram_index_rwlock is
> not the right approach to fix it. However, I still hope we find a more
> generic solution to fix them at once, since I see it as a zram instance
> racing problem.

These are three separate problems. Related, but distinct.
At this point I think it would be difficult to resolve all three
with one solution without creating side issues, but hey,
I'm curious to see if you can find a solution that does not
create other issues.

> An approach I am considering is to make struct zram embed a kobject
> and then have the zram sysfs entries auto-populated under that
> kobject. That way, the zram/sysfs lifetime would be tied to the
> kobject. With it, the sysfs race problem I mentioned above should be
> gone. Furthermore, zram_remove should fail if any zram object is
> still alive (i.e., zram->kobject->refcount > 1), so module_exit will
> fail, too.
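
A minimal sketch of that direction, assuming a hypothetical kobj
field embedded in struct zram (names here are illustrative, not the
driver's current code):

	/* Illustrative only: free the zram object once the last ref drops. */
	static void zram_kobj_release(struct kobject *kobj)
	{
		struct zram *zram = container_of(kobj, struct zram, kobj);

		kfree(zram);
	}

	static struct kobj_type zram_ktype = {
		.release	= zram_kobj_release,
		.sysfs_ops	= &kobj_sysfs_ops,
	};

	static int zram_remove(struct zram *zram)
	{
		/* Refuse removal while a sysfs reader still holds a reference. */
		if (kref_read(&zram->kobj.kref) > 1)
			return -EBUSY;
		kobject_put(&zram->kobj);
		return 0;
	}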

If the idea then is to busy out rmmod if a sysfs attribute is being
read, that could then mean rmmod can sometimes never complete. Hogging
up / busying out sysfs attributes means the module cannot be removed.
Which is why I think the *try_module_get()* approach is much more
suitable, as it will always fail if we're already going down.
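
For example, a sketch of the try_module_get() approach with a
hypothetical store handler (really_do_store() is just a stand-in):

	static ssize_t some_attr_store(struct device *dev,
				       struct device_attribute *attr,
				       const char *buf, size_t len)
	{
		ssize_t ret;

		/* Fails once the module is on its way out, so rmmod never blocks. */
		if (!try_module_get(THIS_MODULE))
			return -ENODEV;

		ret = really_do_store(dev, buf, len);

		module_put(THIS_MODULE);
		return ret;
	}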

> I see one of the problems is how I could make the new zram object's
> attribute group for the zram knobs under /sys/block/zram0, since the
> block layer already made the zram0 kobject via device_add_disk.

Right.. well the sysfs attribute races uncovered here actually do
apply to any block driver as well, which is why I was aiming
for something generic if possible.
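
As an aside, device_add_disk() can take the driver's attribute
groups, so the knobs are created and torn down together with the
disk's kobject. A rough sketch, reusing zram's disksize attribute
for illustration:

	static struct attribute *zram_disk_attrs[] = {
		&dev_attr_disksize.attr,
		/* ... the rest of the knobs ... */
		NULL,
	};

	static const struct attribute_group zram_disk_attr_group = {
		.attrs = zram_disk_attrs,
	};

	static const struct attribute_group *zram_disk_attr_groups[] = {
		&zram_disk_attr_group,
		NULL,
	};

	/* in zram_add(): */
	device_add_disk(NULL, zram->disk, zram_disk_attr_groups);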

I am not sure if you missed the last hunks of the generic solution,
but those would resolve the issue you noted. Here is the same approach
in a non-generic form, specific to zram and to just one attribute so
far:

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 494695ff227e..b566916e4ad9 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1724,6 +1724,11 @@ static ssize_t disksize_store(struct device *dev,
 
 	mutex_lock(&zram_index_mutex);
 
+	if (!bdgrab(dev_to_bdev(dev))) {
+		err = -ENODEV;
+		goto out_nodev;
+	}
+
 	if (!zram_up || zram->claim) {
 		err = -ENODEV;
 		goto out;
@@ -1760,6 +1765,7 @@ static ssize_t disksize_store(struct device *dev,
 	zram->disksize = disksize;
 	set_capacity_and_notify(zram->disk, zram->disksize >> SECTOR_SHIFT);
 	up_write(&zram->init_lock);
+	bdput(dev_to_bdev(dev));
 
 	mutex_unlock(&zram_index_mutex);
 
@@ -1770,6 +1776,8 @@ static ssize_t disksize_store(struct device *dev,
 out_unlock:
 	up_write(&zram->init_lock);
 out:
+	bdput(dev_to_bdev(dev));
+out_nodev:
 	mutex_unlock(&zram_index_mutex);
 	return err;
 }
