Message-ID: <20210312183238.GW4332@42.do-not-panic.com>
Date:   Fri, 12 Mar 2021 18:32:38 +0000
From:   Luis Chamberlain <mcgrof@...nel.org>
To:     Minchan Kim <minchan@...nel.org>
Cc:     gregkh@...uxfoundation.org, ngupta@...are.org,
        sergey.senozhatsky.work@...il.com, axboe@...nel.dk,
        mbenes@...e.com, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] zram: fix crashes due to use of cpu hotplug
 multistate

On Thu, Mar 11, 2021 at 06:14:40PM -0800, Minchan Kim wrote:
> On Wed, Mar 10, 2021 at 09:21:28PM +0000, Luis Chamberlain wrote:
> > On Mon, Mar 08, 2021 at 06:55:30PM -0800, Minchan Kim wrote:
> > > If I understand correctly, the bugs you found were related to a module
> > > unloading race while zram is still working.
> > 
> > No, that is a simplification of the issue. There are really
> > two separate issues:
> > 
> >  a) a race against module unloading in light of incorrect, racy use of
> >     cpu hotplug multistate support
> 
> 
> Could you add a pseudo code sequence to show the race clearly?

Let us deal with each issue one at a time. First, the kernel warning:
it can be reproduced easily by running zram02.sh from LTP twice:

kernel: ------------[ cut here ]------------
kernel: Error: Removing state 63 which has instances left.
kernel: WARNING: CPU: 7 PID: 70457 at kernel/cpu.c:2069 __cpuhp_remove_state_cpuslocked+0xf9/0x100
kernel: Modules linked in: zram(E-) zsmalloc(E) <etc>

The first patch prevents this race. This race is possible because on
module init we associate callbacks for CPU hotplug add / remove:

static int __init zram_init(void)
{
	...
	ret = cpuhp_setup_state_multi(CPUHP_ZCOMP_PREPARE, "block/zram:prepare",
	                              zcomp_cpu_up_prepare, zcomp_cpu_dead);
	...
}
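
Each zram device then adds an instance to this state, via
cpuhp_state_add_instance() from zcomp_create(), when its compression
backend is set up; it is these per device instances that the warning
above complains are left behind.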

zcomp_cpu_dead() accesses the zram->comp (the struct zcomp), and if
that zcomp has already been freed when this function is called, we'll
clearly be accessing random data here and can easily crash afterwards:

int zcomp_cpu_dead(unsigned int cpu, struct hlist_node *node)                   
{
	struct zcomp *comp = hlist_entry(node, struct zcomp, node);
	struct zcomp_strm *zstrm;

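	/* comp may already have been freed by zcomp_destroy() at this point */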
	zstrm = per_cpu_ptr(comp->stream, cpu);
	zcomp_strm_free(zstrm);
	return 0;
}

And zram's sysfs reset_store() lets userspace call zram_reset_device(),
which calls zcomp_destroy():

void zcomp_destroy(struct zcomp *comp)
{
	cpuhp_state_remove_instance(CPUHP_ZCOMP_PREPARE, &comp->node);
	free_percpu(comp->stream);
	kfree(comp);
}
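
To answer your pseudo code question: putting these pieces together, the
race looks roughly like the below. The exact interleaving point is
illustrative, but this is the shape of it:

  CPU 1 (rmmod zram)                  CPU 2 (reset_store())
  ------------------                  ---------------------
                                      zram_reset_device()
                                        zcomp_destroy()
  zram_exit()
    destroy_devices()
      cpuhp_remove_multi_state()
        WARN: &comp->node is still
        registered for state
        CPUHP_ZCOMP_PREPARE
                                          cpuhp_state_remove_instance()
                                          free_percpu(comp->stream)
                                          kfree(comp)

And when the timing shifts the other way, zcomp_cpu_dead() can end up
running against a struct zcomp which zcomp_destroy() has already freed,
which is the crash case described above.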

> It would be great if this went into the patch description too, since
> it shows the problem more clearly.

Does the above do it?
> 
> >  b) module unload race with sysfs attribute race on *any* driver which
> >     has sysfs attributes which also shares the same lock as used during
> >     module unload
> 
> Yup, that part I missed. Maybe we need some wrapper for the zram sysfs
> attributes that does try_module_get() in the wrapper function and then
> calls the subroutine only if it got the refcount:
> 
> zram_sysfs_wrapper(func, A, B)
>     if (!try_module_get(THIS_MODULE))
>         return -ENODEV;
>     ret = func(A, B);
>     module_put(THIS_MODULE);
>     return ret;

I'd much prefer this be resolved in kernfs later; if you look at the
kernel, there are already some drivers which seem to have realized this
requirement the hard way. For now, open coding this I think makes the
race / intent clearer.
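
To make that concrete, open coded it would look roughly like the below
for one attribute; __reset_store() here is just a made-up name standing
in for the existing store logic:

static ssize_t reset_store(struct device *dev,
			   struct device_attribute *attr,
			   const char *buf, size_t len)
{
	ssize_t ret;

	/* Bail out if the module is already on its way out. */
	if (!try_module_get(THIS_MODULE))
		return -ENODEV;

	/* Hypothetical helper holding the current reset_store() logic. */
	ret = __reset_store(dev, attr, buf, len);

	module_put(THIS_MODULE);
	return ret;
}

The obvious cost is that the pattern has to be repeated for every
attribute we care about.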

Right now we have no semantics in place for a generic solution, but I
can work on one later.

  Luis
