Message-ID: <YFjHvUolScp3btJ9@google.com>
Date:   Mon, 22 Mar 2021 09:37:17 -0700
From:   Minchan Kim <minchan@...nel.org>
To:     Luis Chamberlain <mcgrof@...nel.org>
Cc:     gregkh@...uxfoundation.org, ngupta@...are.org,
        sergey.senozhatsky.work@...il.com, axboe@...nel.dk,
        mbenes@...e.com, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] zram: fix crashes due to use of cpu hotplug
 multistate

On Fri, Mar 19, 2021 at 07:09:24PM +0000, Luis Chamberlain wrote:
> On Fri, Mar 12, 2021 at 11:28:21AM -0800, Minchan Kim wrote:
> > On Fri, Mar 12, 2021 at 06:32:38PM +0000, Luis Chamberlain wrote:
> > > On Thu, Mar 11, 2021 at 06:14:40PM -0800, Minchan Kim wrote:
> > > > On Wed, Mar 10, 2021 at 09:21:28PM +0000, Luis Chamberlain wrote:
> > > > > On Mon, Mar 08, 2021 at 06:55:30PM -0800, Minchan Kim wrote:
> > > > > > If I understand correctly, bugs you found were related to module
> > > > > > unloading race while the zram are still working.
> > > > > 
> > > > > No, that is a simplification of the issue. The issue consists of
> > > > > two separate problems:
> > > > > 
> > > > >  a) race against module unloading in light of incorrect racy use of
> > > > >     cpu hotplug multistate support
> > > > 
> > > > 
> > > > Could you add a pseudo-code sequence to show the race clearly?
> > > 
> > > Let us deal with each issue one at a time. First, note that the
> > > kernel warning can be reproduced easily by triggering zram02.sh
> > > from LTP twice:
> > > 
> > > kernel: ------------[ cut here ]------------
> > > kernel: Error: Removing state 63 which has instances left.
> > > kernel: WARNING: CPU: 7 PID: 70457 at kernel/cpu.c:2069 __cpuhp_remove_state_cpuslocked+0xf9/0x100
> > > kernel: Modules linked in: zram(E-) zsmalloc(E) <etc>
> > > 
> > > The first patch prevents this race. This race is possible because on
> > > module init we associate callbacks for CPU hotplug add / remove:
> > > 
> > > static int __init zram_init(void)                                               
> > > {
> > > 	...
> > > 	ret = cpuhp_setup_state_multi(CPUHP_ZCOMP_PREPARE, "block/zram:prepare",
> > > 	                              zcomp_cpu_up_prepare, zcomp_cpu_dead); 
> > > 	...
> > > }
> > > 
> > > The zcomp_cpu_dead() accesses the zcomp->comp, and if zcomp->comp is
> > > removed and this function is called, clearly we'll be accessing some
> > > random data here and can easily crash afterwards:
> > 
> > I am trying to understand this crash. You mentioned the warning
> > above, but here you mention a crash (I understand the sysfs race,
> > but that is a different topic). What is the relation between the
> > warning above and the crash here? Are you saying the warning could
> > sometimes turn into a crash?
> 

Hi Luis,

> Indeed one issue is a consequence of the other, but a better
> description can be put together for each separately.
> 
> The warning comes up when cpu hotplug detects that the callback
> is being removed, and yet "instances" for multistate are left behind,
> i.e., not removed. CPU hotplug multistate allows us to dedicate a callback
> for zram so that it gets called every time a CPU hot plugs or unplugs.
> In the zram driver's case we want to trigger the callback per each
> struct zcomp, one is used per supported and used supported compression

     you meant "each one is used per supported compression"?

> algorithm. The zram driver allocates a struct zcomp for a compression
> algorithm on a need basis. The default compressor is CONFIG_ZRAM_DEF_COMP
> which today is CONFIG_ZRAM_DEF_COMP_LZORLE ("lzo-rle"). Although we may
> have different zram devices, zram devices which use the same compression
> algorithm share the same struct zcomp. The "multi" in CPU hotplug multistate

Each device allocates its own zcomp; they are not shared even when
zram devices use the same compression algorithm.

> refers to these different struct zcomp instances, each of these will
> have the callback routine registered run for it. The kernel's CPU
> hotplug multistate keeps a linked list of these different structures
> so that it will iterate over them on CPU transitions. By default at
> driver initialization we will create just one zram device (num_devices=1)
> and a zcomp structure is then set up for the lzo-rle compression algorithm.
> At driver removal we first remove each zram device, and so we destroy
> the struct zcomp. But since we expose sysfs attributes to create new
> devices or reset / initialize existing ones, we can easily end up
> re-initializing a struct zcomp before the exit routine of the module
> removes the cpu hotplug callback. When this happens the kernel's
> CPU hotplug will detect that at least one instance (struct zcomp for
> us) exists.

That's very helpful for understanding. Thanks a lot!

So IIUC, it's fundamentally a race between destroy_devices (i.e., module
unload) and every sysfs knob in zram. Isn't it?
The specific example below is one of them, and every sysfs code path in
zram could access a freed object (e.g., zram->something).
And you are claiming there isn't a good way to fix it in kernfs (please
add that to the description, too) even though it's a general problem.
(If we had one, we could detect the race and make zram_remove_cb fail,
so unloading the module would fail, finally.)

So, shouldn't we introduce a global rw_semaphore?

destroy_devices
  class_unregister
  down_write(&zram_unload);
  idr_for_each(zram_remove_cb);
  up_write(&zram_unload);

And every sysfs handler would take the lock with down_read in the first
place and release it right before returning. So, introduce a zram sysfs
wrapper to centralize all of the locking logic.

Does it make sense?

> 
> And so we can have:
> 
> static void destroy_devices(void)
> {
> 	class_unregister(&zram_control_class);
> 	idr_for_each(&zram_index_idr, &zram_remove_cb, NULL);
> 	zram_debugfs_destroy();
> 	idr_destroy(&zram_index_idr);
> 	unregister_blkdev(zram_major, "zram");
> 	cpuhp_remove_multi_state(CPUHP_ZCOMP_PREPARE);
> }
> 
> While destroy_devices() runs we can have this race:
> 
> 
> CPU 1                            CPU 2
> 
> class_unregister(...);
> idr_for_each(...);
				  I think sysfs was already removed here.
> zram_debugfs_destroy();
>                                 disksize_store(...)
> idr_destroy(...);
> unregister_blkdev(...);
> cpuhp_remove_multi_state(...);

So,

	CPU 1                            CPU 2
 
destroy_devices
..
				 disksize_store()
				   zcomp_create
				   zcomp_init
idr_for_each(zram_remove_cb
  zram_remove
  zram_reset
  zcomp_destroy
  cpuhp_state_remove_instance
				   cpuhp_state_add_instance
..

cpuhp_remove_multi_state(...)
  Detect!
    kernel: Error: Removing state 63 which has instances left.

> 
> 
> The warning comes up on cpuhp_remove_multi_state() when it sees
> that the state for CPUHP_ZCOMP_PREPARE does not have an empty
> instance linked list.
> 
> After the idr_destroy() it's also easy to run into a crash. The
> above predicament also leads to memory leaks.
> 
> And so we need to have a state machine for when we're up and when
> we're going down generically.
> 
> Let me know if it makes sense now, if so I can amend the commit log
> accordingly.

Yup, it's much better. Let's have it in the commit log.

Thanks.
