Message-ID: <20190926072645.GA20255@dhcp22.suse.cz>
Date:   Thu, 26 Sep 2019 09:26:45 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Qian Cai <cai@....pw>
Cc:     David Hildenbrand <david@...hat.com>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
        Oscar Salvador <osalvador@...e.de>,
        Pavel Tatashin <pasha.tatashin@...een.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH v1] mm/memory_hotplug: Don't take the cpu_hotplug_lock

On Wed 25-09-19 14:20:59, Qian Cai wrote:
> On Wed, 2019-09-25 at 19:48 +0200, Michal Hocko wrote:
> > On Wed 25-09-19 12:01:02, Qian Cai wrote:
> > > On Wed, 2019-09-25 at 09:02 +0200, David Hildenbrand wrote:
> > > > On 24.09.19 20:54, Qian Cai wrote:
> > > > > On Tue, 2019-09-24 at 17:11 +0200, Michal Hocko wrote:
> > > > > > On Tue 24-09-19 11:03:21, Qian Cai wrote:
> > > > > > [...]
> > > > > > > While at it, it might be a good time to rethink the whole locking over
> > > > > > > there, as right now reading files under /sys/kernel/slab/ could trigger
> > > > > > > a possible deadlock anyway.
> > > > > > > 
> > > > > > 
> > > > > > [...]
> > > > > > > [  442.452090][ T5224] -> #0 (mem_hotplug_lock.rw_sem){++++}:
> > > > > > > [  442.459748][ T5224]        validate_chain+0xd10/0x2bcc
> > > > > > > [  442.464883][ T5224]        __lock_acquire+0x7f4/0xb8c
> > > > > > > [  442.469930][ T5224]        lock_acquire+0x31c/0x360
> > > > > > > [  442.474803][ T5224]        get_online_mems+0x54/0x150
> > > > > > > [  442.479850][ T5224]        show_slab_objects+0x94/0x3a8
> > > > > > > [  442.485072][ T5224]        total_objects_show+0x28/0x34
> > > > > > > [  442.490292][ T5224]        slab_attr_show+0x38/0x54
> > > > > > > [  442.495166][ T5224]        sysfs_kf_seq_show+0x198/0x2d4
> > > > > > > [  442.500473][ T5224]        kernfs_seq_show+0xa4/0xcc
> > > > > > > [  442.505433][ T5224]        seq_read+0x30c/0x8a8
> > > > > > > [  442.509958][ T5224]        kernfs_fop_read+0xa8/0x314
> > > > > > > [  442.515007][ T5224]        __vfs_read+0x88/0x20c
> > > > > > > [  442.519620][ T5224]        vfs_read+0xd8/0x10c
> > > > > > > [  442.524060][ T5224]        ksys_read+0xb0/0x120
> > > > > > > [  442.528586][ T5224]        __arm64_sys_read+0x54/0x88
> > > > > > > [  442.533634][ T5224]        el0_svc_handler+0x170/0x240
> > > > > > > [  442.538768][ T5224]        el0_svc+0x8/0xc
> > > > > > 
> > > > > > I believe the lock is not really needed here. We do not deallocate the
> > > > > > pgdat of a hotremoved node nor destroy the slab state, because existing
> > > > > > slabs would prevent hotremove from continuing in the first place.
> > > > > > 
> > > > > > There are likely details to be checked of course but the lock just seems
> > > > > > bogus.
> > > > > 
> > > > > Check 03afc0e25f7f ("slab: get_online_mems for
> > > > > kmem_cache_{create,destroy,shrink}"). It actually talks about the races
> > > > > during memory as well as cpu hotplug, so it might even be that the
> > > > > cpu_hotplug_lock removal is problematic?
> > > > > 
> > > > 
> > > > Which removal are you referring to? get_online_mems() does not mess with
> > > > the cpu hotplug lock (and is therefore not affected by this patch).
> > > 
> > > The one in your patch. I suspect there might be races among the whole NUMA node
> > > hotplug, kmem_cache_create, and show_slab_objects(). See bfc8c90139eb ("mem-
> > > hotplug: implement get/put_online_mems")
> > > 
> > > "kmem_cache_{create,destroy,shrink} need to get a stable value of cpu/node
> > > online mask, because they init/destroy/access per-cpu/node kmem_cache parts,
> > > which can be allocated or destroyed on cpu/mem hotplug."
> > 
> > I still have to grasp that code, but if the slub allocator really needs
> > a stable cpu mask then it should be using the explicit cpu hotplug
> > locking rather than relying on a side effect of the memory hotplug locking.
> > 
> > > Both online_pages() and show_slab_objects() need to get a stable value of
> > > cpu/node online mask.
> > 
> > Could you be more specific about why online_pages needs a stable cpu
> > online mask? I do not think that show_slab_objects is a real problem,
> > because a potential race shouldn't be critical.
> 
> build_all_zonelists()
>   __build_all_zonelists()
>     for_each_online_cpu(cpu)

OK, this is using for_each_online_cpu, but why is this a problem? Have
you checked what the code actually does? Let's say that online_pages is
racing with cpu hotplug. A new CPU appears/disappears from the online
mask while we are iterating it, right? Let's start with the cpu
offlining case. There are two possibilities: either the cpu is still
visible and we update its local node configuration even though it will
disappear shortly, which is ok because we are not touching any data
that disappears (it's all per-cpu); or the cpu is no longer there,
which is not really interesting. For the online case we might miss a
cpu, but that should be tolerable because it is no different from the
cpu coming online independently of the memory hotplug. So there has to
be a hook from that code path as well. If there is none then this is
buggy irrespective of the locking.
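
To illustrate, the loop in question boils down to something like this
(a simplified sketch from my reading of __build_all_zonelists() in
mm/page_alloc.c; details are trimmed, so treat it as illustration only):

	for_each_online_cpu(cpu) {
		/*
		 * Purely per-cpu state: remember which node provides
		 * this cpu's "local" memory so that allocations on
		 * memoryless nodes fall back sensibly. Nothing here
		 * touches data that goes away with the cpu.
		 */
		set_cpu_numa_mem(cpu, local_memory_node(cpu_to_node(cpu)));
	}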

Makes sense?
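
As an aside, by "explicit cpu hotplug locking" earlier in the thread I
mean something along these lines (a hypothetical sketch only, not a
tested patch; the loop body is just a placeholder):

	cpus_read_lock();
	for_each_online_cpu(cpu) {
		/* read whatever per-cpu slab state is needed here */
	}
	cpus_read_unlock();

That would pin the cpu online mask directly rather than piggybacking on
mem_hotplug_lock.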
-- 
Michal Hocko
SUSE Labs
