Message-ID: <e0531762-6ef7-d3bf-e6a2-91642b4eeb63@alibaba-inc.com>
Date:   Mon, 02 Oct 2017 23:40:11 +0800
From:   "Yang Shi" <yang.s@...baba-inc.com>
To:     Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Cc:     mhocko@...nel.org, cl@...ux.com, penberg@...nel.org,
        rientjes@...gle.com, iamjoonsoo.kim@....com,
        akpm@...ux-foundation.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/2 v8] oom: capture unreclaimable slab info in oom message



On 9/30/17 4:00 AM, Tetsuo Handa wrote:
> Yang Shi wrote:
>> On 9/28/17 1:45 PM, Tetsuo Handa wrote:
>>> Yang Shi wrote:
>>>> On 9/28/17 12:57 PM, Tetsuo Handa wrote:
>>>>> Yang Shi wrote:
>>>>>> On 9/27/17 9:36 PM, Tetsuo Handa wrote:
>>>>>>> On 2017/09/28 6:46, Yang Shi wrote:
>>>>>>>> Changelog v7 -> v8:
>>>>>>>> * Adopted Michal’s suggestion to dump unreclaimable slab info when the amount of unreclaimable slabs > total user memory, not only in the oom panic path.
>>>>>>>
>>>>>>> Holding slab_mutex inside dump_unreclaimable_slab() has been avoided since V2
>>>>>>> because there are
>>>>>>>
>>>>>>> 	mutex_lock(&slab_mutex);
>>>>>>> 	kmalloc(GFP_KERNEL);
>>>>>>> 	mutex_unlock(&slab_mutex);
>>>>>>>
>>>>>>> users. If we call dump_unreclaimable_slab() for the non-panic OOM path, aren't we
>>>>>>> introducing a risk of a crash (i.e. kernel panic) for the regular OOM path?
>>>>>>
>>>>>> I don't see the difference between the regular oom path and the oom panic path
>>>>>> other than calling panic() at the end.
>>>>>>
>>>>>> And the slab dump may be called by the panic path too; it is meant for both the
>>>>>> regular and the panic path.
>>>>>
>>>>> Calling a function that might cause a kernel oops immediately before calling panic()
>>>>> would be tolerable, for the kernel will panic after all. But calling a function
>>>>> that might cause a kernel oops when there is no plan to call panic() is a bug.
>>>>
>>>> I got your point. slab_mutex is used to protect the list of all the
>>>> slabs. Since we are already in oom, there should be no kmem cache
>>>> destroy happening during the list traversal. And list_for_each_entry() has
>>>> been replaced with list_for_each_entry_safe() to make the traversal more
>>>> robust.
>>>
>>> I consider that an OOM event and a kmem cache destroy event can run concurrently,
>>> because slab_mutex is not held by the OOM event (and unfortunately cannot be held
>>> due to the possibility of deadlock) in order to protect the list of all the slabs.
>>>
>>> I don't think replacing list_for_each_entry() with list_for_each_entry_safe()
>>> makes the traversal more robust, for list_for_each_entry_safe() does not defer
>>> freeing of the memory used by a list element. Rather, replacing list_for_each_entry()
>>> with list_for_each_entry_rcu() (and making the relevant changes such as
>>> rcu_read_lock()/rcu_read_unlock()/synchronize_rcu()) will make the traversal safe.
>>
>> I'm not sure RCU can satisfy this case. RCU can only protect the
>> slab_caches_to_rcu_destroy list, which is used by SLAB_TYPESAFE_BY_RCU
>> slabs.
> 
> I'm not sure why you are talking about SLAB_TYPESAFE_BY_RCU.
> What I meant is that
> 
>    Upon registration:
> 
>      // do initialize/setup stuff here
>      synchronize_rcu(); // <= for dump_unreclaimable_slab()
>      list_add_rcu(&kmem_cache->list, &slab_caches);
> 
>    Upon unregistration:
> 
>      list_del_rcu(&kmem_cache->list);
>      synchronize_rcu(); // <= for dump_unreclaimable_slab()
>      // do finalize/cleanup stuff here
> 
> then (if my understanding is correct)
> 
> 	rcu_read_lock();
> 	list_for_each_entry_rcu(s, &slab_caches, list) {
> 		if (!is_root_cache(s) || (s->flags & SLAB_RECLAIM_ACCOUNT))
> 			continue;
> 
> 		memset(&sinfo, 0, sizeof(sinfo));
> 		get_slabinfo(s, &sinfo);
> 
> 		if (sinfo.num_objs > 0)
> 			pr_info("%-17s %10luKB %10luKB\n", cache_name(s),
> 				(sinfo.active_objs * s->size) / 1024,
> 				(sinfo.num_objs * s->size) / 1024);
> 	}
> 	rcu_read_unlock();
> 
> will make dump_unreclaimable_slab() safe.

Thanks for the detailed description. However, it sounds like this change is
too much for slub; I'm not sure whether it may change the subtle behavior
of slub.

trylock sounds like a good alternative.
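Roughly something along the lines of the sketch below (only an illustration of
the trylock idea, reusing the helpers from your snippet above; the exact
placement and messages are not final):

	void dump_unreclaimable_slab(void)
	{
		struct kmem_cache *s;
		struct slabinfo sinfo;

		/*
		 * The task that triggered the oom may itself hold slab_mutex
		 * (kmalloc(GFP_KERNEL) under slab_mutex), so blocking on it
		 * here could deadlock, while walking slab_caches without any
		 * protection risks a crash. Only dump when the mutex can be
		 * taken without waiting.
		 */
		if (!mutex_trylock(&slab_mutex))
			return;

		pr_info("Unreclaimable slab info:\n");
		list_for_each_entry(s, &slab_caches, list) {
			if (!is_root_cache(s) || (s->flags & SLAB_RECLAIM_ACCOUNT))
				continue;

			memset(&sinfo, 0, sizeof(sinfo));
			get_slabinfo(s, &sinfo);

			if (sinfo.num_objs > 0)
				pr_info("%-17s %10luKB %10luKB\n", cache_name(s),
					(sinfo.active_objs * s->size) / 1024,
					(sinfo.num_objs * s->size) / 1024);
		}
		mutex_unlock(&slab_mutex);
	}

If kmem_cache_destroy() happens to hold slab_mutex at that moment, we would
just skip the dump instead of risking a use-after-free on the list.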

Yang

> 
