Message-ID: <1a0dd923-7b5c-e1ed-708a-5fdfe8c662dc@alibaba-inc.com>
Date: Sat, 30 Sep 2017 06:15:10 +0800
From: "Yang Shi" <yang.s@...baba-inc.com>
To: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
mhocko@...nel.org
Cc: cl@...ux.com, penberg@...nel.org, rientjes@...gle.com,
iamjoonsoo.kim@....com, akpm@...ux-foundation.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/2 v8] oom: capture unreclaimable slab info in oom message
On 9/28/17 1:45 PM, Tetsuo Handa wrote:
> Yang Shi wrote:
>> On 9/28/17 12:57 PM, Tetsuo Handa wrote:
>>> Yang Shi wrote:
>>>> On 9/27/17 9:36 PM, Tetsuo Handa wrote:
>>>>> On 2017/09/28 6:46, Yang Shi wrote:
>>>>>> Changelog v7 -> v8:
>>>>>> * Adopted Michal’s suggestion to dump unreclaimable slab info when the
>>>>>>   amount of unreclaimable slabs is greater than total user memory, not
>>>>>>   only on the OOM panic path.
>>>>>
>>>>> Holding slab_mutex inside dump_unreclaimable_slab() has been avoided since v2
>>>>> because there are
>>>>>
>>>>> mutex_lock(&slab_mutex);
>>>>> kmalloc(GFP_KERNEL);   /* may enter the OOM path while slab_mutex is held */
>>>>> mutex_unlock(&slab_mutex);
>>>>>
>>>>> users. If we call dump_unreclaimable_slab() on the non-panic OOM path, aren't we
>>>>> introducing a risk of a crash (i.e. kernel panic) on the regular OOM path?
>>>>
>>>> I don't see a difference between the regular OOM path and the OOM panic
>>>> path other than calling panic() at the end.
>>>>
>>>> And the slab dump may be called by the panic path too; it is for both the
>>>> regular and the panic path.
>>>
>>> Calling a function that might cause a kernel oops immediately before calling
>>> panic() would be tolerable, for the kernel will panic after all. But calling a
>>> function that might cause a kernel oops when there is no plan to call panic()
>>> is a bug.
>>
>> I got your point. slab_mutex is used to protect the list of all the slab
>> caches; since we are already in OOM, there should be no kmem cache destroy
>> happening during the list traversal. And list_for_each_entry() has been
>> replaced with list_for_each_entry_safe() to make the traversal more
>> robust.
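>>
>> A rough sketch of what the _safe traversal looks like (illustrative only,
>> not the exact patch code; is_root_cache() and the counter dumping are
>> stand-ins here):
>>
>> 	struct kmem_cache *s, *s2;
>>
>> 	/*
>> 	 * _safe pre-fetches the next entry, so the loop body itself may
>> 	 * remove the current entry from the list without breaking the walk.
>> 	 */
>> 	list_for_each_entry_safe(s, s2, &slab_caches, list) {
>> 		if (!is_root_cache(s) || (s->flags & SLAB_RECLAIM_ACCOUNT))
>> 			continue;
>> 		/* read the per-node counters and pr_info() them */
>> 	}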
>
> I consider that an OOM event and a kmem cache destroy event can run
> concurrently, because slab_mutex, which protects the list of all the slab
> caches, is not held by the OOM event (and unfortunately cannot be held due to
> the possibility of deadlock).
>
> I don't think replacing list_for_each_entry() with list_for_each_entry_safe()
> makes the traversal more robust, for list_for_each_entry_safe() does not defer
> freeing of the memory used by a list element. Rather, replacing
> list_for_each_entry() with list_for_each_entry_rcu() (and making relevant
> changes such as rcu_read_lock()/rcu_read_unlock()/synchronize_rcu()) will make
> the traversal safe.
I'm not sure RCU could satisfy this case. Currently RCU only protects the
slab_caches_to_rcu_destroy list, which is used by SLAB_TYPESAFE_BY_RCU
slabs.
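
Roughly, the current shutdown path (paraphrased from mm/slab_common.c, details
may differ) only defers SLAB_TYPESAFE_BY_RCU caches through RCU:

	list_del(&s->list);		/* under slab_mutex */
	if (s->flags & SLAB_TYPESAFE_BY_RCU) {
		list_add_tail(&s->list, &slab_caches_to_rcu_destroy);
		schedule_work(&slab_caches_to_rcu_destroy_work);
	} else {
		/* the cache is released right away, no grace period */
	}

So other caches are unlinked and released without any grace period, and a
plain RCU read-side walk of slab_caches would not help unless that path is
changed as well.
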
Yang