Date:   Thu, 28 Sep 2017 01:21:27 +0800
From:   "Yang Shi" <yang.s@...baba-inc.com>
To:     Christopher Lameter <cl@...ux.com>
Cc:     penberg@...nel.org, rientjes@...gle.com, iamjoonsoo.kim@....com,
        akpm@...ux-foundation.org, mhocko@...nel.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] mm: oom: show unreclaimable slab info when kernel
 panic



On 9/27/17 12:14 AM, Christopher Lameter wrote:
> On Wed, 27 Sep 2017, Yang Shi wrote:
> 
>> Print out unreclaimable slab info (used size and total size) for
>> caches whose actual memory usage is not zero (num_objs * size != 0) when:
>>    - the ratio of unreclaimable slabs to all user memory exceeds
>>      unreclaim_slabs_oom_ratio
>>    - panic_on_oom is set or there is no killable process
> 
> Ok. I like this much more than the earlier releases.
> 
>> diff --git a/mm/slab.h b/mm/slab.h
>> index 0733628..b0496d1 100644
>> --- a/mm/slab.h
>> +++ b/mm/slab.h
>> @@ -505,6 +505,14 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
>>   void memcg_slab_stop(struct seq_file *m, void *p);
>>   int memcg_slab_show(struct seq_file *m, void *p);
>>
>> +#ifdef CONFIG_SLABINFO
>> +void dump_unreclaimable_slab(void);
>> +#else
>> +static inline void dump_unreclaimable_slab(void)
>> +{
>> +}
>> +#endif
> 
> CONFIG_SLABINFO? How does this relate to the oom info? /proc/slabinfo
> support is optional. Oom info could be included even if CONFIG_SLABINFO
> goes away. Remove the #ifdef?

Because we want to dump the unreclaimable slab info as part of the oom output.

Thanks,
Yang

