Date:   Sat, 16 Sep 2017 01:40:17 +0800
From:   "Yang Shi" <yang.s@...baba-inc.com>
To:     Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>, cl@...ux.com,
        penberg@...nel.org, rientjes@...gle.com, iamjoonsoo.kim@....com,
        akpm@...ux-foundation.org
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/3] mm: oom: show unreclaimable slab info when kernel
 panic



On 9/15/17 5:00 AM, Tetsuo Handa wrote:
> On 2017/09/15 2:14, Yang Shi wrote:
>> @@ -1274,6 +1276,29 @@ static int slab_show(struct seq_file *m, void *p)
>>   	return 0;
>>   }
>>   
>> +void show_unreclaimable_slab()
>> +{
>> +	struct kmem_cache *s = NULL;
>> +	struct slabinfo sinfo;
>> +
>> +	memset(&sinfo, 0, sizeof(sinfo));
>> +
>> +	printk("Unreclaimable slabs:\n");
>> +	mutex_lock(&slab_mutex);
> 
> Please avoid sleeping locks which potentially depend on memory allocation.
> There are
> 
> 	mutex_lock(&slab_mutex);
> 	kmalloc(GFP_KERNEL);
> 	mutex_unlock(&slab_mutex);
> 
> users which will fail to call panic() if they hit this path
Thanks for the heads up. Since this is only called by the OOM code in
the panic path, it sounds safe to simply drop the
mutex_lock()/mutex_unlock() pair: at that point nobody can allocate
memory, other than with GFP_ATOMIC, in a way that would change the
slab statistics.
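
Roughly, v2 would keep the same loop with the locking dropped,
something like the sketch below (with is_reclaimable() and K() as used
in the hunk above):

void show_unreclaimable_slab(void)
{
	struct kmem_cache *s = NULL;
	struct slabinfo sinfo;

	memset(&sinfo, 0, sizeof(sinfo));

	printk("Unreclaimable slabs:\n");

	/*
	 * No slab_mutex here: this is only reached from the OOM/panic
	 * path, where a sleeping lock must be avoided, and only
	 * GFP_ATOMIC allocations can still change the slab stats.
	 */
	list_for_each_entry(s, &slab_caches, list) {
		if (!is_root_cache(s))
			continue;

		get_slabinfo(s, &sinfo);

		if (!is_reclaimable(s) && sinfo.num_objs > 0)
			printk("%-17s %luKB\n", cache_name(s),
			       K(sinfo.num_objs * s->size));
	}
}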

Even if some GFP_ATOMIC callers do allocate memory successfully, that
should not have an obvious impact on the slabinfo we need to capture,
since GFP_ATOMIC allocations are typically small.

I will drop the mutex in v2 if no one objects.

Thanks,
Yang

> 
>> +	list_for_each_entry(s, &slab_caches, list) {
>> +		if (!is_root_cache(s))
>> +			continue;
>> +
>> +		get_slabinfo(s, &sinfo);
>> +
>> +		if (!is_reclaimable(s) && sinfo.num_objs > 0)
>> +			printk("%-17s %luKB\n", cache_name(s), K(sinfo.num_objs * s->size));
>> +	}
>> +	mutex_unlock(&slab_mutex);
>> +}
>> +EXPORT_SYMBOL(show_unreclaimable_slab);
>> +#undef K
>> +
>>   #if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
>>   void *memcg_slab_start(struct seq_file *m, loff_t *pos)
>>   {
>>
