Message-ID: <54b6c6a1-f9e4-5002-c828-3084c5203489@i-love.sakura.ne.jp>
Date:   Sun, 1 Dec 2019 10:17:38 +0900
From:   Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
To:     Andrew Morton <akpm@...ux-foundation.org>,
        Christopher Lameter <cl@...ux.com>
Cc:     Yu Zhao <yuzhao@...gle.com>, Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        "Kirill A . Shutemov" <kirill@...temov.name>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: Re: [FIX] slub: Remove kmalloc under list_lock from list_slab_objects() V2

On 2019/12/01 8:09, Andrew Morton wrote:
>> Perform the allocation in free_partial() before the list_lock is taken.
> 
> No response here?  It looks a lot simpler than the originally proposed
> patch?
> 
>> --- linux.orig/mm/slub.c	2019-10-15 13:54:57.032655296 +0000
>> +++ linux/mm/slub.c	2019-11-11 15:52:11.616397853 +0000
>> @@ -3690,14 +3690,15 @@ error:
>>  }
>>
>>  static void list_slab_objects(struct kmem_cache *s, struct page *page,
>> -							const char *text)
>> +					const char *text, unsigned long *map)
>>  {
>>  #ifdef CONFIG_SLUB_DEBUG
>>  	void *addr = page_address(page);
>>  	void *p;
>> -	unsigned long *map = bitmap_zalloc(page->objects, GFP_ATOMIC);

Changing from a !(__GFP_IO | __GFP_FS) allocation (GFP_ATOMIC) to

>> +
>>  	if (!map)
>>  		return;
>> +
>>  	slab_err(s, page, text, s->name);
>>  	slab_lock(page);
>>
>> @@ -3723,6 +3723,11 @@ static void free_partial(struct kmem_cac
>>  {
>>  	LIST_HEAD(discard);
>>  	struct page *page, *h;
>> +	unsigned long *map = NULL;
>> +
>> +#ifdef CONFIG_SLUB_DEBUG
>> +	map = bitmap_alloc(oo_objects(s->max), GFP_KERNEL);

a __GFP_IO | __GFP_FS allocation (GFP_KERNEL).
How is this path guaranteed to be safe for performing __GFP_IO | __GFP_FS reclaim?

>> +#endif
>>
>>  	BUG_ON(irqs_disabled());
>>  	spin_lock_irq(&n->list_lock);
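
For context on the flag distinction above, here is a minimal, purely
illustrative sketch (not part of the quoted patch; the helper name
alloc_objects_map() is made up) of how an allocation that must not
recurse into filesystem reclaim is usually bracketed with the kernel's
scoped-NOFS API from <linux/sched/mm.h>:

#include <linux/bitmap.h>
#include <linux/gfp.h>
#include <linux/sched/mm.h>

/*
 * GFP_KERNEL allows direct reclaim including __GFP_IO and __GFP_FS;
 * GFP_ATOMIC never enters direct reclaim.  memalloc_nofs_save() masks
 * __GFP_FS for every allocation made inside the scope
 * (memalloc_noio_save() would additionally mask __GFP_IO).
 */
static unsigned long *alloc_objects_map(unsigned int nbits)
{
	unsigned int noreclaim_flags = memalloc_nofs_save();
	unsigned long *map = bitmap_alloc(nbits, GFP_KERNEL);

	memalloc_nofs_restore(noreclaim_flags);
	return map;
}

Whether free_partial() can only ever be reached from a context where
full GFP_KERNEL reclaim is safe is exactly the question raised above.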
