Message-ID: <CAKTCnzmt6SkxzwmL=qk3ZfDqvFZ8E9OM-8gJmi+0kFsq6xNrvQ@mail.gmail.com>
Date:	Thu, 29 Jan 2015 14:02:38 +0530
From:	Balbir Singh <bsingharora@...il.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Vladimir Davydov <vdavydov@...allels.com>,
	Christoph Lameter <cl@...ux.com>,
	Joonsoo Kim <iamjoonsoo.kim@....com>,
	Pekka Enberg <penberg@...nel.org>,
	David Rientjes <rientjes@...gle.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Michal Hocko <mhocko@...e.cz>, linux-mm <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH -mm v2 1/3] slub: never fail to shrink cache

On Thu, Jan 29, 2015 at 3:27 AM, Andrew Morton
<akpm@...ux-foundation.org> wrote:
> On Wed, 28 Jan 2015 19:22:49 +0300 Vladimir Davydov <vdavydov@...allels.com> wrote:
>
>> SLUB's version of __kmem_cache_shrink() not only removes empty slabs,
>> but also tries to rearrange the partial lists, placing the most-filled
>> slabs at the head to cope with fragmentation. To achieve that, it
>> allocates a temporary array of list heads used to sort slabs by the
>> number of objects in use. If the allocation fails, the whole procedure
>> is aborted.
>>
>> This is unacceptable for the kernel memory accounting extension of the
>> memory cgroup, where we want to make sure that kmem_cache_shrink() has
>> successfully discarded empty slabs. Although an allocation failure is
>> utterly unlikely with the current page allocator implementation, which
>> retries GFP_KERNEL allocations of order <= 2 indefinitely, it is better
>> not to rely on that.
>>
>> This patch therefore makes __kmem_cache_shrink() allocate the array on
>> the stack instead of calling kmalloc(), which may fail. The array size
>> is set to 32 because most SLUB caches store no more than 32 objects per
>> slab page. Slab pages with <= 32 free objects are sorted by the number
>> of objects in use and promoted to the head of the partial list, while
>> slab pages with more than 32 free objects are left at the end of the
>> list with no ordering imposed on them.
>>
>> ...
>>
>> @@ -3375,51 +3376,56 @@ int __kmem_cache_shrink(struct kmem_cache *s)
>>       struct kmem_cache_node *n;
>>       struct page *page;
>>       struct page *t;
>> -     int objects = oo_objects(s->max);
>> -     struct list_head *slabs_by_inuse =
>> -             kmalloc(sizeof(struct list_head) * objects, GFP_KERNEL);
>> +     LIST_HEAD(discard);
>> +     struct list_head promote[SHRINK_PROMOTE_MAX];
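
(For context, the reworked per-node pass implied by the description and the
hunk above looks roughly like the sketch below. This is a reconstruction for
illustration, not the committed patch; the declarations are as in the hunk,
SHRINK_PROMOTE_MAX is assumed to be 32, and the helpers such as
for_each_kmem_cache_node(), discard_slab() and the page->inuse/page->objects
fields are taken to be the ones mm/slub.c already uses:

	for_each_kmem_cache_node(s, node, n) {
		INIT_LIST_HEAD(&discard);
		for (i = 0; i < SHRINK_PROMOTE_MAX; i++)
			INIT_LIST_HEAD(promote + i);

		spin_lock_irqsave(&n->list_lock, flags);

		/*
		 * Bucket each partial slab by its free-object count:
		 * fully empty slabs are unhooked for freeing, slabs with
		 * up to SHRINK_PROMOTE_MAX free objects go to the bucket
		 * matching their free count, and anything with more free
		 * objects than that simply stays where it is.
		 */
		list_for_each_entry_safe(page, t, &n->partial, lru) {
			int free = page->objects - page->inuse;

			if (free == page->objects) {
				list_move(&page->lru, &discard);
				n->nr_partial--;
			} else if (free <= SHRINK_PROMOTE_MAX) {
				list_move(&page->lru, promote + free - 1);
			}
		}

		/*
		 * Splice the buckets back at the head, from most-free
		 * down to least-free, so the most-filled slabs end up
		 * at the very front of the partial list.
		 */
		for (i = SHRINK_PROMOTE_MAX - 1; i >= 0; i--)
			list_splice(promote + i, &n->partial);

		spin_unlock_irqrestore(&n->list_lock, flags);

		/* Free the empty slabs with the list lock dropped. */
		list_for_each_entry_safe(page, t, &discard, lru)
			discard_slab(s, page);
	}

Since list_splice() adds at the head, walking the buckets from most-free down
to least-free leaves the most-filled slabs first, while slabs with more than
SHRINK_PROMOTE_MAX free objects keep their old positions near the tail.)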
>
> 512 bytes of stack.  The call paths leading to __kmem_cache_shrink()
> are many and twisty.  How do we know this isn't a problem?
>
> The logic behind choosing "32" sounds rather rubbery.  What goes wrong
> if we use, say, "4"?
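
(For scale: the 512-byte figure above assumes 64-bit pointers, where

	sizeof(struct list_head) == 2 * sizeof(void *) == 16 bytes
	32 * sizeof(struct list_head) == 32 * 16 == 512 bytes

so choosing "4" instead would shrink the array to 64 bytes, at the price of
sorting only slabs with at most 4 free objects.)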

This much stack space may be fertile ground for kernel stack overflow
code execution :) Another perspective could be that allocations which
should not be charged to a particular cgroup (from an accounting
perspective) could come from a reserved pool instead.

Balbir Singh.