Message-id: <55190F23.4020009@samsung.com>
Date: Mon, 30 Mar 2015 11:53:55 +0300
From: Andrey Ryabinin <a.ryabinin@...sung.com>
To: David Rientjes <rientjes@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Dave Kleikamp <shaggy@...nel.org>,
Christoph Hellwig <hch@....de>,
Sebastian Ott <sebott@...ux.vnet.ibm.com>,
Mikulas Patocka <mpatocka@...hat.com>,
Catalin Marinas <catalin.marinas@....com>,
LKML <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
jfs-discussion@...ts.sourceforge.net
Subject: Re: [patch v2 4/4] mm, mempool: poison elements backed by page
allocator
On 03/27/2015 01:50 AM, David Rientjes wrote:
> On Thu, 26 Mar 2015, Andrey Ryabinin wrote:
>
>>> +static void check_element(mempool_t *pool, void *element)
>>> +{
>>> + /* Mempools backed by slab allocator */
>>> + if (pool->free == mempool_free_slab || pool->free == mempool_kfree)
>>> + __check_element(pool, element, ksize(element));
>>> +
>>> + /* Mempools backed by page allocator */
>>> + if (pool->free == mempool_free_pages) {
>>> + int order = (int)(long)pool->pool_data;
>>> + void *addr = page_address(element);
>>> +
>>> + __check_element(pool, addr, 1UL << (PAGE_SHIFT + order));
>>> }
>>> }
>>>
>>> -static void poison_slab_element(mempool_t *pool, void *element)
>>> +static void __poison_element(void *element, size_t size)
>>> {
>>> - if (pool->alloc == mempool_alloc_slab ||
>>> - pool->alloc == mempool_kmalloc) {
>>> - size_t size = ksize(element);
>>> - u8 *obj = element;
>>> + u8 *obj = element;
>>> +
>>> + memset(obj, POISON_FREE, size - 1);
>>> + obj[size - 1] = POISON_END;
>>> +}
>>> +
>>> +static void poison_element(mempool_t *pool, void *element)
>>> +{
>>> + /* Mempools backed by slab allocator */
>>> + if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
>>> + __poison_element(element, ksize(element));
>>> +
>>> + /* Mempools backed by page allocator */
>>> + if (pool->alloc == mempool_alloc_pages) {
>>> + int order = (int)(long)pool->pool_data;
>>> + void *addr = page_address(element);
>>>
>>> - memset(obj, POISON_FREE, size - 1);
>>> - obj[size - 1] = POISON_END;
>>> + __poison_element(addr, 1UL << (PAGE_SHIFT + order));
>>
>> I think it would be better to use kernel_map_pages() here and in
>> check_element().
>
> Hmm, interesting suggestion.
>
>> This implies that poison_element()/check_element() have to be moved out of
>> the CONFIG_DEBUG_SLAB || CONFIG_SLUB_DEBUG_ON ifdef (keeping only the slab
>> poisoning under this ifdef).
>
> The mempool poisoning introduced here is really its own poisoning built on
> top of whatever the mempool allocator is. Otherwise, it would have called
> into the slab subsystem to do the poisoning and include any allocated
> space beyond the object size itself.
Perhaps that would be a good thing to do. I mean, it makes sense to check the
redzone for corruption as well.
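E.g. something like this, purely illustrative (slab_debug_check_object() is
a made-up name, no such hook is exported by the slab allocators today):

	/*
	 * Hypothetical: let the slab layer verify its own redzone/poison
	 * for the object, instead of mempool checking only the first
	 * ksize(element) bytes. slab_debug_check_object() is an invented
	 * name used for illustration only; pool->pool_data is the
	 * kmem_cache for mempool_free_slab-backed pools.
	 */
	if (pool->free == mempool_free_slab)
		slab_debug_check_object(pool->pool_data, element);
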
> Mempool poisoning is agnostic to the
> underlying memory just like the chain of elements is; mempools don't even
> store the size.
>
> We don't have a need to set PAGE_EXT_DEBUG_POISON on these pages sitting
> in the reserved pool, nor do we have a need to do kmap_atomic() since it's
> already mapped and must be mapped to be on the reserved pool, which is
> handled by mempool_free().
>
Well, yes, but that applies only to architectures that don't have
ARCH_SUPPORTS_DEBUG_PAGEALLOC.
The rest of the architectures would only benefit from this, since
kernel_map_pages() could potentially catch more bugs.
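
Roughly, I mean something like this (untested sketch; with
CONFIG_DEBUG_PAGEALLOC and arch support the pages are unmapped while they
sit in the reserved pool, so any use-after-free faults immediately; without
arch support kernel_map_pages() falls back to page poisoning, and it's a
no-op when CONFIG_DEBUG_PAGEALLOC is off):

	static void poison_element(mempool_t *pool, void *element)
	{
		/* Mempools backed by slab allocator */
		if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
			__poison_element(element, ksize(element));

		/* Mempools backed by page allocator: unmap while in the pool */
		if (pool->alloc == mempool_alloc_pages) {
			int order = (int)(long)pool->pool_data;

			kernel_map_pages(element, 1 << order, 0);
		}
	}

	static void check_element(mempool_t *pool, void *element)
	{
		/* Mempools backed by slab allocator */
		if (pool->free == mempool_free_slab || pool->free == mempool_kfree)
			__check_element(pool, element, ksize(element));

		/* Mempools backed by page allocator: map back before reuse */
		if (pool->free == mempool_free_pages) {
			int order = (int)(long)pool->pool_data;

			kernel_map_pages(element, 1 << order, 1);
		}
	}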