Message-ID: <5087EFA2.6030607@parallels.com>
Date: Wed, 24 Oct 2012 17:39:46 +0400
From: Glauber Costa <glommer@...allels.com>
To: JoonSoo Kim <js1304@...il.com>
CC: Christoph Lameter <cl@...ux.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>
Subject: Re: [PATCH 2/2] slab: move kmem_cache_free to common code
On 10/23/2012 07:43 PM, JoonSoo Kim wrote:
> 2012/10/23 Glauber Costa <glommer@...allels.com>:
>> On 10/23/2012 12:07 PM, Glauber Costa wrote:
>>> On 10/23/2012 04:48 AM, JoonSoo Kim wrote:
>>>> Hello, Glauber.
>>>>
>>>> 2012/10/23 Glauber Costa <glommer@...allels.com>:
>>>>> On 10/22/2012 06:45 PM, Christoph Lameter wrote:
>>>>>> On Mon, 22 Oct 2012, Glauber Costa wrote:
>>>>>>
>>>>>>> + * kmem_cache_free - Deallocate an object
>>>>>>> + * @s: The cache the allocation was from.
>>>>>>> + * @x: The previously allocated object.
>>>>>>> + *
>>>>>>> + * Free an object which was previously allocated from this
>>>>>>> + * cache.
>>>>>>> + */
>>>>>>> +void kmem_cache_free(struct kmem_cache *s, void *x)
>>>>>>> +{
>>>>>>> + __kmem_cache_free(s, x);
>>>>>>> + trace_kmem_cache_free(_RET_IP_, x);
>>>>>>> +}
>>>>>>> +EXPORT_SYMBOL(kmem_cache_free);
>>>>>>> +
>>>>>>
>>>>>> This results in an additional indirection if tracing is off. Wonder if
>>>>>> there is a performance impact?
>>>>>>
>>>>> if tracing is on, you mean?
>>>>>
>>>>> Tracing already incurs overhead; I'm not sure how much more a function
>>>>> call would add to it.
>>>>>
>>>>> I would not be concerned about this, but I can measure it if you have
>>>>> any specific workload in mind.
>>>>
>>>> With this patch, kmem_cache_free() invokes __kmem_cache_free(),
>>>> that is, it adds one more "call instruction" than before.
>>>>
>>>> I think that is the point of Christoph's comment.
>>>
>>> Ah, this. Ok, I got fooled by his mention of tracing.
>>>
>>> I do agree, but since freeing is ultimately dependent on the allocator
>>> layout, I don't see a clean way of doing this without dropping tears of
>>> sorrow around. The calls in slub/slab/slob would have to be somehow
>>> inlined. Hum... maybe it is possible to do it from
>>> include/linux/sl*b_def.h...
>>>
>>> Let me give it a try and see what I can come up with.
>>>
>>
>> Ok.
>>
>> I am attaching a PoC for this for your consideration. It gets quite
>> ugly, but it's the best way I found short of including sl{a,u,o}b.c
>> directly - which would be even worse.
>
> Hmm...
> This is an important issue for the common sl[aou]b code, because there
> are similar functions such as kmem_cache_alloc, ksize, kfree, ...
> So it is a good time to resolve this issue.
>
> As far as I know, we now have 3 solutions.
>
> 1. include/linux/slab.h
>    static __always_inline void kmem_cache_free(struct kmem_cache *s, void *x)
>    {
>            __kmem_cache_free(s, x);
>            trace_kmem_cache_free(_RET_IP_, x);
>    }
>
> 2. define a macro, as in Glauber's solution
> 3. include sl[aou]b.c directly.
>
> Is there another good solution?
> Among them, I prefer "solution 3", because its future development cost
> may be the lowest.
>
> "Solution 2" may be error-prone for future development.
> "Solution 1" may make compile times longer and the code larger.
>
> Is my understanding right?
> Is "Solution 3" really ugly?
>
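To make the comparison concrete, solution 1 would amount to roughly the
sketch below in include/linux/slab.h. This is only an illustration: the
signature comes from the patch above, but the placement of the
__kmem_cache_free() declaration and the export handling are assumptions.

/*
 * Sketch of solution 1 (illustration only): make the wrapper
 * __always_inline in include/linux/slab.h, so every call site calls
 * the allocator's __kmem_cache_free() directly and the extra call
 * disappears.  Assumes each allocator provides __kmem_cache_free()
 * with this signature.
 */
void __kmem_cache_free(struct kmem_cache *s, void *x);

static __always_inline void kmem_cache_free(struct kmem_cache *s, void *x)
{
	__kmem_cache_free(s, x);
	trace_kmem_cache_free(_RET_IP_, x);
}

The downside is the one JoonSoo points out: the tracepoint, and anything
else the common wrapper grows, gets expanded into every caller, so compile
time and code size go up, and __kmem_cache_free() would have to carry the
EXPORT_SYMBOL instead of the wrapper. Solution 3 sidesteps that by letting
the compiler inline within a single translation unit.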
I just gave solution 3 a try. Turns out the result is not *that* bad to my eyes.
I'll post it soon.
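
For reference, solution 3 could look roughly like the sketch below. The
file layout and the CONFIG_* selection are assumptions for illustration,
not the actual patch:

/* mm/slab_common.c - sketch of solution 3 (illustration only) */

/*
 * Pull the selected allocator into this translation unit so the
 * compiler can see the definition of __kmem_cache_free() and inline
 * it into the common wrapper below.
 */
#ifdef CONFIG_SLUB
#include "slub.c"
#elif defined(CONFIG_SLOB)
#include "slob.c"
#else
#include "slab.c"
#endif

void kmem_cache_free(struct kmem_cache *s, void *x)
{
	__kmem_cache_free(s, x);	/* can now be inlined by the compiler */
	trace_kmem_cache_free(_RET_IP_, x);
}
EXPORT_SYMBOL(kmem_cache_free);

The Makefile would then have to stop building the allocator file as a
separate object, otherwise every symbol would be defined twice; that bit
of ugliness is what this approach trades for getting the inlining for
free.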