Message-ID: <CACygaLACTORD53caxZXU8xZQxki-374BFPxCaMg=qFX1bH8RaA@mail.gmail.com>
Date: Tue, 8 Dec 2015 19:36:33 +0800
From: Wenwei Tao <ww.tao0320@...il.com>
To: Matias Bjørling <mb@...htnvm.io>
Cc: linux-kernel@...r.kernel.org, linux-block@...r.kernel.org
Subject: Re: [PATCH] lightnvm: change rrpc slab creation/destruction time
Hi Matias
In my understanding, kmem_cache_create() only allocates and sets up some
basic structures; the actual slab memory is allocated from the buddy
system when we use these slabs.
Do you think the memory consumed by these structures is an issue
compared to the lock contention and slab status check in
rrpc_core_init, or do you have any other concerns?
2015-12-08 3:45 GMT+08:00 Matias Bjørling <mb@...htnvm.io>:
> On Mon, Dec 7, 2015 at 1:16 PM, Wenwei Tao <ww.tao0320@...il.com> wrote:
>>
>> Create rrpc slabs during rrpc module init,
>> thus eliminating the lock contention and slab
>> status check in rrpc_core_init, and destroy
>> them on rrpc module exit.
>>
>> Signed-off-by: Wenwei Tao <ww.tao0320@...il.com>
>> ---
>> drivers/lightnvm/rrpc.c | 54 ++++++++++++++++++++++++++++++-------------------
>> 1 file changed, 33 insertions(+), 21 deletions(-)
>>
>> diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c
>> index 75e59c3..a36f39a 100644
>> --- a/drivers/lightnvm/rrpc.c
>> +++ b/drivers/lightnvm/rrpc.c
>> @@ -17,7 +17,6 @@
>> #include "rrpc.h"
>>
>> static struct kmem_cache *rrpc_gcb_cache, *rrpc_rq_cache;
>> -static DECLARE_RWSEM(rrpc_lock);
>>
>> static int rrpc_submit_io(struct rrpc *rrpc, struct bio *bio,
>> struct nvm_rq *rqd, unsigned long flags);
>> @@ -1019,26 +1018,6 @@ static int rrpc_map_init(struct rrpc *rrpc)
>>
>> static int rrpc_core_init(struct rrpc *rrpc)
>> {
>> - down_write(&rrpc_lock);
>> - if (!rrpc_gcb_cache) {
>> - rrpc_gcb_cache = kmem_cache_create("rrpc_gcb",
>> - sizeof(struct rrpc_block_gc), 0, 0, NULL);
>> - if (!rrpc_gcb_cache) {
>> - up_write(&rrpc_lock);
>> - return -ENOMEM;
>> - }
>> -
>> - rrpc_rq_cache = kmem_cache_create("rrpc_rq",
>> - sizeof(struct nvm_rq) + sizeof(struct rrpc_rq),
>> - 0, 0, NULL);
>> - if (!rrpc_rq_cache) {
>> - kmem_cache_destroy(rrpc_gcb_cache);
>> - up_write(&rrpc_lock);
>> - return -ENOMEM;
>> - }
>> - }
>> - up_write(&rrpc_lock);
>> -
>> rrpc->page_pool = mempool_create_page_pool(PAGE_POOL_SIZE, 0);
>> if (!rrpc->page_pool)
>> return -ENOMEM;
>> @@ -1338,14 +1317,47 @@ static struct nvm_tgt_type tt_rrpc = {
>> .exit = rrpc_exit,
>> };
>>
>> +static int __init rrpc_slab_init(void)
>> +{
>> + rrpc_gcb_cache = kmem_cache_create("rrpc_gcb",
>> + sizeof(struct rrpc_block_gc), 0, 0, NULL);
>> + if (!rrpc_gcb_cache)
>> + goto out;
>> +
>> + rrpc_rq_cache = kmem_cache_create("rrpc_rq",
>> + sizeof(struct nvm_rq) + sizeof(struct rrpc_rq),
>> + 0, 0, NULL);
>> + if (!rrpc_rq_cache)
>> + goto out_free;
>> +
>> + return 0;
>> +
>> +out_free:
>> + kmem_cache_destroy(rrpc_gcb_cache);
>> +out:
>> + return -ENOMEM;
>> +}
>> +
>> +static inline void rrpc_slab_free(void)
>> +{
>> + kmem_cache_destroy(rrpc_gcb_cache);
>> + kmem_cache_destroy(rrpc_rq_cache);
>> +}
>> +
>> static int __init rrpc_module_init(void)
>> {
>> + int ret;
>> +
>> + ret = rrpc_slab_init();
>> + if (ret)
>> + return ret;
>> return nvm_register_target(&tt_rrpc);
>> }
>>
>> static void rrpc_module_exit(void)
>> {
>> nvm_unregister_target(&tt_rrpc);
>> + rrpc_slab_free();
>> }
>>
>> module_init(rrpc_module_init);
>
>
> Thanks Tao. I think the previous behavior is better. That way we don't
> consume any memory until the module is in use.