Message-ID: <x49d294xozb.fsf@segfault.boston.devel.redhat.com>
Date: Mon, 03 Nov 2014 10:50:32 -0500
From: Jeff Moyer <jmoyer@...hat.com>
To: "Elliott\, Robert \(Server Storage\)" <Elliott@...com>
Cc: Ross Zwisler <ross.zwisler@...ux.intel.com>,
Jens Axboe <axboe@...nel.dk>,
Matthew Wilcox <matthew.r.wilcox@...el.com>,
Boaz Harrosh <boaz@...xistor.com>,
Nick Piggin <npiggin@...nel.dk>,
"Kani\, Toshimitsu" <toshi.kani@...com>,
"Knippers\, Linda" <linda.knippers@...com>,
"linux-fsdevel\@vger.kernel.org" <linux-fsdevel@...r.kernel.org>,
"linux-kernel\@vger.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-nvdimm\@lists.01.org" <linux-nvdimm@...1.01.org>
Subject: Re: [PATCH 1/4] pmem: Initial version of persistent memory driver
"Elliott, Robert (Server Storage)" <Elliott@...com> writes:
>> +config BLK_DEV_PMEM_COUNT
>> + int "Default number of PMEM disks"
>> + default "4"
>
> For real use I think a default of 1 would be better.
For "real" use, it will be the number of actual DIMMs, not a config
option, I would think. I don't take any issue with defaulting to 4.
This will go away.
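
(FWIW, even the Kconfig default could be overridden at load time with a
module parameter. A sketch only, and "pmem_count" is a name I'm making
up here:

	static int pmem_count = CONFIG_BLK_DEV_PMEM_COUNT;
	module_param(pmem_count, int, 0444);	/* read-only via sysfs */
	MODULE_PARM_DESC(pmem_count, "number of pmem devices to create");

But as I said, the count should ultimately come from the hardware.)
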
>> + pmem->pmem_queue = blk_alloc_queue(GFP_KERNEL);
>> + if (!pmem->pmem_queue)
>> + goto out_free_dev;
>> +
>
> General comment:
> This driver should be blk-mq based. Not doing so loses
> out on a number of SMP and NUMA performance improvements.
> For example, blk_alloc_queue calls blk_alloc_queue_node
> with NUMA_NO_NODE. If it called blk_mq_init_queue, then
> each CPU would get structures located on its node.

No need to use blk-mq just to set the NUMA node; the driver could
just call blk_alloc_queue_node itself. I'd chalk this up as another
thing that could be fixed when the driver is used for actual hardware
that describes its own proximity domain. Further, there is no DMA
engine here, so what is the benefit of using blk-mq? I/O completes in
the submission path, and I don't see any locking that's preventing
parallelism.
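
To illustrate, it is a one-line change. Sketch only; pmem_dev_to_node()
is a helper I'm inventing here to stand in for however the platform ends
up reporting the proximity domain:

	int nid = pmem_dev_to_node(pmem);	/* hypothetical lookup */

	pmem->pmem_queue = blk_alloc_queue_node(GFP_KERNEL, nid);
	if (!pmem->pmem_queue)
		goto out_free_dev;
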
>
>> + blk_queue_make_request(pmem->pmem_queue, pmem_make_request);
>> + blk_queue_max_hw_sectors(pmem->pmem_queue, 1024);
>
> Many storage controllers support 1 MiB IOs now, and increasing
> amounts of software feel comfortable generating those sizes.
> That means this should be at least 2048.
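
For reference, 1024 sectors is only 512 KiB in 512-byte units, so the
1 MiB version of that line would read:

	blk_queue_max_hw_sectors(pmem->pmem_queue, 2048);	/* 2048 * 512 B = 1 MiB */
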
>
>> + blk_queue_bounce_limit(pmem->pmem_queue, BLK_BOUNCE_ANY);
>> +
>
> It might be appropriate to call blk_queue_rq_timeout
> and set up a very short timeout, since persistent memory is
> very fast. I'm not sure what the default is for non-blk-mq
> drivers; for blk-mq, I think it is 30 seconds, which is
> far too long for memory based storage.

If you time out on a memcpy, then we've definitely got problems. :)
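
For completeness, the call Robert describes is a one-liner. A sketch,
with an arbitrary one second picked out of thin air:

	blk_queue_rq_timeout(pmem->pmem_queue, HZ);	/* HZ jiffies = 1 second */

But the request timeout machinery only runs for request-based queues,
and this driver is bio-based, so it would be a no-op here anyway.
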
Cheers,
Jeff