Message-ID: <20140602100821.GB30612@infradead.org>
Date:	Mon, 2 Jun 2014 03:08:21 -0700
From:	Christoph Hellwig <hch@...radead.org>
To:	Matias Bjørling <m@...rling.me>
Cc:	willy@...ux.intel.com, keith.busch@...el.com, sbradshaw@...ron.com,
	axboe@...nel.dk, linux-kernel@...r.kernel.org,
	linux-nvme@...ts.infradead.org
Subject: Re: [PATCH v4] NVMe: basic conversion to blk-mq

> +static int nvme_map_rq(struct nvme_queue *nvmeq, struct nvme_iod *iod,
> +		struct request *req, enum dma_data_direction dma_dir,
> +		int psegs)
>  {
>  	sg_init_table(iod->sg, psegs);
> +	iod->nents = blk_rq_map_sg(req->q, req, iod->sg);
>  
> +	if (!dma_map_sg(nvmeq->q_dmadev, iod->sg, iod->nents, dma_dir))
>  		return -ENOMEM;
>  
> +	return iod->nents;

Given how simple this is, I'd suggest merging it into the only caller.
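Something along these lines inline in the caller, i.e. (rough sketch
only, reusing the names from the patch, error path unchanged):

	sg_init_table(iod->sg, psegs);
	iod->nents = blk_rq_map_sg(req->q, req, iod->sg);
	if (!dma_map_sg(nvmeq->q_dmadev, iod->sg, iod->nents, dma_dir))
		return -ENOMEM;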

> +static int nvme_submit_iod(struct nvme_queue *nvmeq, struct nvme_iod *iod,
> +							struct nvme_ns *ns)
>  {
> +	struct request *req = iod->private;
>  	struct nvme_command *cmnd;
> +	u16 control = 0;
> +	u32 dsmgmt = 0;
>  
> +	spin_lock_irq(&nvmeq->q_lock);
> +	if (nvmeq->q_suspended) {
> +		spin_unlock_irq(&nvmeq->q_lock);
> +		return -EBUSY;
> +	}
>  
> +	if (req->cmd_flags & REQ_DISCARD) {
> +		nvme_submit_discard(nvmeq, ns, req, iod);
> +		goto end_submit;
> +	}
> +	if (req->cmd_flags & REQ_FLUSH) {
> +		nvme_submit_flush(nvmeq, ns, req->tag);
> +		goto end_submit;
> +	}

It would be nicer to have the locking and the suspend check
in the caller, and then branch out to one function for each type
of request, especially as the caller already has special cases for
discard and zero-payload requests anyway.
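Roughly like this (untested sketch, assuming the caller keeps its
existing error handling and unlock path):

	spin_lock_irq(&nvmeq->q_lock);
	if (nvmeq->q_suspended) {
		spin_unlock_irq(&nvmeq->q_lock);
		return -EBUSY;
	}

	if (req->cmd_flags & REQ_DISCARD)
		ret = nvme_submit_discard(nvmeq, ns, req, iod);
	else if (req->cmd_flags & REQ_FLUSH)
		ret = nvme_submit_flush(nvmeq, ns, req->tag);
	else
		ret = nvme_submit_iod(nvmeq, iod, ns);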

> +static int nvme_queue_request(struct blk_mq_hw_ctx *hctx, struct request *req)
> +{

Can you call this nvme_queue_rq to match the method name?  Makes
grepping so much easier.  (Ditto for the admin queue.)

> +	struct nvme_ns *ns = hctx->queue->queuedata;
> +	struct nvme_queue *nvmeq = hctx->driver_data;
>  
> +	return nvme_submit_req_queue(nvmeq, ns, req);

What's the point of the separate nvme_submit_req_queue function?

>  	spin_lock(&nvmeq->q_lock);
> -	nvme_process_cq(nvmeq);
> -	result = nvmeq->cqe_seen ? IRQ_HANDLED : IRQ_NONE;
> -	nvmeq->cqe_seen = 0;
> +	result = nvme_process_cq(nvmeq) ? IRQ_HANDLED : IRQ_NONE;

No other caller checks the nvme_process_cq return value, so it might
as well return the IRQ_ values directly.
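E.g. something like (untested sketch; nvme_process_cq would itself
return IRQ_HANDLED or IRQ_NONE):

	static irqreturn_t nvme_irq(int irq, void *data)
	{
		struct nvme_queue *nvmeq = data;
		irqreturn_t result;

		spin_lock(&nvmeq->q_lock);
		result = nvme_process_cq(nvmeq);
		spin_unlock(&nvmeq->q_lock);
		return result;
	}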

> +static struct blk_mq_ops nvme_mq_admin_ops = {
> +	.queue_rq	= nvme_queue_admin_request,
> +	.map_queue	= blk_mq_map_queue,
> +	.init_hctx	= nvme_init_admin_hctx,
> +	.init_request	= nvme_init_admin_request,
> +	.timeout	= nvme_timeout,

Care to name these nvme_admin_<methodname> for easier grep-ability?

> +static int nvme_alloc_admin_tags(struct nvme_dev *dev)
> +{
> +	if (!dev->admin_rq) {

Why do you need the NULL check here?

> +		dev->admin_tagset.reserved_tags = 1;

What is the reserved tag for?

> +		dev->admin_rq = blk_mq_init_queue(&dev->admin_tagset);
> +		if (!dev->admin_rq) {
> +			memset(&dev->admin_tagset, 0,
> +						sizeof(dev->admin_tagset));
> +			blk_mq_free_tag_set(&dev->admin_tagset);

Why do you zero the tagset here before freeing it?

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
