Message-ID: <20210201205759.GA2128135@dhcp-10-100-145-180.wdc.com>
Date: Mon, 1 Feb 2021 12:57:59 -0800
From: Keith Busch <kbusch@...nel.org>
To: Jianxiong Gao <jxgao@...gle.com>
Cc: erdemaktas@...gle.com, marcorr@...gle.com, hch@....de,
m.szyprowski@...sung.com, robin.murphy@....com,
gregkh@...uxfoundation.org, saravanak@...gle.com,
heikki.krogerus@...ux.intel.com, rafael.j.wysocki@...el.com,
andriy.shevchenko@...ux.intel.com, dan.j.williams@...el.com,
bgolaszewski@...libre.com, jroedel@...e.de,
iommu@...ts.linux-foundation.org, konrad.wilk@...cle.com,
axboe@...com, sagi@...mberg.me, linux-nvme@...ts.infradead.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH V2 3/3] Adding
device_dma_parameters->offset_preserve_mask to NVMe driver.
On Mon, Feb 01, 2021 at 10:30:17AM -0800, Jianxiong Gao wrote:
> @@ -868,12 +871,24 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
> if (!iod->nents)
> goto out_free_sg;
>
> + offset_ret = dma_set_min_align_mask(dev->dev, NVME_CTRL_PAGE_SIZE - 1);
> + if (offset_ret) {
> + dev_warn(dev->dev, "dma_set_min_align_mask failed to set offset\n");
> + goto out_free_sg;
> + }
> +
> if (is_pci_p2pdma_page(sg_page(iod->sg)))
> nr_mapped = pci_p2pdma_map_sg_attrs(dev->dev, iod->sg,
> iod->nents, rq_dma_dir(req), DMA_ATTR_NO_WARN);
> else
> nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents,
> rq_dma_dir(req), DMA_ATTR_NO_WARN);
> +
> + offset_ret = dma_set_min_align_mask(dev->dev, 0);
> + if (offset_ret) {
> + dev_warn(dev->dev, "dma_set_min_align_mask failed to reset offset\n");
> + goto out_free_sg;
> + }
> if (!nr_mapped)
> goto out_free_sg;
Why is this setting being done and undone on each IO? Wouldn't it be
more efficient to set it once during device initialization?
And more importantly, this isn't thread-safe: one CPU may be setting the
device's DMA alignment mask to 0 while another CPU expects it to be
NVME_CTRL_PAGE_SIZE - 1.