Open Source and information security mailing list archives
Date: Thu, 27 Jan 2022 17:26:11 -0700
From: Logan Gunthorpe <logang@...tatee.com>
To: linux-kernel@...r.kernel.org, linux-nvme@...ts.infradead.org,
	linux-block@...r.kernel.org, linux-pci@...r.kernel.org,
	linux-mm@...ck.org, iommu@...ts.linux-foundation.org
Cc: Stephen Bates <sbates@...thlin.com>, Christoph Hellwig <hch@....de>,
	Dan Williams <dan.j.williams@...el.com>, Jason Gunthorpe <jgg@...pe.ca>,
	Christian König <christian.koenig@....com>,
	John Hubbard <jhubbard@...dia.com>, Don Dutile <ddutile@...hat.com>,
	Matthew Wilcox <willy@...radead.org>, Daniel Vetter <daniel.vetter@...ll.ch>,
	Jakowski Andrzej <andrzej.jakowski@...el.com>,
	Minturn Dave B <dave.b.minturn@...el.com>,
	Jason Ekstrand <jason@...kstrand.net>,
	Dave Hansen <dave.hansen@...ux.intel.com>,
	Xiong Jianxin <jianxin.xiong@...el.com>,
	Bjorn Helgaas <helgaas@...nel.org>, Ira Weiny <ira.weiny@...el.com>,
	Robin Murphy <robin.murphy@....com>,
	Martin Oliveira <martin.oliveira@...eticom.com>,
	Chaitanya Kulkarni <ckulkarnilinux@...il.com>,
	Ralph Campbell <rcampbell@...dia.com>,
	Logan Gunthorpe <logang@...tatee.com>
Subject: [PATCH v5 21/24] block: set FOLL_PCI_P2PDMA in bio_map_user_iov()

When a bio's queue supports PCI P2PDMA, set FOLL_PCI_P2PDMA for
iov_iter_get_pages_alloc_flags(). This allows PCI P2PDMA pages to be
passed in from userspace and enables NVMe passthru requests to use
P2PDMA pages.
Signed-off-by: Logan Gunthorpe <logang@...tatee.com>
---
 block/blk-map.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/block/blk-map.c b/block/blk-map.c
index 4526adde0156..7508448e290c 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -234,6 +234,7 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 		gfp_t gfp_mask)
 {
 	unsigned int max_sectors = queue_max_hw_sectors(rq->q);
+	unsigned int flags = 0;
 	struct bio *bio;
 	int ret;
 	int j;
@@ -246,13 +247,17 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 		return -ENOMEM;
 	bio->bi_opf |= req_op(rq);

+	if (blk_queue_pci_p2pdma(rq->q))
+		flags |= FOLL_PCI_P2PDMA;
+
 	while (iov_iter_count(iter)) {
 		struct page **pages;
 		ssize_t bytes;
 		size_t offs, added = 0;
 		int npages;

-		bytes = iov_iter_get_pages_alloc(iter, &pages, LONG_MAX, &offs);
+		bytes = iov_iter_get_pages_alloc_flags(iter, &pages, LONG_MAX,
+						       &offs, flags);
 		if (unlikely(bytes <= 0)) {
 			ret = bytes ? bytes : -EFAULT;
 			goto out_unmap;
-- 
2.30.2