Message-ID: <Yyxy4HFMhpbU/wLu@infradead.org>
Date: Thu, 22 Sep 2022 07:36:16 -0700
From: Christoph Hellwig <hch@...radead.org>
To: Al Viro <viro@...iv.linux.org.uk>
Cc: Jan Kara <jack@...e.cz>, Christoph Hellwig <hch@...radead.org>,
John Hubbard <jhubbard@...dia.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Jens Axboe <axboe@...nel.dk>,
Miklos Szeredi <miklos@...redi.hu>,
"Darrick J . Wong" <djwong@...nel.org>,
Trond Myklebust <trond.myklebust@...merspace.com>,
Anna Schumaker <anna@...nel.org>,
David Hildenbrand <david@...hat.com>,
Logan Gunthorpe <logang@...tatee.com>,
linux-block@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-xfs@...r.kernel.org, linux-nfs@...r.kernel.org,
linux-mm@...ck.org, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 4/7] iov_iter: new iov_iter_pin_pages*() routines
On Tue, Sep 20, 2022 at 06:02:11AM +0100, Al Viro wrote:
> nvme target: nvme read requests end up with somebody allocating and filling
> an sglist, followed by reading from a file into it (using ITER_BVEC). Then
> the pages are sent out, presumably
Yes.
> . I would be very surprised if it turned out
> to be anything other than anon pages allocated by the driver, but I'd like
> to see that confirmed by nvme folks. Probably doesn't need pinning.
They are anon pages allocated by the driver using sgl_alloc().
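
To make that concrete, here is a minimal sketch of the pattern, assuming the
usual helpers: the driver allocates anonymous pages with sgl_alloc(), wraps
them in bio_vecs, and reads file data into them through an ITER_BVEC
iterator.  demo_read_into_sgl() and its error handling are illustrative
only; the real code lives in drivers/nvme/target/io-cmd-file.c and
drivers/target/target_core_file.c.

#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/bvec.h>
#include <linux/uio.h>
#include <linux/fs.h>

static ssize_t demo_read_into_sgl(struct file *file, loff_t pos, size_t len)
{
	struct scatterlist *sgl, *sg;
	struct bio_vec *bvec;
	struct iov_iter iter;
	struct kiocb kiocb;
	unsigned int nents, i;
	ssize_t ret;

	/* Anonymous pages owned by the driver - nothing user-mapped here. */
	sgl = sgl_alloc(len, GFP_KERNEL, &nents);
	if (!sgl)
		return -ENOMEM;

	bvec = kmalloc_array(nents, sizeof(*bvec), GFP_KERNEL);
	if (!bvec) {
		sgl_free(sgl);
		return -ENOMEM;
	}

	for_each_sg(sgl, sg, nents, i) {
		bvec[i].bv_page   = sg_page(sg);
		bvec[i].bv_len    = sg->length;
		bvec[i].bv_offset = sg->offset;
	}

	/* ITER_BVEC: the iterator borrows the driver's page references. */
	iov_iter_bvec(&iter, READ, bvec, nents, len);

	init_sync_kiocb(&kiocb, file);
	kiocb.ki_pos = pos;
	ret = call_read_iter(file, &kiocb, &iter);

	/*
	 * In the real drivers the filled sglist is now handed to the
	 * transport; it is freed here only to keep the sketch self-contained.
	 */
	kfree(bvec);
	sgl_free(sgl);
	return ret;
}

Since the pages never belong to a user mapping, there is nothing for
->read_iter() to pin; the driver's own references keep them alive for the
duration of the I/O.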
> drivers/target/target_core_file.c:292: iov_iter_bvec(&iter, is_write, aio_cmd->bvecs, sgl_nents, len);
Same as nvme target.
> The picture so far looks like we mostly need to take care of pinning when
> we obtain the references from iov_iter_get_pages(). What's more, it looks
> like for ITER_BVEC/ITER_XARRAY/ITER_PIPE we really don't need to pin anything
> on get_pages/pin_pages - they are already protected (or, in the case of
> ITER_PIPE, allocated by the iov_iter itself and not reachable by anybody outside).
That's what I've been trying to say for a while..
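
To spell the rule out as a sketch (illustrative only, not the
iov_iter_pin_pages*() implementation from this series): only user-backed
iterators (ITER_IOVEC/ITER_UBUF, which user_backed_iter() already
identifies) hand out pages that need a FOLL_PIN-style pin, and the release
path has to match how the references were taken.

#include <linux/uio.h>
#include <linux/mm.h>

/*
 * Pages from a user-backed iterator need FOLL_PIN so that writeback and
 * unmap races are handled; BVEC, XARRAY and PIPE pages are kept alive by
 * whoever built the iterator, so plain references are enough.
 */
static inline bool demo_iter_needs_pin(const struct iov_iter *i)
{
	return user_backed_iter(i);
}

/*
 * Matching release path: unpin what was pinned, just put what was merely
 * gotten.  "pinned" is whatever demo_iter_needs_pin() returned when the
 * pages were extracted.
 */
static void demo_release_pages(struct page **pages, unsigned int npages,
			       bool pinned)
{
	unsigned int n;

	if (pinned) {
		unpin_user_pages(pages, npages);
		return;
	}

	for (n = 0; n < npages; n++)
		put_page(pages[n]);
}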