Message-Id: <20230322162638.d940201434ac3a3a29968979@linux-foundation.org>
Date: Wed, 22 Mar 2023 16:26:38 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Jens Axboe <axboe@...nel.dk>
Cc: David Howells <dhowells@...hat.com>,
Stephen Rothwell <sfr@...b.auug.org.au>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux Next Mailing List <linux-next@...r.kernel.org>,
Lorenzo Stoakes <lstoakes@...il.com>
Subject: Re: linux-next: manual merge of the block tree with the mm tree
On Wed, 22 Mar 2023 17:15:48 -0600 Jens Axboe <axboe@...nel.dk> wrote:
> On 3/22/23 5:13 PM, David Howells wrote:
> > Stephen Rothwell <sfr@...b.auug.org.au> wrote:
> >
> >> + if (unlikely(iov_iter_is_pipe(i))) {
> >> + copied = copy_page_to_iter_pipe(page, offset, bytes, i);
> >> + goto out;
> >> + }
> >
> > This bit would need to be removed from copy_page_to_iter_atomic() as the two
> > functions it calls should be removed by the patch in the block tree.
>
> Maybe it'd be better to consolidate rather than split the changes over
> two trees?
fyi, Lorenzo has sent out v7 of this series.  I'll be pushing it in
an hour or so, after which I suggest Stephen remove those (now) two
lines and send out one of his "build fix" emails, which can be the
basis for Linus's resolution.
Or I can just steal "iov_iter: Kill ITER_PIPE"...
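(For reference, the two lines in question are presumably the
iov_iter_is_pipe() check in the new copy_page_to_iter_nofault() in the
v7 patch below:

	if (unlikely(iov_iter_is_pipe(i)))
		return copy_page_to_iter_pipe(page, offset, bytes, i);

Both iov_iter_is_pipe() and copy_page_to_iter_pipe() go away once
ITER_PIPE is removed, so the resolution would simply drop that check.)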
From: Lorenzo Stoakes <lstoakes@...il.com>
Subject: iov_iter: add copy_page_to_iter_nofault()
Date: Wed, 22 Mar 2023 18:57:03 +0000
Provide a means to copy a page to user space from an iterator, aborting if
a page fault would occur. This supports compound pages, but may be passed
a tail page with an offset extending further into the compound page, so we
cannot pass a folio.
This allows the function to be called from atomic context and to _try_ to
copy to user pages if they are faulted in, aborting if not.
The function does not use _copy_to_iter() so that might_fault() is not
specified; this is similar to copy_page_from_iter_atomic().
This is being added so that an iterator-based form of vread() can be
implemented while holding spinlocks.
Link: https://lkml.kernel.org/r/19734729defb0f498a76bdec1bef3ac48a3af3e8.1679511146.git.lstoakes@gmail.com
Signed-off-by: Lorenzo Stoakes <lstoakes@...il.com>
Cc: Alexander Viro <viro@...iv.linux.org.uk>
Cc: Baoquan He <bhe@...hat.com>
Cc: David Hildenbrand <david@...hat.com>
Cc: Jens Axboe <axboe@...nel.dk>
Cc: Jiri Olsa <jolsa@...nel.org>
Cc: Liu Shixin <liushixin2@...wei.com>
Cc: Matthew Wilcox (Oracle) <willy@...radead.org>
Cc: Uladzislau Rezki (Sony) <urezki@...il.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
---
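[Not part of the patch: a minimal, hypothetical usage sketch.  The
caller name example_read() and the lock are made up; the pattern is
roughly what an iterator-based vread() might do with the new helper.]

#include <linux/mm.h>
#include <linux/spinlock.h>
#include <linux/uio.h>

static DEFINE_SPINLOCK(example_lock);

/*
 * Copy one page's worth of data to a user-backed iterator while holding
 * a spinlock.  copy_page_to_iter_nofault() never sleeps to fault pages
 * in (it uses copy_to_user_nofault() internally), so it may return
 * fewer than 'bytes' bytes if the destination is not resident.
 */
static size_t example_read(struct page *page, unsigned int offset,
			   size_t bytes, struct iov_iter *iter)
{
	size_t copied;

	spin_lock(&example_lock);
	copied = copy_page_to_iter_nofault(page, offset, bytes, iter);
	spin_unlock(&example_lock);

	if (copied < bytes) {
		/*
		 * Short copy: the user pages were not faulted in.  A
		 * real caller could fault them in outside the lock,
		 * e.g. with fault_in_iov_iter_writeable(), and retry.
		 */
	}
	return copied;
}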
--- a/include/linux/uio.h~iov_iter-add-copy_page_to_iter_nofault
+++ a/include/linux/uio.h
@@ -173,6 +173,8 @@ static inline size_t copy_folio_to_iter(
{
return copy_page_to_iter(&folio->page, offset, bytes, i);
}
+size_t copy_page_to_iter_nofault(struct page *page, unsigned offset,
+ size_t bytes, struct iov_iter *i);
static __always_inline __must_check
size_t copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
--- a/lib/iov_iter.c~iov_iter-add-copy_page_to_iter_nofault
+++ a/lib/iov_iter.c
@@ -172,6 +172,18 @@ static int copyout(void __user *to, cons
return n;
}
+static int copyout_nofault(void __user *to, const void *from, size_t n)
+{
+ long res;
+
+ if (should_fail_usercopy())
+ return n;
+
+ res = copy_to_user_nofault(to, from, n);
+
+ return res < 0 ? n : res;
+}
+
static int copyin(void *to, const void __user *from, size_t n)
{
size_t res = n;
@@ -734,6 +746,42 @@ size_t copy_page_to_iter(struct page *pa
}
EXPORT_SYMBOL(copy_page_to_iter);
+size_t copy_page_to_iter_nofault(struct page *page, unsigned offset, size_t bytes,
+ struct iov_iter *i)
+{
+ size_t res = 0;
+
+ if (!page_copy_sane(page, offset, bytes))
+ return 0;
+ if (WARN_ON_ONCE(i->data_source))
+ return 0;
+ if (unlikely(iov_iter_is_pipe(i)))
+ return copy_page_to_iter_pipe(page, offset, bytes, i);
+ page += offset / PAGE_SIZE; // first subpage
+ offset %= PAGE_SIZE;
+ while (1) {
+ void *kaddr = kmap_local_page(page);
+ size_t n = min(bytes, (size_t)PAGE_SIZE - offset);
+
+ iterate_and_advance(i, n, base, len, off,
+ copyout_nofault(base, kaddr + offset + off, len),
+ memcpy(base, kaddr + offset + off, len)
+ )
+ kunmap_local(kaddr);
+ res += n;
+ bytes -= n;
+ if (!bytes || !n)
+ break;
+ offset += n;
+ if (offset == PAGE_SIZE) {
+ page++;
+ offset = 0;
+ }
+ }
+ return res;
+}
+EXPORT_SYMBOL(copy_page_to_iter_nofault);
+
size_t copy_page_from_iter(struct page *page, size_t offset, size_t bytes,
struct iov_iter *i)
{
_