Message-ID: <7950398c-1993-4006-9af9-8e924ddb2dda@lucifer.local>
Date: Tue, 18 Apr 2023 17:25:08 +0100
From: Lorenzo Stoakes <lstoakes@...il.com>
To: David Hildenbrand <david@...hat.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Matthew Wilcox <willy@...radead.org>,
Jens Axboe <axboe@...nel.dk>,
Pavel Begunkov <asml.silence@...il.com>,
io-uring@...r.kernel.org
Subject: Re: [PATCH v4 4/6] io_uring: rsrc: avoid use of vmas parameter in
pin_user_pages()
On Tue, Apr 18, 2023 at 05:55:48PM +0200, David Hildenbrand wrote:
> On 18.04.23 17:49, Lorenzo Stoakes wrote:
> > We are shortly to remove pin_user_pages(), and instead perform the required
> > VMA checks ourselves. In most cases there will be a single VMA so this
> > should cause no undue impact on an already slow path.
> >
> > Doing this eliminates the one instance of vmas being used by
> > pin_user_pages().
> >
> > Suggested-by: Matthew Wilcox (Oracle) <willy@...radead.org>
> > Signed-off-by: Lorenzo Stoakes <lstoakes@...il.com>
> > ---
> > io_uring/rsrc.c | 55 ++++++++++++++++++++++++++++---------------------
> > 1 file changed, 31 insertions(+), 24 deletions(-)
> >
> > diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
> > index 7a43aed8e395..3a927df9d913 100644
> > --- a/io_uring/rsrc.c
> > +++ b/io_uring/rsrc.c
> > @@ -1138,12 +1138,37 @@ static int io_buffer_account_pin(struct io_ring_ctx *ctx, struct page **pages,
> > return ret;
> > }
> > +static int check_vmas_locked(unsigned long addr, unsigned long len)
>
> TBH, the whole "_locked" suffix is a bit confusing.
>
> I was wondering why you'd want to check whether the VMAs are locked ...
>
Yeah, it's annoying, partly because GUP itself is super inconsistent about
it. The idea is to indicate that you need to hold mmap_lock,
obviously... but let's drop _locked then, since we're inconsistent with it anyway.
> > +{
> > + struct file *file;
> > + VMA_ITERATOR(vmi, current->mm, addr);
> > + struct vm_area_struct *vma = vma_next(&vmi);
> > + unsigned long end = addr + len;
> > +
> > + if (WARN_ON_ONCE(!vma))
> > + return -EINVAL;
> > +
> > + file = vma->vm_file;
> > + if (file && !is_file_hugepages(file))
> > + return -EOPNOTSUPP;
>
> You'd now be rejecting vma_is_shmem() here, no?
>
Good spot, I guess I was confused as to why io_uring would actually want to
do that... not sure who'd actually want to map some shmem for this
purpose!
I'll update it to match the existing code.
>
> --
> Thanks,
>
> David / dhildenb
>
To avoid spam, here's a -fix patch for both:-
----8<----
From 62838d66ee01e631c7c8aa3848b6892d1478c5b6 Mon Sep 17 00:00:00 2001
From: Lorenzo Stoakes <lstoakes@...il.com>
Date: Tue, 18 Apr 2023 17:11:01 +0100
Subject: [PATCH] io_uring: rsrc: avoid use of vmas parameter in
pin_user_pages()
Rename the function to avoid confusion and correct the shmem check, as
suggested by David.
---
io_uring/rsrc.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 3a927df9d913..483b975e31b3 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -1138,7 +1138,7 @@ static int io_buffer_account_pin(struct io_ring_ctx *ctx, struct page **pages,
return ret;
}
-static int check_vmas_locked(unsigned long addr, unsigned long len)
+static int check_vmas_compatible(unsigned long addr, unsigned long len)
{
struct file *file;
VMA_ITERATOR(vmi, current->mm, addr);
@@ -1149,15 +1149,16 @@ static int check_vmas_locked(unsigned long addr, unsigned long len)
return -EINVAL;
file = vma->vm_file;
- if (file && !is_file_hugepages(file))
- return -EOPNOTSUPP;
/* don't support file backed memory */
for_each_vma_range(vmi, vma, end) {
if (vma->vm_file != file)
return -EINVAL;
- if (file && !vma_is_shmem(vma))
+ if (!file)
+ continue;
+
+ if (!vma_is_shmem(vma) && !is_file_hugepages(file))
return -EOPNOTSUPP;
}
@@ -1185,7 +1186,7 @@ struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages)
pages, NULL);
if (pret == nr_pages) {
- ret = check_vmas_locked(ubuf, len);
+ ret = check_vmas_compatible(ubuf, len);
*npages = nr_pages;
} else {
ret = pret < 0 ? pret : -EFAULT;
--
2.40.0