Message-ID: <Y8/sr4SfnuoXxApR@nvidia.com>
Date: Tue, 24 Jan 2023 10:35:27 -0400
From: Jason Gunthorpe <jgg@...dia.com>
To: Alistair Popple <apopple@...dia.com>
Cc: linux-mm@...ck.org, cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org, jhubbard@...dia.com,
tjmercier@...gle.com, hannes@...xchg.org, surenb@...gle.com,
mkoutny@...e.com, daniel@...ll.ch,
"Michael S. Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
virtualization@...ts.linux-foundation.org
Subject: Re: [RFC PATCH 03/19] drivers/vdpa: Convert vdpa to use the new vm_structure
On Tue, Jan 24, 2023 at 04:42:32PM +1100, Alistair Popple wrote:
> @@ -990,8 +989,8 @@ static int vduse_dev_reg_umem(struct vduse_dev *dev,
>
> mmap_read_lock(current->mm);
>
> - lock_limit = PFN_DOWN(rlimit(RLIMIT_MEMLOCK));
> - if (npages + atomic64_read(&current->mm->pinned_vm) > lock_limit)
> + vm_account_init_current(&umem->vm_account);
> + if (vm_account_pinned(&umem->vm_account, npages))
> goto out;
>
> pinned = pin_user_pages(uaddr, npages, FOLL_LONGTERM | FOLL_WRITE,
> @@ -1006,22 +1005,21 @@ static int vduse_dev_reg_umem(struct vduse_dev *dev,
> if (ret)
> goto out;
>
> - atomic64_add(npages, &current->mm->pinned_vm);
Mention in the commit message that this fixes a bug where vdpa would
race the update of mm->pinned_vm and might go past the limit.
Jason