Message-ID: <SA0PR15MB391928BD9B0B20F85A061A2E99D39@SA0PR15MB3919.namprd15.prod.outlook.com>
Date: Mon, 30 Jan 2023 13:27:53 +0000
From: Bernard Metzler <BMT@...ich.ibm.com>
To: Alistair Popple <apopple@...dia.com>
CC: Jason Gunthorpe <jgg@...dia.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"cgroups@...r.kernel.org" <cgroups@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"jhubbard@...dia.com" <jhubbard@...dia.com>,
"tjmercier@...gle.com" <tjmercier@...gle.com>,
"hannes@...xchg.org" <hannes@...xchg.org>,
"surenb@...gle.com" <surenb@...gle.com>,
"mkoutny@...e.com" <mkoutny@...e.com>,
"daniel@...ll.ch" <daniel@...ll.ch>,
Leon Romanovsky <leon@...nel.org>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>
Subject: RE: Re: [RFC PATCH 05/19] RDMA/siw: Convert to use vm_account
> -----Original Message-----
> From: Alistair Popple <apopple@...dia.com>
> Sent: Monday, 30 January 2023 12:35
> To: Bernard Metzler <BMT@...ich.ibm.com>
> Cc: Jason Gunthorpe <jgg@...dia.com>; linux-mm@...ck.org;
> cgroups@...r.kernel.org; linux-kernel@...r.kernel.org; jhubbard@...dia.com;
> tjmercier@...gle.com; hannes@...xchg.org; surenb@...gle.com;
> mkoutny@...e.com; daniel@...ll.ch; Leon Romanovsky <leon@...nel.org>;
> linux-rdma@...r.kernel.org
> Subject: [EXTERNAL] Re: [RFC PATCH 05/19] RDMA/siw: Convert to use vm_account
>
>
> Bernard Metzler <BMT@...ich.ibm.com> writes:
>
> >> -----Original Message-----
> >> From: Jason Gunthorpe <jgg@...dia.com>
> >> Sent: Tuesday, 24 January 2023 15:37
> >> To: Alistair Popple <apopple@...dia.com>
> >> Cc: linux-mm@...ck.org; cgroups@...r.kernel.org;
> >> linux-kernel@...r.kernel.org; jhubbard@...dia.com; tjmercier@...gle.com;
> >> hannes@...xchg.org; surenb@...gle.com; mkoutny@...e.com; daniel@...ll.ch;
> >> Bernard Metzler <BMT@...ich.ibm.com>; Leon Romanovsky <leon@...nel.org>;
> >> linux-rdma@...r.kernel.org
> >> Subject: [EXTERNAL] Re: [RFC PATCH 05/19] RDMA/siw: Convert to use vm_account
> >>
> >> On Tue, Jan 24, 2023 at 04:42:34PM +1100, Alistair Popple wrote:
> >>
> >> > @@ -385,20 +382,16 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable)
> >> > if (!umem)
> >> > return ERR_PTR(-ENOMEM);
> >> >
> >> > - mm_s = current->mm;
> >> > - umem->owning_mm = mm_s;
> >> > umem->writable = writable;
> >> >
> >> > - mmgrab(mm_s);
> >> > + vm_account_init_current(&umem->vm_account);
> >> >
> >> > if (writable)
> >> > foll_flags |= FOLL_WRITE;
> >> >
> >> > - mmap_read_lock(mm_s);
> >> > + mmap_read_lock(current->mm);
> >> >
> >> > - mlock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
> >> > -
> >> > - if (num_pages + atomic64_read(&mm_s->pinned_vm) > mlock_limit) {
> >> > + if (vm_account_pinned(&umem->vm_account, num_pages)) {
> >> > rv = -ENOMEM;
> >> > goto out_sem_up;
> >> > }
> >> > @@ -429,7 +422,6 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable)
> >> > goto out_sem_up;
> >> >
> >> > umem->num_pages += rv;
> >> > - atomic64_add(rv, &mm_s->pinned_vm);
> >>
> >> Also fixes the race bug
> >
> > But it introduces another one. In that loop, umem->num_pages keeps
> > the number of pages pinned so far, not the target number. The current
> > patch uses that umem->num_pages to call vm_unaccount_pinned() in
> > siw_umem_release(). Bailing out before all pages are pinned would
> > therefore leave the accounting unbalanced at release time. Maybe
> > introduce another parameter to siw_umem_release(), or, better, add
> > another umem member 'umem->num_pages_accounted' for correct
> > accounting during release.
>
> Yes, I see the problem thanks for pointing it out. Will fix for the next
> version.
Thank you! Let me send a patch against the original code that checks
whether not all pages were pinned and fixes the counter accordingly,
roughly along the lines of the sketch below. Maybe you can go from
there?
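
Untested sketch against your RFC patch, just to illustrate the idea
(vm_unaccount_pinned() is taken from your vm_account series; I am
assuming num_pages is counted down as chunks get pinned, as in the
current loop):

out_sem_up:
	mmap_read_unlock(current->mm);

	if (rv > 0)
		return umem;

	/*
	 * vm_account_pinned() charged the full request upfront, but
	 * siw_umem_release() only un-accounts umem->num_pages, i.e.
	 * the pages actually pinned. Give back the un-pinned
	 * remainder here so the two balance out on early bailout.
	 */
	if (num_pages)
		vm_unaccount_pinned(&umem->vm_account, num_pages);

	siw_umem_release(umem, false);
	return ERR_PTR(rv);
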
Thank you,
Bernard.