Message-ID: <CAFqt6zYC0Or-7KSqoJY0gFT7NEUVy4unUW8a7rpb4XSxf65cOg@mail.gmail.com>
Date: Tue, 29 Dec 2020 13:18:55 +0530
From: Souptick Joarder <jrdr.linux@...il.com>
To: mchehab@...nel.org, sakari.ailus@...ux.intel.com,
Greg KH <gregkh@...uxfoundation.org>, gustavoars@...nel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>, daniel.m.jordan@...cle.com,
Michel Lespinasse <walken@...gle.com>
Cc: linux-media@...r.kernel.org,
"open list:ANDROID DRIVERS" <devel@...verdev.osuosl.org>,
linux-kernel@...r.kernel.org, John Hubbard <jhubbard@...dia.com>,
Ira Weiny <ira.weiny@...el.com>,
Dan Carpenter <dan.carpenter@...cle.com>
Subject: Re: [PATCH v2] media: atomisp: Fixed error handling path
On Wed, Dec 9, 2020 at 1:18 AM Souptick Joarder <jrdr.linux@...il.com> wrote:
>
> On Thu, Nov 19, 2020 at 1:06 AM Souptick Joarder <jrdr.linux@...il.com> wrote:
> >
> > On Wed, Nov 4, 2020 at 7:32 AM Souptick Joarder <jrdr.linux@...il.com> wrote:
> > >
> > > Inside alloc_user_pages(), depending on a flag value, either
> > > pin_user_pages() or get_user_pages_fast() is called. However, these
> > > APIs might fail outright or pin fewer pages than requested.
> > >
> > > But free_user_pages(), called in the error handling path, ignores
> > > that and always tries to unpin bo->pgnr pages, which is incorrect.
> > >
> > > Fix this by passing page_nr to free_user_pages(). If page_nr > 0, the
> > > pages are unpinned based on bo->mem_type. This also takes care of the
> > > non-error path.
> > >
> > > Fixes: 14a638ab96c5 ("media: atomisp: use pin_user_pages() for memory allocation")
> > > Signed-off-by: Souptick Joarder <jrdr.linux@...il.com>
> > > Reviewed-by: Dan Carpenter <dan.carpenter@...cle.com>
> > > Cc: John Hubbard <jhubbard@...dia.com>
> > > Cc: Ira Weiny <ira.weiny@...el.com>
> > > Cc: Dan Carpenter <dan.carpenter@...cle.com>
> > > ---
> > > v2:
> > > Added review tag.
> >
> > Any further comments? If not, can we get this patch queued for 5.11?
>
> Can we get this patch into the queue for 5.11?
Any further comments on this patch?
>
> > >
> > >  drivers/staging/media/atomisp/pci/hmm/hmm_bo.c | 13 ++++++++-----
> > >  1 file changed, 8 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c b/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
> > > index f13af23..0168f98 100644
> > > --- a/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
> > > +++ b/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
> > > @@ -857,16 +857,17 @@ static void free_private_pages(struct hmm_buffer_object *bo,
> > >  	kfree(bo->page_obj);
> > >  }
> > >
> > > -static void free_user_pages(struct hmm_buffer_object *bo)
> > > +static void free_user_pages(struct hmm_buffer_object *bo,
> > > +			    unsigned int page_nr)
> > >  {
> > >  	int i;
> > >
> > >  	hmm_mem_stat.usr_size -= bo->pgnr;
> > >
> > >  	if (bo->mem_type == HMM_BO_MEM_TYPE_PFN) {
> > > -		unpin_user_pages(bo->pages, bo->pgnr);
> > > +		unpin_user_pages(bo->pages, page_nr);
> > >  	} else {
> > > -		for (i = 0; i < bo->pgnr; i++)
> > > +		for (i = 0; i < page_nr; i++)
> > >  			put_page(bo->pages[i]);
> > >  	}
> > >  	kfree(bo->pages);
> > > @@ -942,6 +943,8 @@ static int alloc_user_pages(struct hmm_buffer_object *bo,
> > >  		dev_err(atomisp_dev,
> > >  			"get_user_pages err: bo->pgnr = %d, pgnr actually pinned = %d.\n",
> > >  			bo->pgnr, page_nr);
> > > +		if (page_nr < 0)
> > > +			page_nr = 0;
> > >  		goto out_of_mem;
> > >  	}
> > >
> > > @@ -954,7 +957,7 @@ static int alloc_user_pages(struct hmm_buffer_object *bo,
> > >
> > >  out_of_mem:
> > >
> > > -	free_user_pages(bo);
> > > +	free_user_pages(bo, page_nr);
> > >
> > >  	return -ENOMEM;
> > >  }
> > > @@ -1037,7 +1040,7 @@ void hmm_bo_free_pages(struct hmm_buffer_object *bo)
> > >  	if (bo->type == HMM_BO_PRIVATE)
> > >  		free_private_pages(bo, &dynamic_pool, &reserved_pool);
> > >  	else if (bo->type == HMM_BO_USER)
> > > -		free_user_pages(bo);
> > > +		free_user_pages(bo, bo->pgnr);
> > >  	else
> > >  		dev_err(atomisp_dev, "invalid buffer type.\n");
> > >  	mutex_unlock(&bo->mutex);
> > > --
> > > 1.9.1
> > >
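
For reference, the general pattern the fix follows could be sketched like
this. It is a minimal, hypothetical example, not code from the driver
(demo_pin_range() is an invented name): pin_user_pages_fast() may pin
fewer pages than requested, or fail with a negative errno, so the unwind
path must release only the pages that were actually pinned.

#include <linux/mm.h>

static int demo_pin_range(unsigned long start, unsigned int nr_pages,
			  struct page **pages)
{
	/* Returns nr_pages, a short positive count, or a negative errno. */
	int pinned = pin_user_pages_fast(start, nr_pages,
					 FOLL_WRITE | FOLL_LONGTERM, pages);

	if (pinned == nr_pages)
		return 0;

	/* Partial pin or error: unpin only what we actually hold. */
	if (pinned > 0)
		unpin_user_pages(pages, pinned);

	return pinned < 0 ? pinned : -ENOMEM;
}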