Message-ID: <20200527012628.1100649-2-jhubbard@nvidia.com>
Date: Tue, 26 May 2020 18:26:26 -0700
From: John Hubbard <jhubbard@...dia.com>
To: LKML <linux-kernel@...r.kernel.org>
CC: Souptick Joarder <jrdr.linux@...il.com>,
John Hubbard <jhubbard@...dia.com>,
Derek Kiernan <derek.kiernan@...inx.com>,
Dragan Cvetic <dragan.cvetic@...inx.com>,
Arnd Bergmann <arnd@...db.de>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Michal Simek <michal.simek@...inx.com>,
<linux-arm-kernel@...ts.infradead.org>
Subject: [PATCH 1/3] misc: xilinx-sdfec: improve get_user_pages_fast() error handling

This fixes the case of get_user_pages_fast() returning a -errno: the
result must be stored in a signed integer, and for safe signed/unsigned
comparisons it is best to keep everything signed. get_user_pages_fast()
also expects a signed value for the number of pages to pin.
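
To spell out the pitfall (a minimal sketch only, with illustrative
variable names, not the driver's exact code): if the result is kept in
a u32, a negative errno from get_user_pages_fast() wraps to a huge
unsigned value, so the "fewer pages than requested" check never fires:

        unsigned long start;    /* user address, assumed already set up */
        struct page *pages[4];
        u32 n = 4;              /* number of pages requested */
        u32 res;                /* BUG: unsigned, cannot hold a -errno */

        res = get_user_pages_fast(start, n, 0, pages);
        /* a failure such as -EFAULT is stored as 0xfffffff2, so  */
        /* "res < n" is false and the error path below is skipped */
        if (res < n)
                return -EINVAL;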

Therefore, change the most relevant variables from u32 to int. Leave
"n" unsigned, for convenience in checking for overflow, and provide a
WARN_ON_ONCE() and an early return if overflow does occur.
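
The resulting guard (this simply restates the hunk below, to make the
reasoning explicit): "n" stays u32 so the page-count arithmetic cannot
go negative, and it is range-checked before being narrowed into the
signed count that get_user_pages_fast() takes:

        u32 n;                  /* page count, computed from len */
        int nr_pages;

        if (WARN_ON_ONCE(n > INT_MAX))
                return -EINVAL; /* refuse rather than truncate */
        nr_pages = n;           /* safe: n is known to fit in an int */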

Also, as long as we're tidying up: rename the page array from "page"
to "pages", to match the convention used at most other call sites.

Fixes: 20ec628e8007e ("misc: xilinx_sdfec: Add ability to configure LDPC")
Cc: Derek Kiernan <derek.kiernan@...inx.com>
Cc: Dragan Cvetic <dragan.cvetic@...inx.com>
Cc: Arnd Bergmann <arnd@...db.de>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc: Michal Simek <michal.simek@...inx.com>
Cc: linux-arm-kernel@...ts.infradead.org
Signed-off-by: John Hubbard <jhubbard@...dia.com>
---
drivers/misc/xilinx_sdfec.c | 27 +++++++++++++++++----------
1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/drivers/misc/xilinx_sdfec.c b/drivers/misc/xilinx_sdfec.c
index 71bbaa56bdb5..e2766aad9e14 100644
--- a/drivers/misc/xilinx_sdfec.c
+++ b/drivers/misc/xilinx_sdfec.c
@@ -602,10 +602,10 @@ static int xsdfec_table_write(struct xsdfec_dev *xsdfec, u32 offset,
const u32 depth)
{
u32 reg = 0;
- u32 res;
- u32 n, i;
+ int res, i, nr_pages;
+ u32 n;
u32 *addr = NULL;
- struct page *page[MAX_NUM_PAGES];
+ struct page *pages[MAX_NUM_PAGES];

/*
* Writes that go beyond the length of
@@ -622,15 +622,22 @@ static int xsdfec_table_write(struct xsdfec_dev *xsdfec, u32 offset,
if ((len * XSDFEC_REG_WIDTH_JUMP) % PAGE_SIZE)
n += 1;

- res = get_user_pages_fast((unsigned long)src_ptr, n, 0, page);
- if (res < n) {
- for (i = 0; i < res; i++)
- put_page(page[i]);
+ if (WARN_ON_ONCE(n > INT_MAX))
+ return -EINVAL;
+
+ nr_pages = n;
+
+ res = get_user_pages_fast((unsigned long)src_ptr, nr_pages, 0, pages);
+ if (res < nr_pages) {
+ if (res > 0) {
+ for (i = 0; i < res; i++)
+ put_page(pages[i]);
+ }
return -EINVAL;
}

- for (i = 0; i < n; i++) {
- addr = kmap(page[i]);
+ for (i = 0; i < nr_pages; i++) {
+ addr = kmap(pages[i]);
do {
xsdfec_regwrite(xsdfec,
base_addr + ((offset + reg) *
@@ -639,7 +646,7 @@ static int xsdfec_table_write(struct xsdfec_dev *xsdfec, u32 offset,
reg++;
} while ((reg < len) &&
((reg * XSDFEC_REG_WIDTH_JUMP) % PAGE_SIZE));
- put_page(page[i]);
+ put_page(pages[i]);
}
return reg;
}
--
2.26.2