Message-Id: <20170201212917.11278-1-dillow@google.com>
Date: Wed, 1 Feb 2017 13:29:17 -0800
From: David Dillow <dillow@...gle.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, David Dillow <dillow@...gle.com>
Subject: [PATCH] scatterlist: don't overflow length field
When called with a region of contiguous pages totaling more than 4 GB of memory,
sg_alloc_table_from_pages() overflows the unsigned int length field, producing a
corrupt scatterlist. Fix this by tracking the number of pages merged into the
current chunk and starting a new chunk when adding another page would overflow.

Tested by building various page lists containing contiguous 8 GB regions and
observing that they are split correctly without overflowing the length field.
Signed-off-by: David Dillow <dillow@...gle.com>
---
lib/scatterlist.c | 22 ++++++++++++++++++----
1 file changed, 18 insertions(+), 4 deletions(-)
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index 004fc70fc56a..539dd344f1c5 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -394,17 +394,26 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
unsigned long offset, unsigned long size,
gfp_t gfp_mask)
{
+ unsigned int chunk_pages;
unsigned int chunks;
unsigned int i;
unsigned int cur_page;
int ret;
struct scatterlist *s;
+ BUILD_BUG_ON(!typecheck(typeof(s->length), unsigned int));
+
/* compute number of contiguous chunks */
chunks = 1;
- for (i = 1; i < n_pages; ++i)
- if (page_to_pfn(pages[i]) != page_to_pfn(pages[i - 1]) + 1)
+ chunk_pages = 1;
+ for (i = 1; i < n_pages; ++i) {
+ if (page_to_pfn(pages[i]) != page_to_pfn(pages[i - 1]) + 1 ||
+ chunk_pages >= UINT_MAX >> PAGE_SHIFT) {
++chunks;
+ chunk_pages = 0;
+ }
+ ++chunk_pages;
+ }
ret = sg_alloc_table(sgt, chunks, gfp_mask);
if (unlikely(ret))
@@ -417,10 +426,15 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
unsigned int j;
/* look for the end of the current chunk */
- for (j = cur_page + 1; j < n_pages; ++j)
+ chunk_pages = 1;
+ for (j = cur_page + 1; j < n_pages; ++j) {
if (page_to_pfn(pages[j]) !=
- page_to_pfn(pages[j - 1]) + 1)
+ page_to_pfn(pages[j - 1]) + 1 ||
+ chunk_pages >= UINT_MAX >> PAGE_SHIFT) {
break;
+ }
+ ++chunk_pages;
+ }
chunk_size = ((j - cur_page) << PAGE_SHIFT) - offset;
sg_set_page(s, pages[cur_page], min(size, chunk_size), offset);
--
2.11.0.483.g087da7b7c-goog