Message-Id: <20220705115621.172267766@linuxfoundation.org>
Date: Tue, 5 Jul 2022 13:59:03 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
Roger Pau Monné <roger.pau@...rix.com>,
Jan Beulich <jbeulich@...e.com>,
Juergen Gross <jgross@...e.com>
Subject: [PATCH 5.18 097/102] xen/blkfront: fix leaking data in shared pages
From: Roger Pau Monne <roger.pau@...rix.com>
commit 2f446ffe9d737e9a844b97887919c4fda18246e7 upstream.

When allocating pages to be used for shared communication with the
backend, always zero them; this avoids leaking unintended data present
on the pages.

This is CVE-2022-26365, part of XSA-403.
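For context, a minimal sketch (not taken from the patch) of the allocation
pattern the fix switches to: passing __GFP_ZERO makes the page allocator
return pages whose contents are already cleared, so nothing left over from
previous users of that memory can be exposed once the page is granted to the
backend. The helper name demo_alloc_shared_page() below is hypothetical and
used only for illustration.

	#include <linux/gfp.h>

	/*
	 * Hypothetical helper: allocate a page that is destined to be
	 * shared with another domain. __GFP_ZERO guarantees the page
	 * starts out as all zeroes, so no stale kernel data can leak
	 * through the grant even if the page is only partially
	 * overwritten later.
	 */
	static struct page *demo_alloc_shared_page(void)
	{
		return alloc_page(GFP_NOIO | __GFP_ZERO);
	}

Zeroing at allocation time, rather than clearing the memory afterwards, keeps
the change local to the allocation call sites and covers every path on which
the page is later granted to the backend.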
Signed-off-by: Roger Pau Monné <roger.pau@...rix.com>
Reviewed-by: Jan Beulich <jbeulich@...e.com>
Reviewed-by: Juergen Gross <jgross@...e.com>
Signed-off-by: Juergen Gross <jgross@...e.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/block/xen-blkfront.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -313,7 +313,7 @@ static int fill_grant_buffer(struct blkf
 			goto out_of_memory;
 
 		if (info->feature_persistent) {
-			granted_page = alloc_page(GFP_NOIO);
+			granted_page = alloc_page(GFP_NOIO | __GFP_ZERO);
 			if (!granted_page) {
 				kfree(gnt_list_entry);
 				goto out_of_memory;
@@ -1689,7 +1689,7 @@ static int setup_blkring(struct xenbus_d
 	for (i = 0; i < info->nr_ring_pages; i++)
 		rinfo->ring_ref[i] = GRANT_INVALID_REF;
 
-	sring = alloc_pages_exact(ring_size, GFP_NOIO);
+	sring = alloc_pages_exact(ring_size, GFP_NOIO | __GFP_ZERO);
 	if (!sring) {
 		xenbus_dev_fatal(dev, -ENOMEM, "allocating shared ring");
 		return -ENOMEM;
@@ -2206,7 +2206,8 @@ static int blkfront_setup_indirect(struc
 
 		BUG_ON(!list_empty(&rinfo->indirect_pages));
 		for (i = 0; i < num; i++) {
-			struct page *indirect_page = alloc_page(GFP_KERNEL);
+			struct page *indirect_page = alloc_page(GFP_KERNEL |
+								__GFP_ZERO);
 			if (!indirect_page)
 				goto out_of_memory;
 			list_add(&indirect_page->lru, &rinfo->indirect_pages);