Message-Id: <1453672883-2708-78-git-send-email-luis.henriques@canonical.com>
Date: Sun, 24 Jan 2016 22:00:32 +0000
From: Luis Henriques <luis.henriques@...onical.com>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org,
kernel-team@...ts.ubuntu.com
Cc: Roger Pau Monné <roger.pau@...rix.com>,
David Vrabel <david.vrabel@...rix.com>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
Luis Henriques <luis.henriques@...onical.com>
Subject: [PATCH 3.16.y-ckt 077/128] xen-blkback: read from indirect descriptors only once

3.16.7-ckt23 -stable review patch. If anyone has any objections, please let me know.

---8<------------------------------------------------------------

From: Roger Pau Monné <roger.pau@...rix.com>

commit 18779149101c0dd43ded43669ae2a92d21b6f9cb upstream.

Since indirect descriptors are in memory shared with the frontend, the
frontend could alter the first_sect and last_sect values after they have
been validated but before they are recorded in the request. This may
result in I/O requests that overflow the foreign page, possibly
overwriting local pages when the I/O request is executed.

When parsing indirect descriptors, only read first_sect and last_sect
once.

This is part of XSA155.

Signed-off-by: Roger Pau Monné <roger.pau@...rix.com>
Signed-off-by: David Vrabel <david.vrabel@...rix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
[ luis: backported to 3.16:
- Use ACCESS_ONCE instead of READ_ONCE
- Use PAGE_SIZE instead of XEN_PAGE_SIZE ]
Signed-off-by: Luis Henriques <luis.henriques@...onical.com>
---
drivers/block/xen-blkback/blkback.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 02004e101678..c42c22e778d8 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -861,6 +861,8 @@ static int xen_blkbk_parse_indirect(struct blkif_request *req,
goto unmap;
for (n = 0, i = 0; n < nseg; n++) {
+ uint8_t first_sect, last_sect;
+
if ((n % SEGS_PER_INDIRECT_FRAME) == 0) {
/* Map indirect segments */
if (segments)
@@ -868,15 +870,18 @@ static int xen_blkbk_parse_indirect(struct blkif_request *req,
segments = kmap_atomic(pages[n/SEGS_PER_INDIRECT_FRAME]->page);
}
i = n % SEGS_PER_INDIRECT_FRAME;
+
pending_req->segments[n]->gref = segments[i].gref;
- seg[n].nsec = segments[i].last_sect -
- segments[i].first_sect + 1;
- seg[n].offset = (segments[i].first_sect << 9);
- if ((segments[i].last_sect >= (PAGE_SIZE >> 9)) ||
- (segments[i].last_sect < segments[i].first_sect)) {
+
+ first_sect = ACCESS_ONCE(segments[i].first_sect);
+ last_sect = ACCESS_ONCE(segments[i].last_sect);
+ if (last_sect >= (PAGE_SIZE >> 9) || last_sect < first_sect) {
rc = -EINVAL;
goto unmap;
}
+
+ seg[n].nsec = last_sect - first_sect + 1;
+ seg[n].offset = first_sect << 9;
preq->nr_sects += seg[n].nsec;
}
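
The bug class fixed above is a double fetch from guest-shared memory:
first_sect and last_sect are read once for the bounds check and read
again for use, so a malicious frontend can change them in between.
Below is a minimal, self-contained sketch of the hazard and of the
read-once fix; the struct and function names are illustrative only
(not the actual blkback code), and ACCESS_ONCE is spelled out the way
pre-READ_ONCE kernels define it.

#include <stdint.h>

/* Pre-READ_ONCE kernel idiom: the volatile cast forces exactly one load. */
#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

struct seg_desc {                  /* hypothetical shared descriptor */
	uint8_t first_sect;
	uint8_t last_sect;
};

/* BROKEN: each shared->x expression may re-read guest-writable memory,
 * so the values used below can differ from the values checked here. */
static int parse_racy(struct seg_desc *shared, unsigned int *nsec)
{
	if (shared->last_sect >= (4096 >> 9) ||    /* PAGE_SIZE >> 9 */
	    shared->last_sect < shared->first_sect)
		return -1;
	*nsec = shared->last_sect - shared->first_sect + 1; /* re-read! */
	return 0;
}

/* FIXED: snapshot both fields exactly once, then validate and use only
 * the local copies, as the patch above does. */
static int parse_once(struct seg_desc *shared, unsigned int *nsec)
{
	uint8_t first = ACCESS_ONCE(shared->first_sect);
	uint8_t last  = ACCESS_ONCE(shared->last_sect);

	if (last >= (4096 >> 9) || last < first)
		return -1;
	*nsec = last - first + 1;
	return 0;
}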