Message-Id: <20110601080334.776776471@blue.kroah.org>
Date: Wed, 01 Jun 2011 17:00:08 +0900
From: Greg KH <gregkh@...e.de>
To: linux-kernel@...r.kernel.org, stable@...nel.org
Cc: stable-review@...nel.org, torvalds@...ux-foundation.org,
akpm@...ux-foundation.org, alan@...rguk.ukuu.org.uk,
Nicholas Bellinger <nab@...ux-iscsi.org>,
Kiran Patil <kiran.patil@...el.com>,
James Bottomley <jbottomley@...allels.com>,
Greg Kroah-Hartman <gregkh@...e.de>
Subject: [072/146] [SCSI] target: Fix multi task->task_sg[] chaining logic bug

2.6.38-stable review patch. If anyone has any objections, please let us know.

------------------

From: Nicholas Bellinger <nab@...ux-iscsi.org>

commit 97868c8905a1537153d406c4a3aa39a503a5c299 upstream.

This patch fixes a bug in transport_do_task_sg_chain() used by HW target
mode modules with sg_chain() to provide a single sg_next() walkable memory
layout for use with pci_map_sg() and friends. This patch addresses an
issue with mapping multiple small block max_sector tasks across multiple
struct se_task->task_sg[] mappings for HW target mode operation.

This was causing OOPs with (cmd->t_task->t_tasks_no > 1) I/O traffic for
HW target drivers using transport_do_task_sg_chain(), and has been tested
so far with tcm_fc(openfcoe), tcm_qla2xxx, and ib_srpt fabrics with
t_tasks_no > 1 IBLOCK backends using a smaller max_sectors to trigger the
original issue.
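
For reference, a minimal sketch (not part of this patch) of the layout the
description above refers to: sg_chain() turns the extra padded entry at the
end of one scatterlist table into a link to the next table, so that sg_next()
and for_each_sg() walk everything as a single list that can be handed to
pci_map_sg() and friends. The demo_* names and the two standalone tables are
invented for illustration; the termination-bit clearing and the "+ 1" padding
slot mirror what transport_do_task_sg_chain() relies on:

#include <linux/kernel.h>
#include <linux/scatterlist.h>

/*
 * Chain table 'a' (a_nents used entries, allocated with one extra padding
 * entry at a[a_nents]) in front of table 'b', then walk both as one list.
 * Assumes the caller terminated 'a' with sg_mark_end(&a[a_nents - 1]).
 */
static void demo_chain_two_tables(struct scatterlist *a, unsigned int a_nents,
				  struct scatterlist *b, unsigned int b_nents)
{
	struct scatterlist *sg;
	int i;

	/* Clear the termination bit on a's last used entry, as the patch
	 * does for each non-final task_sg[], so the walk does not stop here. */
	a[a_nents - 1].page_link &= ~0x02;

	/* Turn the padding entry a[a_nents] into a chain link pointing at 'b'. */
	sg_chain(a, a_nents + 1, b);

	/* One walk now covers a[0..a_nents-1] followed by b[0..b_nents-1];
	 * sg_next() transparently hops over the chain entry. */
	for_each_sg(a, sg, a_nents + b_nents, i)
		pr_debug("sg[%d]: page: %p length: %d offset: %d\n",
			 i, sg_page(sg), sg->length, sg->offset);
}
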
Signed-off-by: Nicholas Bellinger <nab@...ux-iscsi.org>
Acked-by: Kiran Patil <kiran.patil@...el.com>
Signed-off-by: James Bottomley <jbottomley@...allels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...e.de>

---
 drivers/target/target_core_transport.c | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -4777,18 +4777,20 @@ void transport_do_task_sg_chain(struct s
 				sg_end_cur->page_link &= ~0x02;
 
 				sg_chain(sg_head, task_sg_num, sg_head_cur);
-				sg_count += (task->task_sg_num + 1);
-			} else
 				sg_count += task->task_sg_num;
+				task_sg_num = (task->task_sg_num + 1);
+			} else {
+				sg_chain(sg_head, task_sg_num, sg_head_cur);
+				sg_count += task->task_sg_num;
+				task_sg_num = task->task_sg_num;
+			}
 
 			sg_head = sg_head_cur;
 			sg_link = sg_link_cur;
-			task_sg_num = task->task_sg_num;
 			continue;
 		}
 		sg_head = sg_first = &task->task_sg[0];
 		sg_link = &task->task_sg[task->task_sg_num];
-		task_sg_num = task->task_sg_num;
 		/*
 		 * Check for single task..
 		 */
@@ -4799,9 +4801,12 @@ void transport_do_task_sg_chain(struct s
 			 */
 			sg_end = &task->task_sg[task->task_sg_num - 1];
 			sg_end->page_link &= ~0x02;
-			sg_count += (task->task_sg_num + 1);
-		} else
 			sg_count += task->task_sg_num;
+			task_sg_num = (task->task_sg_num + 1);
+		} else {
+			sg_count += task->task_sg_num;
+			task_sg_num = task->task_sg_num;
+		}
 	}
 	/*
 	 * Setup the starting pointer and total t_tasks_sg_linked_no including
@@ -4810,21 +4815,20 @@ void transport_do_task_sg_chain(struct s
 	T_TASK(cmd)->t_tasks_sg_chained = sg_first;
 	T_TASK(cmd)->t_tasks_sg_chained_no = sg_count;
 
-	DEBUG_CMD_M("Setup T_TASK(cmd)->t_tasks_sg_chained: %p and"
-		" t_tasks_sg_chained_no: %u\n", T_TASK(cmd)->t_tasks_sg_chained,
+	DEBUG_CMD_M("Setup cmd: %p T_TASK(cmd)->t_tasks_sg_chained: %p and"
+		" t_tasks_sg_chained_no: %u\n", cmd, T_TASK(cmd)->t_tasks_sg_chained,
 		T_TASK(cmd)->t_tasks_sg_chained_no);
 
 	for_each_sg(T_TASK(cmd)->t_tasks_sg_chained, sg,
 			T_TASK(cmd)->t_tasks_sg_chained_no, i) {
 
-		DEBUG_CMD_M("SG: %p page: %p length: %d offset: %d\n",
-			sg, sg_page(sg), sg->length, sg->offset);
+		DEBUG_CMD_M("SG[%d]: %p page: %p length: %d offset: %d, magic: 0x%08x\n",
+			i, sg, sg_page(sg), sg->length, sg->offset, sg->sg_magic);
 		if (sg_is_chain(sg))
 			DEBUG_CMD_M("SG: %p sg_is_chain=1\n", sg);
 		if (sg_is_last(sg))
 			DEBUG_CMD_M("SG: %p sg_is_last=1\n", sg);
 	}
-
 }
 EXPORT_SYMBOL(transport_do_task_sg_chain);
 
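
For context on the "pci_map_sg() and friends" usage mentioned above, a rough
consumer-side sketch of what a HW fabric module does with the chained list;
the demo_* name, the pdev argument, and the error handling are invented for
the example, while t_tasks_sg_chained and t_tasks_sg_chained_no come from the
code above:

#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/scatterlist.h>
/* plus the target core headers that define struct se_cmd and T_TASK() */

static int demo_map_chained_sgl(struct pci_dev *pdev, struct se_cmd *cmd)
{
	struct scatterlist *sg;
	int i, count;

	/* Map the whole chained SGL in one call; the DMA layer walks it
	 * with sg_next(), which is why a single chained layout matters. */
	count = pci_map_sg(pdev, T_TASK(cmd)->t_tasks_sg_chained,
			   T_TASK(cmd)->t_tasks_sg_chained_no,
			   PCI_DMA_BIDIRECTIONAL);
	if (!count)
		return -ENOMEM;

	/* Program the HW scatter-gather descriptors from the mapped segments. */
	for_each_sg(T_TASK(cmd)->t_tasks_sg_chained, sg, count, i)
		pr_debug("seg[%d]: dma_addr: 0x%llx dma_len: %u\n", i,
			 (unsigned long long)sg_dma_address(sg),
			 sg_dma_len(sg));

	return count;
}
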
--