Message-ID: <4CA1777A.2050606@cs.wisc.edu>
Date:	Tue, 28 Sep 2010 00:04:58 -0500
From:	Mike Christie <michaelc@...wisc.edu>
To:	"Nicholas A. Bellinger" <nab@...ux-iscsi.org>
CC:	linux-scsi <linux-scsi@...r.kernel.org>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Vasu Dev <vasu.dev@...ux.intel.com>,
	Tim Chen <tim.c.chen@...ux.intel.com>,
	Andi Kleen <ak@...ux.intel.com>,
	Matthew Wilcox <willy@...ux.intel.com>,
	James Bottomley <James.Bottomley@...e.de>,
	Jens Axboe <jaxboe@...ionio.com>,
	James Smart <james.smart@...lex.com>,
	Andrew Vasquez <andrew.vasquez@...gic.com>,
	FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>,
	Hannes Reinecke <hare@...e.de>,
	Joe Eykholt <jeykholt@...co.com>,
	Christoph Hellwig <hch@....de>,
	Jon Hawley <warthog9@...nel.org>,
	MPTFusionLinux <DL-MPTFusionLinux@....com>,
	"eata.c maintainer" <dario.ballabio@...ind.it>,
	Luben Tuikov <ltuikov@...oo.com>,
	mvsas maintainer <kewei@...vell.com>,
	pm8001 maintainer Jack Wang <jack_wang@...sh.com>
Subject: Re: [RFC v4 10/19] lpfc: Remove host_lock unlock() + lock() from
 lpfc_queuecommand()

On 09/27/2010 09:06 PM, Nicholas A. Bellinger wrote:
> From: Nicholas Bellinger <nab@...ux-iscsi.org>
>
> This patch removes the now legacy host_lock unlock() + lock() optimization
> from lpfc_scsi.c:lpfc_queuecommand().  It also sets SHT->unlocked_qcmd=1
> for host_lock-less lpfc_queuecommand() operation.
>
> Signed-off-by: Nicholas A. Bellinger <nab@...ux-iscsi.org>
> ---
>   drivers/scsi/lpfc/lpfc_scsi.c |    4 ++--
>   1 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
> index 2e51aa6..69fe31e 100644
> --- a/drivers/scsi/lpfc/lpfc_scsi.c
> +++ b/drivers/scsi/lpfc/lpfc_scsi.c
> @@ -3023,11 +3023,9 @@ lpfc_queuecommand(struct scsi_cmnd *cmnd, void (*done) (struct scsi_cmnd *))
>   		goto out_host_busy_free_buf;
>   	}
>   	if (phba->cfg_poll & ENABLE_FCP_RING_POLLING) {
> -		spin_unlock(shost->host_lock);
>   		lpfc_sli_handle_fast_ring_event(phba,
>   			&phba->sli.ring[LPFC_FCP_RING], HA_R0RE_REQ);
>
> -		spin_lock(shost->host_lock);
>   		if (phba->cfg_poll & DISABLE_FCP_RING_INT)
>   			lpfc_poll_rearm_timer(phba);
>   	}
> @@ -3723,6 +3721,7 @@ struct scsi_host_template lpfc_template = {
>   	.slave_destroy		= lpfc_slave_destroy,
>   	.scan_finished		= lpfc_scan_finished,
>   	.this_id		= -1,
> +	.unlocked_qcmd		= 1,
>   	.sg_tablesize		= LPFC_DEFAULT_SG_SEG_CNT,
>   	.cmd_per_lun		= LPFC_CMD_PER_LUN,
>   	.use_clustering		= ENABLE_CLUSTERING,
> @@ -3746,6 +3745,7 @@ struct scsi_host_template lpfc_vport_template = {
>   	.slave_destroy		= lpfc_slave_destroy,
>   	.scan_finished		= lpfc_scan_finished,
>   	.this_id		= -1,
> +	.unlocked_qcmd		= 1,
>   	.sg_tablesize		= LPFC_DEFAULT_SG_SEG_CNT,
>   	.cmd_per_lun		= LPFC_CMD_PER_LUN,
>   	.use_clustering		= ENABLE_CLUSTERING,
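
For context on the flag being set above: SHT->unlocked_qcmd=1 (introduced
earlier in this RFC series) tells the SCSI midlayer to invoke the driver's
queuecommand() without taking shost->host_lock first. A minimal sketch of
that dispatch decision, assuming the midlayer check has roughly the shape
below (the real scsi_dispatch_cmd() does considerably more):

#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

/* Sketch only: with unlocked_qcmd set, queuecommand() runs without
 * shost->host_lock held, so the LLD must provide its own serialization. */
static int dispatch_sketch(struct scsi_cmnd *cmd,
			   void (*done)(struct scsi_cmnd *))
{
	struct Scsi_Host *shost = cmd->device->host;
	unsigned long flags;
	int rtn;

	if (shost->hostt->unlocked_qcmd) {
		/* host_lock-less path this series enables for lpfc */
		rtn = shost->hostt->queuecommand(cmd, done);
	} else {
		/* legacy path: all of queuecommand() runs under host_lock */
		spin_lock_irqsave(shost->host_lock, flags);
		rtn = shost->hostt->queuecommand(cmd, done);
		spin_unlock_irqrestore(shost->host_lock, flags);
	}
	return rtn;
}

It is exactly this loss of host_lock coverage that raises the question below.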

The FC class sets the rport state and bits with the host lock held, and
drivers have been calling fc_remote_port_chkready() from queuecommand()
with the host lock held. If we remove the host lock from queuecommand(),
is it possible that on one processor the FC class calls
fc_remote_port_add() to re-add a rport (which sets the rport state to
online and unblocks the devices), while on another processor we start
calling queuecommand() and still see the rport as not online (maybe
blocked with FC_RPORT_FAST_FAIL_TIMEDOUT set), so we end up failing the IO?
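
To make the interleaving concrete, the worry is roughly the sketch below
(fc_remote_port_add() and fc_remote_port_chkready() are real transport-class
entry points, but the field updates and the unblock call shown here are
simplified stand-ins, not the actual transport code):

/* CPU0: FC class re-adds the rport, updating state under the host lock */
spin_lock_irqsave(shost->host_lock, flags);
rport->port_state = FC_PORTSTATE_ONLINE;	/* rport goes online... */
spin_unlock_irqrestore(shost->host_lock, flags);
scsi_target_unblock(&rport->dev);		/* ...then devices unblock */

/* CPU1: host_lock-less queuecommand(), racing with CPU0 */
ret = fc_remote_port_chkready(rport);		/* may still observe the old
						   blocked/fast-fail state */
if (ret) {
	cmnd->result = ret;			/* e.g. DID_TRANSPORT_FAILFAST
						   if FC_RPORT_FAST_FAIL_TIMEDOUT
						   was still visible here */
	done(cmnd);				/* IO failed back to midlayer */
}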
