Message-ID: <1284725592.26423.60.camel@mulgrave.site>
Date: Fri, 17 Sep 2010 08:13:12 -0400
From: James Bottomley <James.Bottomley@...e.de>
To: Andi Kleen <ak@...ux.intel.com>
Cc: "Nicholas A. Bellinger" <nab@...ux-iscsi.org>,
linux-scsi <linux-scsi@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Vasu Dev <vasu.dev@...ux.intel.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Matthew Wilcox <willy@...ux.intel.com>,
Mike Christie <michaelc@...wisc.edu>,
James Smart <james.smart@...lex.com>,
Andrew Vasquez <andrew.vasquez@...gic.com>,
FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>,
Hannes Reinecke <hare@...e.de>,
Joe Eykholt <jeykholt@...co.com>,
Christoph Hellwig <hch@....de>
Subject: Re: [PATCH 1/8] scsi: Drop struct Scsi_Host->host_lock around
SHT->queuecommand()
On Fri, 2010-09-17 at 09:20 +0200, Andi Kleen wrote:
> > So at least from where I stand, my object is to reduce the number of
> > times we take and release the lock, which this doesn't do. As I said
> > before: we need to figure out the rest, which likely includes an atomic
> > for the serial number (which is almost unused). I think the check
>
> If it's unused it should be removed, or made optional.
> Atomics are a scalability problem too and not much cheaper than spinlocks.
I don't disagree with the idea of removing it, especially as it has so
few users, but replacing the host lock with an atomic here would still
vastly reduce the contention, which is the initial complaint. The
contention occurs because the host lock is so widely used for other
things. The way to reduce that contention is first to reduce the
length of code covered by the lock, and then to reduce the actual number
of places where the lock is taken. Compared with the host lock's current
vast footprint, an atomic here is tiny.
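
To make the comparison concrete, here's a rough sketch of the two
variants (the helper names and the cmd_serial_number / cmd_serial
fields are illustrative only, not the exact in-tree definitions):

#include <linux/atomic.h>
#include <linux/spinlock.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

/* as things stand: bump the counter with host_lock held */
static void get_serial_locked(struct Scsi_Host *host, struct scsi_cmnd *cmd)
{
	unsigned long flags;

	spin_lock_irqsave(host->host_lock, flags);
	cmd->serial_number = ++host->cmd_serial_number;	/* illustrative field */
	spin_unlock_irqrestore(host->host_lock, flags);
}

/* with an atomic counter: no host_lock involvement at all */
static void get_serial_atomic(struct Scsi_Host *host, struct scsi_cmnd *cmd)
{
	cmd->serial_number = atomic_inc_return(&host->cmd_serial);	/* illustrative field */
}

The atomic is a single locked read-modify-write on one cache line, so
while it can still bounce between CPUs, it never serialises against
everything else that currently piggybacks on host_lock.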
James