Message-Id: <FA7F6940-CD5C-4460-9777-2F7AE3657B8E@gmail.com>
Date: Wed, 1 Apr 2009 08:20:14 -0400
From: Ross Walker <rswwalker@...il.com>
To: Bart Van Assche <bart.vanassche@...il.com>
Cc: "Ross S. W. Walker" <RWalker@...allion.com>,
Vladislav Bolkhovitin <vst@...b.net>,
"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
iSCSI Enterprise Target Developer List
<iscsitarget-devel@...ts.sourceforge.net>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"stgt@...r.kernel.org" <stgt@...r.kernel.org>,
scst-devel <scst-devel@...ts.sourceforge.net>
Subject: Re: [Iscsitarget-devel] [Scst-devel] ISCSI-SCST performance (with also IET and STGT data)
On Apr 1, 2009, at 2:29 AM, Bart Van Assche <bart.vanassche@...il.com>
wrote:
> On Tue, Mar 31, 2009 at 8:43 PM, Ross S. W. Walker
> <RWalker@...allion.com> wrote:
>> IET just needs to fix how it does it workload with CFQ which
>> somehow SCST has overcome. Of course SCST tweaks the Linux kernel to
>> gain some extra speed.
>
> I'm not familiar with the implementation details of CFQ, but I know
> that one of the changes between SCST 1.0.0 and SCST 1.0.1 is that the
> default number of kernel threads of the scst_vdisk kernel module has
> been increased to 5. Could this explain the performance difference
> between SCST and IET for FILEIO and BLOCKIO ?
Thanks for the update. IET has used 8 threads per target for ages now,
so I don't think it is the thread count.
It may be how the I/O threads are forked in SCST that causes them to
share a single I/O context with each other, so CFQ treats their
requests as one stream instead of penalizing them as competing tasks.
I'm pretty sure implementing a version of the patch that was used for
the dump command (found on the LKML) will fix this in IET.
But thanks go to Vlad for pointing this deficiency out so we can fix
it and help make IET even better.
-Ross
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/