Date:	Thu, 02 Apr 2009 19:36:26 +0400
From:	Vladislav Bolkhovitin <vst@...b.net>
To:	"Ross S. W. Walker" <RWalker@...allion.com>
CC:	James Bottomley <James.Bottomley@...senPartnership.com>,
	linux-scsi@...r.kernel.org,
	iSCSI Enterprise Target Developer List 
	<iscsitarget-devel@...ts.sourceforge.net>,
	linux-kernel@...r.kernel.org, Ross Walker <rswwalker@...il.com>,
	stgt@...r.kernel.org, scst-devel <scst-devel@...ts.sourceforge.net>
Subject: Re: [Scst-devel] [Iscsitarget-devel] ISCSI-SCST performance (with
 also IET and STGT data)

Ross S. W. Walker, on 04/02/2009 06:06 PM wrote:
> Vladislav Bolkhovitin wrote:
>> Vladislav Bolkhovitin, on 04/02/2009 11:38 AM wrote:
>>> James Bottomley, on 04/02/2009 12:23 AM wrote:
>>>> SCST explicitly fiddles with the io context to get this to happen.  It
>>>> has a hack to block to export alloc_io_context:
>>>>
>>>> http://marc.info/?t=122893564800003
>>> Correct, although I wouldn't call it "fiddle", rather "grouping" ;)
> 
> Call it what you like,
> 
> Vladislav Bolkhovitin wrote:
>> Ross S. W. Walker, on 03/30/2009 10:33 PM wrote:
>>
>> I would be interested in knowing how your code defeats CFQ's extremely
>> high latency? Does your code reach into the io scheduler too? If not,
>> some code hints would be great.
> 
> Hmm, CFQ doesn't have any extra processing latency, especially 
> "extremely", hence there is nothing to defeat. If it had, how could it 
> have been chosen as the default?
> 
> ----------
> List:       linux-scsi
> Subject:    [PATCH][RFC 13/23]: Export of alloc_io_context() function
> From:       Vladislav Bolkhovitin <vst () vlnb ! net>
> Date:       2008-12-10 18:49:19
> Message-ID: 49400F2F.4050603 () vlnb ! net
> 
> This patch exports the alloc_io_context() function. For performance reasons 
> SCST queues commands using a pool of IO threads. It is considerably 
> better for performance (>30% increase on sequential reads) if the threads 
> in a pool have the same IO context. Since SCST can be built as a module, 
> it needs the alloc_io_context() function exported.
> 
> <snip>
> ----------
> 
> I call that lying.
> 
>>> But that's not the only reason for the good performance. In particular, it 
>>> can't explain Bart's tmpfs results from the previous message, where the 
>>> majority of I/O is done to/from RAM without any I/O scheduler involved. 
>>> (Or is the I/O scheduler also involved with tmpfs?) Bart has 4GB RAM, if 
>>> I remember correctly, i.e. the test data set was 25% of RAM.
>> To remove any suspicions that I'm playing dirty games here I should note 
> <snip>
> 
> I don't know what games you're playing at, but do me a favor: if you're too
> stupid to realize when you're caught in a lie and to just shut up,
> then please do me the favor and leave me out of any further correspondence
> from you.

Think what you want and do what you want. You can even filter out all 
e-mails from me, that's your right. But:

1. As I wrote, grouping threads into a single IO context doesn't explain 
all of the performance difference, and finding the reasons for others' 
performance problems isn't something I can afford at the moment.

2. CFQ doesn't have any extra processing latency and never has. Learn to 
understand what you are writing about, and how to express yourself 
correctly, first. You asked about that latency, and I replied that there 
is nothing to defeat.

3. SCST doesn't have any hooks into CFQ and isn't going to have any in 
the foreseeable future.
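For readers following the thread: the technique the quoted patch describes is having every thread in an I/O thread pool share one io_context, so that CFQ attributes all of their requests to a single "process" and keeps sequential streams merged instead of idling between per-thread queues. A minimal sketch of that idea follows. This is not SCST's actual code; it assumes the 2.6.2x-era block-layer API (alloc_io_context(), ioc_task_link(), put_io_context() from <linux/iocontext.h>), and the names pool_ioc, pool_thread_fn and start_pool are illustrative only.

```c
#include <linux/kthread.h>
#include <linux/iocontext.h>
#include <linux/sched.h>
#include <linux/errno.h>

static struct io_context *pool_ioc;	/* one context shared by the pool */

static int pool_thread_fn(void *arg)
{
	/* Adopt the shared context before issuing any I/O, so CFQ
	 * sees this thread's requests as part of the pool's stream. */
	if (pool_ioc) {
		ioc_task_link(pool_ioc);	/* take a reference */
		current->io_context = pool_ioc;
	}

	while (!kthread_should_stop()) {
		/* ... dequeue a command and submit its bios here ... */
		schedule();
	}
	return 0;
}

static int start_pool(int nr_threads)
{
	int i;

	/* Allocate the context once; without this grouping, CFQ gives
	 * each kernel thread its own io_context and idles between
	 * them, hurting sequential read throughput. */
	pool_ioc = alloc_io_context(GFP_KERNEL, -1);
	if (!pool_ioc)
		return -ENOMEM;

	for (i = 0; i < nr_threads; i++)
		kthread_run(pool_thread_fn, NULL, "ioctx_pool%d", i);
	return 0;
}
```

Since alloc_io_context() was not exported to modules at the time, a sketch like this is exactly why the quoted patch adds the export.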

> Thank you,
> 
> -Ross

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
