Date:	Thu, 9 Jan 2014 23:43:49 +0200
From:	Sergey Meirovich <rathamahata@...il.com>
To:	dgilbert <dgilbert@...erlog.com>
Cc:	James Smart <james.smart@...lex.com>, Jan Kara <jack@...e.cz>,
	linux-scsi <linux-scsi@...r.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Gluk <git.user@...il.com>, Christoph Hellwig <hch@...radead.org>
Subject: Re: Terrible performance of sequential O_DIRECT 4k writes in SAN
 environment. ~3 times slower than Solaris 10 with the same HBA/Storage.

Hi,

On 9 January 2014 23:26, Sergey Meirovich <rathamahata@...il.com> wrote:
> Hi Douglas,
>
> On 9 January 2014 21:54, Douglas Gilbert <dgilbert@...erlog.com> wrote:
>> On 14-01-08 08:57 AM, Sergey Meirovich wrote:
> ...
>>>
>>> The strangest thing to me is that the problem is with sequential
>>> writes. For example, one fnic machine is zoned to an EMC XtremIO and
>>> got 14.43 MB/sec (3693.65 requests/sec) for sequential 4k. The same
>>> fnic machine performed rather impressively for random 4k:
>>> 451.11 MB/sec (115485.02 requests/sec).
>>
>>
>> You could bypass O_DIRECT and use ddpt together with
>> a bsg pass-through (bsg is a little faster than sg
>> for these purposes).
>>
>> For example:
>>
>> # lsscsi -g
>> [0:0:0:0]    disk    ATA    INTEL SSDSC2CW12 400i  /dev/sda /dev/sg0
>> [14:0:0:0]   disk    Linux  scsi_debug       0004  -        /dev/sg1
>>
>> # ddpt if=/dev/bsg/14:0:0:0 bs=512 bpt=128 count=1m
>> Output file not specified so no copy, just reading input
>> 1048576+0 records in
>> 0+0 records out
>> time to read data: 0.283566 secs at 1893.28 MB/sec
>>
>> bs= should match the block size of the storage device, and the
>> size of each SCSI READ is bs= times bpt= (so 64 KB in this
>> case).
>>
>> Such a test should show you whether your performance problem is at
>> or below the block layer, or above it (or at least above the point
>> where pass-through commands are injected).
>>
>> Doug Gilbert
>
> Thanks for an excellent idea!
>
> Seems like this is not a Direct I/O issue. I just tried it against
> fnic/XtremIO: 4k writes via bsg are still 17.278 MB/s.
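
For reference, the write-direction test presumably looked something
like the following (the bsg path is the example one from lsscsi -g
above; count= is illustrative; this overwrites the target, so only
point it at a scratch LUN):

# ddpt if=/dev/zero of=/dev/bsg/14:0:0:0 bs=512 bpt=8 count=1m

With bs=512 and bpt=8, each SCSI WRITE transfers 4 KB, matching the
original O_DIRECT workload.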

At second glance that seems natural: ddpt is suffering from the same
SAN latencies for small chunks.
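
For completeness, the original O_DIRECT comparison can be approximated
with fio; the jobs below are a sketch, with the device path, runtime,
and queue depths assumed rather than taken from the original report
(the quoted random-4k throughput implies a fairly deep queue). These
are also destructive, so again use a scratch device.

Sequential 4k O_DIRECT writes at queue depth 1:

# fio --name=seq4k --filename=/dev/sdX --rw=write --bs=4k --direct=1 \
      --ioengine=libaio --iodepth=1 --runtime=30 --time_based

Random 4k O_DIRECT writes with a deeper queue:

# fio --name=rand4k --filename=/dev/sdX --rw=randwrite --bs=4k \
      --direct=1 --ioengine=libaio --iodepth=32 --runtime=30 --time_based

If sequential throughput only rises with iodepth, the per-command SAN
round trip is the bottleneck, consistent with the ddpt result above.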
