Date:	Thu, 9 Jan 2014 23:26:40 +0200
From:	Sergey Meirovich <rathamahata@...il.com>
To:	dgilbert@...erlog.com
Cc:	James Smart <james.smart@...lex.com>, Jan Kara <jack@...e.cz>,
	linux-scsi <linux-scsi@...r.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Gluk <git.user@...il.com>, Christoph Hellwig <hch@...radead.org>
Subject: Re: Terrible performance of sequential O_DIRECT 4k writes in SAN
 environment. ~3 times slower than Solaris 10 with the same HBA/Storage.

Hi Douglas,

On 9 January 2014 21:54, Douglas Gilbert <dgilbert@...erlog.com> wrote:
> On 14-01-08 08:57 AM, Sergey Meirovich wrote:
...
>>
>> The strangest thing to me is that the problem shows up with sequential
>> writes. For example, one fnic machine is zoned to an EMC XtremIO and
>> got 14.43 MB/s, 3693.65 requests/s for sequential 4k. The same fnic
>> machine performed rather impressively for random 4k:
>> 451.11 MB/s, 115485.02 requests/s.
>
>
> You could bypass O_DIRECT and use ddpt together with
> a bsg pass-through (bsg is a little faster than sg
> for these purposes).
>
> For example:
>
> # lsscsi -g
> [0:0:0:0]    disk    ATA    INTEL SSDSC2CW12 400i  /dev/sda /dev/sg0
> [14:0:0:0]   disk    Linux  scsi_debug       0004  -        /dev/sg1
>
> # ddpt if=/dev/bsg/14:0:0:0 bs=512 bpt=128 count=1m
> Output file not specified so no copy, just reading input
> 1048576+0 records in
> 0+0 records out
> time to read data: 0.283566 secs at 1893.28 MB/sec
>
> bs= should match the block size of the storage device and
> the size of each SCSI READ is dictated by bpt= (so 64 KB
> in this case).
>
> Such a test should show you if your performance problem
> is in the block layer or below, or above the block layer
> (at least the point where pass-through commands are
> injected).
>
> Doug Gilbert

Thanks for an excellent idea!

Seems like this is not a direct I/O issue. I just tried it against
fnic/XtremIO: 4k writes via bsg are still only ~17.28 MB/s.
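
For reference, the SCSI WRITE size in each run below is bs * bpt:
512 B x 8 = 4 KiB, 512 B x 16 = 8 KiB and 512 B x 32 = 16 KiB.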
[root@...-poc-gtsxdb3 bsg]# /tmp/ddpt-0.91/src/ddpt if=/dev/zero
of=/dev/bsg/0:0:4:1 bs=512 bpt=8 count=1m
1048576+0 records in
1048576+0 records out
time to transfer data: 31.076487 secs at 17.28 MB/sec


[root@...-poc-gtsxdb3 bsg]# /tmp/ddpt-0.91/src/ddpt if=/dev/zero
of=/dev/bsg/0:0:4:1 bs=512 bpt=16 count=512k
524288+0 records in
524288+0 records out
time to transfer data: 8.511421 secs at 31.54 MB/sec


[root@...-poc-gtsxdb3 bsg]# /tmp/ddpt-0.91/src/ddpt if=/dev/zero
of=/dev/bsg/0:0:4:1 bs=512 bpt=32 count=256k
262144+0 records in
262144+0 records out
time to transfer data: 2.426037 secs at 55.32 MB/sec
[root@...-poc-gtsxdb3 bsg]#
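
Throughput roughly doubles each time bpt doubles, while the command rate
stays in the ~3.4-4.2k/s range (131072 commands in 31.08 s, 32768 in
8.51 s, 8192 in 2.43 s), so the cost looks per-command rather than
per-byte. A minimal sketch of the same sweep as a script, assuming the
ddpt build and bsg node used in the runs above:

#!/bin/sh
# Sweep the per-command SCSI WRITE size (bs * bpt) against one bsg node.
# Hypothetical wrapper; ddpt path and device node are the ones above.
DDPT=/tmp/ddpt-0.91/src/ddpt
DEV=/dev/bsg/0:0:4:1
COUNT=1m                      # 2^20 blocks of 512 B = 512 MiB per run
for BPT in 8 16 32 64 128; do
    echo "bpt=$BPT (write size $((512 * BPT)) bytes):"
    $DDPT if=/dev/zero of=$DEV bs=512 bpt=$BPT count=$COUNT
done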
