Date:	Wed, 8 Jan 2014 15:57:57 +0200
From:	Sergey Meirovich <rathamahata@...il.com>
To:	James Smart <james.smart@...lex.com>
Cc:	Jan Kara <jack@...e.cz>, linux-scsi <linux-scsi@...r.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Gluk <git.user@...il.com>
Subject: Re: Terrible performance of sequential O_DIRECT 4k writes in SAN
 environment. ~3 times slower than Solaris 10 with the same HBA/Storage.

Hi James,

On 7 January 2014 22:57, James Smart <james.smart@...lex.com> wrote:
> Sergey,
>
> The Thor chipset is a bit old - a 4Gig adapter.  Most of our performance
> improvements, including parallelization, have gone into the 8G and 16G
> adapters. But you still should have seen significantly beyond what you
> reported.

First of all - thanks a lot!

I took Thor because we have exactly the same Thors in some of our
Solaris servers. I've also tried 6 different QLogic HBAs (mostly 8G)
and fnic (10G) as well. Surprisingly enough, Thor was the fastest one
for seqwr 4k. Though in most cases the machines were in different DCs,
and hence each was connected to a different storage array.

>
> We did a sanity check some hardware we already had set up with a Thor
> adapter.  We saw 23555 iop/s and 92.1 MB/s without needing to do much, well
> beyond what you've reported, and still not up to what we know the card can
> do.  There are some inefficiencies from the linux kernel and some locking
> deltas between our solaris and linux drivers - but not enough to account for
> what you are seeing.
>
> I expect the Direct IO filesystem behavior is the root issue.

The strangest thing to me is that the problem shows up with sequential
writes. For example, the fnic machine zoned to an EMC XtremIO gave
14.43 MB/sec, 3693.65 requests/sec for sequential 4k. The same fnic
machine performed rather impressively for random 4k:
451.11 MB/sec, 115485.02 requests/sec.
