Message-ID: <528CC36A.7080003@profihost.ag>
Date:	Wed, 20 Nov 2013 15:12:58 +0100
From:	Stefan Priebe - Profihost AG <s.priebe@...fihost.ag>
To:	Chinmay V S <cvs268@...il.com>,
	Christoph Hellwig <hch@...radead.org>
CC:	linux-fsdevel@...r.kernel.org, Al Viro <viro@...iv.linux.org.uk>,
	LKML <linux-kernel@...r.kernel.org>, matthew@....cx
Subject: Re: Why is O_DSYNC on linux so slow / what's wrong with my SSD?

Hi ChinmayVS,

On 20.11.2013 14:34, Chinmay V S wrote:
> Hi Stefan,
> 
> Christoph is bang on right. To further elaborate, here is what is
> happening in the above case:
> By using the DIRECT and SYNC/DSYNC flags on a block device (i.e.
> bypassing the filesystem layer), you are essentially enforcing a
> CMD_FLUSH on each I/O command sent to the disk. This is by design of
> the block-device driver in the Linux kernel, and it severely degrades
> performance.
> 
> A detailed walk-through of the various I/O scenarios is available at
> thecodeartist.blogspot.com/2012/08/hdd-filesystems-osync.html
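For concreteness, here is a minimal sketch (not from the original thread;
the device path and transfer counts are placeholders) of the kind of I/O
loop being described: 4KiB writes to a raw block device opened with
O_DIRECT | O_DSYNC, where each write on a flush-capable drive is followed
by a CMD_FLUSH.

/* Minimal sketch: 4KiB O_DIRECT + O_DSYNC writes to a raw block device.
 * On a drive advertising a volatile write cache, each write completes
 * only after the kernel has also issued a CMD_FLUSH, which is what
 * degrades performance so badly. Device path is a placeholder. */
#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	void *buf;
	int fd, i;

	/* O_DIRECT requires an aligned buffer; 4096 covers typical devices. */
	if (posix_memalign(&buf, 4096, 4096))
		return 1;
	memset(buf, 0xab, 4096);

	fd = open("/dev/sdX", O_WRONLY | O_DIRECT | O_DSYNC);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	for (i = 0; i < 1000; i++) {
		if (pwrite(fd, buf, 4096, (off_t)i * 4096) != 4096) {
			perror("pwrite");
			return 1;
		}
	}
	close(fd);
	free(buf);
	return 0;
}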
> 
> Note that SYNC/DSYNC on a filesystem (e.g. ext2/3/4) does NOT issue a
> CMD_FLUSH. A "sync" via the filesystem simply guarantees that the data
> is sent to the disk, not that it is actually flushed to the platter. It
> will continue to reside in the disk's internal cache, waiting to be
> written out in an optimal manner (a bunch of writes re-ordered to be
> sequential on-disk and clubbed together in one go). This can affect
> performance to a large extent on modern HDDs with NCQ support
> (CMD_FLUSH simply cancels all performance benefits of NCQ).
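The counterpart sketch to the one above, pointed at a regular file on a
mounted filesystem instead of the raw device (the file path is again a
placeholder): per the behaviour described in the previous paragraph, each
O_DSYNC write here completes without a per-write CMD_FLUSH, so the data
may still be sitting in the drive's volatile cache when pwrite() returns.

/* Counterpart sketch: identical 4KiB O_DSYNC writes, but through a
 * filesystem, so no CMD_FLUSH is issued per write. File path is a
 * placeholder. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	int fd, i;

	memset(buf, 0xab, sizeof(buf));
	fd = open("/mnt/test/file", O_WRONLY | O_CREAT | O_DSYNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	for (i = 0; i < 1000; i++) {
		if (pwrite(fd, buf, sizeof(buf),
			   (off_t)i * sizeof(buf)) != (ssize_t)sizeof(buf)) {
			perror("pwrite");
			return 1;
		}
	}
	close(fd);
	return 0;
}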
> 
> In the case of SSDs, the huge IOPS figure for a disk (40,000 for the
> Crucial M4) is again typically observed with the write cache enabled.
> For Crucial M4 SSDs, see
> http://www.crucial.com/pdf/tech_specs-letter_crucial_m4_ssd_v3-11-11_online.pdf
> Footnote1 - "Typical I/O performance numbers as measured using Iometer
> with a queue depth of 32 and write cache enabled. Iometer measurements
> are performed on a 8GB span. 4k transfers used for Read/Write latency
> values."

Thanks for your great and detailed reply. I'm just wondering why an
Intel 520 SSD degrades in speed by just 2% under O_SYNC, while the
Intel 530, the newer model and replacement for the 520, degrades by
75% like the Crucial M4.

The Intel DC S3500, by contrast, delivers nearly 98% of its
performance even under O_SYNC.

> To simply disable this behaviour and make SYNC/DSYNC performance on
> raw block-device I/O resemble standard filesystem I/O, you may want
> to apply the following patch to your kernel:
> https://gist.github.com/TheCodeArtist/93dddcd6a21dc81414ba
> 
> The above patch simply disables CMD_FLUSH support, even on disks
> that claim to support it.

Is this the right one? By adding ahci_dummy_read_id, do we disable
CMD_FLUSH?

What is the risk of doing that?

Thanks!

Stefan
