Message-ID: <45B8D041.8050507@cfl.rr.com>
Date:	Thu, 25 Jan 2007 10:44:01 -0500
From:	Phillip Susi <psusi@....rr.com>
To:	Denis Vlasenko <vda.linux@...glemail.com>
CC:	Michael Tokarev <mjt@....msk.ru>,
	Linus Torvalds <torvalds@...l.org>, Viktor <vvp01@...ox.ru>,
	Aubrey <aubreylee@...il.com>, Hua Zhong <hzhong@...il.com>,
	Hugh Dickins <hugh@...itas.com>, linux-kernel@...r.kernel.org,
	hch@...radead.org, kenneth.w.chen@in
Subject: Re: O_DIRECT question

Denis Vlasenko wrote:
> I will still disagree on this point (on point "use O_DIRECT, it's faster").
> There is no reason why O_DIRECT should be faster than "normal" read/write
> to large, aligned buffer. If O_DIRECT is faster on today's kernel,
> then Linux' read()/write() can be optimized more.

Ahh but there IS a reason for it to be faster: the application knows 
what data it will require, so it should tell the kernel rather than ask 
it to guess.  Even if you had the kernel playing vmsplice games to 
avoid the copy to user space (which still has a fair amount of 
overhead), then you still have the problem of the kernel having to guess what
data the application will require next, and try to fetch it early.  Then 
when the application requests the data, if it is not already in memory, 
the application blocks until it is, and blocking stalls the pipeline.

> (I hoped that they can be made even *faster* than O_DIRECT, but as I said,
> you convinced me with your "error reporting" argument that reads must still
> block until entire buffer is read. Writes can avoid that - apps can do
> fdatasync/whatever to make sync writes & error checks if they want).


fdatasync() is not acceptable either because it flushes the entire file. 
This does not allow the application to control the ordering of various 
writes unless it limits itself to a single write/fdatasync pair at a 
time.  Further, fdatasync again blocks the application.

With aio, the application can keep several reads and writes going in 
parallel, thus keeping the pipeline full.  Even if the I/O were not 
O_DIRECT, and the kernel played vmsplice games to avoid the copy, it 
would still have more overhead and complexity and, I think, very little 
gain in most cases.


