Date:	Mon, 30 Mar 2009 10:16:51 -0400
From:	Ric Wheeler <rwheeler@...hat.com>
To:	David Rees <drees76@...il.com>
CC:	Jeff Garzik <jeff@...zik.org>, Theodore Tso <tytso@....edu>,
	Jan Kara <jack@...e.cz>, Chris Mason <chris.mason@...cle.com>,
	Ric Wheeler <rwheeler@...hat.com>,
	Linux Kernel Developers List <linux-kernel@...r.kernel.org>,
	Ext4 Developers List <linux-ext4@...r.kernel.org>
Subject: Re: [PATCH 0/3] Ext3 latency improvement patches

David Rees wrote:
> On Fri, Mar 27, 2009 at 5:14 PM, Jeff Garzik <jeff@...zik.org> wrote:
>   
>> Theodore Tso wrote:
>>     
>>> OTOH, the really big databases will tend to use direct I/O, so they
>>> won't be dirtying the page cache anyway.  So maybe it's not worth the
>>>       
>> Not necessarily...  From what I understand, a lot of the individual
>> low-level components in cloud storage, such as GoogleFS's chunk server[1] do
>> not bypass the page cache, even though they do care about the details of
>> data caching and data consistency.
>>     
>
> PostgreSQL does not use direct I/O, either (except for the
> write-ahead-logs which are written sequentially and only get read
> during database recovery).  I'm sure that most of MySQL's database
> engines also don't.
>
> -Dave
>   

The high-end, traditional databases like DB2 and Oracle definitely do 
tend to use direct I/O and carefully manage cached vs. uncached pages 
on their own.
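
For concreteness, opening a data file for direct I/O on Linux looks 
roughly like this (a minimal sketch; the path is made up, and O_DIRECT 
imposes alignment requirements on buffers, offsets, and lengths):

#define _GNU_SOURCE	/* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Bypass the page cache; the database manages its own cache. */
	int fd = open("/data/tablespace.dbf", O_RDWR | O_DIRECT);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* ... aligned reads and writes go here ... */
	close(fd);
	return 0;
}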

They also tend to use database "page sizes" larger than our VM page 
size or FS block size, and they work hard to send large, aligned I/Os 
down to storage in the correct order so that the database is fully 
recoverable after a crash (no partially updated DB pages, aka "torn 
pages").
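
Just to illustrate the aligned-write mechanics (the 16KB page size and 
4KB alignment below are assumptions, not any particular database's 
numbers, and real engines layer WAL or doublewrite schemes on top of 
this to deal with torn pages):

#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define DB_PAGE_SIZE	16384	/* larger than the 4KB VM page size */

/*
 * Write one aligned DB page at an aligned offset, then flush it to
 * stable storage before the transaction is acknowledged.
 */
int write_db_page(int fd, const void *page, off_t page_no)
{
	void *buf;
	int ret = -1;

	/* O_DIRECT requires an aligned buffer */
	if (posix_memalign(&buf, 4096, DB_PAGE_SIZE))
		return -1;
	memcpy(buf, page, DB_PAGE_SIZE);

	if (pwrite(fd, buf, DB_PAGE_SIZE, page_no * DB_PAGE_SIZE) == DB_PAGE_SIZE &&
	    fdatasync(fd) == 0)
		ret = 0;

	free(buf);
	return ret;
}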

A lot of the cloud storage people rely on whole files. For example, 
you implement RAID at the file level by breaking a file into K chunks 
and sending each chunk over the network to a different machine. Each 
chunk is really a whole file and is written to disk (hopefully with an 
fsync()!) before the transaction is acked. They don't worry about data 
integrity for objects smaller than that chunk size.

At least, this is how we did it in Centera; without doing that, you 
are definitely open to data loss.
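
In rough C, the invariant is just this (the function name is made up 
and the error handling is simplified; a fully careful implementation 
would also fsync() the parent directory so the new file's name itself 
survives a crash):

#include <fcntl.h>
#include <unistd.h>

/*
 * Hypothetical chunk-server write path: the chunk is a whole file,
 * and we must not ack the transaction until the data is on stable
 * storage.
 */
int store_chunk(const char *path, const void *data, size_t len)
{
	int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);

	if (fd < 0)
		return -1;
	if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
		close(fd);
		return -1;
	}
	if (close(fd) != 0)
		return -1;
	/* Only now is it safe to ack the transaction to the peer. */
	return 0;
}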

Ric


