Message-ID: <5273.1199998102@turing-police.cc.vt.edu>
Date:	Thu, 10 Jan 2008 15:48:22 -0500
From:	Valdis.Kletnieks@...edu
To:	Rik van Riel <riel@...hat.com>
Cc:	Jakob Oestergaard <jakob@...hought.net>,
	Anton Salikhmetov <salikhmetov@...il.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH][RFC][BUG] updating the ctime and mtime time stamps in msync()

On Wed, 09 Jan 2008 18:41:41 EST, Rik van Riel said:

> I guess a third possible time (if we want to minimize the number of
> updates) would be when natural syncing of the file data to disk, by
> other things in the VM, would be about to clear the I_DIRTY_PAGES
> flag on the inode.  That way we do not need to remember any special
> "we already flushed all dirty data, but we have not updated the mtime
> and ctime yet" state.
> 
> Does this sound reasonable?
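
To make sure I'm reading that right, roughly something like this (only a
conceptual sketch - the helper names below are made up, it's not against any
real tree):

	/* Update mtime/ctime in the writeback path, right before
	 * I_DIRTY_PAGES is dropped, so no extra "pages flushed but
	 * timestamps still pending" state has to be remembered. */
	static void writeback_one_inode(struct inode *inode)
	{
		if (inode->i_state & I_DIRTY_PAGES) {
			update_inode_times(inode);	/* made-up helper */
			write_dirty_pages(inode);	/* made-up helper */
			inode->i_state &= ~I_DIRTY_PAGES;
		}
	}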

Is it possible that a *very* large file (a multi-gigabyte or even bigger database,
for example) would never get out of I_DIRTY_PAGES, because there are always a
few dozen just-recently dirtied pages that haven't made it out to disk yet?

Of course, getting a *consistent* backup of a file like that is quite the
challenge already, because of the high likelihood of the file being changed
while the backup runs - that's why big sites often do a 'quiesce/snapshot/wakeup'
on a database and then back up the snapshot...
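
For anyone who wants to see the behaviour this thread is about, a minimal
user-space check could look like the below (it assumes an existing writable
file named "testfile" that is at least one page long - the name is just a
placeholder):

	#include <stdio.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/mman.h>
	#include <sys/stat.h>

	int main(void)
	{
		struct stat before, after;
		char *map;
		int fd = open("testfile", O_RDWR);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		fstat(fd, &before);

		map = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
		if (map == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		sleep(1);		/* so a timestamp change is visible */
		map[0] ^= 1;		/* dirty the page through the mapping */
		msync(map, 4096, MS_SYNC);

		fstat(fd, &after);
		printf("mtime before: %ld  after: %ld\n",
		       (long)before.st_mtime, (long)after.st_mtime);

		munmap(map, 4096);
		close(fd);
		return 0;
	}

If the two timestamps come back identical, that's the behaviour the patch in
this thread is meant to address.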


