Message-ID: <20130731221235.GB11378@thunk.org>
Date: Wed, 31 Jul 2013 18:12:35 -0400
From: Theodore Ts'o <tytso@....edu>
To: Jan Kara <jack@...e.cz>
Cc: Ext4 Developers List <linux-ext4@...r.kernel.org>
Subject: Re: [PATCH -v2] ext4: avoid reusing recently deleted inodes in no journal mode

On Mon, Jul 29, 2013 at 03:32:31PM +0200, Jan Kara wrote:
>   I'd use dirty_expire_interval here so that we are at least tied to
> flusher thread timeout...

Makes sense, done.

> > +	/*
> > +	 * Five seconds should be enough time for a block to be
> > +	 * committed to the platter once it is sent to the HDD
> > +	 */
> > +	recentcy = 5;
>   This is completely ad-hoc and I cannot say anything about what value
> would be appropriate here. Jens told me disk can hold on sectors for
> *minutes* in their writeback caches when these blocks are unsuitably placed
> and there's enough streaming IO going on. So the question is what value do
> we want to base this on?

Yes, it is completely ad-hoc.  How long a disk will hold on to sectors
in its writeback cache really depends on its elevator algorithms, so it
is indeed entirely a heuristic.

I will say, though, that the workloads which allow a sector to be pinned
for even seconds (let alone minutes) are very artificial workloads, and
it's very unclear whether it's realistic that they would exist on most
normal production servers.  The use case for no journal mode is places
where performance is critical, so we really don't want to send a CACHE
FLUSH command, which is really the only way you can be sure.

						- Ted
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
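
[Editor's note: the following is a rough, user-space sketch of the "recentcy"
heuristic being discussed above, not the code that was merged into ext4.  The
helper name, the centisecond conversion of dirty_expire_interval, and the exact
comparison policy are assumptions made for illustration only.  The idea: in
no-journal mode, refuse to hand out an inode whose on-disk deletion time
(dtime) falls inside a small window, so a crash before the deleted inode
reaches the platter cannot resurrect it under a new owner.]

    /* Illustrative sketch only -- names and policy are assumptions. */
    #include <stdio.h>
    #include <time.h>

    /* Ad-hoc 5-second floor from the patch under discussion. */
    #define RECENTCY_MIN_SECS 5

    /* Stand-in for the kernel's dirty_expire_interval (centisecs). */
    static unsigned int dirty_expire_interval = 30 * 100;

    /* Return nonzero if an inode deleted at 'dtime' is still too recent to reuse. */
    static int recently_deleted(time_t dtime, time_t now)
    {
            time_t window = dirty_expire_interval / 100;  /* centisecs -> secs */

            if (window < RECENTCY_MIN_SECS)
                    window = RECENTCY_MIN_SECS;

            /* Never deleted, or clock skew: treat as safe to reuse. */
            if (dtime == 0 || dtime > now)
                    return 0;

            return now < dtime + window;
    }

    int main(void)
    {
            time_t now = time(NULL);

            printf("deleted 2s ago  -> recently deleted? %d\n",
                   recently_deleted(now - 2, now));
            printf("deleted 60s ago -> recently deleted? %d\n",
                   recently_deleted(now - 60, now));
            return 0;
    }

[The point of tying the window to dirty_expire_interval, per Jan's suggestion,
is that it at least tracks how long dirty data may legitimately sit before the
flusher thread pushes it out, rather than being a bare magic number.]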