Message-Id: <200808261654.AA00216@capsicum.lab.ntt.co.jp>
Date: Wed, 27 Aug 2008 01:54:30 +0900
From: Ryusuke Konishi <konishi.ryusuke@....ntt.co.jp>
To: Jorn Engel <joern@...fs.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC] nilfs2: continuous snapshotting file system
On Tue, 26 Aug 2008 12:16:19 +0200, Jorn Engel wrote:
>On Thu, 21 August 2008 01:13:45 +0900, Ryusuke Konishi wrote:
>>
>> 4. To make disk blocks relocatable, NILFS2 maintains a table file (called DAT)
>> which maps virtual disk block addresses to usual block addresses.
>> The lifetime information is recorded in the DAT per virtual block address.
>
>Interesting approach. Does that mean that every block lookup involves
>two disk accesses, one for the DAT and one for the actual block?
Simply stated, yes.
But in practice the number of disk accesses is lower, because the DAT is
cached like a regular file and read-ahead is applied to it as well.
The DAT cache works well enough.
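
To illustrate the idea (this is only a rough userspace sketch with made-up
structure and function names, not the actual NILFS2 code): a DAT entry maps a
virtual block number to the current on-disk block number plus its lifetime,
and a small cache in front of the DAT absorbs most lookups, so the extra
access rarely reaches the disk:

#include <stdint.h>
#include <stdio.h>

struct dat_entry {                /* hypothetical on-disk DAT record */
	uint64_t de_blocknr;      /* current physical block number */
	uint64_t de_start;        /* checkpoint where the block went live */
	uint64_t de_end;          /* checkpoint where the block died */
};

#define DAT_CACHE_SIZE 256

static struct {
	uint64_t vblocknr;
	struct dat_entry entry;
	int valid;
} dat_cache[DAT_CACHE_SIZE];

/* Stand-in for reading the DAT file from disk (the extra I/O). */
static struct dat_entry dat_read_from_disk(uint64_t vblocknr)
{
	struct dat_entry de = {
		.de_blocknr = vblocknr + 1000,  /* fake mapping */
		.de_start = 1,
		.de_end = 0,
	};
	return de;
}

/* Translate a virtual block number, hitting the cache first. */
static uint64_t dat_translate(uint64_t vblocknr)
{
	unsigned int slot = vblocknr % DAT_CACHE_SIZE;

	if (!dat_cache[slot].valid || dat_cache[slot].vblocknr != vblocknr) {
		dat_cache[slot].entry = dat_read_from_disk(vblocknr);
		dat_cache[slot].vblocknr = vblocknr;
		dat_cache[slot].valid = 1;
	}
	return dat_cache[slot].entry.de_blocknr;
}

int main(void)
{
	printf("vblock 42 -> block %llu\n",
	       (unsigned long long)dat_translate(42));   /* one DAT read */
	printf("vblock 42 -> block %llu\n",
	       (unsigned long long)dat_translate(42));   /* cache hit */
	return 0;
}

With read-ahead on the DAT file, neighboring entries tend to be resident
already, which is why the second access usually costs nothing.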
>> The current NILFS2 GC simply reclaims from the oldest segment, so the disk
>> partition acts like a ring buffer. (this behaviour can be changed by
>> replacing the userland daemon).
>
>Is this userland daemon really necessary? I do all that stuff in
>kernelspace and the amount of code I have is likely less than would be
>necessary for the userspace interface alone. Apart from creating a
>plethora of research papers, I never saw much use for pluggable
>cleaners.
Well, that sounds reasonable.
Still, I cannot say which is better for now.
One colleague intends to develop other types of cleaners, and another has
experimentally built a cleaner with a GUI.
In addition, a userland cleaner opens up possibilities to integrate
attractive features like defragmentation, background data verification, or
remote backups.
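
For reference, the default policy mentioned above (reclaim from the oldest
segment, so the partition acts like a ring buffer) is trivial to express.
Here is a toy userspace sketch of that choice, with illustrative names only,
not the real cleaner interface:

#include <stdint.h>
#include <stdio.h>

struct segment_info {
	uint64_t seq;             /* creation sequence number (age) */
	unsigned int live_blocks; /* blocks still in use */
	int in_use;
};

/* Return the index of the oldest in-use segment, or -1 if none. */
static int pick_victim(const struct segment_info *segs, int nsegs)
{
	int victim = -1;
	uint64_t oldest = UINT64_MAX;

	for (int i = 0; i < nsegs; i++) {
		if (segs[i].in_use && segs[i].seq < oldest) {
			oldest = segs[i].seq;
			victim = i;
		}
	}
	return victim;
}

int main(void)
{
	struct segment_info segs[] = {
		{ .seq = 7, .live_blocks = 120, .in_use = 1 },
		{ .seq = 3, .live_blocks = 500, .in_use = 1 },
		{ .seq = 9, .live_blocks =  10, .in_use = 1 },
	};

	/* A smarter policy could weigh live_blocks against age instead. */
	printf("GC victim: segment %d\n", pick_victim(segs, 3));
	return 0;
}

A replacement daemon would essentially only swap out pick_victim(), for
example to prefer segments with few live blocks.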
>Did you encounter any nasty deadlocks and how did you solve them?
>Finding deadlocks in the vfs-interaction became a hobby of mine when
>testing logfs and at least one other lfs seems to have had similar
>problems - they exported the inode_lock in their patch. ;)
>
>Jorn
Yeah, it was a very tough battle :)
The read path was OK, but write was hard; I went over the vfs code again
and again.
We've implemented NILFS without bringing any specific changes into the vfs.
However, if we can find a common basis for LFSes, I'm glad to cooperate
with you.
Though I don't know whether exporting inode_lock would be part of that ;)
Regards,
Ryusuke Konishi