Message-Id: <20080920.194358.07565122.ryusuke@osrg.net>
Date:	Sat, 20 Sep 2008 19:43:58 +0900 (JST)
From:	Ryusuke Konishi <konishi.ryusuke@....ntt.co.jp>
To:	joern@...fs.org
Cc:	akpm@...ux-foundation.org, linux-fsdevel@...r.kernel.org,
	linux-kernel@...r.kernel.org, kihara.seiji@....ntt.co.jp,
	amagai.yoshiji@....ntt.co.jp
Subject: Re: [PATCH 25/27] nilfs2: block cache for garbage collection

Hi Jörn,
On Thu, 18 Sep 2008 00:49:53 +0200, Jörn Engel wrote:
> On Thu, 18 September 2008 04:09:45 +0900, Ryusuke Konishi wrote:
> > If so, the remaining problem would be the lock dependencies as you
> > mentioned before.
> 
> You should have the same problem already - in some shape or another.  If
> you can have two data structures for the same content, a real inode and
> a dummy inode, you have a race condition.  Quite possibly one involving
> data corruption.
>
> Well, one way to avoid both the race and the locking complexity is by
> stopping all writes during GC and destroying all dummy inodes before
> writes resume. 

The current version of NILFS2 does take this approach.
Pages held by the dummy inodes are released after they are copied
to a new log.
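The "stop writes, copy, release" flow above can be sketched as a toy model.
This is only an illustration of the described behavior; the names
(GCCache, stage, commit) are hypothetical and are not actual NILFS2
identifiers or kernel APIs.

```python
# Toy model of the GC flow described above: live blocks from a victim
# segment are staged in a cache standing in for the pages held by the
# dummy inodes, copied to the tail of a new log, and only then released.
# All names here are illustrative simplifications, not NILFS2 code.

class GCCache:
    def __init__(self):
        self.blocks = []          # pages held on behalf of dummy inodes

    def stage(self, live_blocks):
        """Read live blocks of the victim segment into the GC cache."""
        self.blocks.extend(live_blocks)

    def commit(self, log):
        """Copy staged blocks to the new log, then release the cache,
        mirroring "released after they are copied to a new log"."""
        log.extend(self.blocks)
        self.blocks.clear()
        return log

log = []
cache = GCCache()
cache.stage([7, 9])               # live block contents in the victim segment
cache.commit(log)
assert log == [7, 9]
assert cache.blocks == []         # cache is empty again after the copy
```

The point of the ordering is that the dummy-inode pages never outlive the
copy, so regular writes can safely resume afterwards.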

> But that would be inefficient in several cases.  When
> GC'ing data that is dirty in the caches, you move the old stale data
> during GC and write the new data soon after.  And you always flush the
> caches after GC, even if your machine has no better use for the memory.

As for NILFS2, the dirty blocks and the blocks to be moved by GC
never overlap, because the dirty blocks form a new generation.
So they must instead be written out individually.

Though we could reuse pages in the GC cache, the effect of this
optimization may be much smaller than in usual LFSes, because most
blocks in those pages may not belong to the latest generation.

Hmm, we would be better off measuring the frequency of true overlap
before getting to that.
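The generation argument above can be made concrete with a small sketch:
a block rewritten after the GC-target checkpoint carries a newer
generation number, so it falls out of the GC set by construction, and
the two sets stay disjoint.  The generation numbers and the block table
here are hypothetical simplifications, not NILFS2 data structures.

```python
# Sketch of the non-overlap property: blocks whose latest copy belongs
# to the GC-target generation are moved by GC; blocks rewritten since
# then belong to a newer generation and are written as dirty data.
# A real measurement of "true overlap" would count how often the same
# block number appears in both roles across a GC pass.

def split_blocks(blocks, gc_generation):
    """Partition blocks by generation relative to the GC target.

    blocks maps block number -> generation of its latest copy.
    """
    gc_set = [b for b, gen in blocks.items() if gen == gc_generation]
    dirty = [b for b, gen in blocks.items() if gen > gc_generation]
    return gc_set, dirty

# block number -> generation of its latest copy
blocks = {10: 3, 11: 4, 12: 3, 13: 5}
gc_set, dirty = split_blocks(blocks, gc_generation=3)

# A block is either moved by GC or rewritten as dirty data, never both.
assert set(gc_set).isdisjoint(dirty)
assert sorted(gc_set) == [10, 12]
assert sorted(dirty) == [11, 13]
```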

> BTW: Some of the explanation you just gave me would do well as
> documentation in the source file as well.  That's the sort of background
> information new developers can spend month of mistakes and reverse
> engineering on. :)
> 
> Jörn

Well, thanks.  I'll do that.
NILFS2 needs more explanation than usual file systems; since it is
an LFS, it requires a time perspective as well. :)

Regards,
Ryusuke
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
