Message-ID: <1345395217.2716.58.camel@tunafish>
Date: Sun, 19 Aug 2012 18:53:37 +0200
From: Dan Luedtke <mail@...rl.de>
To: Al Viro <viro@...IV.linux.org.uk>
Cc: linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] fs: Introducing Lanyard Filesystem
On Sun, 2012-08-19 at 15:27 +0100, Al Viro wrote:
> * unlimited recursion
I am already working on that one, but it's tricky.
> * unlink() does *not* truncate the file contents;
I did not know that.
> * while we are at it, neither of those should free the on-disk
> inode; again, that should happen only when the inode is evicted.
Makes sense now. Thanks!
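For my own notes, the pattern I take away is roughly the following (a sketch only; the lanyfs_* helpers are made up and not the current lanyfs code): unlink() only removes the directory entry and drops the link count, while the data blocks and the on-disk inode are released from ->evict_inode once the inode is actually evicted with i_nlink == 0.

	static int lanyfs_unlink(struct inode *dir, struct dentry *dentry)
	{
		struct inode *inode = dentry->d_inode;
		int err;

		/* remove the directory entry, nothing else */
		err = lanyfs_remove_dirent(dir, dentry);	/* hypothetical helper */
		if (err)
			return err;
		drop_nlink(inode);
		mark_inode_dirty(inode);
		return 0;
	}

	static void lanyfs_evict_inode(struct inode *inode)
	{
		truncate_inode_pages(&inode->i_data, 0);
		/* free data blocks and the on-disk inode only now,
		 * and only if the last link is gone */
		if (!inode->i_nlink)
			lanyfs_free_on_disk_inode(inode);	/* hypothetical helper */
		clear_inode(inode);
	}

Is that roughly what you meant?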
> * I might be missing something, but copying a bunch of files
> with something like cp /foo/* /mnt seems to be guaranteed to create
> really lousy binary tree in target directory (they will go in lexicographical
> order and you don't seem to rebalance the tree at all)
You missed nothing, there is no rebalancing yet. That's why performance
is bad at the moment as soon as the "tree" stops being a tree.
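To illustrate the problem for anyone following along (a plain userspace sketch, not the lanyfs code): with naive insertion and names arriving in sorted order, each new node becomes the right child of the previous one, so the "tree" degenerates into a list and lookups go from O(log n) to O(n).

	#include <string.h>

	struct node {
		const char *name;
		struct node *left, *right;
	};

	/* naive BST insert, no rebalancing */
	static struct node *insert(struct node *root, struct node *n)
	{
		if (!root)
			return n;
		if (strcmp(n->name, root->name) < 0)
			root->left = insert(root->left, n);
		else
			root->right = insert(root->right, n);
		return root;
	}

	/* Copying a directory with cp feeds the names in lexicographical
	 * order: every new name compares greater than the previous one,
	 * so the result is one long right spine, i.e. a linked list. */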
> * you are really abusing iget() there.
Noted. Thanks!
> * minor point, but endianness-flipping in place is *the* way to get
> hard-to-catch endianness bugs. foo = cpu_to_le64(foo) is a bloody bad idea;
> either use object for host-endian all along, or use it only for (in your
> case) little-endian.
I am not sure I understood this right.
At what point should I convert e.g. the file size (a little-endian 64-bit
value stored on disk) to host endianness? When filling the inode?
Is inode->i_size = le64_to_cpu(size) bad, too?
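To make sure I am asking the right question, this is how I read your advice (a sketch, field and struct names are made up):

	/* on-disk fields stay little-endian for their whole life */
	struct lanyfs_raw_inode {
		__le64 size;
		/* ... */
	};

	/* bad: in-place flip, the same variable is sometimes LE and
	 * sometimes CPU-order, and nothing in the type says which */
	raw->size = cpu_to_le64(raw->size);

	/* ok (as I understand it): the raw struct is always LE, the
	 * in-core inode is always CPU order, and the conversion happens
	 * exactly once when filling it */
	inode->i_size = le64_to_cpu(raw->size);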
Thank you very much for your comments! That'll keep me busy for a few weeks.
regards,
Dan
PS: As Jochen Striepe pointed out, lanyfs@...relist.com behaves badly, so I
removed it.
--
Dan Luedtke
http://www.danrl.de