Message-ID: <1350131840.1917.53.camel@kjgkr>
Date: Sat, 13 Oct 2012 21:37:20 +0900
From: Jaegeuk Kim <jaegeuk.kim@...il.com>
To: Namjae Jeon <linkinjeon@...il.com>
Cc: Jaegeuk Kim <jaegeuk.kim@...sung.com>,
Arnd Bergmann <arnd@...db.de>,
David Woodhouse <dwmw2@...radead.org>,
Luk Czerner <lczerner@...hat.com>,
Vyacheslav Dubeyko <slava@...eyko.com>,
Marco Stornelli <marco.stornelli@...il.com>,
Al Viro <viro@...iv.linux.org.uk>, tytso@....edu,
gregkh@...uxfoundation.org, linux-kernel@...r.kernel.org,
chur.lee@...sung.com, cm224.lee@...sung.com,
jooyoung.hwang@...sung.com, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH 00/16] f2fs: introduce flash-friendly file system
On 2012-10-13 (Sat), 13:26 +0900, Namjae Jeon wrote:
> Is there a high possibility that the storage device can be rapidly
> worn out by the cleaning process? e.g. a severe fragmentation
> situation caused by creating and removing small files.
>
Yes, the cleaning process in F2FS induces additional writes, so the
flash storage can wear out more quickly.
However, what about traditional file systems?
As we all know, the FTL has a wear-leveling issue too, due to garbage
collection overhead that is fundamentally similar to the cleaning
overhead in LFS or F2FS.
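That shared overhead can be illustrated with a toy model: random updates to a log-structured layout force the cleaner to migrate still-live blocks out of victim segments, so the device writes more blocks than the user submitted. The sketch below (my own simplified model, not F2FS's or any FTL's actual algorithm; segment sizes, the greedy victim policy, and the 50% utilization are illustrative assumptions) measures that write amplification factor.

```python
# Toy model of cleaning-induced write amplification in a log-structured
# layout. All parameters and the greedy victim policy are illustrative
# assumptions, not the actual F2FS (or any FTL) algorithm.
import random

SEGMENTS, BLOCKS_PER_SEG = 64, 32
LOGICAL_BLOCKS = 1024  # user data occupies 50% of raw capacity

def write_amplification(updates, seed=0):
    rnd = random.Random(seed)
    loc = [None] * LOGICAL_BLOCKS             # segment holding each logical block
    valid = [set() for _ in range(SEGMENTS)]  # live logical blocks per segment
    free = list(range(1, SEGMENTS))
    cur, used = 0, 0                          # current log head segment and fill
    total = 0                                 # all device writes (user + cleaning)

    def append(lb):
        nonlocal cur, used, total
        if used == BLOCKS_PER_SEG:            # log head full: take a free segment
            cur, used = free.pop(), 0
        if loc[lb] is not None:               # invalidate the old copy
            valid[loc[lb]].discard(lb)
        valid[cur].add(lb)
        loc[lb] = cur
        used += 1
        total += 1

    for _ in range(updates):
        while len(free) < 2:                  # clean: greedily pick the segment
            victim = min((s for s in range(SEGMENTS)     # with fewest live blocks
                          if s != cur and s not in free),
                         key=lambda s: len(valid[s]))
            for lb in list(valid[victim]):    # migrating live data = extra writes
                append(lb)
            valid[victim].clear()
            free.append(victim)
        append(rnd.randrange(LOGICAL_BLOCKS))

    return total / updates                    # write amplification factor

print(write_amplification(50_000))            # a factor noticeably above 1.0
```

Every write beyond 1.0x here is wear that the user never asked for, and the same arithmetic applies whether the cleaner lives in the FTL or in the file system.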
So, what's the difference between them?
IMHO, the major factor in reducing the cleaning or garbage collection
overhead is how efficiently hot and cold data are separated.
So, which is the better layer to achieve that, the FTL or the file
system?
I think the answer is the file system, since the file system has much
more information on the hotness of all the data, while the FTL doesn't
know, or finds it hard to figure out, that kind of information.
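To make that concrete: the file system can classify a block by its type and origin before it is ever written, and steer each class to a separate log head, whereas the FTL only sees anonymous sector writes. The sketch below is a rough, multi-head-logging-style policy of my own; the specific rules (directory data is hot, cleaner-moved data is cold, and so on) are illustrative assumptions, not the exact F2FS policy.

```python
# Rough sketch of file-system-side hot/cold classification steering each
# block class to its own log head. The rules here are illustrative
# assumptions, not F2FS's exact multi-head logging policy.
HOT, WARM, COLD = "hot", "warm", "cold"

def classify(block_type, owner, moved_by_cleaner=False):
    """block_type: 'data' or 'node'; owner: 'dir' or 'file'."""
    if moved_by_cleaner:
        return COLD          # data still live at cleaning time has proven cold
    if block_type == "data":
        return HOT if owner == "dir" else WARM   # dentry blocks churn fastest
    # node (inode/index) blocks: directory nodes are updated most often
    return HOT if owner == "dir" else WARM

logs = {HOT: [], WARM: [], COLD: []}    # one append-only log head per class

def log_write(block_id, block_type, owner, moved=False):
    temp = classify(block_type, owner, moved)
    logs[temp].append(block_id)
    return temp

print(log_write(1, "data", "dir"))                # hot
print(log_write(2, "data", "file"))               # warm
print(log_write(3, "data", "file", moved=True))   # cold
```

Because hot segments tend to become fully invalid and cold segments stay fully valid, the cleaner finds cheap victims; an FTL without this context ends up with segments of mixed hotness, which are the expensive ones to clean.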
Therefore, I think the LFS approach is more beneficial for extending
the lifetime of the storage than the traditional one.
And, in order to do this perfectly, one prerequisite is the alignment
between the FTL and F2FS.
> And you told us only the advantages of f2fs. Would you tell us the disadvantages?
I think there is a scenario like this.
1) One big file is created and its data is written sequentially.
2) Many random writes are then done across the whole file range.
3) The user discards the cached data by doing "drop_caches" or rebooting.
At this point, I worry about the sequential read performance due to the
fragmentation.
I don't know how frequently this use-case happens, but it is one of the
cons of the LFS approach.
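The effect is easy to show with a block-mapping sketch (my own toy model, not F2FS's actual allocator): after a sequential create, the file is one physical extent; once random overwrites are redirected to the log head, the logical-to-physical map shatters into many extents, which is exactly what hurts a later sequential read.

```python
# Toy model of step 1) and 2) above: a sequentially written file whose
# random overwrites go to the log head, as in any LFS. Sizes are
# illustrative assumptions.
import random

def extent_count(mapping):
    # number of physically contiguous runs when reading in logical order
    return 1 + sum(1 for a, b in zip(mapping, mapping[1:]) if b != a + 1)

def simulate(file_blocks=1024, overwrites=512, seed=0):
    rnd = random.Random(seed)
    mapping = list(range(file_blocks))  # 1) sequential create: one big extent
    log_head = file_blocks              # the log continues past the file
    before = extent_count(mapping)
    for _ in range(overwrites):         # 2) random overwrites land at the head
        mapping[rnd.randrange(file_blocks)] = log_head
        log_head += 1
    return before, extent_count(mapping)

before, after = simulate()
print(before, after)  # one extent before; hundreds of extents after
```

Once the page cache is dropped in step 3), every extent boundary becomes a potential seek or separate I/O on the next sequential read.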
Nevertheless, I'm thinking that the performance could be enhanced by
cooperating with the readahead mechanism in VFS.
Thanks,
>
> Thanks.
--
Jaegeuk Kim
Samsung
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/