Message-ID: <1350051936.2299.29.camel@kjgkr>
Date:	Fri, 12 Oct 2012 23:25:36 +0900
From:	Jaegeuk Kim <jaegeuk.kim@...il.com>
To:	Vyacheslav Dubeyko <slava@...eyko.com>
Cc:	Jaegeuk Kim <jaegeuk.kim@...sung.com>,
	'Marco Stornelli' <marco.stornelli@...il.com>,
	'Al Viro' <viro@...iv.linux.org.uk>, tytso@....edu,
	gregkh@...uxfoundation.org, linux-kernel@...r.kernel.org,
	chur.lee@...sung.com, cm224.lee@...sung.com,
	jooyoung.hwang@...sung.com, linux-fsdevel@...r.kernel.org
Subject: RE: [PATCH 00/16] f2fs: introduce flash-friendly file system

2012-10-12 (Fri), 16:30 +0400, Vyacheslav Dubeyko:
> On Wed, 2012-10-10 at 18:43 +0900, Jaegeuk Kim wrote:
> > [snip]
> > > > How about the following scenario?
> > > > 1. data "a" is newly written.
> > > > 2. checkpoint "A" is done.
> > > > 3. data "a" is truncated.
> > > > 4. checkpoint "B" is done.
> > > >
> > > > If the fs exposes multiple snapshots like "A" and "B" to users, it cannot reuse the space
> > > > allocated by data "a" after checkpoint "B", even though data "a" was safely truncated by
> > > > checkpoint "B". This is because the fs must keep data "a" to allow a roll-back to "A".
> > > > So, even though the user sees some free space, the LFS may suffer from cleaning due to
> > > > exhausted free space.
> > > > If users want to avoid this, they have to remove snapshots by themselves. Or maybe automatically?
> > > >
> > > 
> > > I feel there is some misunderstanding of checkpoint/snapshot terminology here (especially for
> > > the NILFS2 case). A NILFS2 volume can contain only checkpoints (if the user hasn't created any
> > > snapshot). You are right that a snapshot cannot be deleted because, in other words, the user has
> > > marked that file system state as an important point. But checkpoints can be reclaimed easily.
> > > I can't see any problem with reclaiming free space from checkpoints in the above-mentioned
> > > scenario in the case of NILFS2. But
> > 
> > I meant that taking a snapshot implies a checkpoint.
> > And the problem is related to the real file system utilization managed by NILFS2.
> >                 [fs utilization to users]   [fs utilization managed by NILFS2]
> >    (initial)              X - 1                           X - 1
> > 1. new data "a"           X                               X
> > 2. snapshot "A"           X                               X
> > 3. truncate "a"           X - 1                           X
> > 4. snapshot "B"           X - 1                           X
> > 
> > After this, the user can see X - 1, but the performance will be affected by X.
> > Until snapshot "A" is removed, the user will experience performance determined by X.
> > Do I misunderstand?
> > 
> 
> Ok. Maybe I have some misunderstanding, but checkpoints and snapshots are different things to me (especially in the case of NILFS2). :-)
> 
> The most important point is that, from your point of view, f2fs has a more efficient scheme for working with checkpoints. If you are right, then that is very good. And I need to become more familiar with the f2fs code.
> 

Ok, thanks.
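
To make the utilization scenario quoted above concrete, here is a minimal
sketch (plain Python as a thought model; this is not NILFS2 or f2fs code,
and all names are made up). It only shows why the space freed by the
truncate in step 3 stays pinned until snapshot "A" is deleted, i.e. why the
two columns of the table differ:

live = set()          # blocks referenced by the current file system tree
snapshots = {}        # snapshot name -> frozen set of blocks it references

def user_visible_used():
    return len(live)

def space_lfs_must_keep():
    pinned = set(live)
    for blocks in snapshots.values():
        pinned |= blocks
    return len(pinned)

live.add("a")                     # 1. new data "a"
snapshots["A"] = frozenset(live)  # 2. snapshot "A" still references "a"
live.discard("a")                 # 3. truncate "a": the user no longer sees it
snapshots["B"] = frozenset(live)  # 4. snapshot "B"

print(user_visible_used())        # 0 -> the "X - 1" column
print(space_lfs_must_keep())      # 1 -> the "X" column: "a" is pinned by "A"

del snapshots["A"]                # only now can the cleaner reclaim "a"
print(space_lfs_must_keep())      # 0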

> [snip]
> > > As far as I know, NILFS2 has a garbage collector that removes checkpoints automatically in the
> > > background. But it is also possible to force removal of both checkpoints and snapshots by hand
> > > with a special utility. As
> > 
> > If users do not want snapshots to be removed automatically, do they have to configure that themselves too?
> > 
> 
> As far as I know, NILFS2 doesn't delete snapshots automatically, but checkpoints, yes. Moreover, there is a nilfs_cleanerd.conf configuration file that makes it possible to manage the NILFS cleanerd daemon's behavior (min/max number of clean segments, selection policy, check/clean intervals, and so on).
> 

Ok.
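
As a side note on those tunables: the knobs mentioned above live in
nilfs_cleanerd.conf. A rough, illustrative excerpt is below; the values are
examples only and the key names are written from memory, so please check
nilfs_cleanerd.conf(5) for the authoritative list:

protection_period     3600        # seconds a checkpoint is protected from cleaning
min_clean_segments    10%         # start cleaning when clean segments drop below this
max_clean_segments    20%         # stop cleaning once this many segments are clean
selection_policy      timestamp   # reclaim the oldest segments first
nsegments_per_clean   2           # segments reclaimed per cleaning pass
cleaning_interval     5           # seconds between cleaning passes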

> [snip]
> > > > IMHO, the user does not need to know how many snapshots exist or to track the fs utilization
> > > > all the time.
> > > > (Off list: I don't know why the cleaning process should be tuned by users.)
> > > >
> > > 
> > > What do you plan to do in the case of user complaints about issues with free space reclaiming?
> > > If the user doesn't know about checkpoints and has no tools for accessing checkpoints, then how
> > > is it possible to investigate issues with free space reclaiming on the user's side?
> > 
> > Could you explain why reclaiming free space is an issue?
> > IMHO, that issue is caused by adopting multiple snapshots.
> > 
> 
> I didn't mean that reclaiming free space is an issue. I hope that f2fs
> is stable, but unfortunately it is not possible for any software to be
> completely free of bugs. So, in any case, f2fs users can run into some
> issues during use. One possible issue is an unexpected situation in
> which free space is not reclaimed. So, my question was about the
> possibility of investigating such a bug on the user's side. From my
> point of view, NILFS2 has very good utilities for such an investigation.

You mean fsck?
Of course, we've implemented an fsck tool as well.
But the reason I haven't opened it is that the code is a mess.
Another reason is that the current fsck tool only checks
the consistency of f2fs.
We're still working on it so that it can be opened.
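
To illustrate what "checks the consistency" means in practice, here is a toy
sketch (plain Python, not the real fsck.f2fs, with made-up structures): the
general idea is to cross-check the metadata the file system claims against
what is actually allocated and report anything that disagrees.

inodes = {                    # inode number -> blocks it claims to own
    1: {10, 11},
    2: {12},
}
allocated = {10, 11, 12, 13}  # blocks the allocation bitmap marks as in use

referenced = set()
for ino, blocks in inodes.items():
    for blk in blocks:
        if blk in referenced:
            print("block %d referenced twice (again by inode %d)" % (blk, ino))
        referenced.add(blk)

for blk in sorted(referenced - allocated):
    print("block %d referenced but not marked allocated" % blk)
for blk in sorted(allocated - referenced):
    print("block %d marked allocated but never referenced (leaked)" % blk)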

> 
> [snip]
> > > > In our experiments, *also* on Android phones, we've seen many random I/O patterns with
> > > > frequent fsync calls.
> > > > We found that the main problem is the database, and I think f2fs is beneficial here.
> > > 
> > > I think that databases are not the main use case on Android phones. The dominating use cases
> > > may rather be operations on multimedia data and operations with small files, from my point of view.
> > > 
> > > So, the following key points can be extracted from the shared paper: (1) a file has a complex
> > > structure; (2) sequential access is not sequential; (3) auxiliary files dominate; (4) multiple
> > > threads perform I/O.
> > > 
> > > I am afraid that random modification of different parts of files, together with I/O operations
> > > from multiple threads, can lead to significant fragmentation of both file contents and directory
> > > meta-information because of garbage collection.
> > 
> > Could you explain in more detail?
> > 
> 
> I mean that the complex structure of modern files can lead to random modification of small parts of a file.
> Moreover, such modifications can occur from multiple threads.
> So, to me this means that a copy-on-write policy can lead to fragmentation of a file's content.
> Then GC can introduce additional fragmentation as well.
> But maybe I have some misunderstanding of f2fs's internal techniques.
> 

Right. Random modification may cause data fragmentation due to COW in an LFS.
But this is the view from the host side only.
If we consider the FTL underneath a file system adopting the in-place-update scheme,
the FTL eventually has to handle the fragmentation issue instead of the
file system.
So, I think fragmentation is not an issue particular to LFS only.
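
A small sketch of that argument (a hypothetical model in Python, not f2fs and
not a real FTL): random overwrites scatter the physical placement of a file
whether the out-of-place remapping happens in a COW file system or in a
page-mapping FTL underneath an in-place-update file system; only the layer
doing the remapping changes.

import random
random.seed(0)

FILE_BLOCKS = 8

def scattered_layout(remapping_layer):
    next_phys = 0
    mapping = {}
    # initial sequential write of the file
    for lba in range(FILE_BLOCKS):
        mapping[lba] = next_phys
        next_phys += 1
    # random overwrites: the new version always goes to the head of the
    # log / a fresh flash page, and the old mapping entry is updated
    for _ in range(16):
        lba = random.randrange(FILE_BLOCKS)
        mapping[lba] = next_phys
        next_phys += 1
    layout = [mapping[lba] for lba in range(FILE_BLOCKS)]
    print(remapping_layer, layout)

scattered_layout("COW file system (LFS):")   # remapping visible to the host
scattered_layout("page-mapping FTL:     ")   # same effect, hidden below the FS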

> With the best regards,
> Vyacheslav Dubeyko.
> 
> 

-- 
Jaegeuk Kim
Samsung

