Message-id: <001901cf6380$43643c80$ca2cb580$@samsung.com>
Date: Tue, 29 Apr 2014 15:53:44 +0800
From: Chao Yu <chao2.yu@...sung.com>
To: jaegeuk.kim@...sung.com
Cc: linux-f2fs-devel@...ts.sourceforge.net,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: RE: [f2fs-dev][PATCH 3/3 v3] f2fs: fix to truncate inline data in inode page when setattr
Hi Kim,
> -----Original Message-----
> From: Jaegeuk Kim [mailto:jaegeuk.kim@...sung.com]
> Sent: Tuesday, April 29, 2014 2:16 PM
> To: Chao Yu
> Cc: linux-f2fs-devel@...ts.sourceforge.net; linux-fsdevel@...r.kernel.org;
> linux-kernel@...r.kernel.org
> Subject: Re: [f2fs-dev][PATCH 3/3 v3] f2fs: fix to truncate inline data in inode page when setattr
>
> Hi,
>
> 2014-04-29 (화), 14:03 +0800, Chao Yu:
> > Previously we did not truncate the inline data in the inode page when setattr
> > was called, so the following case could still read inline data that had already
> > been truncated:
> >
> > 1. write inline data
> > 2. ftruncate size to 0
> > 3. ftruncate size to max inline data size
> > 4. read from offset 0
> >
> > This patch introduces truncate_inline_data() to fix this problem.
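
[A minimal user-space sketch of the reproduction above. The mount point
/mnt/f2fs, the file name inline_test and the 3071-byte size used for step 3
are illustrative guesses, not values taken from the report; the real limit is
MAX_INLINE_DATA on the kernel side.]

/* repro.c: walk through the four steps above on an f2fs mount with inline_data */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
        char buf[32];
        int fd = open("/mnt/f2fs/inline_test", O_RDWR | O_CREAT | O_TRUNC, 0644);

        if (fd < 0)
                return 1;

        write(fd, "inline data", 11);           /* 1. write inline data           */
        ftruncate(fd, 0);                       /* 2. ftruncate size to 0         */
        ftruncate(fd, 3071);                    /* 3. ftruncate to ~max inline    */

        memset(buf, 0, sizeof(buf));
        pread(fd, buf, sizeof(buf) - 1, 0);     /* 4. read from offset 0          */

        /* without the fix, the stale "inline data" may still be visible here */
        printf("read back: \"%s\"\n", buf);

        close(fd);
        return 0;
}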
> >
> > change log from v1:
> > o fix a bug so that we do not truncate the first page's data after truncating inline data.
> > change log from v2:
> > o wait for writeback of the inode page if it was already submitted.
> >
> > Signed-off-by: Chao Yu <chao2.yu@...sung.com>
> > ---
> > fs/f2fs/f2fs.h | 1 +
> > fs/f2fs/file.c | 3 +++
> > fs/f2fs/inline.c | 19 +++++++++++++++++++
> > 3 files changed, 23 insertions(+)
> >
> > diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> > index 2b67679..676a2c6 100644
> > --- a/fs/f2fs/f2fs.h
> > +++ b/fs/f2fs/f2fs.h
> > @@ -1410,5 +1410,6 @@ bool f2fs_may_inline(struct inode *);
> > int f2fs_read_inline_data(struct inode *, struct page *);
> > int f2fs_convert_inline_data(struct inode *, pgoff_t);
> > int f2fs_write_inline_data(struct inode *, struct page *, unsigned int);
> > +void truncate_inline_data(struct inode *, u64);
> > int recover_inline_data(struct inode *, struct page *);
> > #endif
> > diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
> > index b9f4fbf..d97e5c4 100644
> > --- a/fs/f2fs/file.c
> > +++ b/fs/f2fs/file.c
> > @@ -369,6 +369,9 @@ static void truncate_partial_data_page(struct inode *inode, u64 from)
> > unsigned offset = from & (PAGE_CACHE_SIZE - 1);
> > struct page *page;
> >
> > + if (f2fs_has_inline_data(inode))
> > + return truncate_inline_data(inode, from);
> > +
> > if (!offset)
> > return;
> >
> > diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
> > index 3258c7c..6a13997 100644
> > --- a/fs/f2fs/inline.c
> > +++ b/fs/f2fs/inline.c
> > @@ -176,6 +176,25 @@ int f2fs_write_inline_data(struct inode *inode,
> > return 0;
> > }
> >
> > +void truncate_inline_data(struct inode *inode, u64 from)
> > +{
> > + struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
> > + struct page *ipage;
> > +
> > + if (from >= MAX_INLINE_DATA)
> > + return;
> > +
> > + ipage = get_node_page(sbi, inode->i_ino);
> > + if (IS_ERR(ipage))
> > + return;
> > +
> > + f2fs_wait_on_page_writeback(ipage, NODE);
>
> The get_node_page() triggers,
> grab_cache_page_write_begin()
> -> wait_for_stable_page()
> -> wait_on_page_writeback().
Oh, my mistake for missing this, thanks for pointing it out. :)
>
> Any bugs in there?
If we wait in grab_cache_page_write_begin(), we may hit a long delay or a
potential deadlock caused by the f2fs bio cache. So how about using
grab_cache_page() plus f2fs_wait_on_page_writeback() to avoid this?
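
[A rough, untested sketch of the idea, assuming the fs/f2fs/inline.c context of
the patch. It uses NODE_MAPPING(sbi) and deliberately skips the case where the
inode page is not yet in the node cache (which get_node_page() would read in);
it is only meant to show the grab_cache_page() + f2fs_wait_on_page_writeback()
ordering, not to be a final implementation.]

void truncate_inline_data(struct inode *inode, u64 from)
{
        struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
        struct page *ipage;

        if (from >= MAX_INLINE_DATA)
                return;

        /*
         * grab_cache_page() locks the cached node page without the
         * wait_for_stable_page() call hidden inside get_node_page().
         */
        ipage = grab_cache_page(NODE_MAPPING(sbi), inode->i_ino);
        if (!ipage)
                return;

        /* submit any pending merged bio for NODE pages, then wait for writeback */
        f2fs_wait_on_page_writeback(ipage, NODE);

        zero_user_segment(ipage, INLINE_DATA_OFFSET + from,
                                INLINE_DATA_OFFSET + MAX_INLINE_DATA);
        set_page_dirty(ipage);
        f2fs_put_page(ipage, 1);
}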
>
> > + zero_user_segment(ipage, INLINE_DATA_OFFSET + from,
> > + INLINE_DATA_OFFSET + MAX_INLINE_DATA);
> > + set_page_dirty(ipage);
> > + f2fs_put_page(ipage, 1);
> > +}
> > +
> > int recover_inline_data(struct inode *inode, struct page *npage)
> > {
> > struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
>
> --
> Jaegeuk Kim
> Samsung