Message-Id: <20111124105245.b252c65f.kamezawa.hiroyu@jp.fujitsu.com>
Date: Thu, 24 Nov 2011 10:52:45 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Cong Wang <amwang@...hat.com>
Cc: linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
Pekka Enberg <penberg@...nel.org>,
Christoph Hellwig <hch@....de>,
Hugh Dickins <hughd@...gle.com>,
Dave Hansen <dave@...ux.vnet.ibm.com>,
Lennart Poettering <lennart@...ttering.net>,
Kay Sievers <kay.sievers@...y.org>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
linux-mm@...ck.org
Subject: Re: [V3 PATCH 1/2] tmpfs: add fallocate support
I have a question.
On Wed, 23 Nov 2011 16:53:30 +0800
Cong Wang <amwang@...hat.com> wrote:
> Systemd needs tmpfs to support fallocate [1], so that it can
> safely use mmap() on files on the /dev/shm filesystem without
> risking SIGBUS. The glibc fallback loop for -ENOSYS on
> fallocate is just ugly.
>
> This patch adds fallocate support to tmpfs; since we already
> have shmem_truncate_range(), it is easy to add
> FALLOC_FL_PUNCH_HOLE support as well.
>
> 1. http://lkml.org/lkml/2011/10/20/275
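
(For context, illustrative only and not part of the patch: the kind of
userspace sequence this enables is preallocating a file on /dev/shm
before mmap()ing it, so that later writes through the mapping cannot
SIGBUS because an on-demand allocation failed. The file name and size
below are made up.)

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <linux/falloc.h>
	#include <sys/mman.h>
	#include <unistd.h>

	static int shm_map_example(void)
	{
		size_t len = 1 << 20;
		int fd = open("/dev/shm/example", O_RDWR | O_CREAT, 0600);
		void *p;

		if (fd < 0)
			return -1;

		/* Reserve the whole range up front; with this patch tmpfs
		 * supports this instead of returning -EOPNOTSUPP. */
		if (fallocate(fd, 0, 0, len) < 0) {
			close(fd);
			return -1;
		}

		p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
		if (p == MAP_FAILED) {
			close(fd);
			return -1;
		}

		/* ... use the mapping; a range can later be released with
		 * fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		 *           off, length); */

		munmap(p, len);
		close(fd);
		return 0;
	}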
>
> V2->V3:
> a) Read i_size directly after holding i_mutex;
> b) Call page_cache_release() too after shmem_getpage();
> c) Undo previous changes when -ENOSPC.
>
> Cc: Pekka Enberg <penberg@...nel.org>
> Cc: Christoph Hellwig <hch@....de>
> Cc: Hugh Dickins <hughd@...gle.com>
> Cc: Dave Hansen <dave@...ux.vnet.ibm.com>
> Cc: Lennart Poettering <lennart@...ttering.net>
> Cc: Kay Sievers <kay.sievers@...y.org>
> Cc: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
> Signed-off-by: WANG Cong <amwang@...hat.com>
>
> ---
> mm/shmem.c | 65 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> 1 files changed, 65 insertions(+), 0 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index d672250..65f7a27 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -30,6 +30,7 @@
> #include <linux/mm.h>
> #include <linux/export.h>
> #include <linux/swap.h>
> +#include <linux/falloc.h>
>
> static struct vfsmount *shm_mnt;
>
> @@ -1431,6 +1432,69 @@ static ssize_t shmem_file_splice_read(struct file *in, loff_t *ppos,
> return error;
> }
>
> +static void shmem_truncate_page(struct inode *inode, pgoff_t index)
> +{
> + loff_t start = index << PAGE_CACHE_SHIFT;
> + loff_t end = ((index + 1) << PAGE_CACHE_SHIFT) - 1;
> + shmem_truncate_range(inode, start, end);
> +}
> +
> +static long shmem_fallocate(struct file *file, int mode,
> + loff_t offset, loff_t len)
> +{
> + struct inode *inode = file->f_path.dentry->d_inode;
> + pgoff_t start = offset >> PAGE_CACHE_SHIFT;
> + pgoff_t end = DIV_ROUND_UP((offset + len), PAGE_CACHE_SIZE);
> + pgoff_t index = start;
> + loff_t i_size;
> + struct page *page = NULL;
> + int ret = 0;
> +
> + mutex_lock(&inode->i_mutex);
> + i_size = inode->i_size;
> + if (mode & FALLOC_FL_PUNCH_HOLE) {
> + if (!(offset > i_size || (end << PAGE_CACHE_SHIFT) > i_size))
> + shmem_truncate_range(inode, offset,
> + (end << PAGE_CACHE_SHIFT) - 1);
> + goto unlock;
> + }
> +
> + if (!(mode & FALLOC_FL_KEEP_SIZE)) {
> + ret = inode_newsize_ok(inode, (offset + len));
> + if (ret)
> + goto unlock;
> + }
> +
> + while (index < end) {
> + ret = shmem_getpage(inode, index, &page, SGP_WRITE, NULL);
If the 'page' for index already exists before this call, shmem_getpage()
will return it without allocation. Then the page may not be zero-cleared;
I think it should be, but I'm not sure at which point we should zero-clear it.

I'm also not sure how fallocate should behave on error. Assuming some
blocks already exist before fallocate(), possible side effects would be:
 - the contents are zero-cleared even if fallocate fails, or
 - the contents are deallocated in the undo path if fallocate fails.
Which is intended?

Maybe an update to the fallocate(2) man page would be appreciated...

Anyway, don't you need to zero-clear when you find an existing page here?
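
Just to illustrate what I am asking about (a sketch only, not a proposal;
the 'existed' flag is hypothetical, shmem_getpage() does not report that
today, and whether overwriting old contents is even the right semantics
is exactly the question):

	while (index < end) {
		bool existed = false;	/* hypothetical: shmem_getpage() would need to report it */

		ret = shmem_getpage(inode, index, &page, SGP_WRITE, NULL);
		if (ret)
			break;
		if (existed) {
			/* zero the old contents so the fallocated range
			 * reads as zeroes, like a freshly allocated page */
			clear_highpage(page);	/* linux/highmem.h */
			flush_dcache_page(page);
			set_page_dirty(page);
		}
		unlock_page(page);
		page_cache_release(page);
		index++;
	}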
Thanks,
-Kame