Message-ID: <alpine.LSU.2.11.1510291208000.3475@eggly.anvils>
Date: Thu, 29 Oct 2015 12:13:35 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
cc: Josef Bacik <jbacik@...com>, Yu Zhao <yuzhao@...gle.com>,
Ying Huang <ying.huang@...ux.intel.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: [PATCH] tmpfs: avoid a little creat and stat slowdown

LKP reports that v4.2 commit afa2db2fb6f1 ("tmpfs: truncate prealloc
blocks past i_size") causes a 14.5% slowdown in the AIM9 creat-clo
benchmark.

creat-clo does just what you'd expect from the name, and creat's O_TRUNC
on a 0-length file does indeed incur more overhead now that
shmem_setattr() tests "0 <= 0" instead of "0 < 0".
I'm not sure how much we care, but I think it would not be too VW-like
to add in a check for whether any pages (or swap) are allocated: if none
are allocated, there's none to remove from the radix_tree. At first I
thought that check would be good enough for the unmaps too, but no: we
should not skip the unlikely case of unmapping pages beyond the new EOF,
which were COWed from holes which have now been reclaimed, leaving none.
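To illustrate the case I mean (my own example, not taken from the
report): a MAP_PRIVATE mapping can hold CoW copies of pages faulted in
from holes; if the file pages behind them are later reclaimed, the file
ends up with nothing allocated while the private copies still sit in
the page tables beyond what will become the new EOF, so the unmaps must
not be gated on info->alloced. Something like:

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* hypothetical path; any file on a tmpfs mount */
	int fd = open("/dev/shm/cow-beyond-eof", O_RDWR | O_CREAT, 0600);
	long page = sysconf(_SC_PAGESIZE);
	char *map;

	ftruncate(fd, 2 * page);	/* sparse file: nothing but holes */
	map = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE, fd, 0);
	map[page] = 1;	/* write fault: private CoW copy of the hole page */

	/*
	 * If the file page backing that CoW copy is later reclaimed, the
	 * file has no pages allocated; but on this shrinking truncate the
	 * CoW copy beyond the new EOF must still be unmapped.
	 */
	ftruncate(fd, page);

	munmap(map, 2 * page);
	close(fd);
	unlink("/dev/shm/cow-beyond-eof");
	return 0;
}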
This gives me an 8.5% speedup: on Haswell instead of LKP's Westmere,
and running a debug config before and after; I hope those differences
account for the lesser speedup.

And probably someone has a benchmark where a thousand threads keep on
stat'ing the same file repeatedly: forestall that report by adjusting
v4.3 commit 44a30220bc0a ("shmem: recalculate file inode when fstat")
not to take the spinlock in shmem_getattr() when there's no work to do.
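The kind of stat-hammering workload I have in mind (a hypothetical
sketch, not an actual reported benchmark) is just many threads calling
stat() on one tmpfs file in a loop; with the change below,
shmem_getattr() only takes info->lock when alloced/swapped and nrpages
disagree.

#include <pthread.h>
#include <stdio.h>
#include <sys/stat.h>

#define NTHREADS 64	/* scale up towards "a thousand" as you like */

static void *hammer(void *arg)
{
	const char *path = arg;
	struct stat st;
	int i;

	for (i = 0; i < 1000000; i++)
		stat(path, &st);
	return NULL;
}

int main(void)
{
	const char *path = "/dev/shm/stat-test";	/* any tmpfs file */
	pthread_t tid[NTHREADS];
	FILE *f = fopen(path, "w");
	int i;

	if (f)
		fclose(f);
	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, hammer, (void *)path);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}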
Reported-by: Ying Huang <ying.huang@...ux.intel.com>
Signed-off-by: Hugh Dickins <hughd@...gle.com>
---
mm/shmem.c | 22 ++++++++++++++--------
1 file changed, 14 insertions(+), 8 deletions(-)
--- 4.3-rc7/mm/shmem.c 2015-09-12 18:30:20.857039763 -0700
+++ linux/mm/shmem.c 2015-10-25 11:49:19.931973850 -0700
@@ -548,12 +548,12 @@ static int shmem_getattr(struct vfsmount
struct inode *inode = dentry->d_inode;
struct shmem_inode_info *info = SHMEM_I(inode);
- spin_lock(&info->lock);
- shmem_recalc_inode(inode);
- spin_unlock(&info->lock);
-
+ if (info->alloced - info->swapped != inode->i_mapping->nrpages) {
+ spin_lock(&info->lock);
+ shmem_recalc_inode(inode);
+ spin_unlock(&info->lock);
+ }
generic_fillattr(inode, stat);
-
return 0;
}
@@ -586,10 +586,16 @@ static int shmem_setattr(struct dentry *
}
if (newsize <= oldsize) {
loff_t holebegin = round_up(newsize, PAGE_SIZE);
- unmap_mapping_range(inode->i_mapping, holebegin, 0, 1);
- shmem_truncate_range(inode, newsize, (loff_t)-1);
+ if (oldsize > holebegin)
+ unmap_mapping_range(inode->i_mapping,
+ holebegin, 0, 1);
+ if (info->alloced)
+ shmem_truncate_range(inode,
+ newsize, (loff_t)-1);
/* unmap again to remove racily COWed private pages */
- unmap_mapping_range(inode->i_mapping, holebegin, 0, 1);
+ if (oldsize > holebegin)
+ unmap_mapping_range(inode->i_mapping,
+ holebegin, 0, 1);
}
}
--