Message-Id: <1253179879.2606.37.camel@ymzhang>
Date: Thu, 17 Sep 2009 17:31:19 +0800
From: "Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
To: LKML <linux-kernel@...r.kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Ingo Molnar <mingo@...e.hu>,
Arjan van de Ven <arjan@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Andi Kleen <andi@...stfloor.org>
Subject: aim7 scalability issue on 4 socket machine
Aim7 results are bad on my new Nehalem machine (4 sockets * 8 cores * 2
threads = 64 logical cpus). Perf counters show spinlocks consume 70% of
cpu time on the machine. lock_stat shows anon_vma->lock causes most of
the spinlock contention, and the function tracer shows the call chain
below creates it:

do_brk => vma_merge => vma_adjust
Aim7 consists of many subtests. One of them forks lots of processes,
and every process calls sbrk 1000 times to grow/shrink its heap. The
heap vmas of all the child processes point to the same anon_vma and so
share the same anon_vma->lock. Every sbrk call goes through
do_brk => vma_merge => vma_adjust, which takes anon_vma->lock, and that
creates the spinlock contention.
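As a rough illustration, the access pattern looks like the sketch
below. This is my own reconstruction, not the actual aim7 source;
NCHILD, NITER and the 4096-byte step are assumptions.

/*
 * Sketch of the contended aim7 pattern (not the real benchmark code):
 * a parent forks many children, and each child grows/shrinks its heap
 * with sbrk(). The children's heap vmas all share the parent's
 * anon_vma, so every sbrk hits the same anon_vma->lock.
 */
#include <unistd.h>
#include <sys/wait.h>

#define NCHILD	64	/* assumption: one per logical cpu */
#define NITER	1000	/* sbrk calls per child, per the description */

int main(void)
{
	int i, j;

	for (i = 0; i < NCHILD; i++) {
		if (fork() == 0) {
			for (j = 0; j < NITER; j++) {
				sbrk(4096);	/* grow: do_brk => vma_merge => vma_adjust */
				sbrk(-4096);	/* shrink the heap back */
			}
			_exit(0);
		}
	}
	while (wait(NULL) > 0)
		;
	return 0;
}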
There is a comment above the spin_lock(&anon_vma->lock) call saying
that the anon_vma lock isn't really needed when only vma->vm_end
changes, but asking whether that case is worth optimizing. In fact,
anon_vma->lock protects the vma list starting at anon_vma->head: it is
needed when an entry is inserted into or deleted from that list, or
when the list is walked. vma_adjust does none of those when only
vma->vm_end changes.
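For reference, the structure involved is tiny. Simplified from
include/linux/rmap.h in 2.6.31 (comment wording mine):

struct anon_vma {
	spinlock_t lock;	/* serializes access to the vma list below */
	struct list_head head;	/* chain of private "related" vmas */
};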
The patch below implements that optimization.

Tested with kernel 2.6.31-rc8: the aim7 result on this machine improves
by about 150%.
Signed-off-by: Zhang Yanmin <yanmin_zhang@...ux.intel.com>
---
--- linux-2.6.31-rc8/mm/mmap.c 2009-09-03 10:03:57.000000000 +0800
+++ linux-2.6.31-rc8_aim7/mm/mmap.c 2009-09-17 19:11:20.000000000 +0800
@@ -512,6 +512,7 @@ void vma_adjust(struct vm_area_struct *v
struct anon_vma *anon_vma = NULL;
long adjust_next = 0;
int remove_next = 0;
+ int anon_vma_use_lock;
if (next && !insert) {
if (end >= next->vm_end) {
@@ -568,22 +569,32 @@ again: remove_next = 1 + (end > next->
}
}
- /*
- * When changing only vma->vm_end, we don't really need
- * anon_vma lock: but is that case worth optimizing out?
- */
if (vma->anon_vma)
anon_vma = vma->anon_vma;
+ anon_vma_use_lock = 0;
if (anon_vma) {
- spin_lock(&anon_vma->lock);
/*
- * Easily overlooked: when mprotect shifts the boundary,
- * make sure the expanding vma has anon_vma set if the
- * shrinking vma had, to cover any anon pages imported.
+ * When changing only vma->vm_end, we don't really need
+ * anon_vma lock.
+ * anon_vma->lock protects the list starting from
+ * anon_vma->head. If we don't remove a vma from or
+ * insert one into that list, and don't walk the
+ * list, we don't need anon_vma->lock.
*/
- if (importer && !importer->anon_vma) {
- importer->anon_vma = anon_vma;
- __anon_vma_link(importer);
+ if (remove_next ||
+ insert ||
+ (importer && !importer->anon_vma)) {
+ anon_vma_use_lock = 1;
+ spin_lock(&anon_vma->lock);
+ /*
+ * Easily overlooked: when mprotect shifts the boundary,
+ * make sure the expanding vma has anon_vma set if the
+ * shrinking vma had, to cover any anon pages imported.
+ */
+ if (importer && !importer->anon_vma) {
+ importer->anon_vma = anon_vma;
+ __anon_vma_link(importer);
+ }
}
}
@@ -628,7 +639,7 @@ again: remove_next = 1 + (end > next->
__insert_vm_struct(mm, insert);
}
- if (anon_vma)
+ if (anon_vma_use_lock)
spin_unlock(&anon_vma->lock);
if (mapping)
spin_unlock(&mapping->i_mmap_lock);