Message-ID: <20090827160720.GC23722@redhat.com>
Date: Thu, 27 Aug 2009 19:07:20 +0300
From: "Michael S. Tsirkin" <mst@...hat.com>
To: netdev@...r.kernel.org, virtualization@...ts.linux-foundation.org,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org, mingo@...e.hu,
linux-mm@...ck.org, akpm@...ux-foundation.org, hpa@...or.com,
gregory.haskins@...il.com, Rusty Russell <rusty@...tcorp.com.au>,
s.hetze@...ux-ag.com
Subject: [PATCHv5 2/3] mm: reduce atomic use on use_mm fast path
When the mm being switched to matches the current active mm, we don't need
to increment and then drop the mm count. Making that increment/decrement
conditional reduces contention on the mm_count cache line on SMP systems.
Acked-by: Andrea Arcangeli <aarcange@...hat.com>
Signed-off-by: Michael S. Tsirkin <mst@...hat.com>
---
mm/mmu_context.c | 9 ++++++---
1 files changed, 6 insertions(+), 3 deletions(-)
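For context, a minimal sketch (not part of this patch) of how a kernel
thread such as a vhost worker might bracket userspace access with
use_mm()/unuse_mm(); the struct and function names below are made up for
illustration only. With this change, back-to-back work items for the same
owner skip the atomic_inc()/mmdrop() pair, since active_mm already matches.

	/*
	 * Illustrative sketch only: struct example_work and
	 * example_handle_work() are hypothetical, not vhost code.
	 */
	#include <linux/mmu_context.h>	/* use_mm(), unuse_mm() */
	#include <linux/uaccess.h>	/* copy_from_user() */
	#include <linux/string.h>	/* memset() */

	struct example_work {
		struct mm_struct *mm;	/* owner's mm, pinned elsewhere */
		void __user *ubuf;	/* userspace buffer to read */
		char kbuf[64];
	};

	static void example_handle_work(struct example_work *w)
	{
		/*
		 * Adopt the owner's address space.  On the fast path,
		 * tsk->active_mm is already w->mm, so with this patch
		 * use_mm() neither bumps nor drops mm_count.
		 */
		use_mm(w->mm);

		if (copy_from_user(w->kbuf, w->ubuf, sizeof(w->kbuf)))
			memset(w->kbuf, 0, sizeof(w->kbuf));	/* treat fault as empty data */

		unuse_mm(w->mm);
	}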
diff --git a/mm/mmu_context.c b/mm/mmu_context.c
index 9989c2f..0777654 100644
--- a/mm/mmu_context.c
+++ b/mm/mmu_context.c
@@ -27,13 +27,16 @@ void use_mm(struct mm_struct *mm)
 	task_lock(tsk);
 	active_mm = tsk->active_mm;
-	atomic_inc(&mm->mm_count);
+	if (active_mm != mm) {
+		atomic_inc(&mm->mm_count);
+		tsk->active_mm = mm;
+	}
 	tsk->mm = mm;
-	tsk->active_mm = mm;
 	switch_mm(active_mm, mm, tsk);
 	task_unlock(tsk);
-	mmdrop(active_mm);
+	if (active_mm != mm)
+		mmdrop(active_mm);
 }
 EXPORT_SYMBOL_GPL(use_mm);
--
1.6.2.5