Message-Id: <1248421981-31865-1-git-send-email-thellstrom@vmware.com>
Date: Fri, 24 Jul 2009 09:53:01 +0200
From: Thomas Hellstrom <thellstrom@...are.com>
To: linux-kernel@...r.kernel.org
Cc: mingo@...e.hu, venkatesh.pallipadi@...el.com,
Thomas Hellstrom <thellstrom@...are.com>
Subject: [PATCH] x86: Use clflush() instead of wbinvd() whenever possible when changing mappings
The current code uses wbinvd() when the area to flush is larger than 4 MB. Although this
may be faster than using clflush(), wbinvd() writes back and invalidates the entire
cache, so its effect on irq latencies can be catastrophic on systems with large caches.
Therefore use clflush() whenever possible and accept the slight performance hit.
Signed-off-by: Thomas Hellstrom <thellstrom@...are.com>
---
arch/x86/mm/pageattr.c | 5 ++---
1 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 1b734d7..d4327db 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -209,13 +209,12 @@ static void cpa_flush_array(unsigned long *start, int numpages, int cache,
 			    int in_flags, struct page **pages)
 {
 	unsigned int i, level;
-	unsigned long do_wbinvd = cache && numpages >= 1024; /* 4M threshold */
 
 	BUG_ON(irqs_disabled());
 
-	on_each_cpu(__cpa_flush_all, (void *) do_wbinvd, 1);
+	on_each_cpu(__cpa_flush_all, (void *) 0UL, 1);
 
-	if (!cache || do_wbinvd)
+	if (!cache)
 		return;
 
 	/*
--
1.6.1.3
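
For context, a minimal user-space sketch of the approach the patch favors: write back
and invalidate a range one cache line at a time with clflush rather than dumping the
whole cache with wbinvd. The flush_range() helper name and the hard-coded 64-byte line
size are illustrative assumptions, not the kernel code; the kernel's clflush_cache_range()
derives the real line size from CPUID.

#include <stddef.h>
#include <stdint.h>
#include <emmintrin.h>          /* _mm_clflush(), _mm_mfence() (SSE2) */

#define CACHE_LINE_SIZE 64      /* assumed here; the kernel queries CPUID for it */

/* Flush only the cache lines covering [addr, addr + size) back to memory. */
static void flush_range(const void *addr, size_t size)
{
	uintptr_t p   = (uintptr_t)addr & ~(uintptr_t)(CACHE_LINE_SIZE - 1);
	uintptr_t end = (uintptr_t)addr + size;

	for (; p < end; p += CACHE_LINE_SIZE)
		_mm_clflush((const void *)p);

	/* clflush is weakly ordered; mfence makes the flushes globally visible. */
	_mm_mfence();
}

With 64-byte lines this costs 64 clflushes per 4 KB page instead of one wbinvd, but it
avoids the long, non-interruptible full-cache writeback on every CPU that the changelog
identifies as the irq-latency problem.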