Message-Id: <20120510.164906.434297150.d.hatayama@jp.fujitsu.com>
Date: Thu, 10 May 2012 16:49:06 +0900 (JST)
From: HATAYAMA Daisuke <d.hatayama@...fujitsu.com>
To: ak@...ux.intel.com
Cc: a.p.zijlstra@...llo.nl, alex.shi@...el.com, mgorman@...e.de,
npiggin@...il.com, tglx@...utronix.de, mingo@...hat.com,
hpa@...or.com, arnd@...db.de, rostedt@...dmis.org,
fweisbec@...il.com, jeremy@...p.org, gregkh@...uxfoundation.org,
glommer@...hat.com, riel@...hat.com, luto@....edu, avi@...hat.com,
len.brown@...el.com, dhowells@...hat.com, fenghua.yu@...el.com,
borislav.petkov@....com, yinghai@...nel.org, cpw@....com,
steiner@....com, akpm@...ux-foundation.org, penberg@...nel.org,
hughd@...gle.com, rientjes@...gle.com,
kosaki.motohiro@...fujitsu.com, n-horiguchi@...jp.nec.com,
paul.gortmaker@...driver.com, trenn@...e.de, tj@...nel.org,
oleg@...hat.com, axboe@...nel.dk, kamezawa.hiroyu@...fujitsu.com,
viro@...iv.linux.org.uk, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] TLB flush optimization
From: Andi Kleen <ak@...ux.intel.com>
Subject: Re: [PATCH v3] TLB flush optimization
Date: Wed, 9 May 2012 16:45:12 -0700
>> Have you tried what happens if you get rid of the funny multi-vector-ipi
>> scheme and use the generic smp_call functions?
>
> Yes we did. It's much faster on larger systems.
>
> But haven't sent the patch yet because wasn't sure if it wasn't slower
> on small systems.
>
> -Andi
I'm not sure where the performance gain comes from. My guess is that
it comes down to the waiting time of the dedicated multi-vector-ipi
scheme versus the overhead of the generic smp_call code, which does
processing that is unnecessary for a TLB flush: on small systems the
dedicated scheme is the cheaper of the two, and on large systems the
converse holds. Is this correct?
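
For my own understanding, here is roughly how I imagine the generic
path would look; this is only a sketch, and flush_info, flush_tlb_func
and flush_tlb_others_generic are illustrative names of my own, not
taken from your patch:

	#include <linux/smp.h>
	#include <linux/sched.h>
	#include <asm/tlbflush.h>

	/* Hypothetical argument struct passed to every target CPU. */
	struct flush_info {
		struct mm_struct *mm;
		unsigned long start, end;
	};

	static void flush_tlb_func(void *data)
	{
		struct flush_info *f = data;

		/* Runs on each target CPU from the IPI handler; flush
		 * only if this CPU is still running the mm being
		 * invalidated, otherwise there is nothing to do. */
		if (!f->mm || f->mm == current->active_mm)
			local_flush_tlb();
	}

	static void flush_tlb_others_generic(const struct cpumask *cpumask,
					     struct mm_struct *mm,
					     unsigned long start,
					     unsigned long end)
	{
		struct flush_info f = {
			.mm = mm, .start = start, .end = end,
		};

		/* wait=true: return only after every remote CPU has run
		 * the callback, so the caller sees a completed flush.
		 * This synchronous wait is the "waiting time" I mean. */
		smp_call_function_many(cpumask, flush_tlb_func, &f, true);
	}

If the patch looks roughly like this, then the per-call setup in
smp_call_function_many() and the synchronous wait would be the
generic-side costs I was referring to above.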
Thanks.
HATAYAMA, Daisuke