Message-ID: <20071030073736.GA21843@elte.hu>
Date: Tue, 30 Oct 2007 08:37:36 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Rusty Russell <rusty@...tcorp.com.au>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Glauber de Oliveira Costa <gcosta@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
Jeremy Fitzhardinge <jeremy@...p.org>, --cc@...hat.com,
avi@...amnet.com, kvm-devel@...ts.sourceforge.net,
John Stultz <johnstul@...ibm.com>
Subject: Re: [PATCH] raise tsc clocksource rating

* Rusty Russell <rusty@...tcorp.com.au> wrote:
> No. tsc is very good, it's not perfect. If a paravirt clock
> registers 400 it really means "pick me over the tsc".

often the TSC is not perfect, but _IF_ it is perfect, using the
paravirt driver is a performance pessimisation.

the main problem is that at the moment there is no mechanism to convey
to the guest that the TSC is "perfect", and to convey the calibration
values.

> That's *why* they use > 400: it's in the documentation.

static values do not capture conditional quality like that of the TSC.
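
(to illustrate what "static" means here: the rating is a plain integer
in struct clocksource, fixed when the driver registers. Rough sketch
below - the exact field layout and read() signature vary between kernel
versions, and "kvm-clock"/kvm_clock_read() are placeholder names, not
the actual patch:)

	#include <linux/init.h>
	#include <linux/clocksource.h>

	/* placeholder read callback - a real driver would read the
	 * paravirt clock here; this stub only makes the sketch complete */
	static cycle_t kvm_clock_read(void)
	{
		return 0;
	}

	static struct clocksource clocksource_kvm = {
		.name	= "kvm-clock",
		.rating	= 400,	/* static: always "better than the TSC (300)" */
		.read	= kvm_clock_read,
		.mask	= CLOCKSOURCE_MASK(64),
		.flags	= CLOCK_SOURCE_IS_CONTINUOUS,
	};

	static int __init kvm_clocksource_init(void)
	{
		/* mult/shift setup omitted for brevity */
		return clocksource_register(&clocksource_kvm);
	}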

and just in case it's not obvious: i am not arguing for the inclusion
of the patch, i'm just pointing out the plain fact that in the case
where the TSC _is_ reliable, having 5 different clocksource drivers for
it has obvious disadvantages. Anyone arguing against that simple point
needs his head examined :) Once we can pass around calibration
information from the host to the guest (which we don't do at the
moment) there will be no reason not to use the native clocksource
driver in the guest.
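
(concretely, "passing calibration" could be as simple as the host
exposing its measured TSC frequency to the guest. The pv_get_tsc_khz()
hook below is purely hypothetical - a stand-in for whatever interface
gets agreed on - but with something like it the guest could program the
native tsc clocksource directly instead of re-calibrating:)

	#include <linux/clocksource.h>

	/* hypothetical paravirt hook: returns the host-measured TSC
	 * frequency in kHz.  No such interface exists today - it only
	 * marks where the host->guest calibration channel would plug in */
	extern unsigned long pv_get_tsc_khz(void);

	static void tsc_use_host_calibration(struct clocksource *cs_tsc)
	{
		unsigned long tsc_khz = pv_get_tsc_khz();

		/* derive the clocksource multiplier from the host-provided
		 * frequency, for the shift the tsc clocksource already uses */
		cs_tsc->mult = clocksource_khz2mult(tsc_khz, cs_tsc->shift);
	}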

in the long run, the paravirt clocksource drivers must become
_fallback_ drivers, for the case when the TSC is not perfect, instead
of the current "let's try to make it reliable but also nearly as fast
as the TSC" approach, which leads to inevitable breakage on SMP.

	Ingo