Message-ID: <18757.42682.109305.676647@drongo.ozlabs.ibm.com>
Date: Mon, 15 Dec 2008 11:37:14 +1100
From: Paul Mackerras <paulus@...ba.org>
To: Ingo Molnar <mingo@...e.hu>
Cc: eranian@...il.com, Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Vince Weaver <vince@...ter.net>, linux-kernel@...r.kernel.org,
Thomas Gleixner <tglx@...utronix.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Eric Dumazet <dada1@...mosbay.com>,
Robert Richter <robert.richter@....com>,
Arjan van de Ven <arjan@...radead.org>,
Peter Anvin <hpa@...or.com>,
"David S. Miller" <davem@...emloft.net>
Subject: Re: [patch] Performance Counters for Linux, v3

Ingo Molnar writes:
> * stephane eranian <eranian@...glemail.com> wrote:
>
> > Hi,
> >
> > Given the level of abstraction you are using for the API, and given
> > your argument that the kernel can do the HW resource scheduling better
> > than anybody else:
> >
> > What happens in the following test case:
> >
> > - 2-way system (cpu0, cpu1)
> >
> > - on cpu0, two processes P1, P2, each self-monitoring and counting event E1.
> > Event E1 can only be measured on counter C1.
> >
> > - on cpu1, there is a cpu-wide session, monitoring event E1, thus using C1
> >
> > - the scheduler decides to migrate P1 onto CPU1. You now have a
> > conflict on C1.
> >
> > How is this managed?
>
> If there's a single unit of a shareable resource [such as an event
> counter, or a physical CPU], then there are just three main possibilities:
> either user 1 gets it all, or user 2 gets it all, or they share it.
>
> We've implemented the essence of these variants, with sharing the resource
> being the sane default, and with the sysadmin also having a configuration
> vector to reserve the resource to himself permanently. (There could be
> more variations of this.)
>
> What is your point?

Note that Stephane said *counting* event E1.

One of the important things about counting (as opposed to sampling) is
that it matters whether the event was counted the whole time or only
part of the time.  Counting thus puts constraints on counter scheduling
and reporting that don't apply to sampling.

In other words, if I'm counting an event, I want it to be counted all
the time (i.e. whenever the task is executing, for a per-task counter,
or continuously for a per-cpu counter).  If conflicts force the kernel
to stop counting the event for part of the time, that is very much
second-best: I absolutely need to know that it happened, and also when
the kernel started and stopped counting the event, so that I can scale
the result to get some idea of what it would have been had the event
been counted the whole time.
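
To make the scaling concrete, here is a rough sketch.  It is written
against the perf_event_open() interface and the
PERF_FORMAT_TOTAL_TIME_ENABLED/RUNNING read-format flags as they exist
in mainline today, purely as an illustration; the names may not match
what the v3 patch set actually exposes.

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* With the TOTAL_TIME_ENABLED/RUNNING read-format flags, read() returns
 * the raw count plus how long the counter was enabled and how long it
 * was actually on the hardware.
 */
struct read_format {
        uint64_t value;         /* raw event count */
        uint64_t time_enabled;  /* ns the counter was enabled */
        uint64_t time_running;  /* ns it was actually counting */
};

int main(void)
{
        struct perf_event_attr attr;
        struct read_format rf;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_CPU_CYCLES;  /* stand-in for "event E1" */
        attr.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
                           PERF_FORMAT_TOTAL_TIME_RUNNING;

        /* Self-monitoring, per-task counter (pid 0, any cpu); a cpu-wide
         * session like the one in Stephane's example would pass pid = -1
         * and an explicit cpu number instead.
         */
        fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0)
                return 1;

        /* ... run the workload being measured ... */

        if (read(fd, &rf, sizeof(rf)) != sizeof(rf))
                return 1;

        /* If the kernel had to time-share the counter, scale the count up
         * to an estimate of a full-time count; the caller can tell that
         * this happened because time_running < time_enabled.
         */
        if (rf.time_running && rf.time_running < rf.time_enabled)
                rf.value = (uint64_t)((double)rf.value *
                                      rf.time_enabled / rf.time_running);

        printf("count (scaled): %llu\n", (unsigned long long)rf.value);
        return 0;
}

The essential reporting is the pair of times: time_running being less
than time_enabled tells the caller that scaling happened at all, and
their ratio gives the scaling factor.
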
Now, I haven't digested V4 yet, so you might have already implemented
something like that. Have you? :)

Paul.