Message-ID: <20130705081516.GP18898@dyad.programming.kicks-ass.net>
Date: Fri, 5 Jul 2013 10:15:16 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: "Yan, Zheng" <zheng.z.yan@...el.com>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org, eranian@...gle.com,
andi@...stfloor.org
Subject: Re: [PATCH v2 4/7] perf, x86: Save/restore LBR stack during context
switch
On Fri, Jul 05, 2013 at 01:36:24PM +0800, Yan, Zheng wrote:
> On 07/04/2013 08:45 PM, Peter Zijlstra wrote:
> > On Mon, Jul 01, 2013 at 03:23:04PM +0800, Yan, Zheng wrote:
> >
> >> @@ -2488,25 +2508,31 @@ static void perf_branch_stack_sched_in(struct task_struct *prev,
> >>
> >> list_for_each_entry_rcu(pmu, &pmus, entry) {
> >> cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
> >> + task_ctx = cpuctx->task_ctx;
> >>
> >> /*
> >> - * check if the context has at least one
> >> - * event using PERF_SAMPLE_BRANCH_STACK
> >> + * force flush the branch stack if there are cpu-wide events
> >> + * using PERF_SAMPLE_BRANCH_STACK
> >> + *
> >> + * save/restore the branch stack if the task context has
> >> + * at least one event using PERF_SAMPLE_BRANCH_STACK
> >> */
> >> - if (cpuctx->ctx.nr_branch_stack > 0
> >> - && pmu->flush_branch_stack) {
> >> -
> >> + bool force_flush = cpuctx->ctx.nr_branch_stack > 0;
> >> + if (pmu->branch_stack_sched &&
> >> + (force_flush ||
> >> + (task_ctx && task_ctx->nr_branch_stack > 0))) {
> >> pmu = cpuctx->ctx.pmu;
> >>
> >> - perf_ctx_lock(cpuctx, cpuctx->task_ctx);
> >> + perf_ctx_lock(cpuctx, task_ctx);
> >>
> >> perf_pmu_disable(pmu);
> >>
> >> - pmu->flush_branch_stack();
> >> + pmu->branch_stack_sched(task_ctx,
> >> + sched_in, force_flush);
> >>
> >> perf_pmu_enable(pmu);
> >>
> >> - perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
> >> + perf_ctx_unlock(cpuctx, task_ctx);
> >> }
> >> }
> >>
> >
> > I never really liked this; and yes, I know I wrote part of it. Is there
> > any way we can get rid of this and do it 'properly' through the events
> > that get scheduled?
> >
> > After all, the LBR usage is through the events, so scheduling the events
> > should also manage the LBR state.
> >
> > What is missing for that to work?
> >
>
> the LBR is a shared resource; it can be used by multiple events at the same time.
Yeah, so? There are tons of shared resources in the PMU already.
> Strictly speaking, the LBR is associated with the task, not with an event.
Wrong! It _is_ associated with events. Events are all there is. Events can be
associated with tasks, but that's completely irrelevant.
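
A shared resource can still be driven from the events; a refcount on the
event add/del path is all it takes. A minimal sketch of the idea (the names
below are made up for illustration, not the actual x86 code):

	static int lbr_users;	/* per-cpu in reality; simplified here */

	static void lbr_event_add(struct perf_event *event)
	{
		if (!lbr_users++)
			__enable_lbr();		/* first user turns the LBR on */
	}

	static void lbr_event_del(struct perf_event *event)
	{
		if (!--lbr_users)
			__disable_lbr();	/* last user turns it off */
	}

Saving/restoring the stack contents could hang off the same add/del (or
sched in/out) path instead of a separate context-switch flush hook.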
> One example:
> there are 5 events using the LBR stack feature, but only 4 counters.
> So these events need to be scheduled. Saving/restoring the LBR per event is
> clearly wrong.
That is a different kind of scheduling, and you're wrong. Look at
perf_rotate_context(): we disable everything with perf_pmu_disable() and
re-enable the whole thing with perf_pmu_enable(); on both sides of that the
LBR would be running.
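
From memory the rotation path looks roughly like this (simplified; see
kernel/events/core.c for the real thing):

	static void perf_rotate_context(struct perf_cpu_context *cpuctx)
	{
		struct perf_event_context *ctx = cpuctx->task_ctx;

		perf_ctx_lock(cpuctx, ctx);
		perf_pmu_disable(cpuctx->ctx.pmu);	/* whole PMU stopped here */

		/* schedule the flexible events out ... */
		cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
		if (ctx)
			ctx_sched_out(ctx, cpuctx, EVENT_FLEXIBLE);

		rotate_ctx(&cpuctx->ctx);
		if (ctx)
			rotate_ctx(ctx);

		/* ... and back in, rotated */
		perf_event_sched_in(cpuctx, ctx, current);

		perf_pmu_enable(cpuctx->ctx.pmu);	/* everything back on */
		perf_ctx_unlock(cpuctx, ctx);
	}

The multiplexed events are rescheduled with the PMU disabled, and the LBR is
running both before and after, which is the point above.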