Message-ID: <p73bqqb7nkd.fsf@verdi.suse.de>
Date: 23 Aug 2006 12:29:06 +0200
From: Andi Kleen <ak@...e.de>
To: Stephane Eranian <eranian@...nkl.hpl.hp.com>
Cc: eranian@....hp.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 10/18] 2.6.17.9 perfmon2 patch for review: PMU context switch support
Stephane Eranian <eranian@...nkl.hpl.hp.com> writes:
> Because accessing PMU registers is usually much more expensive
> than accessing general registers, we take great care to minimize
> the number of register accesses, using various lazy save/restore
> schemes for both UP and SMP kernels.
Can you perhaps add a big "strategy" comment somewhere about
how those lazy schemes work?
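Even a rough block comment near the top of the ctxsw code would do,
something along these lines (the details below are my guesses at what
you do, so certainly wrong in places):

/*
 * Lazy save/restore strategy (sketch):
 *
 * UP:  the PMU state of the current context is left live in the
 *      hardware on switch-out.  A per-CPU owner pointer remembers
 *      whose state that is; registers are only saved when another
 *      context is about to clobber them, and only reloaded when a
 *      context switches in on a CPU that does not already hold its
 *      state.
 *
 * SMP: same idea, except that a context which last ran on another
 *      CPU must have its state flushed there first (PMU registers
 *      cannot be read cross-CPU), so switch-out has to save eagerly
 *      unless the task is likely to come back to the same CPU.
 */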
I suppose some of those functions must be marked __kprobes.
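i.e. something like

static void __kprobes __pfm_ctxswin_thread(struct task_struct *task,
					   struct pfm_context *ctx)

so that nobody can place a probe into code that itself runs during
the context switch.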
> +/*
> + * interrupts are masked
> + */
> +static void __pfm_ctxswin_thread(struct task_struct *task,
> +				 struct pfm_context *ctx)
> +{
> +	u64 cur_act, now;
> +	struct pfm_event_set *set;
> +	int reload_pmcs, reload_pmds;
> +
> +	now = pfm_arch_get_itc();
Isn't this sched_clock()?
> +
> +	BUG_ON(!task->pid);
> +
> +	spin_lock(&ctx->lock);
Why does it have its own lock? Shouldn't the caller protect it already?
It must be because, for once, you don't disable preemption.
The locking in general needs a big comment somewhere, I think.
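Something like this would already help (this is my reading of your
rules, please correct it):

/*
 * Locking rules:
 *
 * ctx->lock protects all context state (event sets, ctx->state, ...)
 * and nests inside nothing else.  It must be taken with interrupts
 * masked whenever the context can also be reached from the PMU
 * overflow handler.
 *
 * On the ctxsw path the caller cannot take it for us, and preemption
 * is already off (we run under the runqueue lock), so the ctxsw
 * helpers take ctx->lock themselves with a plain spin_lock().
 */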
> +/*
> + * Come here when either prev or next has the TIF_PERFMON flag set.
> + * Note that a task having TIF_PERFMON set does not imply that it
> + * has a context attached, e.g., in system-wide mode on certain
> + * architectures.
> + */
> +void __pfm_ctxsw(struct task_struct *prev, struct task_struct *next)
> +{
> +	struct pfm_context *ctxp, *ctxn;
> +	u64 now;
> +
> +	now = pfm_arch_get_itc();
sched_clock(). It can be expensive, and you seem to read it redundantly.
I would do it once and pass the value down.
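i.e. something like this (a sketch only; the pfm_context field and
the extra "now" argument to the helpers are my invention):

void __pfm_ctxsw(struct task_struct *prev, struct task_struct *next)
{
	u64 now = sched_clock();	/* read the clock exactly once */
	struct pfm_context *ctxp = prev->pfm_context;
	struct pfm_context *ctxn = next->pfm_context;

	if (ctxp)
		__pfm_ctxswout_thread(prev, ctxp, now);
	if (ctxn)
		__pfm_ctxswin_thread(next, ctxn, now);
	if (ctxp == ctxn)	/* both NULL: system-wide session */
		__pfm_ctxsw_sys(prev, next);

	__get_cpu_var(pfm_stats).pfm_ctxsw_count++;
	__get_cpu_var(pfm_stats).pfm_ctxsw_cycles += sched_clock() - now;
}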
> + * given that prev and next can never be the same, this
> + * test is checking that ctxp == ctxn == NULL which is
> + * an indication we have an active system-wide session on
> + * this CPU
> + */
> +	if (ctxp == ctxn)
> +		__pfm_ctxsw_sys(prev, next);
> +
> +	__get_cpu_var(pfm_stats).pfm_ctxsw_count++;
> +	__get_cpu_var(pfm_stats).pfm_ctxsw_cycles += pfm_arch_get_itc() - now;
Is this really needed? On a P4 you have just added hundreds of cycles
to every context switch.
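If you want to keep the statistics at all, please make them compile
out, e.g. with something like this (CONFIG_PERFMON_STATS is an option
you would have to add):

#ifdef CONFIG_PERFMON_STATS
#define pfm_stats_inc(field)	(__get_cpu_var(pfm_stats).field++)
#define pfm_stats_add(field, v)	(__get_cpu_var(pfm_stats).field += (v))
#else
#define pfm_stats_inc(field)	do { } while (0)
#define pfm_stats_add(field, v)	do { } while (0)
#endif

Then the ctxsw path costs nothing on production kernels.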
-Andi