Message-ID: <1291987065.6803.151.camel@twins>
Date: Fri, 10 Dec 2010 14:17:45 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Venkatesh Pallipadi <venki@...gle.com>
Cc: Russell King - ARM Linux <linux@....linux.org.uk>,
Mikael Pettersson <mikpe@...uu.se>,
Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org,
John Stultz <johnstul@...ibm.com>
Subject: Re: [BUG] 2.6.37-rc3 massive interactivity regression on ARM
On Fri, 2010-12-10 at 11:08 +0100, Peter Zijlstra wrote:
> > Just to make sure, update_rq_clock() always gets called on current
> > CPU. Right?
>
> No, specifically not. If that were the case we wouldn't need the
> cross-cpu synced timestamp. Things like load-balancing and
> remote wakeups need to update a remote CPU's clock.
>
> > The pending patches I have optimizes
> > account_system_vtime() to use this_cpu_write and friends. Want to make
> > sure this change will still keep that optimization relevant.
>
> Ah, good point, remote CPUs updating that will mess with the consistency
> of the per-cpu timestamps due to non-atomic updates :/
>
> Bugger.. making them atomics will make it even more expensive. /me goes
> ponder.
OK, so I ended up doing the same thing you did.. Still staring at that;
32bit will go very funny in the head once in a while. One possible
solution would be to ignore the occasional abs(irq_delta) > 2 * delta.
That would however result in an accounting discrepancy such that:

  clock_task + irq_time != clock

Thoughts?
---
Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -1813,11 +1813,36 @@ void disable_sched_clock_irqtime(void)
sched_clock_irqtime = 0;
}
+static void __account_system_vtime(int cpu, u64 now)
+{
+ s64 delta;
+
+ delta = now - per_cpu(irq_start_time, cpu);
+ per_cpu(irq_start_time, cpu) = now;
+
+ if (hardirq_count())
+ per_cpu(cpu_hardirq_time, cpu) += delta;
+ /*
+ * We do not account for softirq time from ksoftirqd here. We want to
+ * continue accounting softirq time to the ksoftirqd thread in that case,
+ * so as not to confuse the scheduler with a special task that does not
+ * consume any time but still wants to run.
+ */
+ else if (in_serving_softirq() && !(current->flags & PF_KSOFTIRQD))
+ per_cpu(cpu_softirq_time, cpu) += delta;
+}
+
+/*
+ * Called before incrementing preempt_count on {soft,}irq_enter
+ * and before decrementing preempt_count on {soft,}irq_exit.
+ *
+ * @curr should at all times be current.
+ */
void account_system_vtime(struct task_struct *curr)
{
unsigned long flags;
+ u64 now;
int cpu;
- u64 now, delta;
if (!sched_clock_irqtime)
return;
@@ -1826,26 +1851,21 @@ void account_system_vtime(struct task_st
cpu = smp_processor_id();
now = sched_clock_cpu(cpu);
- delta = now - per_cpu(irq_start_time, cpu);
- per_cpu(irq_start_time, cpu) = now;
- /*
- * We do not account for softirq time from ksoftirqd here.
- * We want to continue accounting softirq time to ksoftirqd thread
- * in that case, so as not to confuse scheduler with a special task
- * that do not consume any time, but still wants to run.
- */
- if (hardirq_count())
- per_cpu(cpu_hardirq_time, cpu) += delta;
- else if (in_serving_softirq() && !(curr->flags & PF_KSOFTIRQD))
- per_cpu(cpu_softirq_time, cpu) += delta;
+ __account_system_vtime(cpu, now);
local_irq_restore(flags);
}
EXPORT_SYMBOL_GPL(account_system_vtime);
-static u64 irq_time_cpu(int cpu)
+static u64 irq_time_cpu(struct rq *rq)
{
- account_system_vtime(current);
+ int cpu = cpu_of(rq);
+ /*
+ * See the comment in update_rq_clock_task(), ideally we'd update
+ * the *irq_time values using rq->clock here.
+ *
+ * As it stands, reading this from a remote cpu is buggy on 32bit.
+ */
return per_cpu(cpu_softirq_time, cpu) + per_cpu(cpu_hardirq_time, cpu);
}
@@ -1853,9 +1873,27 @@ static void update_rq_clock_task(struct
{
s64 irq_delta;
- irq_delta = irq_time_cpu(cpu_of(rq)) - rq->prev_irq_time;
- rq->prev_irq_time += irq_delta;
+ irq_delta = irq_time_cpu(rq) - rq->prev_irq_time;
+
+ /*
+ * Since irq_time is only updated on {soft,}irq_exit, we might run into
+ * this case when a previous update_rq_clock() happened inside a
+ * {soft,}irq region.
+ *
+ * When this happens, we stop ->clock_task and only update the
+ * prev_irq_time stamp to account for the part that fit, so that a next
+ * update will consume the rest. This ensures ->clock_task is
+ * monotonic.
+ *
+ * It does however cause some slight misattribution of {soft,}irq
+ * time; a more accurate solution would be to update the irq_time using
+ * the current rq->clock timestamp, except that would require using
+ * atomic ops.
+ */
+ if (irq_delta > delta)
+ irq_delta = delta;
+ rq->prev_irq_time += irq_delta;
delta -= irq_delta;
rq->clock_task += delta;
--