Message-ID: <20160125184000.GA6249@gmail.com>
Date: Mon, 25 Jan 2016 19:40:00 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: Peter Zijlstra <peterz@...radead.org>,
	Matt Fleming <matt@...eblueprint.co.uk>,
	Mike Galbraith <mgalbraith@...e.de>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] sched: Make schedstats a runtime tunable that is
	disabled by default

* Mel Gorman <mgorman@...hsingularity.net> wrote:

> On Mon, Jan 25, 2016 at 04:46:35PM +0100, Ingo Molnar wrote:
> > > Of course, it'll be our luck that tracking the data for these
> > > tracepoints is the most expensive part of schedstats ...
> > >
> > > Ingo?
> >
> > IIRC it needed only a small subset of schedstats to make those tracepoints work.
> >
> > We already have too much overhead in the scheduler as-is - and the extra cache
> > footprint does not even show on the typically cache-rich enterprise CPUs most of
> > the scalability testing goes on.
> >
> > My minimum requirement for such runtime enablement would be to make it entirely
> > static-branch patched and triggered at the call sites as well - not hidden inside
> > schedstat functions.
> >
>
> As it is, it's static-branch patched, but I'm struggling to see why the checks
> cannot be hidden in the schedstat_* functions, which are just preprocessor
> macros. The checks could be put in the call sites, but it's a lot of updates
> and I don't think the end result would be very nice to read.
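
[ For illustration, a minimal sketch of the "checks hidden in the schedstat_*
  macros" shape described above. This is not the actual patch; the
  schedstat_enabled() wrapper and the old schedstat_inc(rq, field) form are
  assumptions made for the example: ]

#include <linux/jump_label.h>

/* Default-false static key, meant to be flipped by the runtime tunable. */
DEFINE_STATIC_KEY_FALSE(sched_schedstats);

#define schedstat_enabled()	static_branch_unlikely(&sched_schedstats)

/* The branch sits inside the accessor, so call sites stay unchanged: */
#define schedstat_inc(rq, field)					\
	do {								\
		if (schedstat_enabled())				\
			(rq)->field++;					\
	} while (0)

[ With this shape, an existing caller along the lines of
  schedstat_inc(rq, yld_count) keeps compiling unchanged and the check it
  picks up is a patched-out branch while the key is off. ]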

So I was judging by:

@@ -755,7 +755,12 @@ static void
 update_stats_wait_end(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	struct task_struct *p;
-	u64 delta = rq_clock(rq_of(cfs_rq)) - se->statistics.wait_start;
+	u64 delta;
+
+	if (static_branch_unlikely(&sched_schedstats))
+		return;
+

which puts a static branch inside a real function, not preprocessor macros.
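
[ For contrast, a rough sketch of the call-site shape being asked for, with the
  check pulled out of the helper. The caller below is made up for the example,
  and schedstat_enabled() is the assumed wrapper from the sketch further up: ]

/* Helper does its bookkeeping unconditionally; no enablement check inside: */
static void
update_stats_wait_end(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
	u64 delta = rq_clock(rq_of(cfs_rq)) - se->statistics.wait_start;

	se->statistics.wait_sum += delta;
	se->statistics.wait_start = 0;
}

/* Hypothetical caller: the patched-out branch lives at the call site: */
static void example_dequeue_path(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
	if (schedstat_enabled())
		update_stats_wait_end(cfs_rq, se);
}

[ With the branch at the call site the helper is never even called while the
  key is off, instead of being entered only to bail out. ]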

Thanks,

	Ingo