Date: Wed, 29 Mar 2017 23:03:45 +0200
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Peter Zijlstra <peterz@...radead.org>, Steven Rostedt <rostedt@...dmis.org>
Cc: Ingo Molnar <mingo@...nel.org>, LKML <linux-kernel@...r.kernel.org>,
	Matt Fleming <matt@...eblueprint.co.uk>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Morten Rasmussen <morten.rasmussen@....com>,
	Juri Lelli <juri.lelli@....com>,
	Patrick Bellasi <patrick.bellasi@....com>
Subject: Re: [RFC PATCH 2/5] sched/events: Introduce cfs_rq load tracking trace event

On 03/28/2017 06:44 PM, Peter Zijlstra wrote:
> On Tue, Mar 28, 2017 at 10:46:00AM -0400, Steven Rostedt wrote:
>> On Tue, 28 Mar 2017 07:35:38 +0100
>> Dietmar Eggemann <dietmar.eggemann@....com> wrote:

[...]

> I too suggested that; but then I looked again at that code and we can
> actually do this. cfs_rq can be constant propagated and the if
> determined at build time.
>
> It's not immediately obvious from the current code; but if we do
> something like the below, it should be clearer.
>
> ---
> Subject: sched/fair: Explicitly generate __update_load_avg() instances
> From: Peter Zijlstra <peterz@...radead.org>
> Date: Tue Mar 28 11:08:20 CEST 2017
>
> The __update_load_avg() function is an __always_inline because it's
> used with constant propagation to generate different variants of the
> code without having to duplicate it (which would be prone to bugs).

Ah, so the if (cfs_rq)/else condition should stay in
___update_load_avg() and I shouldn't move the trace events into the 3
variants?

I tried to verify that the if is determined at build time, but that's
kind of hard to do with the trace events in the code.

> Explicitly instantiate the 3 variants.
>
> Note that most of this is called from rather hot paths, so reducing
> branches is good.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2849,7 +2849,7 @@ static u32 __compute_runnable_contrib(u6
>   * = u_0 + u_1*y + u_2*y^2 + ... [re-labeling u_i --> u_{i+1}]
>   */
>  static __always_inline int
> -__update_load_avg(u64 now, int cpu, struct sched_avg *sa,
> +___update_load_avg(u64 now, int cpu, struct sched_avg *sa,
>  		  unsigned long weight, int running, struct cfs_rq *cfs_rq)
>  {
>  	u64 delta, scaled_delta, periods;
> @@ -2953,6 +2953,26 @@ __update_load_avg(u64 now, int cpu, stru
>  	return decayed;
>  }
>
> +static int
> +__update_load_avg_blocked_se(u64 now, int cpu, struct sched_avg *sa)
> +{
> +	return ___update_load_avg(now, cpu, sa, 0, 0, NULL);
> +}
> +
> +static int
> +__update_load_avg_se(u64 now, int cpu, struct sched_avg *sa,
> +		     unsigned long weight, int running)
> +{
> +	return ___update_load_avg(now, cpu, sa, weight, running, NULL);
> +}
> +
> +static int
> +__update_load_avg(u64 now, int cpu, struct sched_avg *sa,
> +		  unsigned long weight, int running, struct cfs_rq *cfs_rq)
> +{
> +	return ___update_load_avg(now, cpu, sa, weight, running, cfs_rq);
> +}

Why not reduce the parameter list of these 3 incarnations to 'now, cpu,
object'?

static int
__update_load_avg_blocked_se(u64 now, int cpu, struct sched_entity *se)

static int
__update_load_avg_se(u64 now, int cpu, struct sched_entity *se)

static int
__update_load_avg_cfs_rq(u64 now, int cpu, struct cfs_rq *cfs_rq)

[...]
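
The constant-propagation trick Peter describes can be seen in a minimal
standalone sketch (illustrative only: the names below are made up for
this example, and it is plain userspace C rather than
kernel/sched/fair.c). Because the inner helper is __always_inline,
every caller that passes a literal NULL gets its own inlined copy in
which the compiler folds the if (cfs_rq) test away at build time:

#include <stddef.h>

#define __always_inline inline __attribute__((__always_inline__))

struct cfs_rq;	/* opaque stand-in; only ever used as a pointer here */

static __always_inline int
___update(int x, struct cfs_rq *cfs_rq)
{
	if (cfs_rq)		/* folded when cfs_rq is a constant NULL */
		return 2 * x;	/* stands in for the cfs_rq-only work */
	return x;		/* stands in for the se-only work */
}

static int __update_se(int x)
{
	/* NULL is a compile-time constant: this instance has no branch */
	return ___update(x, NULL);
}

static int __update_cfs_rq(int x, struct cfs_rq *cfs_rq)
{
	/* cfs_rq is a runtime value here, so the test may survive */
	return ___update(x, cfs_rq);
}

Compiling with optimization and inspecting the assembly (e.g. gcc -O2 -S)
shows __update_se() containing no compare against cfs_rq, which is why
keeping the if (cfs_rq)/else inside ___update_load_avg() costs nothing
on the hot paths.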