Open Source and information security mailing list archives
 
Message-ID: <CALOAHbC1zqzU8-ikcLOUMKY5cbyuW_B6MK5rz7G6rTK-SoyMTQ@mail.gmail.com>
Date:   Wed, 6 Mar 2019 18:15:39 +0800
From:   Yafang Shao <laoar.shao@...il.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     mingo@...hat.com, LKML <linux-kernel@...r.kernel.org>,
        shaoyafang@...iglobal.com
Subject: Re: [PATCH] sched: fair: fix missed CONFIG_SCHEDSTATS

On Wed, Mar 6, 2019 at 6:09 PM Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Wed, Mar 06, 2019 at 04:43:46PM +0800, Yafang Shao wrote:
> > When I use trace_sched_stat_{iowait, blocked, wait, sleep} to
> > measure how long processes are stalled, there is never any output from
> > trace_pipe, even while some tasks really are in uninterruptible sleep
> > state. That confused me, so I investigated why, and finally found the
> > reason: CONFIG_SCHEDSTATS is not set.
> >
> > To avoid this kind of confusion, we should not expose these tracepoints
> > if CONFIG_SCHEDSTATS is not set.
>
> Yeah, let's not sprinkle #ifdef. Big fat NAK.
>
> Also, the below seems to indicate your compiler is stupid. Without
> CONFIG_SCHEDSTATS, schedstat_enabled() should be a constant 0 and DCE
> should delete all that code.
>

My compiler is GCC 7.3.0.
I don't know which compiler would be smart enough to remove the
definitions of these tracepoints.
Could you please tell me which compiler you are using?

> > @@ -976,6 +982,7 @@ static void update_curr_fair(struct rq *rq)
> >  static inline void
> >  update_stats_enqueue(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> >  {
> > +#ifdef CONFIG_SCHEDSTATS
> >       if (!schedstat_enabled())
> >               return;
> >
> > @@ -988,12 +995,13 @@ static void update_curr_fair(struct rq *rq)
> >
> >       if (flags & ENQUEUE_WAKEUP)
> >               update_stats_enqueue_sleeper(cfs_rq, se);
> > +#endif
> >  }
> >
> >  static inline void
> >  update_stats_dequeue(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> >  {
> > -
> > +#ifdef CONFIG_SCHEDSTATS
> >       if (!schedstat_enabled())
> >               return;
> >
> > @@ -1014,6 +1022,7 @@ static void update_curr_fair(struct rq *rq)
> >                       __schedstat_set(se->statistics.block_start,
> >                                     rq_clock(rq_of(cfs_rq)));
> >       }
> > +#endif
> >  }
> >
> >  /*
> > @@ -4090,6 +4099,7 @@ static void clear_buddies(struct cfs_rq *cfs_rq, struct sched_entity *se)
> >       update_stats_curr_start(cfs_rq, se);
> >       cfs_rq->curr = se;
> >
> > +#ifdef CONFIG_SCHEDSTATS
> >       /*
> >        * Track our maximum slice length, if the CPU's load is at
> >        * least twice that of our own weight (i.e. dont track it
> > @@ -4100,6 +4110,7 @@ static void clear_buddies(struct cfs_rq *cfs_rq, struct sched_entity *se)
> >                       max((u64)schedstat_val(se->statistics.slice_max),
> >                           se->sum_exec_runtime - se->prev_sum_exec_runtime));
> >       }
> > +#endif
> >
> >       se->prev_sum_exec_runtime = se->sum_exec_runtime;
> >  }
> > --
> > 1.8.3.1
> >
