Message-ID: <202006270351.lVmaZ420%lkp@intel.com>
Date: Sat, 27 Jun 2020 03:16:15 +0800
From: kernel test robot <lkp@...el.com>
To: Guenter Roeck <linux@...ck-us.net>, Ingo Molnar <mingo@...hat.com>
Cc: kbuild-all@...ts.01.org, clang-built-linux@...glegroups.com,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
linux-kernel@...r.kernel.org, Guenter Roeck <linux@...ck-us.net>
Subject: Re: [PATCH] sched: Declare sched_rt_bandwidth_account() in include
file
Hi Guenter,
I love your patch! Yet something to improve:
[auto build test ERROR on tip/sched/core]
[also build test ERROR on tip/auto-latest linux/master linus/master v5.8-rc2 next-20200626]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Guenter-Roeck/sched-Declare-sched_rt_bandwidth_account-in-include-file/20200626-220544
base: https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 01e377c539ca52a6c753d0fdbe93b3b8fcd66a1c
config: x86_64-allnoconfig (attached as .config)
compiler: clang version 11.0.0 (https://github.com/llvm/llvm-project 6e11ed52057ffc39941cb2de6d93cae522db4782)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install x86_64 cross compiling tool for clang build
        # apt-get install binutils-x86-64-linux-gnu
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=x86_64
If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@...el.com>
All errors (new ones prefixed by >>):
>> kernel/sched/deadline.c:1315:7: error: implicit declaration of function 'sched_rt_bandwidth_account' [-Werror,-Wimplicit-function-declaration]
           if (sched_rt_bandwidth_account(rt_rq))
               ^
1 error generated.
vim +/sched_rt_bandwidth_account +1315 kernel/sched/deadline.c
c52f14d384628d Luca Abeni      2017-05-18  1213  
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1214  /*
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1215   * Update the current task's runtime statistics (provided it is still
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1216   * a -deadline task and has not been removed from the dl_rq).
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1217   */
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1218  static void update_curr_dl(struct rq *rq)
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1219  {
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1220          struct task_struct *curr = rq->curr;
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1221          struct sched_dl_entity *dl_se = &curr->dl;
07881166a892fa Juri Lelli      2017-12-04  1222          u64 delta_exec, scaled_delta_exec;
07881166a892fa Juri Lelli      2017-12-04  1223          int cpu = cpu_of(rq);
6fe0ce1eb04f99 Wen Yang        2018-02-06  1224          u64 now;
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1225  
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1226          if (!dl_task(curr) || !on_dl_rq(dl_se))
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1227                  return;
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1228  
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1229          /*
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1230           * Consumed budget is computed considering the time as
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1231           * observed by schedulable tasks (excluding time spent
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1232           * in hardirq context, etc.). Deadlines are instead
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1233           * computed using hard walltime. This seems to be the more
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1234           * natural solution, but the full ramifications of this
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1235           * approach need further study.
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1236           */
6fe0ce1eb04f99 Wen Yang        2018-02-06  1237          now = rq_clock_task(rq);
6fe0ce1eb04f99 Wen Yang        2018-02-06  1238          delta_exec = now - curr->se.exec_start;
48be3a67da7413 Peter Zijlstra  2016-02-23  1239          if (unlikely((s64)delta_exec <= 0)) {
48be3a67da7413 Peter Zijlstra  2016-02-23  1240                  if (unlikely(dl_se->dl_yielded))
48be3a67da7413 Peter Zijlstra  2016-02-23  1241                          goto throttle;
734ff2a71f9e6a Kirill Tkhai    2014-03-04  1242                  return;
48be3a67da7413 Peter Zijlstra  2016-02-23  1243          }
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1244  
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1245          schedstat_set(curr->se.statistics.exec_max,
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1246                        max(curr->se.statistics.exec_max, delta_exec));
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1247  
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1248          curr->se.sum_exec_runtime += delta_exec;
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1249          account_group_exec_runtime(curr, delta_exec);
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1250  
6fe0ce1eb04f99 Wen Yang        2018-02-06  1251          curr->se.exec_start = now;
d2cc5ed6949085 Tejun Heo       2017-09-25  1252          cgroup_account_cputime(curr, delta_exec);
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1253  
794a56ebd9a57d Juri Lelli      2017-12-04  1254          if (dl_entity_is_special(dl_se))
794a56ebd9a57d Juri Lelli      2017-12-04  1255                  return;
794a56ebd9a57d Juri Lelli      2017-12-04  1256  
07881166a892fa Juri Lelli      2017-12-04  1257          /*
07881166a892fa Juri Lelli      2017-12-04  1258           * For tasks that participate in GRUB, we implement GRUB-PA: the
07881166a892fa Juri Lelli      2017-12-04  1259           * spare reclaimed bandwidth is used to clock down frequency.
07881166a892fa Juri Lelli      2017-12-04  1260           *
07881166a892fa Juri Lelli      2017-12-04  1261           * For the others, we still need to scale reservation parameters
07881166a892fa Juri Lelli      2017-12-04  1262           * according to current frequency and CPU maximum capacity.
07881166a892fa Juri Lelli      2017-12-04  1263           */
07881166a892fa Juri Lelli      2017-12-04  1264          if (unlikely(dl_se->flags & SCHED_FLAG_RECLAIM)) {
07881166a892fa Juri Lelli      2017-12-04  1265                  scaled_delta_exec = grub_reclaim(delta_exec,
07881166a892fa Juri Lelli      2017-12-04  1266                                                   rq,
07881166a892fa Juri Lelli      2017-12-04  1267                                                   &curr->dl);
07881166a892fa Juri Lelli      2017-12-04  1268          } else {
07881166a892fa Juri Lelli      2017-12-04  1269                  unsigned long scale_freq = arch_scale_freq_capacity(cpu);
8ec59c0f5f4966 Vincent Guittot 2019-06-17  1270                  unsigned long scale_cpu = arch_scale_cpu_capacity(cpu);
07881166a892fa Juri Lelli      2017-12-04  1271  
07881166a892fa Juri Lelli      2017-12-04  1272                  scaled_delta_exec = cap_scale(delta_exec, scale_freq);
07881166a892fa Juri Lelli      2017-12-04  1273                  scaled_delta_exec = cap_scale(scaled_delta_exec, scale_cpu);
07881166a892fa Juri Lelli      2017-12-04  1274          }
07881166a892fa Juri Lelli      2017-12-04  1275  
07881166a892fa Juri Lelli      2017-12-04  1276          dl_se->runtime -= scaled_delta_exec;
48be3a67da7413 Peter Zijlstra  2016-02-23  1277  
48be3a67da7413 Peter Zijlstra  2016-02-23  1278  throttle:
48be3a67da7413 Peter Zijlstra  2016-02-23  1279          if (dl_runtime_exceeded(dl_se) || dl_se->dl_yielded) {
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1280                  dl_se->dl_throttled = 1;
34be39305a77b8 Juri Lelli      2017-12-12  1281  
34be39305a77b8 Juri Lelli      2017-12-12  1282                  /* If requested, inform the user about runtime overruns. */
34be39305a77b8 Juri Lelli      2017-12-12  1283                  if (dl_runtime_exceeded(dl_se) &&
34be39305a77b8 Juri Lelli      2017-12-12  1284                      (dl_se->flags & SCHED_FLAG_DL_OVERRUN))
34be39305a77b8 Juri Lelli      2017-12-12  1285                          dl_se->dl_overrun = 1;
34be39305a77b8 Juri Lelli      2017-12-12  1286  
1019a359d3dc4b Peter Zijlstra  2014-11-26  1287                  __dequeue_task_dl(rq, curr, 0);
a649f237db1845 Peter Zijlstra  2015-06-11  1288                  if (unlikely(dl_se->dl_boosted || !start_dl_timer(curr)))
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1289                          enqueue_task_dl(rq, curr, ENQUEUE_REPLENISH);
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1290  
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1291                  if (!is_leftmost(curr, &rq->dl))
8875125efe8402 Kirill Tkhai    2014-06-29  1292                          resched_curr(rq);
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1293          }
1724813d9f2c7f Peter Zijlstra  2013-12-17  1294  
1724813d9f2c7f Peter Zijlstra  2013-12-17  1295          /*
1724813d9f2c7f Peter Zijlstra  2013-12-17  1296           * Because -- for now -- we share the rt bandwidth, we need to
1724813d9f2c7f Peter Zijlstra  2013-12-17  1297           * account our runtime there too, otherwise actual rt tasks
1724813d9f2c7f Peter Zijlstra  2013-12-17  1298           * would be able to exceed the shared quota.
1724813d9f2c7f Peter Zijlstra  2013-12-17  1299           *
1724813d9f2c7f Peter Zijlstra  2013-12-17  1300           * Account to the root rt group for now.
1724813d9f2c7f Peter Zijlstra  2013-12-17  1301           *
1724813d9f2c7f Peter Zijlstra  2013-12-17  1302           * The solution we're working towards is having the RT groups scheduled
1724813d9f2c7f Peter Zijlstra  2013-12-17  1303           * using deadline servers -- however there's a few nasties to figure
1724813d9f2c7f Peter Zijlstra  2013-12-17  1304           * out before that can happen.
1724813d9f2c7f Peter Zijlstra  2013-12-17  1305           */
1724813d9f2c7f Peter Zijlstra  2013-12-17  1306          if (rt_bandwidth_enabled()) {
1724813d9f2c7f Peter Zijlstra  2013-12-17  1307                  struct rt_rq *rt_rq = &rq->rt;
1724813d9f2c7f Peter Zijlstra  2013-12-17  1308  
1724813d9f2c7f Peter Zijlstra  2013-12-17  1309                  raw_spin_lock(&rt_rq->rt_runtime_lock);
1724813d9f2c7f Peter Zijlstra  2013-12-17  1310                  /*
1724813d9f2c7f Peter Zijlstra  2013-12-17  1311                   * We'll let actual RT tasks worry about the overflow here, we
faa5993736d9b4 Juri Lelli      2014-02-21  1312                   * have our own CBS to keep us inline; only account when RT
faa5993736d9b4 Juri Lelli      2014-02-21  1313                   * bandwidth is relevant.
1724813d9f2c7f Peter Zijlstra  2013-12-17  1314                   */
faa5993736d9b4 Juri Lelli      2014-02-21 @1315                  if (sched_rt_bandwidth_account(rt_rq))
faa5993736d9b4 Juri Lelli      2014-02-21  1316                          rt_rq->rt_time += delta_exec;
1724813d9f2c7f Peter Zijlstra  2013-12-17  1317                  raw_spin_unlock(&rt_rq->rt_runtime_lock);
1724813d9f2c7f Peter Zijlstra  2013-12-17  1318          }
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1319  }
aab03e05e8f7e2 Dario Faggioli  2013-11-28  1320  
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
Download attachment ".config.gz" of type "application/gzip" (7515 bytes)