Message-ID: <202507160730.0cXkgs0S-lkp@intel.com>
Date: Wed, 16 Jul 2025 07:29:37 +0800
From: kernel test robot <lkp@...el.com>
To: Aaron Lu <ziqianlu@...edance.com>,
Valentin Schneider <vschneid@...hat.com>,
Ben Segall <bsegall@...gle.com>,
K Prateek Nayak <kprateek.nayak@....com>,
Peter Zijlstra <peterz@...radead.org>,
Chengming Zhou <chengming.zhou@...ux.dev>,
Josh Don <joshdon@...gle.com>, Ingo Molnar <mingo@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Xi Wang <xii@...gle.com>
Cc: llvm@...ts.linux.dev, oe-kbuild-all@...ts.linux.dev,
linux-kernel@...r.kernel.org, Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Mel Gorman <mgorman@...e.de>,
Chuyi Zhou <zhouchuyi@...edance.com>,
Jan Kiszka <jan.kiszka@...mens.com>,
Florian Bezdeka <florian.bezdeka@...mens.com>,
Songtang Liu <liusongtang@...edance.com>
Subject: Re: [PATCH v3 3/5] sched/fair: Switch to task based throttle model
Hi Aaron,
kernel test robot noticed the following build warnings:
[auto build test WARNING on tip/sched/core]
[also build test WARNING on next-20250715]
[cannot apply to linus/master v6.16-rc6]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Aaron-Lu/sched-fair-Add-related-data-structure-for-task-based-throttle/20250715-152307
base: tip/sched/core
patch link: https://lore.kernel.org/r/20250715071658.267-4-ziqianlu%40bytedance.com
patch subject: [PATCH v3 3/5] sched/fair: Switch to task based throttle model
config: i386-buildonly-randconfig-006-20250716 (https://download.01.org/0day-ci/archive/20250716/202507160730.0cXkgs0S-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250716/202507160730.0cXkgs0S-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@...el.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202507160730.0cXkgs0S-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> kernel/sched/fair.c:5456:33: warning: implicit truncation from 'int' to a one-bit wide bit-field changes value from 1 to -1 [-Wsingle-bit-bitfield-constant-conversion]
5456 | cfs_rq->pelt_clock_throttled = 1;
| ^ ~
kernel/sched/fair.c:5971:32: warning: implicit truncation from 'int' to a one-bit wide bit-field changes value from 1 to -1 [-Wsingle-bit-bitfield-constant-conversion]
5971 | cfs_rq->pelt_clock_throttled = 1;
| ^ ~
kernel/sched/fair.c:6014:20: warning: implicit truncation from 'int' to a one-bit wide bit-field changes value from 1 to -1 [-Wsingle-bit-bitfield-constant-conversion]
6014 | cfs_rq->throttled = 1;
| ^ ~
3 warnings generated.
vim +/int +5456 kernel/sched/fair.c
5372
5373 static bool
5374 dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
5375 {
5376 bool sleep = flags & DEQUEUE_SLEEP;
5377 int action = UPDATE_TG;
5378
5379 update_curr(cfs_rq);
5380 clear_buddies(cfs_rq, se);
5381
5382 if (flags & DEQUEUE_DELAYED) {
5383 WARN_ON_ONCE(!se->sched_delayed);
5384 } else {
5385 bool delay = sleep;
5386 /*
5387 * DELAY_DEQUEUE relies on spurious wakeups, special task
  5388			 * states must not suffer spurious wakeups, exempt them.
5389 */
5390 if (flags & DEQUEUE_SPECIAL)
5391 delay = false;
5392
5393 WARN_ON_ONCE(delay && se->sched_delayed);
5394
5395 if (sched_feat(DELAY_DEQUEUE) && delay &&
5396 !entity_eligible(cfs_rq, se)) {
5397 update_load_avg(cfs_rq, se, 0);
5398 set_delayed(se);
5399 return false;
5400 }
5401 }
5402
5403 if (entity_is_task(se) && task_on_rq_migrating(task_of(se)))
5404 action |= DO_DETACH;
5405
5406 /*
5407 * When dequeuing a sched_entity, we must:
5408 * - Update loads to have both entity and cfs_rq synced with now.
5409 * - For group_entity, update its runnable_weight to reflect the new
5410 * h_nr_runnable of its group cfs_rq.
5411 * - Subtract its previous weight from cfs_rq->load.weight.
5412 * - For group entity, update its weight to reflect the new share
5413 * of its group cfs_rq.
5414 */
5415 update_load_avg(cfs_rq, se, action);
5416 se_update_runnable(se);
5417
5418 update_stats_dequeue_fair(cfs_rq, se, flags);
5419
5420 update_entity_lag(cfs_rq, se);
5421 if (sched_feat(PLACE_REL_DEADLINE) && !sleep) {
5422 se->deadline -= se->vruntime;
5423 se->rel_deadline = 1;
5424 }
5425
5426 if (se != cfs_rq->curr)
5427 __dequeue_entity(cfs_rq, se);
5428 se->on_rq = 0;
5429 account_entity_dequeue(cfs_rq, se);
5430
5431 /* return excess runtime on last dequeue */
5432 return_cfs_rq_runtime(cfs_rq);
5433
5434 update_cfs_group(se);
5435
5436 /*
5437 * Now advance min_vruntime if @se was the entity holding it back,
5438 * except when: DEQUEUE_SAVE && !DEQUEUE_MOVE, in this case we'll be
5439 * put back on, and if we advance min_vruntime, we'll be placed back
5440 * further than we started -- i.e. we'll be penalized.
5441 */
5442 if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE)
5443 update_min_vruntime(cfs_rq);
5444
5445 if (flags & DEQUEUE_DELAYED)
5446 finish_delayed_dequeue_entity(se);
5447
5448 if (cfs_rq->nr_queued == 0) {
5449 update_idle_cfs_rq_clock_pelt(cfs_rq);
5450 #ifdef CONFIG_CFS_BANDWIDTH
5451 if (throttled_hierarchy(cfs_rq)) {
5452 struct rq *rq = rq_of(cfs_rq);
5453
5454 list_del_leaf_cfs_rq(cfs_rq);
5455 cfs_rq->throttled_clock_pelt = rq_clock_pelt(rq);
> 5456 cfs_rq->pelt_clock_throttled = 1;
5457 }
5458 #endif
5459 }
5460
5461 return true;
5462 }
5463
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki