Message-ID: <202312101912.zKLp44oc-lkp@intel.com>
Date:   Sun, 10 Dec 2023 19:05:10 +0800
From:   kernel test robot <lkp@...el.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     oe-kbuild-all@...ts.linux.dev, linux-kernel@...r.kernel.org,
        x86@...nel.org
Subject: [tip:sched/core 10/15] kernel/sched/fair.c:8231:(.text+0x2b62):
 relocation truncated to fit: R_CKCORE_PCREL_IMM16BY4 against `__jump_table'

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
head:   418146e39891ef1fb2284dee4cabbfe616cd21cf
commit: c708a4dc5ab547edc3d6537233ca9e79ea30ce47 [10/15] sched: Unify more update_curr*()
config: csky-randconfig-s042-20220830 (https://download.01.org/0day-ci/archive/20231210/202312101912.zKLp44oc-lkp@intel.com/config)
compiler: csky-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231210/202312101912.zKLp44oc-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@...el.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202312101912.zKLp44oc-lkp@intel.com/

All errors (new ones prefixed by >>):

   kernel/sched/fair.o: in function `attach_entity_load_avg':
>> include/trace/events/sched.h:743:(.text+0x2aec): relocation truncated to fit: R_CKCORE_PCREL_IMM16BY4 against `__jump_table'
   include/trace/events/sched.h:743:(.text+0x2b0c): relocation truncated to fit: R_CKCORE_PCREL_IMM16BY4 against `__jump_table'
   kernel/sched/fair.o: in function `check_preempt_wakeup_fair':
>> kernel/sched/fair.c:8231:(.text+0x2b62): relocation truncated to fit: R_CKCORE_PCREL_IMM16BY4 against `__jump_table'
   kernel/sched/fair.c:8287:(.text+0x2b88): relocation truncated to fit: R_CKCORE_PCREL_IMM16BY4 against `__jump_table'
   kernel/sched/fair.c:8261:(.text+0x2be8): relocation truncated to fit: R_CKCORE_PCREL_IMM16BY4 against `__jump_table'
   kernel/sched/fair.c:8276:(.text+0x2c2e): relocation truncated to fit: R_CKCORE_PCREL_IMM16BY4 against `__jump_table'
   kernel/sched/fair.c:8281:(.text+0x2c34): relocation truncated to fit: R_CKCORE_PCREL_IMM16BY4 against `__jump_table'
   kernel/sched/fair.o: in function `select_idle_sibling':
   kernel/sched/fair.c:4730:(.text+0x2c94): relocation truncated to fit: R_CKCORE_PCREL_IMM16BY4 against `__jump_table'
   kernel/sched/fair.c:7470:(.text+0x2d3a): relocation truncated to fit: R_CKCORE_PCREL_IMM16BY4 against `__jump_table'
   kernel/sched/fair.c:7472:(.text+0x2d40): relocation truncated to fit: R_CKCORE_PCREL_IMM16BY4 against `__jump_table'
   kernel/sched/fair.c:6652:(.text+0x2d48): additional relocation overflows omitted from the output


vim +8231 kernel/sched/fair.c

02479099c28689 kernel/sched_fair.c Peter Zijlstra      2008-11-04  8206  
bf0f6f24a1ece8 kernel/sched_fair.c Ingo Molnar         2007-07-09  8207  /*
bf0f6f24a1ece8 kernel/sched_fair.c Ingo Molnar         2007-07-09  8208   * Preempt the current task with a newly woken task if needed:
bf0f6f24a1ece8 kernel/sched_fair.c Ingo Molnar         2007-07-09  8209   */
82845683ca6a15 kernel/sched/fair.c Ingo Molnar         2023-09-19  8210  static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int wake_flags)
bf0f6f24a1ece8 kernel/sched_fair.c Ingo Molnar         2007-07-09  8211  {
bf0f6f24a1ece8 kernel/sched_fair.c Ingo Molnar         2007-07-09  8212  	struct task_struct *curr = rq->curr;
8651a86c342ab7 kernel/sched_fair.c Srivatsa Vaddagiri  2007-10-15  8213  	struct sched_entity *se = &curr->se, *pse = &p->se;
4793241be408b3 kernel/sched_fair.c Peter Zijlstra      2008-11-04  8214  	struct cfs_rq *cfs_rq = task_cfs_rq(curr);
2f36825b176f67 kernel/sched_fair.c Venkatesh Pallipadi 2011-04-14  8215  	int next_buddy_marked = 0;
304000390f88d0 kernel/sched/fair.c Josh Don            2021-07-29  8216  	int cse_is_idle, pse_is_idle;
4793241be408b3 kernel/sched_fair.c Peter Zijlstra      2008-11-04  8217  
4ae7d5cefd4aa3 kernel/sched_fair.c Ingo Molnar         2008-03-19  8218  	if (unlikely(se == pse))
4ae7d5cefd4aa3 kernel/sched_fair.c Ingo Molnar         2008-03-19  8219  		return;
4ae7d5cefd4aa3 kernel/sched_fair.c Ingo Molnar         2008-03-19  8220  
5238cdd3873e67 kernel/sched_fair.c Paul Turner         2011-07-21  8221  	/*
163122b7fcfa28 kernel/sched/fair.c Kirill Tkhai        2014-08-20  8222  	 * This is possible from callers such as attach_tasks(), in which we
e23edc86b09df6 kernel/sched/fair.c Ingo Molnar         2023-09-19  8223  	 * unconditionally wakeup_preempt() after an enqueue (which may have
5238cdd3873e67 kernel/sched_fair.c Paul Turner         2011-07-21  8224  	 * lead to a throttle).  This both saves work and prevents false
5238cdd3873e67 kernel/sched_fair.c Paul Turner         2011-07-21  8225  	 * next-buddy nomination below.
5238cdd3873e67 kernel/sched_fair.c Paul Turner         2011-07-21  8226  	 */
5238cdd3873e67 kernel/sched_fair.c Paul Turner         2011-07-21  8227  	if (unlikely(throttled_hierarchy(cfs_rq_of(pse))))
5238cdd3873e67 kernel/sched_fair.c Paul Turner         2011-07-21  8228  		return;
5238cdd3873e67 kernel/sched_fair.c Paul Turner         2011-07-21  8229  
5e963f2bd4654a kernel/sched/fair.c Peter Zijlstra      2023-05-31  8230  	if (sched_feat(NEXT_BUDDY) && !(wake_flags & WF_FORK)) {
02479099c28689 kernel/sched_fair.c Peter Zijlstra      2008-11-04 @8231  		set_next_buddy(pse);
2f36825b176f67 kernel/sched_fair.c Venkatesh Pallipadi 2011-04-14  8232  		next_buddy_marked = 1;
2f36825b176f67 kernel/sched_fair.c Venkatesh Pallipadi 2011-04-14  8233  	}
57fdc26d4a734a kernel/sched_fair.c Peter Zijlstra      2008-09-23  8234  
aec0a5142cb52a kernel/sched_fair.c Bharata B Rao       2008-08-28  8235  	/*
aec0a5142cb52a kernel/sched_fair.c Bharata B Rao       2008-08-28  8236  	 * We can come here with TIF_NEED_RESCHED already set from new task
aec0a5142cb52a kernel/sched_fair.c Bharata B Rao       2008-08-28  8237  	 * wake up path.
5238cdd3873e67 kernel/sched_fair.c Paul Turner         2011-07-21  8238  	 *
5238cdd3873e67 kernel/sched_fair.c Paul Turner         2011-07-21  8239  	 * Note: this also catches the edge-case of curr being in a throttled
5238cdd3873e67 kernel/sched_fair.c Paul Turner         2011-07-21  8240  	 * group (e.g. via set_curr_task), since update_curr() (in the
5238cdd3873e67 kernel/sched_fair.c Paul Turner         2011-07-21  8241  	 * enqueue of curr) will have resulted in resched being set.  This
5238cdd3873e67 kernel/sched_fair.c Paul Turner         2011-07-21  8242  	 * prevents us from potentially nominating it as a false LAST_BUDDY
5238cdd3873e67 kernel/sched_fair.c Paul Turner         2011-07-21  8243  	 * below.
aec0a5142cb52a kernel/sched_fair.c Bharata B Rao       2008-08-28  8244  	 */
aec0a5142cb52a kernel/sched_fair.c Bharata B Rao       2008-08-28  8245  	if (test_tsk_need_resched(curr))
aec0a5142cb52a kernel/sched_fair.c Bharata B Rao       2008-08-28  8246  		return;
aec0a5142cb52a kernel/sched_fair.c Bharata B Rao       2008-08-28  8247  
a2f5c9ab79f78e kernel/sched_fair.c Darren Hart         2011-02-22  8248  	/* Idle tasks are by definition preempted by non-idle tasks. */
1da1843f9f0334 kernel/sched/fair.c Viresh Kumar        2018-11-05  8249  	if (unlikely(task_has_idle_policy(curr)) &&
1da1843f9f0334 kernel/sched/fair.c Viresh Kumar        2018-11-05  8250  	    likely(!task_has_idle_policy(p)))
a2f5c9ab79f78e kernel/sched_fair.c Darren Hart         2011-02-22  8251  		goto preempt;
a2f5c9ab79f78e kernel/sched_fair.c Darren Hart         2011-02-22  8252  
91c234b4e3419c kernel/sched_fair.c Ingo Molnar         2007-10-15  8253  	/*
a2f5c9ab79f78e kernel/sched_fair.c Darren Hart         2011-02-22  8254  	 * Batch and idle tasks do not preempt non-idle tasks (their preemption
a2f5c9ab79f78e kernel/sched_fair.c Darren Hart         2011-02-22  8255  	 * is driven by the tick):
91c234b4e3419c kernel/sched_fair.c Ingo Molnar         2007-10-15  8256  	 */
8ed92e51f99c21 kernel/sched/fair.c Ingo Molnar         2012-10-14  8257  	if (unlikely(p->policy != SCHED_NORMAL) || !sched_feat(WAKEUP_PREEMPTION))
91c234b4e3419c kernel/sched_fair.c Ingo Molnar         2007-10-15  8258  		return;
8651a86c342ab7 kernel/sched_fair.c Srivatsa Vaddagiri  2007-10-15  8259  
464b75273f64be kernel/sched_fair.c Peter Zijlstra      2008-10-24  8260  	find_matching_se(&se, &pse);
09348d75a6ce60 kernel/sched/fair.c Ingo Molnar         2022-08-11  8261  	WARN_ON_ONCE(!pse);
304000390f88d0 kernel/sched/fair.c Josh Don            2021-07-29  8262  
304000390f88d0 kernel/sched/fair.c Josh Don            2021-07-29  8263  	cse_is_idle = se_is_idle(se);
304000390f88d0 kernel/sched/fair.c Josh Don            2021-07-29  8264  	pse_is_idle = se_is_idle(pse);
304000390f88d0 kernel/sched/fair.c Josh Don            2021-07-29  8265  
304000390f88d0 kernel/sched/fair.c Josh Don            2021-07-29  8266  	/*
304000390f88d0 kernel/sched/fair.c Josh Don            2021-07-29  8267  	 * Preempt an idle group in favor of a non-idle group (and don't preempt
304000390f88d0 kernel/sched/fair.c Josh Don            2021-07-29  8268  	 * in the inverse case).
304000390f88d0 kernel/sched/fair.c Josh Don            2021-07-29  8269  	 */
304000390f88d0 kernel/sched/fair.c Josh Don            2021-07-29  8270  	if (cse_is_idle && !pse_is_idle)
304000390f88d0 kernel/sched/fair.c Josh Don            2021-07-29  8271  		goto preempt;
304000390f88d0 kernel/sched/fair.c Josh Don            2021-07-29  8272  	if (cse_is_idle != pse_is_idle)
304000390f88d0 kernel/sched/fair.c Josh Don            2021-07-29  8273  		return;
304000390f88d0 kernel/sched/fair.c Josh Don            2021-07-29  8274  
147f3efaa24182 kernel/sched/fair.c Peter Zijlstra      2023-05-31  8275  	cfs_rq = cfs_rq_of(se);
147f3efaa24182 kernel/sched/fair.c Peter Zijlstra      2023-05-31  8276  	update_curr(cfs_rq);
147f3efaa24182 kernel/sched/fair.c Peter Zijlstra      2023-05-31  8277  
2f36825b176f67 kernel/sched_fair.c Venkatesh Pallipadi 2011-04-14  8278  	/*
147f3efaa24182 kernel/sched/fair.c Peter Zijlstra      2023-05-31  8279  	 * XXX pick_eevdf(cfs_rq) != se ?
2f36825b176f67 kernel/sched_fair.c Venkatesh Pallipadi 2011-04-14  8280  	 */
147f3efaa24182 kernel/sched/fair.c Peter Zijlstra      2023-05-31  8281  	if (pick_eevdf(cfs_rq) == pse)
3a7e73a2e26fff kernel/sched_fair.c Peter Zijlstra      2009-11-28  8282  		goto preempt;
464b75273f64be kernel/sched_fair.c Peter Zijlstra      2008-10-24  8283  
3a7e73a2e26fff kernel/sched_fair.c Peter Zijlstra      2009-11-28  8284  	return;
a65ac745e47e91 kernel/sched_fair.c Jupyung Lee         2009-11-17  8285  
3a7e73a2e26fff kernel/sched_fair.c Peter Zijlstra      2009-11-28  8286  preempt:
8875125efe8402 kernel/sched/fair.c Kirill Tkhai        2014-06-29  8287  	resched_curr(rq);
f685ceacab07d3 kernel/sched_fair.c Mike Galbraith      2009-10-23  8288  }
bf0f6f24a1ece8 kernel/sched_fair.c Ingo Molnar         2007-07-09  8289  

:::::: The code at line 8231 was first introduced by commit
:::::: 02479099c286894644f8e96c6bbb535ab64662fd sched: fix buddies for group scheduling

:::::: TO: Peter Zijlstra <a.p.zijlstra@...llo.nl>
:::::: CC: Ingo Molnar <mingo@...e.hu>

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
