Message-ID: <e95c6e55-a023-b6f7-1dce-4195dc22114a@bytedance.com>
Date: Wed, 7 Sep 2022 10:45:55 +0800
From: Chengming Zhou <zhouchengming@...edance.com>
To: kernel test robot <lkp@...el.com>,
Peter Zijlstra <peterz@...radead.org>,
Johannes Weiner <hannes@...xchg.org>, Tejun Heo <tj@...nel.org>
Cc: kbuild-all@...ts.01.org, linux-kernel@...r.kernel.org
Subject: Re: [peterz-queue:sched/psi 11/11] include/linux/cgroup-defs.h:432:38: error: 'NR_PSI_RESOURCES' undeclared here (not in a function)
On 2022/9/7 02:33, kernel test robot wrote:
> tree: https://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git sched/psi
> head: 51beb408c569e516780c84a2020920432ad4c5ed
> commit: 51beb408c569e516780c84a2020920432ad4c5ed [11/11] sched/psi: Per-cgroup PSI accounting disable/re-enable interface
> config: i386-randconfig-a001
> compiler: gcc-11 (Debian 11.3.0-5) 11.3.0
> reproduce (this is a W=1 build):
> # https://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git/commit/?id=51beb408c569e516780c84a2020920432ad4c5ed
> git remote add peterz-queue https://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git
> git fetch --no-tags peterz-queue sched/psi
> git checkout 51beb408c569e516780c84a2020920432ad4c5ed
> # save the config file
> mkdir build_dir && cp config build_dir/.config
> make W=1 O=build_dir ARCH=i386 prepare
>
> If you fix the issue, kindly add following tag where applicable
> Reported-by: kernel test robot <lkp@...el.com>
>
> All errors (new ones prefixed by >>):
>
> In file included from include/linux/cgroup.h:28,
> from include/linux/memcontrol.h:13,
> from include/linux/swap.h:9,
> from include/linux/suspend.h:5,
> from arch/x86/kernel/asm-offsets.c:13:
>>> include/linux/cgroup-defs.h:432:38: error: 'NR_PSI_RESOURCES' undeclared here (not in a function)
> 432 | struct cgroup_file psi_files[NR_PSI_RESOURCES];
Sorry, it looks like there are two problems here:

1. NR_PSI_RESOURCES is undeclared when !CONFIG_PSI.

Should I send the diff below as a separate patch?
diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
index ab1f9b463df9..6e4372735068 100644
--- a/include/linux/psi_types.h
+++ b/include/linux/psi_types.h
@@ -195,6 +195,8 @@ struct psi_group {
 
 #else /* CONFIG_PSI */
 
+#define NR_PSI_RESOURCES	0
+
 struct psi_group { };
 
 #endif /* CONFIG_PSI */
2. This patchset depends on Tejun's commit e2691f6b44ed ("cgroup: Implement cgroup_file_show()") in linux-next.

Maybe peterz-queue should include that commit first? I'm not sure what the normal way to handle this is.
Thanks.
> | ^~~~~~~~~~~~~~~~
> make[2]: *** [scripts/Makefile.build:117: arch/x86/kernel/asm-offsets.s] Error 1
> make[2]: Target '__build' not remade because of errors.
> make[1]: *** [Makefile:1205: prepare0] Error 2
> make[1]: Target 'prepare' not remade because of errors.
> make: *** [Makefile:222: __sub-make] Error 2
> make: Target 'prepare' not remade because of errors.
>
>
> vim +/NR_PSI_RESOURCES +432 include/linux/cgroup-defs.h
>
> 377
> 378 struct cgroup {
> 379 /* self css with NULL ->ss, points back to this cgroup */
> 380 struct cgroup_subsys_state self;
> 381
> 382 unsigned long flags; /* "unsigned long" so bitops work */
> 383
> 384 /*
> 385 * The depth this cgroup is at. The root is at depth zero and each
> 386 * step down the hierarchy increments the level. This along with
> 387 * ancestor_ids[] can determine whether a given cgroup is a
> 388 * descendant of another without traversing the hierarchy.
> 389 */
> 390 int level;
> 391
> 392 /* Maximum allowed descent tree depth */
> 393 int max_depth;
> 394
> 395 /*
> 396 * Keep track of total numbers of visible and dying descent cgroups.
> 397 * Dying cgroups are cgroups which were deleted by a user,
> 398 * but are still existing because someone else is holding a reference.
> 399 * max_descendants is a maximum allowed number of descent cgroups.
> 400 *
> 401 * nr_descendants and nr_dying_descendants are protected
> 402 * by cgroup_mutex and css_set_lock. It's fine to read them holding
> 403 * any of cgroup_mutex and css_set_lock; for writing both locks
> 404 * should be held.
> 405 */
> 406 int nr_descendants;
> 407 int nr_dying_descendants;
> 408 int max_descendants;
> 409
> 410 /*
> 411 * Each non-empty css_set associated with this cgroup contributes
> 412 * one to nr_populated_csets. The counter is zero iff this cgroup
> 413 * doesn't have any tasks.
> 414 *
> 415 * All children which have non-zero nr_populated_csets and/or
> 416 * nr_populated_children of their own contribute one to either
> 417 * nr_populated_domain_children or nr_populated_threaded_children
> 418 * depending on their type. Each counter is zero iff all cgroups
> 419 * of the type in the subtree proper don't have any tasks.
> 420 */
> 421 int nr_populated_csets;
> 422 int nr_populated_domain_children;
> 423 int nr_populated_threaded_children;
> 424
> 425 int nr_threaded_children; /* # of live threaded child cgroups */
> 426
> 427 struct kernfs_node *kn; /* cgroup kernfs entry */
> 428 struct cgroup_file procs_file; /* handle for "cgroup.procs" */
> 429 struct cgroup_file events_file; /* handle for "cgroup.events" */
> 430
> 431 /* handles for "{cpu,memory,io,irq}.pressure" */
> > 432 struct cgroup_file psi_files[NR_PSI_RESOURCES];
> 433
> 434 /*
> 435 * The bitmask of subsystems enabled on the child cgroups.
> 436 * ->subtree_control is the one configured through
> 437 * "cgroup.subtree_control" while ->subtree_ss_mask is the effective
> 438 * one which may have more subsystems enabled. Controller knobs
> 439 * are made available iff it's enabled in ->subtree_control.
> 440 */
> 441 u16 subtree_control;
> 442 u16 subtree_ss_mask;
> 443 u16 old_subtree_control;
> 444 u16 old_subtree_ss_mask;
> 445
> 446 /* Private pointers for each registered subsystem */
> 447 struct cgroup_subsys_state __rcu *subsys[CGROUP_SUBSYS_COUNT];
> 448
> 449 struct cgroup_root *root;
> 450
> 451 /*
> 452 * List of cgrp_cset_links pointing at css_sets with tasks in this
> 453 * cgroup. Protected by css_set_lock.
> 454 */
> 455 struct list_head cset_links;
> 456
> 457 /*
> 458 * On the default hierarchy, a css_set for a cgroup with some
> 459 * susbsys disabled will point to css's which are associated with
> 460 * the closest ancestor which has the subsys enabled. The
> 461 * following lists all css_sets which point to this cgroup's css
> 462 * for the given subsystem.
> 463 */
> 464 struct list_head e_csets[CGROUP_SUBSYS_COUNT];
> 465
> 466 /*
> 467 * If !threaded, self. If threaded, it points to the nearest
> 468 * domain ancestor. Inside a threaded subtree, cgroups are exempt
> 469 * from process granularity and no-internal-task constraint.
> 470 * Domain level resource consumptions which aren't tied to a
> 471 * specific task are charged to the dom_cgrp.
> 472 */
> 473 struct cgroup *dom_cgrp;
> 474 struct cgroup *old_dom_cgrp; /* used while enabling threaded */
> 475
> 476 /* per-cpu recursive resource statistics */
> 477 struct cgroup_rstat_cpu __percpu *rstat_cpu;
> 478 struct list_head rstat_css_list;
> 479
> 480 /* cgroup basic resource statistics */
> 481 struct cgroup_base_stat last_bstat;
> 482 struct cgroup_base_stat bstat;
> 483 struct prev_cputime prev_cputime; /* for printing out cputime */
> 484
> 485 /*
> 486 * list of pidlists, up to two for each namespace (one for procs, one
> 487 * for tasks); created on demand.
> 488 */
> 489 struct list_head pidlists;
> 490 struct mutex pidlist_mutex;
> 491
> 492 /* used to wait for offlining of csses */
> 493 wait_queue_head_t offline_waitq;
> 494
> 495 /* used to schedule release agent */
> 496 struct work_struct release_agent_work;
> 497
> 498 /* used to track pressure stalls */
> 499 struct psi_group *psi;
> 500
> 501 /* used to store eBPF programs */
> 502 struct cgroup_bpf bpf;
> 503
> 504 /* If there is block congestion on this cgroup. */
> 505 atomic_t congestion_count;
> 506
> 507 /* Used to store internal freezer state */
> 508 struct cgroup_freezer_state freezer;
> 509
> 510 /* ids of the ancestors at each level including self */
> 511 u64 ancestor_ids[];
> 512 };
> 513
>