Message-ID: <20220208164749.GB230002@windriver.com>
Date: Tue, 8 Feb 2022 11:47:49 -0500
From: Paul Gortmaker <paul.gortmaker@...driver.com>
To: Frederic Weisbecker <frederic@...nel.org>,
Peter Zijlstra <peterz@...radead.org>
Cc: LKML <linux-kernel@...r.kernel.org>, Tejun Heo <tj@...nel.org>,
Christoph Lameter <cl@...two.de>,
Juri Lelli <juri.lelli@...hat.com>,
Alex Belits <abelits@...vell.com>,
Nitesh Lal <nilal@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Nicolas Saenz <nsaenzju@...hat.com>,
"Paul E . McKenney" <paulmck@...nel.org>,
Phil Auld <pauld@...hat.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
Zefan Li <lizefan.x@...edance.com>
Subject: Revisiting the current set of nohz / housekeeping flags before export
The isolation flags [some/all?] are someday going to be exposed via cgroups.
Frederic has a series out for review now that moves us closer to doing that:
https://lore.kernel.org/all/20220207155910.527133-1-frederic@kernel.org/
They will be user-exposed tuning knobs, no longer just hidden away in the
source with a subset linked to boot args used by a small set of people.
As such, we'll need clear user-facing descriptions of each, and specifics of
what they control/alter. That was the initial thought that got me here.
When the 1st group of flags was introduced (2017), they were split at a fine
grained level, leaving the door open to re-merging some flags later if natural
groupings arose. (see the de201559df8 log)
Prior to the elevated userspace exposure they'll get via cgroups (and via
adding a Documentation/isolation.rst or similar) it probably makes sense
to revisit possible flag merges now, before they become cemented into API
and thus essentially leave us stuck with the choices forever.
Open questions:
-should HK_FLAG_SCHED be squashed into HK_FLAG_MISC? It isn't set anywhere,
and the name can be misleading to new users, in that it "sounds like" the main
isolation flag (vs the "real" one, which is essentially !HK_FLAG_DOMAIN)
-should HK_FLAG_RCU be squashed into ... ? HK_FLAG_MISC ? It is only used
for rcu_online/offline of a CPU and the name might inadvertently suggest
that it is somehow related to rcu_nocbs= (when it isn't).
-do we need HK_FLAG_WQ? Currently it is always OR'd with DOMAIN. Or should we
change to select from WQ, and then fall back to DOMAIN iff the WQ set is
empty? (see the sketch just below)
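As a strawman for that last question, here is a minimal sketch of the "fall
back to DOMAIN" selection. Hypothetical code only: it assumes the WQ
housekeeping mask could come back empty when no WQ CPUs were requested, which
is not how the current housekeeping_cpumask() helper behaves (it returns
cpu_possible_mask for unset flags):

static void __init wq_select_housekeeping_mask(void)   /* hypothetical */
{
        const struct cpumask *mask = housekeeping_cpumask(HK_FLAG_WQ);

        if (cpumask_empty(mask))        /* no dedicated WQ CPUs requested */
                mask = housekeeping_cpumask(HK_FLAG_DOMAIN);

        cpumask_copy(wq_unbound_cpumask, mask);
}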
To better address this, and to answer "how did we get here, and who is using
what flags where currently?", I found myself making some notes to gather all
that kind of information in one place.
So what follows below are my rough notes on the housekeeping flags - a
combination of source snippets, commit references, etc. Maybe it provides
others a shortcut to the overall picture w/o duplicating the work.
From here, hopefully we can decide whether we are OK with the flags as they
are, and I can take what remains and try to address the documentation I think
we'll need, as per what I outlined at the top.
Paul.
======================
Current (5.17) Housekeeping Flag Set:
-------------------------------------
de201559df872 (Frederic Weisbecker 2017-10-27 8) enum hk_flags {
de201559df872 (Frederic Weisbecker 2017-10-27 9) HK_FLAG_TIMER = 1,
de201559df872 (Frederic Weisbecker 2017-10-27 10) HK_FLAG_RCU = (1 << 1),
de201559df872 (Frederic Weisbecker 2017-10-27 11) HK_FLAG_MISC = (1 << 2),
de201559df872 (Frederic Weisbecker 2017-10-27 12) HK_FLAG_SCHED = (1 << 3),
6f1982fedd598 (Frederic Weisbecker 2017-10-27 13) HK_FLAG_TICK = (1 << 4),
edb9382175c3e (Frederic Weisbecker 2017-10-27 14) HK_FLAG_DOMAIN = (1 << 5),
1bda3f8087fce (Frederic Weisbecker 2018-02-21 15) HK_FLAG_WQ = (1 << 6),
11ea68f553e24 (Ming Lei 2020-01-20 16) HK_FLAG_MANAGED_IRQ = (1 << 7),
9cc5b8656892a (Marcelo Tosatti 2020-05-27 17) HK_FLAG_KTHREAD = (1 << 8),
de201559df872 (Frederic Weisbecker 2017-10-27 18) };
Note that we currently don't set any "default" flags. Assignment only happens
via use of the isolcpus= and/or nohz_full= boot args.
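For example, on an 8-CPU box, both of the following leave CPUs 0-1 as the
housekeeping CPUs, just with different flag sets (per the setup functions
quoted below):

    nohz_full=2-7                         -> TICK|WQ|TIMER|RCU|MISC|KTHREAD
    isolcpus=nohz,domain,managed_irq,2-7  -> TICK|DOMAIN|MANAGED_IRQ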
Further, the "default" (i.e. no boot args) case skips all housekeeping flag
considerations entirely via the static key "housekeeping_overridden", which
gates all the various housekeeping_cpu() type query functions.
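For reference, the query fast path (lightly trimmed from
include/linux/sched/isolation.h; the trailing comment is mine):

static inline bool housekeeping_cpu(int cpu, enum hk_flags flags)
{
#ifdef CONFIG_CPU_ISOLATION
        if (static_branch_unlikely(&housekeeping_overridden))
                return housekeeping_test_cpu(cpu, flags);
#endif
        return true;    /* no isolation configured; all CPUs do housekeeping */
}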
1st split:
-----------
commit de201559df872f83d0c08fb4effe3efd28e6cbc8
Author: Frederic Weisbecker <frederic@...nel.org>
Date: Fri Oct 27 04:42:35 2017 +0200
sched/isolation: Introduce housekeeping flags
Before we implement isolcpus under housekeeping, we need the isolation
features to be more finegrained. For example some people want NOHZ_FULL
without the full scheduler isolation, others want full scheduler
isolation without NOHZ_FULL.
So let's cut all these isolation features piecewise, at the risk of
overcutting it right now. We can still merge some flags later if they
always make sense together.
+enum hk_flags {
+ HK_FLAG_TIMER = 1,
+ HK_FLAG_RCU = (1 << 1),
+ HK_FLAG_MISC = (1 << 2),
+ HK_FLAG_SCHED = (1 << 3),
+};
TICK and DOMAIN appear (added at the same time):
------------------------------------
commit 6f1982fedd59856bcc42a9b521be4c3ffd2f60a7
Author: Frederic Weisbecker <frederic@...nel.org>
Date: Fri Oct 27 04:42:36 2017 +0200
sched/isolation: Handle the nohz_full= parameter
We want to centralize the isolation management, done by the housekeeping
subsystem. Therefore we need to handle the nohz_full= parameter from
there.
Since nohz_full= so far has involved unbound timers, watchdog, RCU
and tilegx NAPI isolation, we keep that default behaviour.
nohz_full= will be deprecated in the future. We want to control
the isolation features from the isolcpus= parameter.
HK_FLAG_SCHED = (1 << 3),
+ HK_FLAG_TICK = (1 << 4),
};
commit edb9382175c3ebdced8ffdb3e0f20052ad9fdbe9
Author: Frederic Weisbecker <frederic@...nel.org>
Date: Fri Oct 27 04:42:37 2017 +0200
sched/isolation: Move isolcpus= handling to the housekeeping code
We want to centralize the isolation features, to be done by the housekeeping
subsystem and scheduler domain isolation is a significant part of it.
No intended behaviour change, we just reuse the housekeeping cpumask
and core code.
HK_FLAG_TICK = (1 << 4),
+ HK_FLAG_DOMAIN = (1 << 5),
};
Current (5.17) housekeeping NOHZ_FULL Base Flags:
-------------------------------------------------
static int __init housekeeping_nohz_full_setup(char *str)
{
        unsigned int flags;

        flags = HK_FLAG_TICK | HK_FLAG_WQ | HK_FLAG_TIMER | HK_FLAG_RCU |
                HK_FLAG_MISC | HK_FLAG_KTHREAD;

        return housekeeping_setup(str, flags);
}
__setup("nohz_full=", housekeeping_nohz_full_setup);
[Note: SCHED not in above, but probably should be? (see discussion below).]
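A sketch of what adding it would look like (for illustration only):

        flags = HK_FLAG_TICK | HK_FLAG_WQ | HK_FLAG_TIMER | HK_FLAG_RCU |
                HK_FLAG_MISC | HK_FLAG_KTHREAD | HK_FLAG_SCHED;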
Individual Flags accessible via isolcpus= (currently):
-----------------------------------------------------
static int __init housekeeping_isolcpus_setup(char *str)
{
        unsigned int flags = 0;
        [...]
        if (!strncmp(str, "nohz,", 5)) {
                [...]
                flags |= HK_FLAG_TICK;
        [...]
        if (!strncmp(str, "domain,", 7)) {
                [...]
                flags |= HK_FLAG_DOMAIN;
        [...]
        if (!strncmp(str, "managed_irq,", 12)) {
                [...]
                flags |= HK_FLAG_MANAGED_IRQ;
        [...]
        /* Default behaviour for isolcpus without flags */
        if (!flags)
                flags |= HK_FLAG_DOMAIN;

        return housekeeping_setup(str, flags);
}
HK_FLAG_<name> Bit Breakdown and where used:
============================================
DOMAIN: (put 1st so as to not "bury the lead")
------------------------------------------------------------------------------
--if set, this core appears in the scheduler domain hierarchy and is available
for "normal" use. See Documentation/scheduler/sched-domains.rst
--can consider this the "main" isolation flag, but inverted; it NOT being set
dictates what gets printed from /sys/devices/system/cpu/isolated
drivers/base/cpu.c
static ssize_t print_cpus_isolated(struct device *dev,
                                   struct device_attribute *attr, char *buf)
{
        cpumask_var_t isolated;
        [...]
        cpumask_andnot(isolated, cpu_possible_mask,
                       housekeeping_cpumask(HK_FLAG_DOMAIN));
        len = sysfs_emit(buf, "%*pbl\n", cpumask_pr_args(isolated));
--the init task is moved over to a housekeeping (DOMAIN) CPU here:
kernel/sched/core.c
/* Move init over to a non-isolated CPU */
if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_FLAG_DOMAIN)) < 0)
BUG();
--cgroups (cpuset) only builds scheduler domains over CPUs with it set
kernel/cgroup/cpuset.c
cpumask_and(doms[0], top_cpuset.effective_cpus,
housekeeping_cpumask(HK_FLAG_DOMAIN));
--asymmetry scan only covers DOMAIN CPUs
kernel/sched/topology.c
for_each_cpu_and(cpu, cpu_possible_mask, housekeeping_cpumask(HK_FLAG_DOMAIN))
asym_cpu_capacity_update_data(cpu);
--unbound workqueues run only on DOMAIN (or dedicated WQ) CPUs.
kernel/workqueue.c
void __init workqueue_init_early(void)
{
        int hk_flags = HK_FLAG_DOMAIN | HK_FLAG_WQ;
        [...]
        cpumask_copy(wq_unbound_cpumask, housekeeping_cpumask(hk_flags));
...and see other DOMAIN | WQ instances below
SCHED:
------------------------------------------------------------------------------
--note: intention seems to have been scheduler *housekeeping* (idle rebalance
etc) and should NOT be interpreted as "please schedule stuff here".
Note wrt. SCHED vs. MISC:
* idle load balancing details
* - When one of the busy CPUs notice that there may be an idle rebalancing
* needed, they will kick the idle load balancer, which then does idle
* load balancing for all the idle CPUs.
* - HK_FLAG_MISC CPUs are used for this task, because HK_FLAG_SCHED not set
* anywhere yet.
...and even though SCHED isn't set, we do have tests for it:
void nohz_balance_enter_idle(int cpu)
{
        [...]
        /* Spare idle load balancing on CPUs that don't want to be disturbed: */
        if (!housekeeping_cpu(cpu, HK_FLAG_SCHED))
                return;

static void nohz_newidle_balance(struct rq *this_rq)
{
        [...]
        if (!housekeeping_cpu(this_cpu, HK_FLAG_SCHED))
                return;
See also:
https://lore.kernel.org/all/20200401121342.930480720@redhat.com/
...which actually sets SCHED in default nohz_full mask, but was never merged.
WQ:
------------------------------------------------------------------------------
--workqueues: note that it is *currently* always used as:
int hk_flags = HK_FLAG_DOMAIN | HK_FLAG_WQ;
FW is splitting the assignment in two, but that doesn't change the
fact that the two flags have always been used OR'd together and still will be.
commit 1bda3f8087fce9063da0b8aef87f17a3fe541aca
Author: Frederic Weisbecker <frederic@...nel.org>
Date: Wed Feb 21 05:17:26 2018 +0100
sched/isolation: Isolate workqueues when "nohz_full=" is set
kernel/workqueue.c
void __init workqueue_init_early(void)
{
        int hk_flags = HK_FLAG_DOMAIN | HK_FLAG_WQ;
        [...]
        cpumask_copy(wq_unbound_cpumask, housekeeping_cpumask(hk_flags));
--------------------------------------
--networking and Rx pkt steering (see desc in commit log)
commit 0a9627f2649a02bea165cfd529d7bcb625c2fcad
Author: Tom Herbert <therbert@...gle.com>
Date: Tue Mar 16 08:03:29 2010 +0000
rps: Receive Packet Steering
...went ~10y before being limited to housekeeping CPUs
commit 07bbecb3410617816a99e76a2df7576507a0c8ad
Author: Alex Belits <abelits@...vell.com>
Date: Thu Jun 25 18:34:43 2020 -0400
net: Restrict receive packets queuing to housekeeping CPUs
net/core/net-sysfs.c
static ssize_t store_rps_map(struct netdev_rx_queue *queue,
[...]
        if (!cpumask_empty(mask)) {
                hk_flags = HK_FLAG_DOMAIN | HK_FLAG_WQ;
                cpumask_and(mask, mask, housekeeping_cpumask(hk_flags));
                if (cpumask_empty(mask)) {
                        free_cpumask_var(mask);
                        return -EINVAL;
--------------------------------------
--directing PCI probes and the resources they subsequently allocate/set up.
commit 69a18b18699b59654333651d95f8ca09d01048f8
Author: Alex Belits <abelits@...vell.com>
Date: Thu Jun 25 18:34:42 2020 -0400
PCI: Restrict probe functions to housekeeping CPUs
drivers/pci/pci-driver.c
static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
[...]
        int hk_flags = HK_FLAG_DOMAIN | HK_FLAG_WQ;

        /*
         * Prevent nesting work_on_cpu() for the case where a Virtual Function
         * device is probed from work_on_cpu() of the Physical device.
         */
        if (node < 0 || node >= MAX_NUMNODES || !node_online(node) ||
            pci_physfn_is_probed(dev))
                cpu = nr_cpu_ids;
        else
                cpu = cpumask_any_and(cpumask_of_node(node),
                                      housekeeping_cpumask(hk_flags));

        if (cpu < nr_cpu_ids)
                error = work_on_cpu(cpu, local_pci_probe, &ddi);
        else
                error = local_pci_probe(&ddi);
MISC:
------------------------------------------------------------------------------
--idle load balance; note comment - would have used SCHED if it could have
commit 9b019acb72e4b5741d88e8936d6f200ed44b66b2
Author: Nicholas Piggin <npiggin@...il.com>
Date: Fri Apr 12 14:26:13 2019 +1000
sched/nohz: Run NOHZ idle load balancer on HK_FLAG_MISC CPUs
kernel/sched/fair.c
* - HK_FLAG_MISC CPUs are used for this task, because HK_FLAG_SCHED not set
* anywhere yet.
static inline int find_new_ilb(void)
{
        int ilb;
        const struct cpumask *hk_mask;

        hk_mask = housekeeping_cpumask(HK_FLAG_MISC);

        for_each_cpu_and(ilb, nohz.idle_cpus_mask, hk_mask) {
--blocking core frequency probe requests from !MISC cores
arch/x86/kernel/cpu/aperfmperf.c
commit cc9e303c91f5c25c49a4312552841f4c23fa2b69
Author: Konstantin Khlebnikov <koct9i@...il.com>
Date: Wed May 15 09:59:00 2019 +0300
x86/cpu: Disable frequency requests via aperfmperf IPI for nohz_full CPUs
@@ -85,6 +86,9 @@ unsigned int aperfmperf_get_khz(int cpu)
+ if (!housekeeping_cpu(cpu, HK_FLAG_MISC))
+ return 0;
@@ -101,9 +105,12 @@ void arch_freq_prepare_all(void)
- for_each_online_cpu(cpu)
+ for_each_online_cpu(cpu) {
+ if (!housekeeping_cpu(cpu, HK_FLAG_MISC))
+ continue;
if (!aperfmperf_snapshot_cpu(cpu, now, false))
RCU:
------------------------------------------------------------------------------
--currently only used on the CPU add/remove (hotplug) paths
kernel/rcu/tree_plugin.h
/*
 * We don't include outgoingcpu in the affinity set, use -1 if there is
 * no outgoing CPU.  If there are no CPUs left in the affinity set,
 * this function allows the kthread to execute on any CPU.
 */
static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu)
{
        struct task_struct *t = rnp->boost_kthread_task;
        unsigned long mask = rcu_rnp_online_cpus(rnp);
        cpumask_var_t cm;
        int cpu;

        if (!t)
                return;
        if (!zalloc_cpumask_var(&cm, GFP_KERNEL))
                return;
        for_each_leaf_node_possible_cpu(rnp, cpu)
                if ((mask & leaf_node_cpu_bit(rnp, cpu)) &&
                    cpu != outgoingcpu)
                        cpumask_set_cpu(cpu, cm);
        cpumask_and(cm, cm, housekeeping_cpumask(HK_FLAG_RCU));
        if (cpumask_weight(cm) == 0)
                cpumask_copy(cm, housekeeping_cpumask(HK_FLAG_RCU));
        set_cpus_allowed_ptr(t, cm);
        free_cpumask_var(cm);
}
------ rcu_boost_kthread_setaffinity -----
/*
 * The CPU has been completely removed, and some other CPU is reporting
 * this fact from process context.  Do the remainder of the cleanup.
 * There can only be one CPU hotplug operation at a time, so no need for
 * explicit locking.
 */
int rcutree_dead_cpu(unsigned int cpu)
{
        struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
        struct rcu_node *rnp = rdp->mynode;  /* Outgoing CPU's rdp & rnp. */

        if (!IS_ENABLED(CONFIG_HOTPLUG_CPU))
                return 0;

        WRITE_ONCE(rcu_state.n_online_cpus, rcu_state.n_online_cpus - 1);
        /* Adjust any no-longer-needed kthreads. */
        rcu_boost_kthread_setaffinity(rnp, -1);
-----------------------------------------
/* Update RCU priority boost kthread affinity for CPU-hotplug changes. */
static void rcutree_affinity_setting(unsigned int cpu, int outgoing)
{
        struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);

        rcu_boost_kthread_setaffinity(rdp->mynode, outgoing);
}

/*
 * Near the end of the CPU-online process.  Pretty much all services
 * enabled, and the CPU is now very much alive.
 */
int rcutree_online_cpu(unsigned int cpu)
{
        ...
        rcutree_affinity_setting(cpu, -1);

/*
 * Near the beginning of the process.  The CPU is still very much alive
 * with pretty much all services enabled.
 */
int rcutree_offline_cpu(unsigned int cpu)
{
        ...
        rcutree_affinity_setting(cpu, cpu);
-----------------------------------------
MANAGED_IRQ:
------------------------------------------------------------------------------
--avoid (not prevent) IRQ traffic landing on isolated cores
commit 11ea68f553e244851d15793a7fa33a97c46d8271
Author: Ming Lei <ming.lei@...hat.com>
Date: Mon Jan 20 17:16:25 2020 +0800
genirq, sched/isolation: Isolate from handling managed interrupts
The affinity of managed interrupts is completely handled in the kernel and
cannot be changed via the /proc/irq/* interfaces from user space. [...]
kernel/irq/cpuhotplug.c
static bool hk_should_isolate(struct irq_data *data, unsigned int cpu)
{
        const struct cpumask *hk_mask;

        if (!housekeeping_enabled(HK_FLAG_MANAGED_IRQ))
                return false;
--------------------------------------
kernel/irq/manage.c
int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
/*
* If this is a managed interrupt and housekeeping is enabled on
* it check whether the requested affinity mask intersects with
* a housekeeping CPU. If so, then remove the isolated CPUs from
* the mask and just keep the housekeeping CPU(s). This prevents
* the affinity setter from routing the interrupt to an isolated
* CPU to avoid that I/O submitted from a housekeeping CPU causes
* interrupts on an isolated one.
*
* If the masks do not intersect or include online CPU(s) then
* keep the requested mask. The isolated target CPUs are only
* receiving interrupts when the I/O operation was submitted
* directly from them.
*
* If all housekeeping CPUs in the affinity mask are offline, the
* interrupt will be migrated by the CPU hotplug code once a
* housekeeping CPU which belongs to the affinity mask comes
* online.
*/
KTHREAD:
------------------------------------------------------------------------------
commit 9cc5b8656892a72438ee7deb5e80f5be47643b8b
Author: Marcelo Tosatti <mtosatti@...hat.com>
Date: Wed May 27 16:29:09 2020 +0200
isolcpus: Affine unbound kernel threads to housekeeping cpus
This is a kernel enhancement that configures the cpu affinity of kernel
threads via kernel boot option nohz_full=.
When this option is specified, the cpumask is immediately applied upon
kthread launch. This does not affect kernel threads that specify cpu
and node.
This allows CPU isolation (that is not allowing certain threads
to execute on certain CPUs) without using the isolcpus=domain parameter,
making it possible to enable load balancing on such CPUs
during runtime (see kernel-parameters.txt).
kernel/kthread.c
static int kthread(void *_create)
{
        [...]
        /*
         * The new thread inherited kthreadd's priority and CPU mask. Reset
         * back to default in case they have been changed.
         */
        sched_setscheduler_nocheck(current, SCHED_NORMAL, &param);
        set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_FLAG_KTHREAD));

int kthreadd(void *unused)
{
        [...]
        /* Setup a clean context for our children to inherit. */
        set_task_comm(tsk, "kthreadd");
        ignore_signals(tsk);
        set_cpus_allowed_ptr(tsk, housekeeping_cpumask(HK_FLAG_KTHREAD));
TICK:
------------------------------------------------------------------------------
--triggers scheduler tick offload at boot:
kernel/sched/isolation.c
void __init housekeeping_init(void)
{
        if (!housekeeping_flags)
                return;

        static_branch_enable(&housekeeping_overridden);

        if (housekeeping_flags & HK_FLAG_TICK)
                sched_tick_offload_init();

        /* We need at least one CPU to handle housekeeping work */
        WARN_ON_ONCE(cpumask_empty(housekeeping_mask));
}
--assumes tick is already on and never turned off for TICK cores.
kernel/sched/core.c
static void sched_tick_start(int cpu)
{
        int os;
        struct tick_work *twork;

        if (housekeeping_cpu(cpu, HK_FLAG_TICK))
                return;
---------------
#ifdef CONFIG_HOTPLUG_CPU
static void sched_tick_stop(int cpu)
{
        struct tick_work *twork;
        int os;

        if (housekeeping_cpu(cpu, HK_FLAG_TICK))
                return;
TIMER:
-------------------------------------------------------------------------
arch/x86/kvm/x86.c
int kvm_arch_init(void *opaque)
{
        [...]
        if (pi_inject_timer == -1)
                pi_inject_timer = housekeeping_enabled(HK_FLAG_TIMER);
kernel/cpu.c
int freeze_secondary_cpus(int primary)
{
        [...]
        if (primary == -1) {
                primary = cpumask_first(cpu_online_mask);
                if (!housekeeping_cpu(primary, HK_FLAG_TIMER))
                        primary = housekeeping_any_cpu(HK_FLAG_TIMER);
        } else {
--timers and timer migration:
kernel/sched/core.c
int get_nohz_timer_target(void)    <------- users below
{
        int i, cpu = smp_processor_id(), default_cpu = -1;
        [...]
        if (housekeeping_cpu(cpu, HK_FLAG_TIMER)) {
                if (!idle_cpu(cpu))
                        return cpu;
                default_cpu = cpu;
        }

        hk_mask = housekeeping_cpumask(HK_FLAG_TIMER);
        [...]
        if (default_cpu == -1)
                default_cpu = housekeeping_any_cpu(HK_FLAG_TIMER);
----- get_nohz_timer_target -----
kernel/time/timer.c
static inline struct timer_base *
get_target_base(struct timer_base *base, unsigned tflags)
{
#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
        if (static_branch_likely(&timers_migration_enabled) &&
            !(tflags & TIMER_PINNED))
                return get_timer_cpu_base(tflags, get_nohz_timer_target());
                                                  ^^^^^^^^^^^^^^^^^^^^^
#endif
        return get_timer_this_cpu_base(tflags);
}

__mod_timer(struct timer_list *timer, unsigned long expires, unsigned int options)
{
        [....]
        new_base = get_target_base(base, timer->flags);
                   ^^^^^^^^^^^^^^^
        if (base != new_base) {
        [ ... ]
-------------------------------
kernel/time/hrtimer.c
static inline
struct hrtimer_cpu_base *get_target_base(struct hrtimer_cpu_base *base,
                                         int pinned)
{
#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
        if (static_branch_likely(&timers_migration_enabled) && !pinned)
                return &per_cpu(hrtimer_bases, get_nohz_timer_target());
                                               ^^^^^^^^^^^^^^^^^^^^^
#endif
        return base;
}

static inline struct hrtimer_clock_base *
switch_hrtimer_base(struct hrtimer *timer, struct hrtimer_clock_base *base,
                    int pinned)
{
        struct hrtimer_cpu_base *new_cpu_base, *this_cpu_base;
        struct hrtimer_clock_base *new_base;
        int basenum = base->index;

        this_cpu_base = this_cpu_ptr(&hrtimer_bases);
        new_cpu_base = get_target_base(this_cpu_base, pinned);
                       ^^^^^^^^^^^^^^^
again:
        new_base = &new_cpu_base->clock_base[basenum];

        if (base != new_base) {
        [ ... ]
-------------------------------
For influence of housekeeping on get_nohz_timer_target() see also
commit 9642d18eee2cd169b60c6ac0f20bda745b5a3d1e
Author: Vatika Harlalka <vatikaharlalka@...il.com>
Date: Tue Sep 1 16:50:59 2015 +0200
nohz: Affine unpinned timers to housekeepers
-------------------------------
kernel/watchdog.c
commit 314b08ff5205420d956d14657e16d92c460a6f21
Author: Frederic Weisbecker <fweisbec@...il.com>
Date: Fri Sep 4 15:45:09 2015 -0700
watchdog: simplify housekeeping affinity with the appropriate mask
void __init lockup_detector_init(void)
{
        if (tick_nohz_full_enabled())
                pr_info("Disabling watchdog on nohz_full cores by default\n");

        cpumask_copy(&watchdog_cpumask,
                     housekeeping_cpumask(HK_FLAG_TIMER));
------------- end ----------------