Message-ID: <YwYBasgyIU0iQgL3@cmpxchg.org>
Date: Wed, 24 Aug 2022 06:46:02 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Chengming Zhou <zhouchengming@...edance.com>
Cc: tj@...nel.org, mkoutny@...e.com, surenb@...gle.com,
gregkh@...uxfoundation.org, corbet@....net, mingo@...hat.com,
peterz@...radead.org, songmuchun@...edance.com,
cgroups@...r.kernel.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 07/10] sched/psi: add PSI_IRQ to track IRQ/SOFTIRQ pressure
On Wed, Aug 24, 2022 at 04:18:26PM +0800, Chengming Zhou wrote:
> @@ -903,6 +903,36 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
> }
> }
>
> +#ifdef CONFIG_IRQ_TIME_ACCOUNTING
> +void psi_account_irqtime(struct task_struct *task, u32 delta)
> +{
> +	int cpu = task_cpu(task);
> +	void *iter = NULL;
> +	struct psi_group *group;
> +	struct psi_group_cpu *groupc;
> +	u64 now;
> +
> +	if (!task->pid)
> +		return;
> +
> +	now = cpu_clock(cpu);
> +
> +	while ((group = iterate_groups(task, &iter))) {
> +		groupc = per_cpu_ptr(group->pcpu, cpu);
> +
> +		write_seqcount_begin(&groupc->seq);
> +
> +		record_times(groupc, now);
> +		groupc->times[PSI_IRQ_FULL] += delta;
> +
> +		write_seqcount_end(&groupc->seq);
> +
> +		if (group->poll_states & (1 << PSI_IRQ_FULL))
> +			psi_schedule_poll_work(group, 1);
> +	}

Shouldn't this kick avgs_work too? If the CPU is otherwise idle, nothing
re-arms the averaging work, and the per-cpu times[PSI_IRQ_FULL] counter
is a u32 of nanoseconds: it wraps after about 4.3 seconds, that is,
after roughly two missed 2-second averaging runs.
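
As a rough sketch only (assuming the existing group->avgs_work and
PSI_FREQ from kernel/sched/psi.c can simply be reused here, and that
delayed_work_pending() is an acceptable guard), something like this
next to the poll kick inside the loop:

		/*
		 * Kick the averaging work so the accumulated IRQ time
		 * gets folded into group->total before the per-cpu
		 * u32 counter can wrap.
		 */
		if (!delayed_work_pending(&group->avgs_work))
			schedule_delayed_work(&group->avgs_work, PSI_FREQ);
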
avgs_work should probably also self-perpetuate when PSI_IRQ_FULL shows
up in changed_states. (Looking at that code, I think it can be
simplified: delete the nonidle bool and re-arm on
`if (changed_states) schedule_delayed_work(...)` instead.)
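
For illustration, a sketch of that simplification written against my
reading of the psi_avgs_work() tail (collect_percpu_times(),
update_averages(), avg_next_update, dwork), so treat the surrounding
context as assumed rather than verified:

	collect_percpu_times(group, PSI_AVGS, &changed_states);

	if (now >= group->avg_next_update)
		group->avg_next_update = update_averages(group, now);

	/*
	 * Re-arm whenever anything at all changed since the last run,
	 * so IRQ-only activity keeps the averages ticking too.
	 */
	if (changed_states)
		schedule_delayed_work(dwork, nsecs_to_jiffies(
				group->avg_next_update - now) + 1);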