Message-ID: <Y0g0UAAJMhPczNm/@cmpxchg.org>
Date: Thu, 13 Oct 2022 11:52:48 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Chengming Zhou <zhouchengming@...edance.com>
Cc: Suren Baghdasaryan <surenb@...gle.com>, quic_pkondeti@...cinc.com,
peterz@...radead.org, quic_charante@...cinc.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/psi: Fix avgs_work re-arm in psi_avgs_work()
On Thu, Oct 13, 2022 at 07:06:55PM +0800, Chengming Zhou wrote:
> Do I still need to copy groupc->tasks[] out for the current_cpu, as you
> suggested before?
It'd be my preference as well. This way the resched logic can be
consolidated into a single block of comment + code at the end of the
function.
> @@ -242,6 +242,8 @@ static void get_recent_times(struct psi_group *group, int cpu,
> u32 *pchanged_states)
> {
> struct psi_group_cpu *groupc = per_cpu_ptr(group->pcpu, cpu);
> + int current_cpu = raw_smp_processor_id();
> + bool reschedule;
> u64 now, state_start;
> enum psi_states s;
> unsigned int seq;
> @@ -256,6 +258,10 @@ static void get_recent_times(struct psi_group *group, int cpu,
> memcpy(times, groupc->times, sizeof(groupc->times));
> state_mask = groupc->state_mask;
> state_start = groupc->state_start;
> + if (cpu == current_cpu)
> + reschedule = groupc->tasks[NR_RUNNING] +
> + groupc->tasks[NR_IOWAIT] +
> + groupc->tasks[NR_MEMSTALL] > 1;
> } while (read_seqcount_retry(&groupc->seq, seq));
This code path is also shared with psi_show() and the poll worker. They
don't currently use the flag, but relying on that is somewhat fragile
and confusing. How about adding a test for
current_work() == &group->avgs_work?