Message-ID: <f0e91dbf-8379-f481-0009-1f19c79a610d@bytedance.com>
Date: Fri, 14 Oct 2022 10:02:10 +0800
From: Chengming Zhou <zhouchengming@...edance.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Suren Baghdasaryan <surenb@...gle.com>, quic_pkondeti@...cinc.com,
peterz@...radead.org, quic_charante@...cinc.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/psi: Fix avgs_work re-arm in psi_avgs_work()
On 2022/10/13 23:52, Johannes Weiner wrote:
> On Thu, Oct 13, 2022 at 07:06:55PM +0800, Chengming Zhou wrote:
>> Do I still need to copy groupc->tasks[] out for the current_cpu, as you
>> suggested before?
>
> It'd be my preference as well. This way the resched logic can be
> consolidated into a single block of comment + code at the end of the
> function.
Ok, I will move the resched logic to the end of get_recent_times().
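Something like this for the snapshot side (rough sketch, untested; the
local tasks[] copy is the new part):

	unsigned int tasks[NR_PSI_TASK_COUNTS];

	/* Snapshot a coherent view of the CPU state */
	do {
		seq = read_seqcount_begin(&groupc->seq);
		now = cpu_clock(cpu);
		memcpy(times, groupc->times, sizeof(groupc->times));
		state_mask = groupc->state_mask;
		state_start = groupc->state_start;
		/* tasks[] must be copied under the seqcount to stay coherent */
		if (cpu == current_cpu)
			memcpy(tasks, groupc->tasks, sizeof(groupc->tasks));
	} while (read_seqcount_retry(&groupc->seq, seq));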
>
>> @@ -242,6 +242,8 @@ static void get_recent_times(struct psi_group *group, int cpu,
>> u32 *pchanged_states)
>> {
>> struct psi_group_cpu *groupc = per_cpu_ptr(group->pcpu, cpu);
>> + int current_cpu = raw_smp_processor_id();
>> + bool reschedule;
>> u64 now, state_start;
>> enum psi_states s;
>> unsigned int seq;
>> @@ -256,6 +258,10 @@ static void get_recent_times(struct psi_group *group, int cpu,
>> memcpy(times, groupc->times, sizeof(groupc->times));
>> state_mask = groupc->state_mask;
>> state_start = groupc->state_start;
>> + if (cpu == current_cpu)
>> + reschedule = groupc->tasks[NR_RUNNING] +
>> + groupc->tasks[NR_IOWAIT] +
>> + groupc->tasks[NR_MEMSTALL] > 1;
>> } while (read_seqcount_retry(&groupc->seq, seq));
>
> This also matches psi_show() and the poll worker. They don't currently
> use the flag, but it's somewhat fragile and confusing. Add a test for
> current_work() == &group->avgs_work?
Yes, only psi_avgs_work() uses this to re-arm now; I will add this check in the next version.
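I'm thinking of something like this at the end of get_recent_times()
(rough sketch, untested; PSI_STATE_RESCHEDULE would be a new bit in the
changed-states mask, and current_work() has to be compared against the
work_struct embedded in the delayed_work):

	/*
	 * Only the avgs_work caller wants the re-arm hint; psi_show()
	 * and the poll worker read the same snapshot but ignore it.
	 */
	if (current_work() == &group->avgs_work.work) {
		bool reschedule;

		if (cpu == current_cpu)
			reschedule = tasks[NR_RUNNING] +
				     tasks[NR_IOWAIT] +
				     tasks[NR_MEMSTALL] > 1;
		else
			reschedule = *pchanged_states & (1 << PSI_NONIDLE);

		if (reschedule)
			*pchanged_states |= PSI_STATE_RESCHEDULE;
	}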
Thanks.