Message-ID: <7dba12ce-f981-6017-0613-542472e3ec5c@bytedance.com>
Date: Fri, 14 Oct 2022 10:03:58 +0800
From: Chengming Zhou <zhouchengming@...edance.com>
To: Suren Baghdasaryan <surenb@...gle.com>,
Johannes Weiner <hannes@...xchg.org>
Cc: quic_pkondeti@...cinc.com, peterz@...radead.org,
quic_charante@...cinc.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/psi: Fix avgs_work re-arm in psi_avgs_work()
On 2022/10/14 00:10, Suren Baghdasaryan wrote:
> On Thu, Oct 13, 2022 at 8:52 AM Johannes Weiner <hannes@...xchg.org> wrote:
>>
>> On Thu, Oct 13, 2022 at 07:06:55PM +0800, Chengming Zhou wrote:
>>> Do I still need to copy groupc->tasks[] out for the current_cpu as you
>>> suggested before?
>>
>> It'd be my preference as well. This way the resched logic can be
>> consolidated into a single block of comment + code at the end of the
>> function.
>
> Sounds good to me. If we are copying times in the retry loop, then
> let's move the `reschedule =` decision out of that loop completely. At
> the end of get_recent_times we can do:
>
> if (cpu == current_cpu)
>         reschedule = tasks[NR_RUNNING] +
>                      tasks[NR_IOWAIT] +
>                      tasks[NR_MEMSTALL] > 1;
> else
>         reschedule = *pchanged_states & (1 << PSI_NONIDLE);
>
Ok, I will send an updated patch later.
Thanks!
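
For concreteness, here is a rough sketch of the shape I have in mind for
the update (a sketch only, not the final patch: "tasks" would be a local
"unsigned int tasks[NR_PSI_TASK_COUNTS]" snapshot, and the existing time
accounting in between is elided):

        do {
                seq = read_seqcount_begin(&groupc->seq);
                now = cpu_clock(cpu);
                memcpy(times, groupc->times, sizeof(groupc->times));
                state_mask = groupc->state_mask;
                state_start = groupc->state_start;
                /* snapshot the task counts under the same seqcount */
                if (cpu == current_cpu)
                        memcpy(tasks, groupc->tasks, sizeof(groupc->tasks));
        } while (read_seqcount_retry(&groupc->seq, seq));

        /* ... time accounting unchanged ... */

        /*
         * On the local CPU the avgs worker itself contributes one
         * NR_RUNNING, so only reschedule if some other task is also
         * active; for remote CPUs the NONIDLE bit is enough.
         */
        if (cpu == current_cpu)
                reschedule = tasks[NR_RUNNING] +
                             tasks[NR_IOWAIT] +
                             tasks[NR_MEMSTALL] > 1;
        else
                reschedule = *pchanged_states & (1 << PSI_NONIDLE);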
>
>>
>>> @@ -242,6 +242,8 @@ static void get_recent_times(struct psi_group *group, int cpu,
>>>                               u32 *pchanged_states)
>>>  {
>>>          struct psi_group_cpu *groupc = per_cpu_ptr(group->pcpu, cpu);
>>> +        int current_cpu = raw_smp_processor_id();
>>> +        bool reschedule;
>>>          u64 now, state_start;
>>>          enum psi_states s;
>>>          unsigned int seq;
>>> @@ -256,6 +258,10 @@ static void get_recent_times(struct psi_group *group, int cpu,
>>>                  memcpy(times, groupc->times, sizeof(groupc->times));
>>>                  state_mask = groupc->state_mask;
>>>                  state_start = groupc->state_start;
>>> +                if (cpu == current_cpu)
>>> +                        reschedule = groupc->tasks[NR_RUNNING] +
>>> +                                     groupc->tasks[NR_IOWAIT] +
>>> +                                     groupc->tasks[NR_MEMSTALL] > 1;
>>>          } while (read_seqcount_retry(&groupc->seq, seq));
>>
>> This condition would also match psi_show() and the poll worker. They
>> don't currently use the flag, but it's somewhat fragile and confusing.
>> Add a test for current_work() == &group->avgs_work?
>
> Good point. The (tasks[NR_RUNNING] + tasks[NR_IOWAIT] +
> tasks[NR_MEMSTALL] > 1) condition should also contain this check.
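
Something like this, then (again just a sketch: current_work() comes
from the workqueue API, and since avgs_work is a delayed_work the
comparison needs its embedded .work member):

        bool reschedule = false;

        /*
         * Only the avgs worker should decide about re-arming itself;
         * psi_show() and the poll worker fall through with false.
         */
        if (current_work() == &group->avgs_work.work) {
                if (cpu == current_cpu)
                        reschedule = tasks[NR_RUNNING] +
                                     tasks[NR_IOWAIT] +
                                     tasks[NR_MEMSTALL] > 1;
                else
                        reschedule = *pchanged_states & (1 << PSI_NONIDLE);
        }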