Message-ID: <43f4d1c3-52fe-5254-7d50-c420de6d11a6@bytedance.com>
Date:   Sun, 9 Oct 2022 20:41:41 +0800
From:   Chengming Zhou <zhouchengming@...edance.com>
To:     Suren Baghdasaryan <surenb@...gle.com>,
        Pavan Kondeti <quic_pkondeti@...cinc.com>,
        Johannes Weiner <hannes@...xchg.org>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Charan Teja Kalla <quic_charante@...cinc.com>
Subject: Re: PSI idle-shutoff

Hello,

I just saw these emails; sorry for the late reply.

On 2022/10/6 00:32, Suren Baghdasaryan wrote:
> On Sun, Oct 2, 2022 at 11:11 PM Suren Baghdasaryan <surenb@...gle.com> wrote:
>>
>> On Fri, Sep 16, 2022 at 10:45 PM Suren Baghdasaryan <surenb@...gle.com> wrote:
>>>
>>> On Wed, Sep 14, 2022 at 11:20 PM Pavan Kondeti
>>> <quic_pkondeti@...cinc.com> wrote:
>>>>
>>>> On Tue, Sep 13, 2022 at 07:38:17PM +0530, Pavan Kondeti wrote:
>>>>> Hi
>>>>>
>>>>> Since psi_avgs_work()->collect_percpu_times()->get_recent_times()
>>>>> runs from a kworker thread, the PSI_NONIDLE condition would be
>>>>> observed, as there is a RUNNING task. So we would always end up
>>>>> re-arming the work.
>>>>>
>>>>> If the work is re-armed from psi_avgs_work() itself, the backing-off
>>>>> logic in psi_task_change() (which will be moved to psi_task_switch
>>>>> soon) can't help: the work is already scheduled, so we don't do
>>>>> anything there.
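
For context, the re-arm path being described looks roughly like this (a
simplified sketch pieced together from the context lines of the patch
quoted below; locking is omitted, and the container_of() group lookup is
my assumption about the surrounding code):

static void psi_avgs_work(struct work_struct *work)
{
        struct delayed_work *dwork = to_delayed_work(work);
        struct psi_group *group = container_of(dwork, struct psi_group,
                                                avgs_work);
        u32 changed_states;
        bool nonidle;
        u64 now;

        now = sched_clock();
        collect_percpu_times(group, PSI_AVGS, &changed_states);

        /*
         * Always true here: the kworker executing this function is
         * itself a RUNNING task, so its CPU reports PSI_NONIDLE.
         */
        nonidle = changed_states & (1 << PSI_NONIDLE);

        if (now >= group->avg_next_update)
                group->avg_next_update = update_averages(group, now);

        if (nonidle)
                schedule_delayed_work(dwork, nsecs_to_jiffies(
                                group->avg_next_update - now) + 1);
}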
>>>
>>> Hi Pavan,
>>> Thanks for reporting the issue. IIRC [1] was meant to fix exactly this
>>> issue. At the time it was written I tested it and it seemed to work.
>>> Maybe I missed something or some other change introduced afterwards
>>> affected the shutoff logic. I'll take a closer look next week when I'm
>>> back at my computer and will consult with Johannes.
>>
>> Sorry for the delay. I had some time to look into this and test PSI
>> shutoff on my device, and I think you are right. The patch I mentioned
>> prevents a new psi_avgs_work from being scheduled when the only
>> non-idle task is psi_avgs_work itself; however, the regular 2-second
>> averaging work will still go on. I think we could record the fact that
>> the only active task is psi_avgs_work in record_times(), using a new
>> psi_group_cpu.state_mask flag, and then prevent psi_avgs_work() from
>> rescheduling itself if that flag is set for all non-idle CPUs. I'll
>> test this approach and post a patch for review if it works.
> 
> Hi Pavan,
> Testing PSI shutoff on Android proved more difficult than I expected.
> There are lots of tasks to silence, and I keep encountering new ones.
> The approach I was thinking about is something like this:
> 
> ---
>  include/linux/psi_types.h |  3 +++
>  kernel/sched/psi.c        | 12 +++++++++---
>  2 files changed, 12 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
> index c7fe7c089718..8d936f22cb5b 100644
> --- a/include/linux/psi_types.h
> +++ b/include/linux/psi_types.h
> @@ -68,6 +68,9 @@ enum psi_states {
>          NR_PSI_STATES = 7,
>  };
> 
> +/* state_mask flag to keep re-arming averaging work */
> +#define PSI_STATE_WAKE_CLOCK        (1 << NR_PSI_STATES)
> +
>  enum psi_aggregators {
>          PSI_AVGS = 0,
>          PSI_POLL,
> diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
> index ecb4b4ff4ce0..dd62ad28bacd 100644
> --- a/kernel/sched/psi.c
> +++ b/kernel/sched/psi.c
> @@ -278,6 +278,7 @@ static void get_recent_times(struct psi_group *group, int cpu,
>                  if (delta)
>                          *pchanged_states |= (1 << s);
>          }
> +        *pchanged_states |= (state_mask & PSI_STATE_WAKE_CLOCK);

If the avgs_work kworker is running on this CPU, won't it still see
PSI_STATE_WAKE_CLOCK set in state_mask? So the work would still be re-armed?
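
To make the concern concrete, here is a tiny userspace model of the
state_mask logic in the patch (simplified stand-ins only, not kernel
code): on the CPU where the avgs_work kworker itself runs,
tasks[NR_RUNNING] is nonzero, so the PSI_NONIDLE test passes and
PSI_STATE_WAKE_CLOCK still ends up in the aggregated changed_states.

/* Userspace model only: simplified stand-ins for the kernel's per-CPU
 * task counts, test_state() and the changed_states aggregation. */
#include <stdio.h>
#include <stdbool.h>

enum { NR_RUNNING, NR_TASK_COUNTS };

#define PSI_NONIDLE             6
#define NR_PSI_STATES           7
#define PSI_STATE_WAKE_CLOCK    (1 << NR_PSI_STATES)

static bool test_state(const unsigned int *tasks, int state)
{
        /* Treat the CPU as non-idle if any task is runnable on it. */
        return state == PSI_NONIDLE && tasks[NR_RUNNING] > 0;
}

int main(void)
{
        /* CPU0 runs only the avgs_work kworker; CPU1 is fully idle. */
        unsigned int cpu_tasks[2][NR_TASK_COUNTS] = { { 1 }, { 0 } };
        unsigned int changed_states = 0;
        int cpu;

        for (cpu = 0; cpu < 2; cpu++) {
                unsigned int state_mask = 0;

                /*
                 * Mirrors psi_group_change() in the patch: the flag is
                 * set whenever the CPU looks non-idle, and the CPU the
                 * kworker runs on always does.
                 */
                if (test_state(cpu_tasks[cpu], PSI_NONIDLE))
                        state_mask |= (1 << PSI_NONIDLE) |
                                      PSI_STATE_WAKE_CLOCK;

                /* get_recent_times() then ORs it into changed_states. */
                changed_states |= state_mask;
        }

        printf("wake_clock = %d\n",
               !!(changed_states & PSI_STATE_WAKE_CLOCK));
        return 0;
}

Running this prints wake_clock = 1 even though the kworker is the only
runnable task in the system, so psi_avgs_work() would re-arm itself.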

Maybe I missed something... but I have a different idea, which I want
to implement later, just for discussion.

Thanks.

>  }
> 
>  static void calc_avgs(unsigned long avg[3], int missed_periods,
> @@ -413,7 +414,7 @@ static void psi_avgs_work(struct work_struct *work)
>          struct delayed_work *dwork;
>          struct psi_group *group;
>          u32 changed_states;
> -        bool nonidle;
> +        bool wake_clock;
>          u64 now;
> 
>          dwork = to_delayed_work(work);
> @@ -424,7 +425,7 @@ static void psi_avgs_work(struct work_struct *work)
>          now = sched_clock();
> 
>          collect_percpu_times(group, PSI_AVGS, &changed_states);
> -        nonidle = changed_states & (1 << PSI_NONIDLE);
> +        wake_clock = changed_states & PSI_STATE_WAKE_CLOCK;
>          /*
>           * If there is task activity, periodically fold the per-cpu
>           * times and feed samples into the running averages. If things
> @@ -435,7 +436,7 @@ static void psi_avgs_work(struct work_struct *work)
>          if (now >= group->avg_next_update)
>                  group->avg_next_update = update_averages(group, now);
> 
> -        if (nonidle) {
> +        if (wake_clock) {
>                  schedule_delayed_work(dwork, nsecs_to_jiffies(
>                                  group->avg_next_update - now) + 1);
>          }
> @@ -742,6 +743,11 @@ static void psi_group_change(struct psi_group *group, int cpu,
>          if (unlikely(groupc->tasks[NR_ONCPU] && cpu_curr(cpu)->in_memstall))
>                  state_mask |= (1 << PSI_MEM_FULL);
> 
> +        if (wake_clock || test_state(groupc->tasks, PSI_NONIDLE)) {
> +                /* psi_avgs_work was not the only task on the CPU */
> +                state_mask |= PSI_STATE_WAKE_CLOCK;
> +        }
> +
>          groupc->state_mask = state_mask;
> 
>          write_seqcount_end(&groupc->seq);
