Date:   Mon, 10 Oct 2022 14:16:26 -0700
From:   Suren Baghdasaryan <surenb@...gle.com>
To:     Hillf Danton <hdanton@...a.com>
Cc:     Pavan Kondeti <quic_pkondeti@...cinc.com>,
        Johannes Weiner <hannes@...xchg.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, quic_charante@...cinc.com
Subject: Re: PSI idle-shutoff

On Mon, Oct 10, 2022 at 3:57 AM Hillf Danton <hdanton@...a.com> wrote:
>
> On 13 Sep 2022 19:38:17 +0530 Pavan Kondeti <quic_pkondeti@...cinc.com> wrote:
> > Hi
> >
> > Since psi_avgs_work()->collect_percpu_times()->get_recent_times()
> > runs from a kworker thread, the PSI_NONIDLE condition would be observed
> > because there is a RUNNING task. So we would always end up re-arming the work.
> >
> > If the work is re-armed from psi_avgs_work() itself, the backing-off
> > logic in psi_task_change() (which will be moved to psi_task_switch() soon)
> > can't help: the work is already scheduled, so we don't do anything there.
> >
> > Probably I am missing something here. Can you please clarify how we
> > shut off re-arming of the PSI avgs work?
>
> Instead of open-coding schedule_delayed_work() in a bid to check whether
> the timer hits the idle task (see delayed_work_timer_fn()), the idle task
> is tracked in psi_task_switch() and checked by the kworker to see if it
> preempted the idle task.
>
> Only for thoughts now.
>
> Hillf
>
> +++ b/kernel/sched/psi.c
> @@ -412,6 +412,8 @@ static u64 update_averages(struct psi_gr
>         return avg_next_update;
>  }
>
> +static DEFINE_PER_CPU(int, prev_task_is_idle);
> +
>  static void psi_avgs_work(struct work_struct *work)
>  {
>         struct delayed_work *dwork;
> @@ -439,7 +441,7 @@ static void psi_avgs_work(struct work_st
>         if (now >= group->avg_next_update)
>                 group->avg_next_update = update_averages(group, now);
>
> -       if (nonidle) {
> +       if (nonidle && 0 == per_cpu(prev_task_is_idle, raw_smp_processor_id())) {

This condition would be incorrect if nonidle was set by a cpu other
than raw_smp_processor_id() while
prev_task_is_idle[raw_smp_processor_id()] == 1. IOW, if some activity
happens on a non-current cpu while the current cpu's previous task was
the idle task, we would fail to reschedule psi_avgs_work for it. This
can be fixed in collect_percpu_times() by considering prev_task_is_idle
for all other CPUs as well. However, Chengming's approach seems simpler
to me TBH and does not require an additional per-cpu variable.
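To make the cross-CPU fix concrete, here is a minimal user-space sketch of
the intended re-arm decision. Everything here is a hypothetical model, not
kernel code: NR_CPUS, cpu_nonidle[] and should_rearm() are illustrative
stand-ins for the state that collect_percpu_times() aggregates and for
Hillf's proposed prev_task_is_idle variable. The idea is that the work is
re-armed only if some CPU saw non-idle activity that did not come from the
kworker preempting that CPU's idle task:

```c
#include <assert.h>
#include <stdbool.h>

#define NR_CPUS 4

/* Hypothetical stand-ins for per-CPU kernel state (not real kernel APIs):
 * prev_task_is_idle mirrors Hillf's proposed variable, cpu_nonidle mirrors
 * the per-CPU nonidle result collect_percpu_times() would aggregate. */
static int prev_task_is_idle[NR_CPUS];
static bool cpu_nonidle[NR_CPUS];

/* Re-arm psi_avgs_work only if some CPU saw non-idle activity AND the
 * task that ran there before the kworker was not the idle task. Checking
 * every CPU, not just the current one, avoids missing activity that
 * happened on a non-current CPU. */
static bool should_rearm(void)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		if (cpu_nonidle[cpu] && !prev_task_is_idle[cpu])
			return true;
	}
	return false;
}
```

With this shape, the case described above (activity on a non-current CPU)
still triggers a reschedule, while a CPU whose only "activity" was the
kworker itself preempting idle does not keep the work alive.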

>                 schedule_delayed_work(dwork, nsecs_to_jiffies(
>                                 group->avg_next_update - now) + 1);
>         }
> @@ -859,6 +861,7 @@ void psi_task_switch(struct task_struct
>         if (prev->pid) {
>                 int clear = TSK_ONCPU, set = 0;
>
> +               per_cpu(prev_task_is_idle, cpu) = 0;
>                 /*
>                  * When we're going to sleep, psi_dequeue() lets us
>                  * handle TSK_RUNNING, TSK_MEMSTALL_RUNNING and
> @@ -888,7 +891,8 @@ void psi_task_switch(struct task_struct
>                         for (; group; group = iterate_groups(prev, &iter))
>                                 psi_group_change(group, cpu, clear, set, now, true);
>                 }
> -       }
> +       } else
> +               per_cpu(prev_task_is_idle, cpu) = 1;
>  }
>
>  /**
>
