Date:   Thu, 13 Oct 2022 09:10:59 -0700
From:   Suren Baghdasaryan <surenb@...gle.com>
To:     Johannes Weiner <hannes@...xchg.org>
Cc:     Chengming Zhou <zhouchengming@...edance.com>,
        quic_pkondeti@...cinc.com, peterz@...radead.org,
        quic_charante@...cinc.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/psi: Fix avgs_work re-arm in psi_avgs_work()

On Thu, Oct 13, 2022 at 8:52 AM Johannes Weiner <hannes@...xchg.org> wrote:
>
> On Thu, Oct 13, 2022 at 07:06:55PM +0800, Chengming Zhou wrote:
> > Should I still need to copy groupc->tasks[] out for the current_cpu as you
> > suggested before?
>
> It'd be my preference as well. This way the resched logic can be
> consolidated into a single block of comment + code at the end of the
> function.

Sounds good to me. If we are copying the times inside the retry loop,
then let's move the `reschedule =` decision out of that loop completely.
At the end of get_recent_times() we can do:

if (cpu == current_cpu)
	reschedule = tasks[NR_RUNNING] +
		     tasks[NR_IOWAIT] +
		     tasks[NR_MEMSTALL] > 1;
else
	reschedule = *pchanged_states & (1 << PSI_NONIDLE);


>
> > @@ -242,6 +242,8 @@ static void get_recent_times(struct psi_group *group, int cpu,
> >                              u32 *pchanged_states)
> >  {
> >         struct psi_group_cpu *groupc = per_cpu_ptr(group->pcpu, cpu);
> > +       int current_cpu = raw_smp_processor_id();
> > +       bool reschedule;
> >         u64 now, state_start;
> >         enum psi_states s;
> >         unsigned int seq;
> > @@ -256,6 +258,10 @@ static void get_recent_times(struct psi_group *group, int cpu,
> >                 memcpy(times, groupc->times, sizeof(groupc->times));
> >                 state_mask = groupc->state_mask;
> >                 state_start = groupc->state_start;
> > +               if (cpu == current_cpu)
> > +                       reschedule = groupc->tasks[NR_RUNNING] +
> > +                               groupc->tasks[NR_IOWAIT] +
> > +                               groupc->tasks[NR_MEMSTALL] > 1;
> >         } while (read_seqcount_retry(&groupc->seq, seq));
>
> This also matches psi_show() and the poll worker. They don't currently
> use the flag, but it's somewhat fragile and confusing. Add a test for
> current_work() == &group->avgs_work?

Good point. The (tasks[NR_RUNNING] + tasks[NR_IOWAIT] +
tasks[NR_MEMSTALL] > 1) condition should also contain this check.
