Message-ID: <CAJuCfpFP2VP_t_tP27w=k4HDhm=jv=G2C56mM_kbs6wqux+DhA@mail.gmail.com>
Date: Wed, 10 Nov 2021 09:48:10 -0800
From: Suren Baghdasaryan <surenb@...gle.com>
To: Georgi Djakov <djakov@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Georgi Djakov <quic_c_gdjako@...cinc.com>, hannes@...xchg.org,
vincent.guittot@...aro.org, juri.lelli@...hat.com,
mingo@...hat.com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, bristot@...hat.com, mhocko@...nel.org,
vdavydov.dev@...il.com, tj@...nel.org, axboe@...nel.dk,
cgroups@...r.kernel.org, linux-block@...r.kernel.org,
akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC] psi: Add additional PSI counters for each type of memory pressure
On Wed, Nov 10, 2021 at 8:46 AM Georgi Djakov <djakov@...nel.org> wrote:
>
>
> On 10.11.21 18:14, Peter Zijlstra wrote:
> > On Wed, Nov 10, 2021 at 07:36:37AM -0800, Georgi Djakov wrote:
> >> @@ -21,7 +19,18 @@ enum psi_task_count {
> >> * don't have to special case any state tracking for it.
> >> */
> >> NR_ONCPU,
> >> - NR_PSI_TASK_COUNTS = 4,
> >> + NR_BLK_CGROUP_THROTTLE,
> >> + NR_BIO,
> >> + NR_COMPACTION,
> >> + NR_THRASHING,
> >> + NR_CGROUP_RECLAIM_HIGH,
> >> + NR_CGROUP_RECLAIM_HIGH_SLEEP,
> >> + NR_CGROUP_TRY_CHARGE,
> >> + NR_DIRECT_COMPACTION,
> >> + NR_DIRECT_RECLAIM,
> >> + NR_READ_SWAPPAGE,
> >> + NR_KSWAPD,
> >> + NR_PSI_TASK_COUNTS = 16,
> >> };
> >>
> >
> >> @@ -51,9 +80,20 @@ enum psi_states {
> >> PSI_MEM_FULL,
> >> PSI_CPU_SOME,
> >> PSI_CPU_FULL,
> >> + PSI_BLK_CGROUP_THROTTLE,
> >> + PSI_BIO,
> >> + PSI_COMPACTION,
> >> + PSI_THRASHING,
> >> + PSI_CGROUP_RECLAIM_HIGH,
> >> + PSI_CGROUP_RECLAIM_HIGH_SLEEP,
> >> + PSI_CGROUP_TRY_CHARGE,
> >> + PSI_DIRECT_COMPACTION,
> >> + PSI_DIRECT_RECLAIM,
> >> + PSI_READ_SWAPPAGE,
> >> + PSI_KSWAPD,
> >> /* Only per-CPU, to weigh the CPU in the global average: */
> >> PSI_NONIDLE,
> >> - NR_PSI_STATES = 7,
> >> + NR_PSI_STATES = 18,
> >> };
> >
> > Have you considered what this does to psi_group_cpu's size and layout
> > and the impact thereof on performance?
>
> Thanks, I will definitely add some numbers if there are no other
> major arguments against this RFC patch.
Please CC me on future postings too.
Thanks,
Suren.
>
> BR,
> Georgi
>