Message-ID: <20170530122436.GE7969@dhcp22.suse.cz>
Date: Tue, 30 May 2017 14:24:36 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Roman Gushchin <guro@...com>
Cc: Balbir Singh <bsingharora@...il.com>,
Johannes Weiner <hannes@...xchg.org>,
Vladimir Davydov <vdavydov.dev@...il.com>, kernel-team@...com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: bump PGSTEAL*/PGSCAN*/ALLOCSTALL counters in memcg
reclaim
On Mon 29-05-17 14:01:41, Roman Gushchin wrote:
> Historically, PGSTEAL*/PGSCAN*/ALLOCSTALL counters were used to
> account only for global reclaim events, memory cgroup targeted reclaim
> was ignored.
>
> It doesn't make sense anymore, because the whole reclaim path
> is designed around cgroups. Also, per-cgroup counters can exceed the
> corresponding global counters, which can be confusing.
The whole reclaim path is designed around cgroups, but the source of
the memory pressure is different. I agree that checking global_reclaim()
for PGSTEAL_KSWAPD doesn't make much sense, because we are _always_ in
the global reclaim context there, but counting ALLOCSTALL even for
targeted memcg reclaim is more confusing than helpful. We usually
consult this counter to see whether kswapd keeps up with the memory
demand, and global direct reclaim is an indicator that it doesn't. The
same applies to the other counters as well.
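
To make the distinction concrete, here is a simplified sketch of the
global_reclaim() helper these checks rely on (assuming the usual
CONFIG_MEMCG definition in this era's mm/vmscan.c):

	/*
	 * Sketch: reclaim is "global" iff it is not targeted at a
	 * particular memory cgroup. Limit (memcg targeted) reclaim
	 * sets sc->target_mem_cgroup, so the checks being removed
	 * skipped the global counters exactly in that case.
	 */
	static bool global_reclaim(struct scan_control *sc)
	{
	#ifdef CONFIG_MEMCG
		return !sc->target_mem_cgroup;
	#else
		return true;
	#endif
	}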
So I do not think this is correct. What is the problem you are trying to
solve here, anyway?
> So, make PGSTEAL*/PGSCAN*/ALLOCSTALL counters reflect sum of any
> reclaim activity in the system.
>
> Signed-off-by: Roman Gushchin <guro@...com>
> Cc: Balbir Singh <bsingharora@...il.com>
> Cc: Michal Hocko <mhocko@...e.com>
> Cc: Johannes Weiner <hannes@...xchg.org>
> Cc: Vladimir Davydov <vdavydov.dev@...il.com>
> Cc: kernel-team@...com
> Cc: linux-mm@...ck.org
> Cc: linux-kernel@...r.kernel.org
> ---
> mm/vmscan.c | 15 +++++----------
> 1 file changed, 5 insertions(+), 10 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 7c2a36b..77253b1 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1765,13 +1765,11 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
> reclaim_stat->recent_scanned[file] += nr_taken;
>
> if (current_is_kswapd()) {
> - if (global_reclaim(sc))
> - __count_vm_events(PGSCAN_KSWAPD, nr_scanned);
> + __count_vm_events(PGSCAN_KSWAPD, nr_scanned);
> count_memcg_events(lruvec_memcg(lruvec), PGSCAN_KSWAPD,
> nr_scanned);
> } else {
> - if (global_reclaim(sc))
> - __count_vm_events(PGSCAN_DIRECT, nr_scanned);
> + __count_vm_events(PGSCAN_DIRECT, nr_scanned);
> count_memcg_events(lruvec_memcg(lruvec), PGSCAN_DIRECT,
> nr_scanned);
> }
> @@ -1786,13 +1784,11 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
> spin_lock_irq(&pgdat->lru_lock);
>
> if (current_is_kswapd()) {
> - if (global_reclaim(sc))
> - __count_vm_events(PGSTEAL_KSWAPD, nr_reclaimed);
> + __count_vm_events(PGSTEAL_KSWAPD, nr_reclaimed);
> count_memcg_events(lruvec_memcg(lruvec), PGSTEAL_KSWAPD,
> nr_reclaimed);
> } else {
> - if (global_reclaim(sc))
> - __count_vm_events(PGSTEAL_DIRECT, nr_reclaimed);
> + __count_vm_events(PGSTEAL_DIRECT, nr_reclaimed);
> count_memcg_events(lruvec_memcg(lruvec), PGSTEAL_DIRECT,
> nr_reclaimed);
> }
> @@ -2828,8 +2824,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
> retry:
> delayacct_freepages_start();
>
> - if (global_reclaim(sc))
> - __count_zid_vm_events(ALLOCSTALL, sc->reclaim_idx, 1);
> + __count_zid_vm_events(ALLOCSTALL, sc->reclaim_idx, 1);
>
> do {
> vmpressure_prio(sc->gfp_mask, sc->target_mem_cgroup,
> --
> 2.7.4
>
--
Michal Hocko
SUSE Labs