Message-ID: <CALvZod5ac0fmfD+92kg8nu4zV31ow95DJmvsTp8Rh4Ff+FhcXg@mail.gmail.com>
Date: Wed, 6 Nov 2019 18:51:02 -0800
From: Shakeel Butt <shakeelb@...gle.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Michal Hocko <mhocko@...e.com>, Linux MM <linux-mm@...ck.org>,
Cgroups <cgroups@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Kernel Team <kernel-team@...com>
Subject: Re: [PATCH 03/11] mm: vmscan: simplify lruvec_lru_size()
On Mon, Jun 3, 2019 at 2:59 PM Johannes Weiner <hannes@...xchg.org> wrote:
>
> This function currently takes the node or lruvec size and subtracts
> the zones that are excluded by the classzone index of the
> allocation. It uses four different types of counters to do this.
>
> Just add up the eligible zones.
>
> Signed-off-by: Johannes Weiner <hannes@...xchg.org>
I think this became part of another series. Anyway:
Reviewed-by: Shakeel Butt <shakeelb@...gle.com>
> ---
> mm/vmscan.c | 19 +++++--------------
> 1 file changed, 5 insertions(+), 14 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 853be16ee5e2..69c4c82a9b5a 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -342,30 +342,21 @@ unsigned long zone_reclaimable_pages(struct zone *zone)
> */
> unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone_idx)
> {
> - unsigned long lru_size;
> + unsigned long size = 0;
> int zid;
>
> - if (!mem_cgroup_disabled())
> - lru_size = lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
> - else
> - lru_size = node_page_state(lruvec_pgdat(lruvec), NR_LRU_BASE + lru);
> -
> - for (zid = zone_idx + 1; zid < MAX_NR_ZONES; zid++) {
> + for (zid = 0; zid <= zone_idx; zid++) {
> struct zone *zone = &lruvec_pgdat(lruvec)->node_zones[zid];
> - unsigned long size;
>
> if (!managed_zone(zone))
> continue;
>
> if (!mem_cgroup_disabled())
> - size = mem_cgroup_get_zone_lru_size(lruvec, lru, zid);
> + size += mem_cgroup_get_zone_lru_size(lruvec, lru, zid);
> else
> - size = zone_page_state(&lruvec_pgdat(lruvec)->node_zones[zid],
> - NR_ZONE_LRU_BASE + lru);
> - lru_size -= min(size, lru_size);
> + size += zone_page_state(zone, NR_ZONE_LRU_BASE + lru);
> }
> -
> - return lru_size;
> + return size;
>
> }
>
> --
> 2.21.0
>
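For anyone skimming the change: the simplification replaces "node/lruvec total minus the ineligible zones (those above the classzone index)" with "sum of the eligible zones (those at or below it)". A minimal userspace sketch with made-up per-zone counts, not the actual kernel helpers or vmstat counters, just to show the two ways of arriving at the same number:

/* sketch only: hypothetical zone LRU counts, not kernel code */
#include <stdio.h>

#define MAX_NR_ZONES 4

int main(void)
{
	/* per-zone LRU page counts for one node (hypothetical numbers) */
	unsigned long zone_lru[MAX_NR_ZONES] = { 100, 200, 300, 400 };
	unsigned long total = 100 + 200 + 300 + 400;
	int zone_idx = 1;	/* classzone index of the allocation */
	unsigned long old_size, new_size = 0;
	int zid;

	/* old approach: start from the node total, subtract ineligible zones */
	old_size = total;
	for (zid = zone_idx + 1; zid < MAX_NR_ZONES; zid++)
		old_size -= zone_lru[zid];

	/* new approach: just add up the eligible zones */
	for (zid = 0; zid <= zone_idx; zid++)
		new_size += zone_lru[zid];

	printf("old=%lu new=%lu\n", old_size, new_size);	/* both print 300 */
	return 0;
}

The new form needs only one kind of counter (per-zone LRU size) instead of mixing node-, lruvec- and zone-level counters and then clamping the subtraction.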