Message-ID: <CALvZod4R68wNgzOF9dN=i6LwyUYMBhvM7SXaRJGW9Wn_SmeGGA@mail.gmail.com>
Date: Thu, 23 Apr 2020 15:59:41 -0700
From: Shakeel Butt <shakeelb@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>, Roman Gushchin <guro@...com>,
Michal Hocko <mhocko@...nel.org>,
Linux MM <linux-mm@...ck.org>,
Cgroups <cgroups@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] memcg: optimize memory.numa_stat like memory.stat
On Thu, Mar 5, 2020 at 8:54 PM Shakeel Butt <shakeelb@...gle.com> wrote:
>
> On Thu, Mar 5, 2020 at 8:41 PM Andrew Morton <akpm@...ux-foundation.org> wrote:
> >
> > On Tue, 3 Mar 2020 18:20:58 -0800 Shakeel Butt <shakeelb@...gle.com> wrote:
> >
> > > Currently, reading memory.numa_stat traverses the underlying memcg
> > > tree multiple times to accumulate the stats and present the
> > > hierarchical view of the memcg tree. However, the kernel already
> > > maintains a hierarchical view of these stats and uses it for
> > > memory.stat. Just use the same mechanism for memory.numa_stat as
> > > well.
> > >
> > > I ran a simple benchmark which reads root_mem_cgroup's memory.numa_stat
> > > file in the presence of 10000 memcgs. The results are:
> > >
> > > Without the patch:
> > > $ time cat /dev/cgroup/memory/memory.numa_stat > /dev/null
> > >
> > > real 0m0.700s
> > > user 0m0.001s
> > > sys 0m0.697s
> > >
> > > With the patch:
> > > $ time cat /dev/cgroup/memory/memory.numa_stat > /dev/null
> > >
> > > real 0m0.001s
> > > user 0m0.001s
> > > sys 0m0.000s
> > >
> >
> > Can't you do better than that ;)
> >
> > >
> > > + page_state = tree ? lruvec_page_state : lruvec_page_state_local;
> > > ...
> > >
> > > + page_state = tree ? memcg_page_state : memcg_page_state_local;
> > >
> >
> > All four of these functions are inlined. Taking their address in this
> > fashion will force the compiler to generate out-of-line copies.
> >
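> > (A minimal sketch of the effect, with made-up names -- 'struct foo',
> > 'get_stat' -- not from the patch: a static inline function can be
> > folded into its direct callers, but assigning its address to a
> > function pointer forces the compiler to also emit an out-of-line
> > body.)
> >
> >	struct foo { unsigned long stats[8]; };
> >
> >	static inline unsigned long get_stat(struct foo *f, int idx)
> >	{
> >		return f->stats[idx];
> >	}
> >
> >	unsigned long direct(struct foo *f)
> >	{
> >		return get_stat(f, 0);	/* inlined, no body emitted */
> >	}
> >
> >	unsigned long indirect(struct foo *f)
> >	{
> >		/* address taken: an out-of-line copy must exist */
> >		unsigned long (*fn)(struct foo *, int) = get_stat;
> >
> >		return fn(f, 0);
> >	}
> >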
> > If we do it the uglier-and-maybe-a-bit-slower way:
> >
> > --- a/mm/memcontrol.c~memcg-optimize-memorynuma_stat-like-memorystat-fix
> > +++ a/mm/memcontrol.c
> > @@ -3658,17 +3658,16 @@ static unsigned long mem_cgroup_node_nr_
> > struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
> > unsigned long nr = 0;
> > enum lru_list lru;
> > - unsigned long (*page_state)(struct lruvec *lruvec,
> > - enum node_stat_item idx);
> >
> > VM_BUG_ON((unsigned)nid >= nr_node_ids);
> >
> > - page_state = tree ? lruvec_page_state : lruvec_page_state_local;
> > -
> > for_each_lru(lru) {
> > if (!(BIT(lru) & lru_mask))
> > continue;
> > - nr += page_state(lruvec, NR_LRU_BASE + lru);
> > + if (tree)
> > + nr += lruvec_page_state(lruvec, NR_LRU_BASE + lru);
> > + else
> > + nr += lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
> > }
> > return nr;
> > }
> > @@ -3679,14 +3678,14 @@ static unsigned long mem_cgroup_nr_lru_p
> > {
> > unsigned long nr = 0;
> > enum lru_list lru;
> > - unsigned long (*page_state)(struct mem_cgroup *memcg, int idx);
> > -
> > - page_state = tree ? memcg_page_state : memcg_page_state_local;
> >
> > for_each_lru(lru) {
> > if (!(BIT(lru) & lru_mask))
> > continue;
> > - nr += page_state(memcg, NR_LRU_BASE + lru);
> > + if (tree)
> > + nr += memcg_page_state(memcg, NR_LRU_BASE + lru);
> > + else
> > + nr += memcg_page_state_local(memcg, NR_LRU_BASE + lru);
> > }
> > return nr;
> > }
> >
> > Then we get:
> >
> > text data bss dec hex filename
> > now: 106705 35641 1024 143370 2300a mm/memcontrol.o
> > shakeel: 107111 35657 1024 143792 231b0 mm/memcontrol.o
> > shakeel+the-above: 106805 35657 1024 143486 2307e mm/memcontrol.o
> >
> > Which do we prefer? The 100-byte patch or the 406-byte patch?
>
> I would go with the 100-byte one. The for-loop is only 5 iterations, so
> doing a check in each iteration should not be an issue.
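>
> (Also, 'tree' is loop-invariant here, so a compiler that unswitches
> the loop could in effect generate the following -- illustrative, not
> verified codegen:)
>
>	if (tree) {
>		for_each_lru(lru) {
>			if (!(BIT(lru) & lru_mask))
>				continue;
>			nr += lruvec_page_state(lruvec, NR_LRU_BASE + lru);
>		}
>	} else {
>		for_each_lru(lru) {
>			if (!(BIT(lru) & lru_mask))
>				continue;
>			nr += lruvec_page_state_local(lruvec,
>						      NR_LRU_BASE + lru);
>		}
>	}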
>
Andrew, anything more needed for this patch to be merged?
Shakeel