Message-ID: <20130710095533.GA5557@lge.com>
Date: Wed, 10 Jul 2013 18:55:33 +0900
From: Joonsoo Kim <iamjoonsoo.kim@....com>
To: Michal Hocko <mhocko@...e.cz>
Cc: Zhang Yanfei <zhangyanfei.yes@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>,
David Rientjes <rientjes@...gle.com>,
Glauber Costa <glommer@...il.com>,
Johannes Weiner <hannes@...xchg.org>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Rik van Riel <riel@...hat.com>,
Hugh Dickins <hughd@...gle.com>,
Minchan Kim <minchan@...nel.org>,
Jiang Liu <jiang.liu@...wei.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/5] Support multiple pages allocation
On Wed, Jul 10, 2013 at 11:17:03AM +0200, Michal Hocko wrote:
> On Wed 10-07-13 09:31:42, Joonsoo Kim wrote:
> > On Thu, Jul 04, 2013 at 12:00:44PM +0200, Michal Hocko wrote:
> > > On Thu 04-07-13 13:24:50, Joonsoo Kim wrote:
> > > > On Thu, Jul 04, 2013 at 12:01:43AM +0800, Zhang Yanfei wrote:
> > > > > On 07/03/2013 11:51 PM, Zhang Yanfei wrote:
> > > > > > On 07/03/2013 11:28 PM, Michal Hocko wrote:
> > > > > >> On Wed 03-07-13 17:34:15, Joonsoo Kim wrote:
> > > > > >> [...]
> > > > > >>> For one page allocation at a time, this patchset makes the allocator
> > > > > >>> slower than before (-5%).
> > > > > >>
> > > > > >> Slowing down the most used path is a no-go. Where does this slowdown
> > > > > >> come from?
> > > > > >
> > > > > > I guess it might be this: for one page allocation at a time, compared to
> > > > > > the original code, this patch adds two parameters, nr_pages and pages, and
> > > > > > does extra checks on the parameter nr_pages in the allocation path.
> > > > > >
> > > > >
> > > > > If so, adding a separate path for multiple allocations seems better.
> > > >
> > > > Hello, all.
> > > >
> > > > I modified the code to optimize one page allocation via the likely macro.
> > > > I attach the new version at the end of this mail.
> > > >
> > > > In this case, the performance degradation for one page allocation at a time
> > > > is -2.5%. I guess the remaining overhead comes from the two added parameters.
> > > > Is that an unreasonable cost to support this new feature?
> > >
> > > Which benchmark you are using for this testing?
> >
> > I use my own module, which does allocations repeatedly.
>
> I am not sure this microbenchmark will tell us much. Allocations are
> usually not short-lived, so the longer time might get amortized.
> If you want to use the multi-page allocation for readahead, then try to
> model your numbers on read-ahead workloads.

Of course. Later, I will get results on read-ahead workloads, or on the
vmalloc workload recommended by Zhang.
However, without this microbenchmark we cannot accurately measure this
modification's performance effect on single page allocation, because the
impact on single page allocation is relatively small and is easily hidden
by other factors.
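
For reference, the module is just a tight loop of order-0 allocations and
frees. A minimal sketch of that kind of benchmark module (illustrative
only -- the iteration count, the ktime-based timing and the names are
assumptions, not my exact code) looks like this:

#include <linux/module.h>
#include <linux/mm.h>
#include <linux/gfp.h>
#include <linux/ktime.h>

#define NR_ITERATIONS	1000000UL

static int __init alloc_bench_init(void)
{
        struct page *page;
        ktime_t start, end;
        unsigned long i;

        start = ktime_get();
        for (i = 0; i < NR_ITERATIONS; i++) {
                /* order-0 allocation: the path whose overhead we measure */
                page = alloc_pages(GFP_KERNEL, 0);
                if (page)
                        __free_pages(page, 0);
        }
        end = ktime_get();

        pr_info("alloc_bench: %lu alloc/free pairs in %lld ns\n",
                i, ktime_to_ns(ktime_sub(end, start)));

        /* fail init on purpose so the module can simply be insmod'ed
         * again for another run without an rmmod in between */
        return -EAGAIN;
}
module_init(alloc_bench_init);
MODULE_LICENSE("GPL");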

I have now tried several implementations of this feature and found that a
separate path also makes single page allocation slower (-1.0~-1.5%).
I couldn't find any reason for this other than the fact that the text size
of page_alloc.o is about 1600 bytes larger than before (an interface-level
sketch of what I mean by a separate path follows the numbers below).

Before:
   text    data    bss      dec     hex  filename
  34466    1389    640    36495    8e8f  mm/page_alloc.o

Separate path:
   text    data    bss      dec     hex  filename
  36074    1413    640    38127    94ef  mm/page_alloc.o
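
To be clear about what I mean by a separate path: a dedicated entry point
used only for the multi-page case, with the existing single-page prototype
left untouched. At the interface level it is roughly the following (the
function name here is made up; the real body duplicates part of the buddy
fast path instead of wrapping the existing function, which is where the
extra text comes from):

/* interface-level sketch only; the name and the exact return
 * convention are illustrative -- it returns the number of pages
 * actually allocated and stores them in pages[] */
unsigned long
__alloc_pages_multiple_nodemask(gfp_t gfp_mask, unsigned int order,
                                struct zonelist *zonelist,
                                nodemask_t *nodemask,
                                unsigned long nr_pages,
                                struct page **pages);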

An implementation that I have not yet posted, which passes two more
arguments to __alloc_pages_nodemask(), also makes single page allocation
slower by a similar amount (-1.0~-1.5%). So, going forward, I will work
with that implementation rather than the separate path implementation.
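
For reference, the shape I am aiming at is roughly the following. This is
a simplified, self-contained illustration only: the helper name is made up
and the bulk case is a naive loop here, whereas in the real patch the
extra nr_pages/pages arguments go into __alloc_pages_nodemask() itself and
the bulk case reuses the buddy fast path. The point is only the likely()
annotation that keeps the single-page case cheap:

#include <linux/mm.h>
#include <linux/gfp.h>

/* nr_pages == 1 is the overwhelmingly common case, so it is tested
 * first and marked likely() */
static unsigned long try_alloc_many(gfp_t gfp_mask, unsigned long nr_pages,
                                    struct page **pages)
{
        unsigned long i;

        if (likely(nr_pages == 1)) {
                /* common case: behave exactly like a single allocation */
                pages[0] = alloc_pages(gfp_mask, 0);
                return pages[0] ? 1 : 0;
        }

        /* rare bulk case; shown as a loop only for clarity */
        for (i = 0; i < nr_pages; i++) {
                pages[i] = alloc_pages(gfp_mask, 0);
                if (!pages[i])
                        break;
        }
        return i;
}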

Thanks for the comment!
> --
> Michal Hocko
> SUSE Labs
>