Message-ID: <20150106082329.GB18346@js1304-P5Q-DELUXE>
Date: Tue, 6 Jan 2015 17:23:29 +0900
From: Joonsoo Kim <iamjoonsoo.kim@....com>
To: Gregory Fong <gregory.0xf0@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Rik van Riel <riel@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
Mel Gorman <mgorman@...e.de>,
Laura Abbott <lauraa@...eaurora.org>,
Minchan Kim <minchan@...nel.org>,
Heesub Shin <heesub.shin@...sung.com>, Marek@...per.es,
linux-mm@...ck.org,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 2/3] CMA: aggressively allocate the pages on cma reserved memory when not used

On Mon, Jan 05, 2015 at 08:01:45PM -0800, Gregory Fong wrote:
> +linux-mm and linux-kernel (not sure how those got removed from cc,
> sorry about that)
>
> On Mon, Jan 5, 2015 at 7:58 PM, Gregory Fong <gregory.0xf0@...il.com> wrote:
> > Hi Joonsoo,
> >
> > On Wed, May 28, 2014 at 12:04 AM, Joonsoo Kim <iamjoonsoo.kim@....com> wrote:
> >> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >> index 674ade7..ca678b6 100644
> >> --- a/mm/page_alloc.c
> >> +++ b/mm/page_alloc.c
> >> @@ -788,6 +788,56 @@ void __init __free_pages_bootmem(struct page *page, unsigned int order)
> >> }
> >>
> >> #ifdef CONFIG_CMA
> >> +void adjust_managed_cma_page_count(struct zone *zone, long count)
> >> +{
> >> + unsigned long flags;
> >> + long total, cma, movable;
> >> +
> >> + spin_lock_irqsave(&zone->lock, flags);
> >> + zone->managed_cma_pages += count;
> >> +
> >> + total = zone->managed_pages;
> >> + cma = zone->managed_cma_pages;
> >> + movable = total - cma - high_wmark_pages(zone);
> >> +
> >> + /* No cma pages, so do only movable allocation */
> >> + if (cma <= 0) {
> >> + zone->max_try_movable = pageblock_nr_pages;
> >> + zone->max_try_cma = 0;
> >> + goto out;
> >> + }
> >> +
> >> + /*
> >> + * We want to consume cma pages with a well balanced ratio so that
> >> + * we have consumed enough cma pages before the reclaim. For this
> >> + * purpose, we can use the ratio movable : cma. And we don't
> >> + * want to switch too frequently, because that prevents allocated
> >> + * pages from being successive, which is bad for some sorts of devices.
> >> + * I choose pageblock_nr_pages for the minimum amount of successive
> >> + * allocation because it is the size of a huge page and fragmentation
> >> + * avoidance is implemented based on this size.
> >> + *
> >> + * To meet the above criteria, I derived the following equations.
> >> + *
> >> + * if (movable > cma) then; movable : cma = X : pageblock_nr_pages
> >> + * else (movable <= cma) then; movable : cma = pageblock_nr_pages : X
> >> + */
> >> + if (movable > cma) {
> >> + zone->max_try_movable =
> >> + (movable * pageblock_nr_pages) / cma;
> >> + zone->max_try_cma = pageblock_nr_pages;
> >> + } else {
> >> + zone->max_try_movable = pageblock_nr_pages;
> >> + zone->max_try_cma = cma * pageblock_nr_pages / movable;
> >
> > I don't know if anyone's already pointed this out (didn't see anything
> > when searching lkml), but while testing this, I noticed this can
> > result in a div by zero under memory pressure (movable becomes 0).
> > This is not unlikely when the majority of pages are in CMA regions
> > (this may seem pathological but we do actually do this right now).
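> >
> > To make that concrete (numbers invented for illustration): in a
> > zone with 1000 pageblocks of managed pages, 990 of them in CMA
> > regions, and a high watermark covering the remaining 10, you get
> > movable = total - cma - high_wmark_pages(zone) = 0, and the
> > division in the else branch is a divide by zero. Negative values
> > of movable are possible too, which would be just as wrong.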

Hello,

Yes, you are right. Thanks for pointing this out.
I will fix it in the next version.
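
Something like the following should avoid it (just an untested sketch
of one option against the v2 hunk above, not necessarily what the next
version will do): treat a non-positive movable count like the cma-only
case, so neither branch can divide by zero:

	movable = total - cma - high_wmark_pages(zone);

	/*
	 * cma > 0 here; the cma <= 0 case was already handled above.
	 * Under heavy CMA usage, movable can be zero or negative, so
	 * serve allocations from CMA only instead of dividing by it.
	 */
	if (movable <= 0) {
		zone->max_try_movable = 0;
		zone->max_try_cma = pageblock_nr_pages;
	} else if (movable > cma) {
		zone->max_try_movable =
			(movable * pageblock_nr_pages) / cma;
		zone->max_try_cma = pageblock_nr_pages;
	} else {
		zone->max_try_movable = pageblock_nr_pages;
		zone->max_try_cma = cma * pageblock_nr_pages / movable;
	}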
Thanks.