Message-ID: <CANFwon0CP6jA4oq0U2xC340MbFsws5NmhEMGEUDm983N=mT-Pg@mail.gmail.com>
Date: Fri, 28 Nov 2014 11:45:04 +0800
From: Hui Zhu <teawater@...il.com>
To: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Hui Zhu <zhuhui@...omi.com>, rjw@...ysocki.net,
len.brown@...el.com, pavel@....cz, m.szyprowski@...sung.com,
Andrew Morton <akpm@...ux-foundation.org>, mina86@...a86.com,
aneesh.kumar@...ux.vnet.ibm.com, hannes@...xchg.org,
Rik van Riel <riel@...hat.com>, mgorman@...e.de,
minchan@...nel.org, nasa4836@...il.com, ddstreet@...e.org,
Hugh Dickins <hughd@...gle.com>, mingo@...nel.org,
rientjes@...gle.com, Peter Zijlstra <peterz@...radead.org>,
keescook@...omium.org, atomlin@...hat.com, raistlin@...ux.it,
axboe@...com, Paul McKenney <paulmck@...ux.vnet.ibm.com>,
kirill.shutemov@...ux.intel.com, n-horiguchi@...jp.nec.com,
k.khlebnikov@...sung.com, msalter@...hat.com, deller@....de,
tangchen@...fujitsu.com, ben@...adent.org.uk,
akinobu.mita@...il.com, lauraa@...eaurora.org, vbabka@...e.cz,
sasha.levin@...cle.com, vdavydov@...allels.com,
suleiman@...gle.com,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
linux-pm@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 4/4] (CMA_AGGRESSIVE) Update page alloc function
On Fri, Oct 24, 2014 at 1:28 PM, Joonsoo Kim <iamjoonsoo.kim@....com> wrote:
> On Thu, Oct 16, 2014 at 11:35:51AM +0800, Hui Zhu wrote:
>> If the page allocator's __rmqueue() tries to get pages for MIGRATE_MOVABLE and
>> the conditions (cma_aggressive_switch, cma_alloc_counter, cma_aggressive_free_min)
>> allow it, pages are allocated from MIGRATE_CMA first, as if they were MIGRATE_MOVABLE.
>>
>> Signed-off-by: Hui Zhu <zhuhui@...omi.com>
>> ---
>> mm/page_alloc.c | 42 +++++++++++++++++++++++++++++++-----------
>> 1 file changed, 31 insertions(+), 11 deletions(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 736d8e1..87bc326 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -65,6 +65,10 @@
>> #include <asm/div64.h>
>> #include "internal.h"
>>
>> +#ifdef CONFIG_CMA_AGGRESSIVE
>> +#include <linux/cma.h>
>> +#endif
>> +
>> /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
>> static DEFINE_MUTEX(pcp_batch_high_lock);
>> #define MIN_PERCPU_PAGELIST_FRACTION (8)
>> @@ -1189,20 +1193,36 @@ static struct page *__rmqueue(struct zone *zone, unsigned int order,
>> {
>> struct page *page;
>>
>> -retry_reserve:
>> +#ifdef CONFIG_CMA_AGGRESSIVE
>> + if (cma_aggressive_switch
>> + && migratetype == MIGRATE_MOVABLE
>> + && atomic_read(&cma_alloc_counter) == 0
>> + && global_page_state(NR_FREE_CMA_PAGES) > cma_aggressive_free_min
>> + + (1 << order))
>> + migratetype = MIGRATE_CMA;
>> +#endif
>> +retry:
>
> I don't get it why cma_alloc_counter should be tested.
> When cma alloc is progress, pageblock is isolated so that pages on that
> pageblock cannot be allocated. Why should we prevent aggressive
> allocation in this case?
>
Hi Joonsoo,
Even if the pageblock is isolated at the beginning of alloc_contig_range(),
it is un-isolated again when alloc_contig_range() hits an error, for example
"PFNs busy".  In that case cma_alloc() keeps calling alloc_contig_range()
with another address range if needed.
So checking cma_alloc_counter reduces the contention between the CMA
allocation in cma_alloc() and the aggressive allocation path in __rmqueue().
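
A rough sketch of what I mean (this is not the actual series; it only
assumes cma_alloc_counter is bumped around the whole retry loop in
cma_alloc(), and cma_next_candidate() is a made-up placeholder for the
bitmap scan in mm/cma.c):

/*
 * Sketch only -- assumed wiring of cma_alloc_counter, not the real patch.
 * While it is non-zero, __rmqueue() skips the MIGRATE_CMA fast path.
 */
atomic_t cma_alloc_counter = ATOMIC_INIT(0);

struct page *cma_alloc(struct cma *cma, int count, unsigned int align)
{
	unsigned long start = 0, pfn;
	struct page *page = NULL;
	int ret;

	atomic_inc(&cma_alloc_counter);	/* __rmqueue backs off from here on */

	for (;;) {
		pfn = cma_next_candidate(cma, &start, count, align);
		if (!pfn)
			break;

		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
		if (ret == 0) {
			page = pfn_to_page(pfn);
			break;
		}
		if (ret != -EBUSY)
			break;
		/* -EBUSY ("PFNs busy"): the pageblock was un-isolated, so
		 * try the next candidate range on the next iteration. */
	}

	atomic_dec(&cma_alloc_counter);	/* contiguous allocation finished */
	return page;
}
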
Thanks,
Hui
> Thanks.
>