Date:	Tue, 4 Nov 2014 11:31:12 +0900
From:	Joonsoo Kim <iamjoonsoo.kim@....com>
To:	Hui Zhu <teawater@...il.com>
Cc:	Hui Zhu <zhuhui@...omi.com>, rjw@...ysocki.net,
	len.brown@...el.com, pavel@....cz, m.szyprowski@...sung.com,
	Andrew Morton <akpm@...ux-foundation.org>, mina86@...a86.com,
	aneesh.kumar@...ux.vnet.ibm.com, hannes@...xchg.org,
	Rik van Riel <riel@...hat.com>, mgorman@...e.de,
	minchan@...nel.org, nasa4836@...il.com, ddstreet@...e.org,
	Hugh Dickins <hughd@...gle.com>, mingo@...nel.org,
	rientjes@...gle.com, Peter Zijlstra <peterz@...radead.org>,
	keescook@...omium.org, atomlin@...hat.com, raistlin@...ux.it,
	axboe@...com, Paul McKenney <paulmck@...ux.vnet.ibm.com>,
	kirill.shutemov@...ux.intel.com, n-horiguchi@...jp.nec.com,
	k.khlebnikov@...sung.com, msalter@...hat.com, deller@....de,
	tangchen@...fujitsu.com, ben@...adent.org.uk,
	akinobu.mita@...il.com, lauraa@...eaurora.org, vbabka@...e.cz,
	sasha.levin@...cle.com, vdavydov@...allels.com,
	suleiman@...gle.com,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	linux-pm@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 0/4] (CMA_AGGRESSIVE) Make CMA memory be more aggressive
 about allocation

On Mon, Nov 03, 2014 at 05:05:46PM +0900, Joonsoo Kim wrote:
> On Mon, Nov 03, 2014 at 03:28:38PM +0800, Hui Zhu wrote:
> > On Fri, Oct 24, 2014 at 1:25 PM, Joonsoo Kim <iamjoonsoo.kim@....com> wrote:
> > > On Thu, Oct 16, 2014 at 11:35:47AM +0800, Hui Zhu wrote:
> > >> In the fallbacks table of page_alloc.c, MIGRATE_CMA is the fallback
> > >> of MIGRATE_MOVABLE, so MIGRATE_MOVABLE allocations will use
> > >> MIGRATE_CMA when the movable free lists don't have a page of the
> > >> order the kernel wants (a simplified sketch of that table follows
> > >> this quoted description).
> > >>
> > >> On a system that runs a lot of user-space programs, for instance an
> > >> Android board, most of the memory is MIGRATE_MOVABLE and already
> > >> allocated.  Before __rmqueue_fallback can get memory from
> > >> MIGRATE_CMA, the OOM killer will kill a task to release memory when
> > >> the kernel wants MIGRATE_UNMOVABLE memory, because the fallbacks of
> > >> MIGRATE_UNMOVABLE are only MIGRATE_RECLAIMABLE and MIGRATE_MOVABLE.
> > >> This situation is odd: MIGRATE_CMA has a lot of free memory, but the
> > >> kernel kills tasks to release memory.
> > >>
> > >> This patch series adds a new option, CMA_AGGRESSIVE, to make CMA
> > >> memory allocation more aggressive.
> > >> If CMA_AGGRESSIVE is enabled, then when the kernel calls __rmqueue to
> > >> get pages for MIGRATE_MOVABLE and conditions allow, MIGRATE_CMA is
> > >> allocated as MIGRATE_MOVABLE first.  If MIGRATE_CMA doesn't have
> > >> enough pages for the allocation, it falls back to MIGRATE_MOVABLE.
> > >> The memory of MIGRATE_MOVABLE can then be kept for MIGRATE_UNMOVABLE
> > >> and MIGRATE_RECLAIMABLE, which have no MIGRATE_CMA fallback.
> > >
> > > Hello,
> > >
> > > I did some work similar to this.
> > > Please refer to the following links.
> > >
> > > https://lkml.org/lkml/2014/5/28/64
> > > https://lkml.org/lkml/2014/5/28/57
> > 
> > > I tested the #1 approach and found a problem.  Although free memory
> > > in meminfo can hover around the low watermark, there is a large
> > > fluctuation in free memory, because too many pages are reclaimed when
> > > kswapd is invoked.
> > > The reason for this behaviour is that successively allocated CMA
> > > pages sit on the LRU list in that order and kswapd reclaims them in
> > > the same order.  This memory doesn't help kswapd's watermark
> > > checking, so too many pages are reclaimed, I guess.
> > 
> > This issue can be handled with some changes around the shrink code.  I
> > am trying to integrate a patch for that.
> > But I am not sure we hit the same issue.  Would you mind giving me more
> > info about this part?
> 
> I forgot the issue because of the big time gap.  I need some time to
> bring the issue back to mind.  I will answer soon after some thinking.

Hello,

Yes, the issue I mentioned before can be handled by modifying the shrink
code. I didn't dive into the problem, so I don't know the details either.
What I know is that there is a large fluctuation in the memory statistics,
and my guess is that it is caused by the ordering of reclaimable pages. If
we use the #1 approach, the bulk of CMA pages used for page cache or
similar are linked together on the LRU and will be reclaimed all at once,
because reclaimed CMA pages are not counted and the watermark check keeps
failing until normal pages are reclaimed.
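
For reference, the "not counted" part comes from the watermark check.
A minimal sketch of __zone_watermark_ok() in mm/page_alloc.c (simplified;
the per-order free list checks and allocation-flag adjustments are
elided):

/*
 * Simplified sketch of __zone_watermark_ok() in mm/page_alloc.c:
 * when the allocation cannot use CMA areas (no ALLOC_CMA), free CMA
 * pages are subtracted before the comparison, so freeing CMA pages
 * does not help such an allocation pass the watermark check.
 */
static bool __zone_watermark_ok(struct zone *z, unsigned int order,
			unsigned long mark, int classzone_idx,
			int alloc_flags, long free_pages)
{
	long min = mark;
	long free_cma = 0;

	free_pages -= (1 << order) - 1;
#ifdef CONFIG_CMA
	/* If the allocation can't use CMA areas, ignore free CMA pages. */
	if (!(alloc_flags & ALLOC_CMA))
		free_cma = zone_page_state(z, NR_FREE_CMA_PAGES);
#endif
	if (free_pages - free_cma <= min + z->lowmem_reserve[classzone_idx])
		return false;

	/* (per-order free list checks elided) */
	return true;
}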

I think that a round-robin approach is better, for the following reasons:

1) We want to spread CMA freepages across all users, not one specific
user. We can modify the shrink code not to reclaim pages on CMA, because
they don't help watermark checking in some cases. In that case, if we
don't use round-robin, one specific user whose mapping is backed by CMA
pages gets all the benefit while the others take all the overhead. I
think that spreading the pages makes all users fair (see the sketch after
these reasons).

2) Using CMA freepages first needlessly imposes overhead on the CMA user.
If the system has enough normal freepages, it is better to avoid using
CMA as much as possible.
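
To make the round-robin idea concrete, here is a hypothetical sketch
(not code from the posted patches; cma_alloc_turn is an assumed per-zone
toggle, not a real struct zone field):

/*
 * Hypothetical sketch of round-robin allocation for movable
 * requests: alternate the starting free list between CMA and
 * normal movable, so CMA pages spread across all movable users
 * instead of always being drained first.
 */
static struct page *__rmqueue_movable_rr(struct zone *zone,
					 unsigned int order)
{
	struct page *page = NULL;

	/* cma_alloc_turn is an assumed per-zone toggle. */
	if (zone->cma_alloc_turn)
		page = __rmqueue_smallest(zone, order, MIGRATE_CMA);
	if (!page)
		page = __rmqueue_smallest(zone, order, MIGRATE_MOVABLE);
	if (!page && !zone->cma_alloc_turn)
		page = __rmqueue_smallest(zone, order, MIGRATE_CMA);

	zone->cma_alloc_turn = !zone->cma_alloc_turn;
	return page;
}

Compared with always trying MIGRATE_CMA first, alternating the starting
list leaves CMA untouched on roughly half of the allocations when normal
freepages suffice, which addresses both points above.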

Thanks.