Date:	Tue, 13 May 2014 11:26:03 +0900
From:	Joonsoo Kim <iamjoonsoo.kim@....com>
To:	Marek Szyprowski <m.szyprowski@...sung.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Rik van Riel <riel@...hat.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Mel Gorman <mgorman@...e.de>,
	Laura Abbott <lauraa@...eaurora.org>,
	Minchan Kim <minchan@...nel.org>,
	Heesub Shin <heesub.shin@...sung.com>,
	Michal Nazarewicz <mina86@...a86.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	Kyungmin Park <kyungmin.park@...sung.com>,
	Bartlomiej Zolnierkiewicz <b.zolnierkie@...sung.com>,
	'Tomasz Stanislawski' <t.stanislaws@...sung.com>
Subject: Re: [RFC PATCH 0/3] Aggressively allocate the pages on cma reserved
 memory

On Fri, May 09, 2014 at 02:39:20PM +0200, Marek Szyprowski wrote:
> Hello,
> 
> On 2014-05-08 02:32, Joonsoo Kim wrote:
> >This series tries to improve CMA.
> >
> >CMA was introduced to provide physically contiguous pages at runtime
> >without statically reserving a memory area. However, the current
> >implementation behaves like the reservation approach, because allocation
> >from the CMA reserved region only occurs as a fallback for MIGRATE_MOVABLE
> >allocations: we can allocate from it only when there are no other movable
> >pages. In that situation, kswapd is easily invoked, since unmovable and
> >reclaimable allocations treat (free pages - free CMA pages) as the
> >system's free memory, and that value may be below the high watermark.
> >Once kswapd starts reclaiming memory, the fallback allocation rarely
> >occurs.
> >
> >In my experiments, I found that on a system with 1024 MB of memory and
> >512 MB reserved for CMA, kswapd is mostly invoked around the 512 MB
> >free-memory boundary. The invoked kswapd tries to free memory until
> >(free pages - free CMA pages) exceeds the high watermark, so the free
> >memory reported in meminfo consistently hovers around the 512 MB boundary.
> >
> >To fix this problem, we should allocate pages from the CMA reserved
> >memory more aggressively and intelligently. Patch 2 implements the
> >solution. Patch 1 is a simple optimization that removes a useless retry,
> >and patch 3 removes a now-useless alloc flag, so these two are not
> >essential. See patch 2 for a more detailed description.
> >
> >This patchset is based on v3.15-rc4.
> 
> Thanks for posting those patches. They remind me of the following
> discussion:
> http://thread.gmane.org/gmane.linux.kernel/1391989/focus=1399524
> 
> Your approach is basically the same. I hope that your patches can be
> improved in such a way that they will be accepted by the mm maintainers.
> I only wonder whether the third patch is really necessary; without it,
> kswapd wakeup might still be avoided in some cases.

Hello,

I didn't know about that patch and discussion, because I had no interest
in CMA at the time. Your approach looks similar to my approach #1, and it
could have the same problem with approach #1 that I mentioned in the
description of patch 2/3. Please refer to that patch description. :)
Also, this patch and yours have different purposes. This patch is intended
to make better use of CMA pages and thus get maximum performance. If the
goal were merely to avoid triggering the OOM killer, this logic could be
put on the reclaim path instead, but that is sub-optimal for performance,
because it requires migration in some cases.

If the second patch works as intended, only a few free CMA pages remain
by the time we approach the watermark, so the benefit of the third patch
would be marginal and we can remove ALLOC_CMA.

Thanks.