Message-ID: <20150206182918.GA2290@kernel.org>
Date:	Fri, 6 Feb 2015 10:29:18 -0800
From:	Shaohua Li <shli@...nel.org>
To:	Minchan Kim <minchan@...nel.org>
Cc:	"Michael Kerrisk (man-pages)" <mtk.manpages@...il.com>,
	Michal Hocko <mhocko@...e.cz>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	linux-api@...r.kernel.org, Hugh Dickins <hughd@...gle.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Rik van Riel <riel@...hat.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Mel Gorman <mgorman@...e.de>, Jason Evans <je@...com>,
	zhangyanfei@...fujitsu.com,
	"Kirill A. Shutemov" <kirill@...temov.name>,
	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: Re: [PATCH v17 1/7] mm: support madvise(MADV_FREE)

On Fri, Feb 06, 2015 at 02:51:03PM +0900, Minchan Kim wrote:
> Hi Shaohua,
> 
> On Thu, Feb 05, 2015 at 04:33:11PM -0800, Shaohua Li wrote:
> > 
> > Hi Minchan,
> > 
> > Sorry to jump into this thread so late, and apologies if some issues were
> > discussed before. I'm interested in this patch, so I tried it here.
> > I used a simple test with
> 
> No problem at all. Interest always wins over ignorance.
> 
> > jemalloc. Obviously this can improve performance when there is no memory
> > pressure. Did you try a setup with memory pressure?
> 
> Sure, but it was not a huge-memory system like yours.

Yes, I'd like to check the symptoms under memory pressure, so I chose such a test.
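
(Aside: the substitution the jemalloc test exercises is roughly the one
sketched below. This is a minimal userspace illustration of the two advice
values, not jemalloc's code; the fallback #define of MADV_FREE to 8 is an
assumption for headers that predate the patch.)

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MADV_FREE
#define MADV_FREE 8			/* assumed uapi value */
#endif

int main(void)
{
	size_t len = 64 << 20;		/* 64MB anon mapping, stand-in for an arena chunk */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(p, 0xaa, len);		/* dirty the pages so they are resident */

	/*
	 * MADV_DONTNEED drops the pages immediately; the next touch faults
	 * in fresh zero pages.  MADV_FREE only marks them lazily freeable:
	 * they stay resident until reclaim actually needs the memory, so
	 * re-touching them soon afterwards is cheap, which is where the
	 * no-pressure speedup comes from.
	 */
	if (madvise(p, len, MADV_FREE) != 0)
		madvise(p, len, MADV_DONTNEED);	/* kernel without the patch: fall back */

	munmap(p, len);
	return 0;
}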

> > In my test, jemalloc will map a 61G vma and use about 32G of memory without
> > MADV_FREE. If MADV_FREE is enabled, jemalloc will use the whole 61G of memory
> > because madvise doesn't reclaim the unused memory. If I disable swap (tweak your patch
> 
> Yes, IIUC, jemalloc replaces MADV_DONTNEED with MADV_FREE completely.

right.
> > slightly to make it work without swap), I got oom. If swap is enabled, my
> 
> You mean you modified the anon aging logic so it works even though there is no
> swap? If so, I have no idea why OOM happens. I guess it should free all of the
> freeable pages during aging, so although system stalls would happen more often,
> I don't expect OOM. Anyway, for MADV_FREE with no swap, we should consider more
> things about anonymous aging.

In the patch, MADV_FREE is disabled and falls back to DONTNEED if swap is not
enabled. Our production environment doesn't enable swap, so I tried deleting
the 'no swap' check and making MADV_FREE always enabled regardless of whether
swap is enabled. I didn't change anything else. With that change, I saw OOM
immediately. So we definitely have an aging issue: the pages aren't reclaimed
fast enough.
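
(Aside: a userspace analogue of the 'no swap' check mentioned above might look
like the sketch below. It is illustrative only, not the patch's in-kernel code,
and purge_advice() is a made-up helper; the idea is the same as described in
the patch: without swap the anon LRU is not normally scanned, so lazily-freed
pages would linger and eager MADV_DONTNEED is the safer advice.)

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <sys/sysinfo.h>

#ifndef MADV_FREE
#define MADV_FREE 8			/* assumed uapi value */
#endif

/*
 * Mirror the fallback described above: lazy free only when swap is
 * configured, otherwise drop pages eagerly.
 */
static int purge_advice(void)
{
	struct sysinfo si;

	if (sysinfo(&si) == 0 && si.totalswap > 0)
		return MADV_FREE;	/* swap present: lazily-freed pages get reclaimed */
	return MADV_DONTNEED;		/* swapless: free eagerly */
}

int main(void)
{
	printf("purge advice: %s\n",
	       purge_advice() == MADV_FREE ? "MADV_FREE" : "MADV_DONTNEED");
	return 0;
}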

> > system is totally stalled because of swap activity. Without MADV_FREE,
> > everything is ok. Considering we definitely don't want to waste too much
> > memory, a system with memory pressure is normal, so it sounds like MADV_FREE
> > will introduce big trouble here.
> > 
> > Did you think about moving the MADV_FREE pages to the head of the inactive
> > LRU, so they can be reclaimed easily?
> 
> I think it's desirable if the page had lived in the active LRU.
> The reason I didn't do that was the volatile ranges system call, which
> was the motivation for MADV_FREE in my mind.
> At the last LSF/MM, there was concern about the data's hotness.
> Some users want to keep it at its current LRU position; others want to
> handle it as cold (tail of inactive list)/warm (head of inactive list)/
> hot (head of active list), for example.
> The vrange syscall was only about volatility and did not depend on page
> hotness, so the decision in my head was not to change the LRU order and
> to add a new hotness advice later if we need it.
> 
> However, MADV_FREE's main customer is allocators and, AFAIK, they want
> to replace MADV_DONTNEED with MADV_FREE, so I think the pages really are
> cold, but we couldn't be sure, so the head of the inactive list is a good
> compromise.
> Another concern about the tail of the inactive list is that there could be
> plenty of pages there which were written back asynchronously by a previous
> reclaim pass and not yet reclaimed, because they cannot be freed in the
> softirq context of writeback completion. It means we would end up freeing
> pages that still have the potential to become working set ahead of pages
> the VM has already decided to evict.

Yes, they are definitely cold pages. I thought we should make sure the
MADV_FREE pages are reclaimed before other pages, at least in the anon
LRU list, though it might be difficult to determine whether we should reclaim
writeback pages first or MADV_FREE pages first.

Thanks,
Shaohua
