Message-ID: <558AA37E.20106@suse.cz>
Date: Wed, 24 Jun 2015 14:33:02 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Rik van Riel <riel@...hat.com>,
"Kirill A. Shutemov" <kirill@...temov.name>,
Ebru Akagunduz <ebru.akagunduz@...il.com>
Cc: linux-mm@...ck.org, akpm@...ux-foundation.org,
kirill.shutemov@...ux.intel.com, n-horiguchi@...jp.nec.com,
aarcange@...hat.com, iamjoonsoo.kim@....com, xiexiuqi@...wei.com,
gorcunov@...nvz.org, linux-kernel@...r.kernel.org, mgorman@...e.de,
rientjes@...gle.com, aneesh.kumar@...ux.vnet.ibm.com,
hughd@...gle.com, hannes@...xchg.org, mhocko@...e.cz,
boaz@...xistor.com, raindel@...lanox.com
Subject: Re: [RFC v2 3/3] mm: make swapin readahead to improve thp collapse rate

On 06/22/2015 03:37 AM, Rik van Riel wrote:
> On 06/21/2015 02:11 PM, Kirill A. Shutemov wrote:
>> On Sat, Jun 20, 2015 at 02:28:06PM +0300, Ebru Akagunduz wrote:
>>> + __collapse_huge_page_swapin(mm, vma, address, pmd, pte);
>>> +
>>
>> And now the pages we swapped in are not isolated, right?
>> What prevents them from being swapped out again or whatever?
>
> Nothing, but __collapse_huge_page_isolate is run with the
> appropriate locks to ensure that once we actually collapse
> the THP, things are present.
>
> The way do_swap_page is called, khugepaged does not even
> wait for pages to be brought in from swap. It just maps in
> pages that are already in the swap cache and can be locked
> immediately (without waiting).
>
> It will also start IO on pages that are not in memory yet,
> and will hopefully pick those up on the next round.
Hm, so what if the process is slightly larger than available memory and
rarely touches the swapped-out pages? Won't that just cause thrashing,
so that on the next round you find them swapped out again?