Message-ID: <87d1ketih2.fsf@yhuang-mobile.sh.intel.com>
Date: Thu, 08 Sep 2016 10:22:01 -0700
From: "Huang\, Ying" <ying.huang@...el.com>
To: "Kirill A. Shutemov" <kirill@...temov.name>
Cc: "Huang\, Ying" <ying.huang@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
<tim.c.chen@...el.com>, <dave.hansen@...el.com>,
<andi.kleen@...el.com>, <aaron.lu@...el.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>,
Andrea Arcangeli <aarcange@...hat.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Hugh Dickins <hughd@...gle.com>,
"Shaohua Li" <shli@...nel.org>, Minchan Kim <minchan@...nel.org>,
Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH -v3 05/10] mm, THP, swap: Add get_huge_swap_page()
"Kirill A. Shutemov" <kirill@...temov.name> writes:
> On Wed, Sep 07, 2016 at 09:46:04AM -0700, Huang, Ying wrote:
>> From: Huang Ying <ying.huang@...el.com>
>>
>> A variation of get_swap_page(), get_huge_swap_page(), is added to
>> allocate a swap cluster (512 swap slots) based on the swap cluster
>> allocation function. A fairly simple algorithm is used: only the
>> first swap device in the priority list is tried for the swap cluster
>> allocation. If that attempt fails, the function fails and the caller
>> falls back to allocating a single swap slot instead. This works well
>> enough for normal cases.
>
> For normal cases, yes. But the limitation is not obvious to users, and
> the performance difference after a small change in configuration could
> be puzzling.

If the number of free swap clusters differs significantly among multiple
swap devices, some THPs may be split earlier than necessary because we
fail to allocate swap clusters for them. This could be caused, for
example, by a big size difference among the swap devices.

> At least this must be documented somewhere.

I can add the above description to the patch description. Are there any
other places you would suggest?
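
For reference, the fallback described in the patch summary is just a
try-then-degrade pattern on the caller side. Below is a minimal sketch of
that control flow. It is illustrative only: alloc_swap_for_thp() is a
made-up helper, and it assumes get_huge_swap_page() follows the same
return convention as get_swap_page() (entry.val == 0 on failure).

static swp_entry_t alloc_swap_for_thp(struct page *page)
{
	swp_entry_t entry;

	/* Try to allocate a whole swap cluster (512 slots) first. */
	entry = get_huge_swap_page();
	if (entry.val)
		return entry;

	/*
	 * No free cluster on the highest-priority swap device: split
	 * the THP and fall back to a single swap slot.  This is the
	 * "split earlier than necessary" case when another swap device
	 * still has free clusters.
	 */
	if (split_huge_page(page))
		return (swp_entry_t) { 0 };

	return get_swap_page();
}

The helper above does not exist in the series; it is only meant to show
the control flow that the description and the discussion above refer to.
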
Best Regards,
Huang, Ying
[snip]