Message-Id: <1380d06b-e15b-8fdd-8e31-a6457db634a4@linux.vnet.ibm.com>
Date: Wed, 3 Jan 2018 08:41:09 +0530
From: Anshuman Khandual <khandual@...ux.vnet.ibm.com>
To: Michal Hocko <mhocko@...nel.org>,
Anshuman Khandual <khandual@...ux.vnet.ibm.com>
Cc: linux-mm@...ck.org, Zi Yan <zi.yan@...rutgers.edu>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
"Kirill A. Shutemov" <kirill@...temov.name>,
Vlastimil Babka <vbabka@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
Andrea Reale <ar@...ux.vnet.ibm.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 1/3] mm, numa: rework do_pages_move
On 01/02/2018 05:42 PM, Michal Hocko wrote:
> On Tue 02-01-18 16:55:46, Anshuman Khandual wrote:
>> On 12/08/2017 09:45 PM, Michal Hocko wrote:
>>> From: Michal Hocko <mhocko@...e.com>
>>>
>>> do_pages_move is supposed to move user-defined memory (an array of
>>> addresses) to user-defined numa nodes (an array of nodes, one for
>>> each address). The user-provided status array then contains the resulting
>>> numa node for each address, or an error. The semantics of this function are
>>> a little bit confusing because only some errors are reported back. Notably,
>>> the migrate_pages error is only reported via the return value. This patch
>>
>> It does report back the migration failures as well. In new_page_node
>> there is '*result = &pm->status', so further along, in unmap_and_move,
>> *result will hold either the migration error or the node ID of the new page.
>>
>> 	newpage = get_new_page(page, private, &result);
>> 	............
>> 	if (result) {
>> 		if (rc)
>> 			*result = rc;
>> 		else
>> 			*result = page_to_nid(newpage);
>> 	}
>>
>
> This is true, except the user will not get this information. Have a look
> how we do not copy status on error up in the do_pages_move layer.
Ahh, right, we don't. But as you have mentioned, this patch does not
intend to change the semantics of the status return, though it does seem
like the right thing to do. We can just pass the status on to the user
here before bailing out.
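
Roughly something like the below, against the existing chunked loop in
do_pages_move() (just an untested sketch, variable names from memory):
move the put_user() copy-back of pm[j].status up, so it runs before we
act on the do_move_page_to_node_array() error.

	err = do_move_page_to_node_array(mm, pm,
					 flags & MPOL_MF_MOVE_ALL);

	/*
	 * Copy the per-page status back to the user even when the
	 * chunk migration failed, so that individual migration
	 * errors (or the new node ids) are not lost.
	 */
	for (j = 0; j < chunk_nr_pages; j++)
		if (put_user(pm[j].status, status + j + chunk_start)) {
			err = -EFAULT;
			goto out_pm;
		}

	if (err < 0)
		goto out_pm;

That way the status array semantics stay the same whether or not
migrate_pages() succeeded for the whole chunk.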