Message-ID: <20190530154119.GF6703@dhcp22.suse.cz>
Date: Thu, 30 May 2019 17:41:19 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Yang Shi <yang.shi@...ux.alibaba.com>
Cc: Mel Gorman <mgorman@...hsingularity.net>,
Andrew Morton <akpm@...ux-foundation.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [HELP] How to get task_struct from mm
On Thu 30-05-19 14:57:46, Yang Shi wrote:
> Hi folks,
>
>
> As we discussed about page demotion for PMEM at LSF/MM, the demotion
> should respect the mempolicy and allowed mems of the process which the
> page (anonymous page only for now) belongs to.
cpusets memory mask (aka mems_allowed) is indeed tricky and somewhat
awkward. It is inherently an address space property, and I have never
understood why we have it per _thread_. That makes no sense to me and
only leads to weird corner cases. What should happen if different
threads disagree about the allocation affinity while working on a
shared address space?
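To make the mismatch concrete (a simplified sketch of the relevant
fields, not the exact struct layout):

	struct task_struct {
		/* ... */
		nodemask_t mems_allowed;	/* per-thread cpuset mask */
		struct mm_struct *mm;		/* shared by all threads */
		/* ... */
	};

	struct mm_struct {
		/* ... */
		/* no mems_allowed anywhere in here */
	};

Two threads attached to different cpusets can carry different
mems_allowed while mapping exactly the same pages.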
> The vma that the page is mapped to can be retrieved from an rmap walk
> easily, but we need to know the task_struct that the vma belongs to.
> It looks like there is no such API, and container_of does not work
> with a pointer member.
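Just to spell out what the walk gives you (a sketch against the
current rmap_walk() interface; the callback name and arg are
illustrative, not an existing helper):

	static bool demote_rmap_one(struct page *page,
				    struct vm_area_struct *vma,
				    unsigned long addr, void *arg)
	{
		struct mm_struct **mmp = arg;

		/*
		 * The walk hands us the vma and, via vm_mm, the shared
		 * address space. There is no task_struct reachable from
		 * here (mm->owner only exists for CONFIG_MEMCG).
		 */
		*mmp = vma->vm_mm;
		return true;	/* continue with the other mappings */
	}

	struct mm_struct *mm = NULL;
	struct rmap_walk_control rwc = {
		.rmap_one	= demote_rmap_one,
		.arg		= &mm,
	};
	rmap_walk(page, &rwc);

So the walk ends at the shared mm, nothing more.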
I do not think such a reverse lookup is a good idea. As you point out
in the reply, we have that for memcgs, but we really hope to get rid
of mm->owner there as well. It is just more tricky there. Moreover,
such a reverse mapping would be incorrect. Just think of disagreeing
yet overlapping cpusets for different threads mapping the same page.
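For reference, this is roughly the pattern memcg relies on today (a
simplified sketch along the lines of mm/memcontrol.c, refcounting and
error handling dropped; the wrapper name is made up):

	static struct mem_cgroup *sketch_memcg_from_mm(struct mm_struct *mm)
	{
		struct mem_cgroup *memcg = NULL;
		struct task_struct *task;

		rcu_read_lock();
		task = rcu_dereference(mm->owner);	/* one arbitrary task */
		if (task)
			memcg = mem_cgroup_from_task(task);
		rcu_read_unlock();

		return memcg;
	}

It only works because mm->owner exists for CONFIG_MEMCG, and even then
it has to pick a single task for the whole mm - which is exactly the
ambiguity described above.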
Is it such a big deal to document that node migration is not
compatible with cpusets?
--
Michal Hocko
SUSE Labs