Message-ID: <20190213132800.dekg525rhrjn3cmj@kshutemo-mobl1>
Date: Wed, 13 Feb 2019 16:28:00 +0300
From: "Kirill A. Shutemov" <kirill@...temov.name>
To: Matthew Wilcox <willy@...radead.org>
Cc: Anshuman Khandual <anshuman.khandual@....com>,
lsf-pc@...ts.linux-foundation.org,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...nel.org>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Vlastimil Babka <vbabka@...e.cz>
Subject: Re: [LSF/MM TOPIC] Non standard size THP

On Wed, Feb 13, 2019 at 05:06:47AM -0800, Matthew Wilcox wrote:
> On Tue, Feb 12, 2019 at 11:33:31AM +0300, Kirill A. Shutemov wrote:
> > To consider it seriously we need to understand what it means for
> > split_huge_p?d()/split_huge_page(). How will khugepaged deal with this?
> >
> > In particular, I worry about exposing (to the user or the CPU) page
> > table state in the middle of a conversion (huge->small or small->huge).
> > Handling this at the page table level provides a level of atomicity
> > that you will not have.
>
> We could do an RCU-style trick where (eg) for merging 16 consecutive
> entries together, we allocate a new PTE leaf, take the mmap_sem for write,
> copy the page table over, update the new entries, then put the new leaf
> into the PMD level. Then iterate over the old PTE leaf again, and set
> any dirty bits in the new leaf which were set during the race window.
>
> Does that cover all the problems?
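
For illustration only: the copy-then-merge-dirty-bits step of the scheme above could be sketched in userspace C. This is a toy model, not kernel code; `pte_t`, `PTE_DIRTY`, and the helper names are invented here, the real locking (mmap_sem, TLB flushes) is elided, and 16 entries stand in for a full PTE leaf.

```c
#include <stdint.h>
#include <string.h>

#define NENTRIES 16
#define PTE_DIRTY 0x1u

/* Toy "page table entry": upper bits hold the pfn, bit 0 is dirty. */
typedef uint64_t pte_t;

/* Step 1: snapshot the old leaf into a freshly allocated one
 * (in the real scheme this happens under mmap_sem held for write). */
static void copy_leaf(const pte_t *old_leaf, pte_t *new_leaf)
{
    memcpy(new_leaf, old_leaf, NENTRIES * sizeof(pte_t));
}

/* Step 2: after publishing the new leaf at the PMD level, re-scan the
 * old leaf and propagate any dirty bits that racing writers set on it
 * during the window between the copy and the switch. */
static void merge_dirty(const pte_t *old_leaf, pte_t *new_leaf)
{
    for (int i = 0; i < NENTRIES; i++)
        if (old_leaf[i] & PTE_DIRTY)
            new_leaf[i] |= PTE_DIRTY;
}
```

A caller would copy, publish the new leaf, then run `merge_dirty()` so no dirty bit set in the race window is lost.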
Probably, but it will kill scalability. Taking mmap_sem for write to
handle a page fault or MADV_DONTNEED will not make anybody happy.
--
Kirill A. Shutemov