Message-Id: <f662e44c-4dd9-a301-8b6c-8cee572f6465@linux.vnet.ibm.com>
Date: Fri, 16 Jun 2017 21:27:04 +0530
From: "Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>
To: "Kirill A. Shutemov" <kirill@...temov.name>
Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Vineet Gupta <vgupta@...opsys.com>,
Russell King <linux@...linux.org.uk>,
Will Deacon <will.deacon@....com>,
Catalin Marinas <catalin.marinas@....com>,
Ralf Baechle <ralf@...ux-mips.org>,
"David S. Miller" <davem@...emloft.net>,
Martin Schwidefsky <schwidefsky@...ibm.com>,
Heiko Carstens <heiko.carstens@...ibm.com>,
Andrea Arcangeli <aarcange@...hat.com>,
linux-arch@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCHv2 3/3] mm: Use updated pmdp_invalidate() interface to track
dirty/accessed bits
On Friday 16 June 2017 06:51 PM, Kirill A. Shutemov wrote:
> On Fri, Jun 16, 2017 at 05:01:30PM +0530, Aneesh Kumar K.V wrote:
>> "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com> writes:
>>
>>> This patch uses the modified pmdp_invalidate(), which returns the previous
>>> value of the pmd, to transfer the dirty and accessed bits.
>>>
>>> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
>>> ---
>>> fs/proc/task_mmu.c | 8 ++++----
>>> mm/huge_memory.c | 29 ++++++++++++-----------------
>>> 2 files changed, 16 insertions(+), 21 deletions(-)
>>>
>>> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
>>> index f0c8b33d99b1..f2fc1ef5bba2 100644
>>> --- a/fs/proc/task_mmu.c
>>> +++ b/fs/proc/task_mmu.c
>>
>> .....
>>
>>> @@ -1965,7 +1955,6 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>> page_ref_add(page, HPAGE_PMD_NR - 1);
>>> write = pmd_write(*pmd);
>>> young = pmd_young(*pmd);
>>> - dirty = pmd_dirty(*pmd);
>>> soft_dirty = pmd_soft_dirty(*pmd);
>>>
>>> pmdp_huge_split_prepare(vma, haddr, pmd);
>>> @@ -1995,8 +1984,6 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>> if (soft_dirty)
>>> entry = pte_mksoft_dirty(entry);
>>> }
>>> - if (dirty)
>>> - SetPageDirty(page + i);
>>> pte = pte_offset_map(&_pmd, addr);
>>> BUG_ON(!pte_none(*pte));
>>> set_pte_at(mm, addr, pte, entry);
>>> @@ -2045,7 +2032,15 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>> * and finally we write the non-huge version of the pmd entry with
>>> * pmd_populate.
>>> */
>>> - pmdp_invalidate(vma, haddr, pmd);
>>> + old = pmdp_invalidate(vma, haddr, pmd);
>>> +
>>> + /*
>>> + * Transfer the dirty bit using the value returned by pmdp_invalidate()
>>> + * to be sure we don't race with a CPU that can set the bit under us.
>>> + */
>>> + if (pmd_dirty(old))
>>> + SetPageDirty(page);
>>> +
>>> pmd_populate(mm, pmd, pgtable);
>>>
>>> if (freeze) {
>>
>>
>> Can we invalidate the pmd early here? I.e., do pmdp_invalidate() instead of
>> pmdp_huge_split_prepare()?
>
> I think we can. But it means we would block access to the page for longer
> than necessary on most architectures. I guess it's not a big deal.
>
> Maybe as a separate patch on top of this patchset? Aneesh, would you take
> care of this?
>
Yes, I can do that.
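
Roughly, I am thinking of the following reordering inside
__split_huge_pmd_locked() -- an untested sketch reusing the names from your
patch, for illustration only:

	pmd_t old;

	/*
	 * Invalidate up front: from this point the huge pmd is
	 * non-present, so a racing CPU can no longer set the
	 * dirty/accessed bits while we build the replacement page
	 * table, and the separate pmdp_huge_split_prepare() step
	 * goes away.
	 */
	old = pmdp_invalidate(vma, haddr, pmd);

	write = pmd_write(old);
	young = pmd_young(old);
	soft_dirty = pmd_soft_dirty(old);
	if (pmd_dirty(old))
		SetPageDirty(page);

	/* ... set up the HPAGE_PMD_NR ptes from these bits, as before ... */

	pmd_populate(mm, pmd, pgtable);

The window where the range is non-present gets longer, as you say, but the
dirty/accessed transfer then needs no separate prepare step.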
-aneesh
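
P.S. For reference, with this interface the generic pmdp_invalidate() boils
down to something like the sketch below. It assumes a pmdp_establish() helper
that atomically swaps the pmd and returns the old value; an architecture can
of course provide its own implementation, so treat this as an illustration
rather than the exact patch:

/*
 * Generic helper with the updated interface: atomically replace the
 * huge pmd with a non-present copy and return the old value, so the
 * caller sees any dirty/accessed bits the hardware set right up to
 * the invalidation.
 */
pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
		      pmd_t *pmdp)
{
	/* Assumed helper: xchg-style update that returns the old pmd. */
	pmd_t old = pmdp_establish(vma, address, pmdp, pmd_mknotpresent(*pmdp));

	flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
	return old;
}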