Message-ID: <9a84440f-1462-2193-7dd6-c84e8bb22232@redhat.com>
Date: Thu, 6 Oct 2022 11:20:42 +0200
From: David Hildenbrand <david@...hat.com>
To: Peter Xu <peterx@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-kselftest@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Shuah Khan <shuah@...nel.org>, Hugh Dickins <hughd@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>,
Andrea Arcangeli <aarcange@...hat.com>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Jason Gunthorpe <jgg@...dia.com>,
John Hubbard <jhubbard@...dia.com>
Subject: Re: [PATCH v1 6/7] mm/ksm: convert break_ksm() to use
walk_page_range_vma()
>> +int break_ksm_pud_entry(pud_t *pud, unsigned long addr, unsigned long next,
>> + struct mm_walk *walk)
>> +{
>> + /* We only care about page tables to walk to a single base page. */
>> + if (pud_leaf(*pud) || !pud_present(*pud))
>> + return 1;
>> + return 0;
>> +}
>
> Is this needed? I thought the pgtable walker handles this already.
>
> [...]
>
Most probably yes. I was trying to avoid PUD splits, but I guess we
simply should not care about them in VMAs that are considered by KSM
(VM_MERGEABLE). Most probably that never happens.
>> static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
>> {
>> - struct page *page;
>> vm_fault_t ret = 0;
>>
>> + if (WARN_ON_ONCE(!IS_ALIGNED(addr, PAGE_SIZE)))
>> + return -EINVAL;
>> +
>> do {
>> bool ksm_page = false;
>>
>> cond_resched();
>> - page = follow_page(vma, addr,
>> - FOLL_GET | FOLL_MIGRATION | FOLL_REMOTE);
>> - if (IS_ERR_OR_NULL(page))
>> - break;
>> - if (PageKsm(page))
>> - ksm_page = true;
>> - put_page(page);
>> + ret = walk_page_range_vma(vma, addr, addr + PAGE_SIZE,
>> + &break_ksm_ops, &ksm_page);
>> + if (WARN_ON_ONCE(ret < 0))
>> + return ret;
>
> I'm not sure this would be worth it, especially with a 4% degradation.
> The next patch will be able to drop ~50 LOC, but this patch adds 60+
> anyway, based on another new helper just introduced...
>
> I just don't see a strong enough reason to do this just to drop
> FOLL_MIGRATION. It's different from the previous VM_FAULT_WRITE
> refactor, where the unshare approach was good enough reasoning to me.
>
> Perhaps I missed something?
My main motivation is to remove most of that GUP hackery here, which is:
1) Getting a reference on a page and waiting for migration to finish,
even though both are unnecessary.
2) As we don't have sufficient control, we added FOLL_MIGRATION hacks to
MM core to work around limitations in the GUP-based approach (roughly
sketched below).
3) We rely on the legacy follow_page() interface that we should really
get rid of in the long term.
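
For context, the FOLL_MIGRATION special-casing in follow_page_pte() looks
roughly like the excerpt below (simplified and from memory, so not
necessarily verbatim mm/gup.c):

	if (!pte_present(pte)) {
		swp_entry_t entry;

		/*
		 * KSM's break_ksm() relies upon recognizing a KSM page
		 * even while it is being migrated, so wait for the
		 * migration to finish and retry the PTE lookup.
		 */
		if (likely(!(flags & FOLL_MIGRATION)))
			goto no_page;
		if (pte_none(pte))
			goto no_page;
		entry = pte_to_swp_entry(pte);
		if (!is_migration_entry(entry))
			goto no_page;
		pte_unmap_unlock(ptep, ptl);
		migration_entry_wait(mm, pmd, address);
		goto retry;
	}

That is exactly the kind of caller-specific knowledge that leaks into GUP
and that we'd like to pull back into KSM.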
All we want to do is walk the page tables and decide whether something
we care about is mapped. Instead of leaking these details via
hacks into GUP code and making that code harder to grasp/maintain, this
patch moves that logic to the actual user, while reusing generic page
walking code.
Yes, we have to extend page walking code, but it's just the natural,
non-hacky way of doing it.
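
To illustrate, here is a minimal sketch of what the pmd-level callback
behind break_ksm_ops could look like (the name break_ksm_pmd_entry and
the exact details are my assumption here, not necessarily what the final
patch does):

static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr,
			       unsigned long next, struct mm_walk *walk)
{
	bool *ksm_page = walk->private;
	struct page *page = NULL;
	spinlock_t *ptl;
	pte_t *pte;

	/* We only care about page tables that map a single base page. */
	if (pmd_leaf(*pmd) || !pmd_present(*pmd))
		return 0;

	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
	if (pte_present(*pte)) {
		page = vm_normal_page(walk->vma, addr, *pte);
	} else if (!pte_none(*pte)) {
		swp_entry_t entry = pte_to_swp_entry(*pte);

		/*
		 * KSM pages remain KSM pages until freed, so there is no
		 * need to wait for migration to finish here -- unlike the
		 * FOLL_MIGRATION hack above.
		 */
		if (is_migration_entry(entry))
			page = pfn_swap_entry_to_page(entry);
	}
	if (page && PageKsm(page))
		*ksm_page = true;
	pte_unmap_unlock(pte, ptl);
	return 0;
}

static const struct mm_walk_ops break_ksm_ops = {
	.pud_entry	= break_ksm_pud_entry,
	.pmd_entry	= break_ksm_pmd_entry,
};

With something like that, break_ksm() only has to look at the bool it
passed in via walk->private and does not need FOLL_GET/FOLL_MIGRATION at
all.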
Regarding the 4% performance degradation (if I hadn't added the
benchmarks, nobody would know or probably care ;) ), I am not quite
sure why that is the case. We're just walking page tables after all in
both cases. Maybe the callback-based implementation of pagewalk code is
less efficient, but we might be able to improve that implementation if
we really care about performance here. Maybe removing
break_ksm_pud_entry() already improves the numbers slightly.
Thanks!
--
Thanks,
David / dhildenb