Message-ID: <6cee3838-1807-4983-9d7f-b3a30ee30563@collabora.com>
Date: Fri, 6 Oct 2023 16:40:53 +0500
From: Muhammad Usama Anjum <usama.anjum@...labora.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Michał Mirosław <emmir@...gle.com>,
Andrei Vagin <avagin@...il.com>
Cc: Muhammad Usama Anjum <usama.anjum@...labora.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
kernel@...labora.com, Paul Gofman <pgofman@...eweavers.com>
Subject: Re: [PATCH v33 3/6] fs/proc/task_mmu: Add fast paths to get/clear
PAGE_IS_WRITTEN flag
Hi Andrew,

You picked up all the other patches in this series except this one. Thank
you so much. I couldn't find any comment on why this one wasn't picked up;
maybe it was missed?

Please let me know what you think.
Regards,
Usama
On 8/21/23 7:15 PM, Muhammad Usama Anjum wrote:
> Add fast code paths that handle only the get and/or clear operations on
> the PAGE_IS_WRITTEN flag, improving performance by 0-35%.
> The results of some test cases are given below:
>
> Test-case-1
> t1 = (Get + WP) time
> t2 = WP time
>                     t1            t2
> Without this patch: 140-170mcs    90-115mcs
> With this patch:    110mcs        80mcs
> Worst case diff:    35% faster    30% faster
>
> Test-case-2
> t3 = atomic Get and WP
>                     t3
> Without this patch: 120-140mcs
> With this patch:    100-110mcs
> Worst case diff:    21% faster
>
> Signed-off-by: Muhammad Usama Anjum <usama.anjum@...labora.com>
> ---
> The test used to measure the performance can be found at: https://is.gd/FtSKcD
> The argument sets "8 8192 3 1 0" and "8 8192 3 1 1" were used to produce
> the results mentioned above.
>
> Changes in v29:
> - Minor updates in flush logic following the original patch
> ---
> fs/proc/task_mmu.c | 36 ++++++++++++++++++++++++++++++++++++
> 1 file changed, 36 insertions(+)
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 6e6261e8b91b1..79cf023148b28 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -2138,6 +2138,41 @@ static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
>  		return 0;
>  	}
>  
> +	if (!p->vec_out) {
> +		/* Fast path for performing exclusive WP */
> +		for (addr = start; addr != end; pte++, addr += PAGE_SIZE) {
> +			if (pte_uffd_wp(ptep_get(pte)))
> +				continue;
> +			make_uffd_wp_pte(vma, addr, pte);
> +			if (!flush_end)
> +				start = addr;
> +			flush_end = addr + PAGE_SIZE;
> +		}
> +		goto flush_and_return;
> +	}
> +
> +	if (!p->arg.category_anyof_mask && !p->arg.category_inverted &&
> +	    p->arg.category_mask == PAGE_IS_WRITTEN &&
> +	    p->arg.return_mask == PAGE_IS_WRITTEN) {
> +		for (addr = start; addr < end; pte++, addr += PAGE_SIZE) {
> +			unsigned long next = addr + PAGE_SIZE;
> +
> +			if (pte_uffd_wp(ptep_get(pte)))
> +				continue;
> +			ret = pagemap_scan_output(p->cur_vma_category | PAGE_IS_WRITTEN,
> +						  p, addr, &next);
> +			if (next == addr)
> +				break;
> +			if (~p->arg.flags & PM_SCAN_WP_MATCHING)
> +				continue;
> +			make_uffd_wp_pte(vma, addr, pte);
> +			if (!flush_end)
> +				start = addr;
> +			flush_end = next;
> +		}
> +		goto flush_and_return;
> +	}
> +
>  	for (addr = start; addr != end; pte++, addr += PAGE_SIZE) {
>  		unsigned long categories = p->cur_vma_category |
>  				pagemap_page_category(p, vma, addr, ptep_get(pte));
> @@ -2161,6 +2196,7 @@ static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
>  		flush_end = next;
>  	}
>  
> +flush_and_return:
>  	if (flush_end)
>  		flush_tlb_range(vma, start, addr);
> 
--
BR,
Muhammad Usama Anjum