Message-Id: <20200212184115.127c17c6b0f9dab6fcae56c2@linux-foundation.org>
Date: Wed, 12 Feb 2020 18:41:15 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Arjun Roy <arjunroy.kdev@...il.com>
Cc: davem@...emloft.net, netdev@...r.kernel.org, linux-mm@...ck.org,
arjunroy@...gle.com, Eric Dumazet <edumazet@...gle.com>,
Soheil Hassas Yeganeh <soheil@...gle.com>
Subject: Re: [PATCH resend mm,net-next 1/3] mm: Refactor insert_page to
prepare for batched-lock insert.
On Mon, 27 Jan 2020 18:59:56 -0800 Arjun Roy <arjunroy.kdev@...il.com> wrote:
> Add helper methods for vm_insert_page()/insert_page() to prepare for
> vm_insert_pages(), which batch-inserts pages to reduce spinlock
> operations when inserting multiple consecutive pages into the user
> page table.
>
> The intention of this patch-set is to reduce atomic ops for
> TCP zerocopy receives, which normally hit the same spinlock multiple
> times consecutively.
I tweaked this a bit for the addition of page_has_type() to
insert_page(). Please check.
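
For context: page_has_type() (include/linux/page-flags.h, roughly as it
looked around this time, quoted from memory, so please check against the
tree this is applied to) is true for pages whose ->page_type marks them
as buddy, offline, page-table or guard pages, none of which may be mapped
into a user page table:

	/* ->page_type values below this are real mapcounts (sketch) */
	#define PAGE_MAPCOUNT_RESERVE	-128

	static inline int page_has_type(struct page *page)
	{
		return (int)page->page_type < PAGE_MAPCOUNT_RESERVE;
	}

With the tweak, insert_page() refuses such pages with -EINVAL, in
addition to the existing PageAnon()/PageSlab() checks.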
From: Arjun Roy <arjunroy.kdev@...il.com>
Subject: mm: Refactor insert_page to prepare for batched-lock insert.
From: Arjun Roy <arjunroy@...gle.com>
Add helper methods for vm_insert_page()/insert_page() to prepare for
vm_insert_pages(), which batch-inserts pages to reduce spinlock
operations when inserting multiple consecutive pages into the user
page table.
The intention of this patch-set is to reduce atomic ops for
TCP zerocopy receives, which normally hit the same spinlock multiple
times consecutively.
Link: http://lkml.kernel.org/r/20200128025958.43490-1-arjunroy.kdev@gmail.com
Signed-off-by: Arjun Roy <arjunroy@...gle.com>
Signed-off-by: Eric Dumazet <edumazet@...gle.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@...gle.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
---
mm/memory.c | 39 ++++++++++++++++++++++++---------------
1 file changed, 24 insertions(+), 15 deletions(-)
--- a/mm/memory.c~mm-refactor-insert_page-to-prepare-for-batched-lock-insert
+++ a/mm/memory.c
@@ -1430,6 +1430,27 @@ pte_t *__get_locked_pte(struct mm_struct
return pte_alloc_map_lock(mm, pmd, addr, ptl);
}
+static int validate_page_before_insert(struct page *page)
+{
+ if (PageAnon(page) || PageSlab(page) || page_has_type(page))
+ return -EINVAL;
+ flush_dcache_page(page);
+ return 0;
+}
+
+static int insert_page_into_pte_locked(struct mm_struct *mm, pte_t *pte,
+ unsigned long addr, struct page *page, pgprot_t prot)
+{
+ if (!pte_none(*pte))
+ return -EBUSY;
+ /* Ok, finally just insert the thing.. */
+ get_page(page);
+ inc_mm_counter_fast(mm, mm_counter_file(page));
+ page_add_file_rmap(page, false);
+ set_pte_at(mm, addr, pte, mk_pte(page, prot));
+ return 0;
+}
+
/*
* This is the old fallback for page remapping.
*
@@ -1445,26 +1466,14 @@ static int insert_page(struct vm_area_st
pte_t *pte;
spinlock_t *ptl;
- retval = -EINVAL;
- if (PageAnon(page) || PageSlab(page) || page_has_type(page))
+ retval = validate_page_before_insert(page);
+ if (retval)
goto out;
retval = -ENOMEM;
- flush_dcache_page(page);
pte = get_locked_pte(mm, addr, &ptl);
if (!pte)
goto out;
- retval = -EBUSY;
- if (!pte_none(*pte))
- goto out_unlock;
-
- /* Ok, finally just insert the thing.. */
- get_page(page);
- inc_mm_counter_fast(mm, mm_counter_file(page));
- page_add_file_rmap(page, false);
- set_pte_at(mm, addr, pte, mk_pte(page, prot));
-
- retval = 0;
-out_unlock:
+ retval = insert_page_into_pte_locked(mm, pte, addr, page, prot);
pte_unmap_unlock(pte, ptl);
out:
return retval;
_
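
To make the intent of the split concrete: below is a rough, untested
sketch of how a batched caller, in the spirit of the vm_insert_pages()
added later in this series (not shown here), could use the two new
helpers, taking the PTE lock once for a run of consecutive pages instead
of once per page. The name insert_pages_sketch and the single-PMD
assumption are illustrative only; the real patch also has to split
batches at PMD boundaries and deal with partial failure.

	static int insert_pages_sketch(struct vm_area_struct *vma,
			unsigned long addr, struct page **pages,
			unsigned int num, pgprot_t prot)
	{
		struct mm_struct *mm = vma->vm_mm;
		spinlock_t *ptl;
		pte_t *start_pte;
		unsigned int i;
		int err = 0;

		/* Validate every page up front, outside the lock. */
		for (i = 0; i < num; i++) {
			err = validate_page_before_insert(pages[i]);
			if (err)
				return err;
		}

		/* One lock acquisition... */
		start_pte = get_locked_pte(mm, addr, &ptl);
		if (!start_pte)
			return -ENOMEM;

		/* ...amortized over many PTE writes (same PMD assumed). */
		for (i = 0; i < num; i++) {
			err = insert_page_into_pte_locked(mm, start_pte + i,
					addr + i * PAGE_SIZE, pages[i], prot);
			if (err)
				break;
		}

		pte_unmap_unlock(start_pte, ptl);
		return err;
	}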