Date: Tue, 5 Oct 2021 01:30:36 +0000
From: Liam Howlett <liam.howlett@...cle.com>
To: "maple-tree@...ts.infradead.org" <maple-tree@...ts.infradead.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        David Hildenbrand <david@...hat.com>,
        Douglas Gilbert <dgilbert@...erlog.com>
CC: Song Liu <songliubraving@...com>,
        Davidlohr Bueso <dave@...olabs.net>,
        "Paul E . McKenney" <paulmck@...nel.org>,
        Matthew Wilcox <willy@...radead.org>,
        David Rientjes <rientjes@...gle.com>,
        Axel Rasmussen <axelrasmussen@...gle.com>,
        Suren Baghdasaryan <surenb@...gle.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Rik van Riel <riel@...riel.com>,
        Peter Zijlstra <peterz@...radead.org>
Subject: [PATCH v3 12/66] mmap: Change zeroing of maple tree in __vma_adjust

Only write to the maple tree if we are not inserting or the insert isn't
going to overwrite the area to clear.  This avoids spanning writes and
node coalescing when unnecessary.

Signed-off-by: Liam R. Howlett <Liam.Howlett@...cle.com>
---
 mm/mmap.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index c51d739d7411..9f047204fa93 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -626,6 +626,7 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 	bool vma_changed = false;
 	long adjust_next = 0;
 	int remove_next = 0;
+	unsigned long old_start;
 
 	if (next && !insert) {
 		struct vm_area_struct *exporter = NULL, *importer = NULL;
@@ -751,25 +752,29 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 			vma_interval_tree_remove(next, root);
 	}
 
+	old_start = vma->vm_start;
 	if (start != vma->vm_start) {
-		if (vma->vm_start < start)
-			vma_mt_szero(mm, vma->vm_start, start);
-		else
-			vma_changed = true;
+		vma_changed = true;
 		vma->vm_start = start;
 	}
 	if (end != vma->vm_end) {
-		if (vma->vm_end > end)
-			vma_mt_szero(mm, end, vma->vm_end);
-		else
+		if (vma->vm_end > end) {
+			if (!insert || (insert && (insert->vm_start != end)))
+				vma_mt_szero(mm, end, vma->vm_end);
+		} else
 			vma_changed = true;
 		vma->vm_end = end;
 		if (!next)
 			mm->highest_vm_end = vm_end_gap(vma);
 	}
 
-	if (vma_changed)
+	if (vma_changed) {
 		vma_mt_store(mm, vma);
+		if (old_start < start) {
+			if (insert && (insert->vm_start != old_start))
+				vma_mt_szero(mm, old_start, start);
+		}
+	}
 
 	vma->vm_pgoff = pgoff;
 	if (adjust_next) {
-- 
2.30.2
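
The heart of the change is a simple predicate: skip the explicit maple-tree zeroing when a pending insert starts exactly where the cleared range would begin, because the insert's own store will overwrite that range anyway. Below is a minimal, standalone userspace sketch of that decision, assuming hypothetical names (fake_vma, need_szero) that stand in for struct vm_area_struct and the check added to __vma_adjust; it is illustrative only and is not kernel code.

/* Hypothetical userspace model of the skip-zeroing check; not kernel code. */
#include <stdbool.h>
#include <stdio.h>

struct fake_vma {                  /* stand-in for struct vm_area_struct */
        unsigned long vm_start;
        unsigned long vm_end;
};

/*
 * Mirrors the "!insert || insert->vm_start != end" test from the patch:
 * return true only when the range starting at 'end' still has to be
 * zeroed explicitly because no pending insert will overwrite it.
 */
static bool need_szero(const struct fake_vma *insert, unsigned long end)
{
        return !insert || insert->vm_start != end;
}

int main(void)
{
        struct fake_vma ins = { .vm_start = 0x2000, .vm_end = 0x3000 };

        /* Insert begins exactly at 'end': its store covers the range,
         * so an extra zeroing write would be redundant. */
        printf("insert at end -> zero? %d\n", need_szero(&ins, 0x2000));

        /* No insert pending: the shrunk tail must be cleared explicitly. */
        printf("no insert     -> zero? %d\n", need_szero(NULL, 0x2000));

        return 0;
}

Skipping the redundant vma_mt_szero() call matters because a store followed by a clear of an adjacent range can trigger spanning writes and node coalescing in the maple tree, which the commit message calls out as unnecessary work.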