lists.openwall.net
Open Source and information security mailing list archives
Date: Mon, 6 Jun 2022 22:50:21 +0200
From: Uladzislau Rezki <urezki@...il.com>
To: Baoquan He <bhe@...hat.com>
Cc: akpm@...ux-foundation.org, npiggin@...il.com, urezki@...il.com,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/5] mm/vmalloc: Add code comment for find_vmap_area_exceed_addr()

On Mon, Jun 06, 2022 at 04:39:08PM +0800, Baoquan He wrote:
> Its behaviour is like find_vma(), which finds an area above the specified
> address; add a comment to make it easier to understand.
> 
> Also fix two grammar mistakes/typos.
> 
> Signed-off-by: Baoquan He <bhe@...hat.com>
> ---
>  mm/vmalloc.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 11dfc897de40..860ed9986775 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -790,6 +790,7 @@ unsigned long vmalloc_nr_pages(void)
>  	return atomic_long_read(&nr_vmalloc_pages);
>  }
>  
> +/* Look up the first VA which satisfies addr < va_end, NULL if none. */
>  static struct vmap_area *find_vmap_area_exceed_addr(unsigned long addr)
>  {
>  	struct vmap_area *va = NULL;
> @@ -929,7 +930,7 @@ link_va(struct vmap_area *va, struct rb_root *root,
>  	 * Some explanation here. Just perform simple insertion
>  	 * to the tree. We do not set va->subtree_max_size to
>  	 * its current size before calling rb_insert_augmented().
> -	 * It is because of we populate the tree from the bottom
> +	 * It is because we populate the tree from the bottom
>  	 * to parent levels when the node _is_ in the tree.
>  	 *
>  	 * Therefore we set subtree_max_size to zero after insertion,
> @@ -1659,7 +1660,7 @@ static atomic_long_t vmap_lazy_nr = ATOMIC_LONG_INIT(0);
>  
>  /*
>   * Serialize vmap purging. There is no actual critical section protected
> - * by this look, but we want to avoid concurrent calls for performance
> + * by this lock, but we want to avoid concurrent calls for performance
>   * reasons and to make the pcpu_get_vm_areas more deterministic.
>   */
> static DEFINE_MUTEX(vmap_purge_lock);
> -- 
> 2.34.1
> 
Reviewed-by: Uladzislau Rezki (Sony) <urezki@...il.com>

--
Uladzislau Rezki