Message-ID: <CAOJsxLFy5TP_xJ0GcqYdpsZ_Lj+Sf2Bfn99CqCqOv8P21N8+UA@mail.gmail.com>
Date:	Fri, 7 Dec 2012 09:44:11 +0200
From:	Pekka Enberg <penberg@...nel.org>
To:	Joonsoo Kim <js1304@...il.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Russell King <rmk+kernel@....linux.org.uk>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	kexec@...ts.infradead.org, Chris Metcalf <cmetcalf@...era.com>,
	Guan Xuetao <gxt@...c.pku.edu.cn>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [RFC PATCH 1/8] mm, vmalloc: change iterating a vmlist to find_vm_area()

On Thu, Dec 6, 2012 at 6:09 PM, Joonsoo Kim <js1304@...il.com> wrote:
> The purpose of iterating the vmlist is to find the vm area with a
> specific virtual address. find_vm_area() is provided for this purpose
> and is more efficient because it uses an rbtree.
> So change it.

You no longer take the 'vmlist_lock'. This is safe, because...?
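For reference, the lookup the patch switches to looks roughly like this
in mm/vmalloc.c (a sketch from memory, not the exact code; the rbtree
walk itself is serialized by vmap_area_lock):

	/* Sketch: find_vm_area() resolves an address via the vmap_area rbtree. */
	static struct vmap_area *find_vmap_area(unsigned long addr)
	{
		struct vmap_area *va;

		spin_lock(&vmap_area_lock);	/* guards the rbtree */
		va = __find_vmap_area(addr);	/* rbtree lookup */
		spin_unlock(&vmap_area_lock);

		return va;
	}

	struct vm_struct *find_vm_area(const void *addr)
	{
		struct vmap_area *va;

		va = find_vmap_area((unsigned long)addr);
		if (va && va->flags & VM_VM_AREA)
			return va->vm;

		return NULL;
	}

So the lookup is internally locked, but what guarantees the returned
vm_struct stays valid once vmap_area_lock is dropped?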

> Cc: Chris Metcalf <cmetcalf@...era.com>
> Cc: Guan Xuetao <gxt@...c.pku.edu.cn>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: Ingo Molnar <mingo@...hat.com>
> Cc: "H. Peter Anvin" <hpa@...or.com>
> Signed-off-by: Joonsoo Kim <js1304@...il.com>
>
> diff --git a/arch/tile/mm/pgtable.c b/arch/tile/mm/pgtable.c
> index de0de0c..862782d 100644
> --- a/arch/tile/mm/pgtable.c
> +++ b/arch/tile/mm/pgtable.c
> @@ -592,12 +592,7 @@ void iounmap(volatile void __iomem *addr_in)
>            in parallel. Reuse of the virtual address is prevented by
>            leaving it in the global lists until we're done with it.
>            cpa takes care of the direct mappings. */
> -       read_lock(&vmlist_lock);
> -       for (p = vmlist; p; p = p->next) {
> -               if (p->addr == addr)
> -                       break;
> -       }
> -       read_unlock(&vmlist_lock);
> +       p = find_vm_area((void *)addr);
>
>         if (!p) {
>                 pr_err("iounmap: bad address %p\n", addr);
> diff --git a/arch/unicore32/mm/ioremap.c b/arch/unicore32/mm/ioremap.c
> index b7a6055..13068ee 100644
> --- a/arch/unicore32/mm/ioremap.c
> +++ b/arch/unicore32/mm/ioremap.c
> @@ -235,7 +235,7 @@ EXPORT_SYMBOL(__uc32_ioremap_cached);
>  void __uc32_iounmap(volatile void __iomem *io_addr)
>  {
>         void *addr = (void *)(PAGE_MASK & (unsigned long)io_addr);
> -       struct vm_struct **p, *tmp;
> +       struct vm_struct *vm;
>
>         /*
>          * If this is a section based mapping we need to handle it
> @@ -244,17 +244,10 @@ void __uc32_iounmap(volatile void __iomem *io_addr)
>          * all the mappings before the area can be reclaimed
>          * by someone else.
>          */
> -       write_lock(&vmlist_lock);
> -       for (p = &vmlist ; (tmp = *p) ; p = &tmp->next) {
> -               if ((tmp->flags & VM_IOREMAP) && (tmp->addr == addr)) {
> -                       if (tmp->flags & VM_UNICORE_SECTION_MAPPING) {
> -                               unmap_area_sections((unsigned long)tmp->addr,
> -                                                   tmp->size);
> -                       }
> -                       break;
> -               }
> -       }
> -       write_unlock(&vmlist_lock);
> +       vm = find_vm_area(addr);
> +       if (vm && (vm->flags & VM_IOREMAP) &&
> +               (vm->flags & VM_UNICORE_SECTION_MAPPING))
> +               unmap_area_sections((unsigned long)vm->addr, vm->size);
>
>         vunmap(addr);
>  }
> diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
> index 78fe3f1..9a1e658 100644
> --- a/arch/x86/mm/ioremap.c
> +++ b/arch/x86/mm/ioremap.c
> @@ -282,12 +282,7 @@ void iounmap(volatile void __iomem *addr)
>            in parallel. Reuse of the virtual address is prevented by
>            leaving it in the global lists until we're done with it.
>            cpa takes care of the direct mappings. */
> -       read_lock(&vmlist_lock);
> -       for (p = vmlist; p; p = p->next) {
> -               if (p->addr == (void __force *)addr)
> -                       break;
> -       }
> -       read_unlock(&vmlist_lock);
> +       p = find_vm_area((void __force *)addr);
>
>         if (!p) {
>                 printk(KERN_ERR "iounmap: bad address %p\n", addr);
> --
> 1.7.9.5
>