Message-ID: <1393517246.3884.16.camel@buesod1.americas.hpqcorp.net>
Date:	Thu, 27 Feb 2014 08:07:26 -0800
From:	Davidlohr Bueso <davidlohr@...com>
To:	Michel Lespinasse <walken@...gle.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Ingo Molnar <mingo@...nel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
	KOSAKI Motohiro <kosaki.motohiro@...il.com>, aswin@...com,
	scott.norton@...com, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] mm: per-thread vma caching

On Wed, 2014-02-26 at 22:47 -0800, Michel Lespinasse wrote:
> Agree with Linus; this is starting to look pretty good.
> 
> I still have nits though :)
> 
> On Wed, Feb 26, 2014 at 4:07 PM, Davidlohr Bueso <davidlohr@...com> wrote:
> > @@ -0,0 +1,45 @@
> > +#ifndef __LINUX_VMACACHE_H
> > +#define __LINUX_VMACACHE_H
> > +
> > +#include <linux/mm.h>
> > +
> > +#ifdef CONFIG_MMU
> > +#define VMACACHE_BITS 2
> > +#else
> > +#define VMACACHE_BITS 0
> > +#endif
> 
> I wouldn't even bother with the #ifdef here - why not just always use 2 bits?

Sure.

> > +#define vmacache_flush(tsk)                                     \
> > +       do {                                                     \
> > +               memset(tsk->vmacache, 0, sizeof(tsk->vmacache)); \
> > +       } while (0)
> 
> I think inline functions are preferred

Indeed, but I used a macro because we don't have access to
'struct task_struct' in vmacache.h: that header is itself included
from sched.h.

> 
> > diff --git a/mm/nommu.c b/mm/nommu.c
> > index 8740213..9a5347b 100644
> > --- a/mm/nommu.c
> > +++ b/mm/nommu.c
> > @@ -768,16 +768,19 @@ static void add_vma_to_mm(struct mm_struct *mm, struct vm_area_struct *vma)
> >   */
> >  static void delete_vma_from_mm(struct vm_area_struct *vma)
> >  {
> > +       int i;
> >         struct address_space *mapping;
> >         struct mm_struct *mm = vma->vm_mm;
> > +       struct task_struct *curr = current;
> >
> >         kenter("%p", vma);
> >
> >         protect_vma(vma, 0);
> >
> >         mm->map_count--;
> > -       if (mm->mmap_cache == vma)
> > -               mm->mmap_cache = NULL;
> > +       for (i = 0; i < VMACACHE_SIZE; i++)
> > +               if (curr->vmacache[i] == vma)
> > +                       curr->vmacache[i] = NULL;
> 
> Why is the invalidation done differently here? Shouldn't it be done
> by bumping the mm's sequence number so that invalidation works across
> all threads sharing that mm?

You are absolutely right. I will update this.
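For reference, a minimal sketch of the sequence-number scheme being agreed on here, using simplified mock structs (the real field and function names in the patch may differ). Bumping the mm-wide counter invalidates every thread's cache lazily; each thread notices the mismatch and clears its own entries on the next lookup:

```c
#include <assert.h>
#include <string.h>

#define VMACACHE_SIZE 4

/* Simplified stand-ins for the kernel types; illustrative only. */
struct vm_area_struct { unsigned long vm_start, vm_end; };

struct mm_struct { unsigned int vmacache_seqnum; };

struct task_struct {
	struct mm_struct *mm;
	unsigned int vmacache_seqnum;
	struct vm_area_struct *vmacache[VMACACHE_SIZE];
};

/* One counter bump invalidates the cache of every thread sharing mm. */
static void vmacache_invalidate(struct mm_struct *mm)
{
	mm->vmacache_seqnum++;
}

/*
 * Returns 1 if this thread's cache is still usable; on a sequence
 * mismatch it resynchronizes, clears the stale entries, and returns 0.
 */
static int vmacache_valid(struct task_struct *curr)
{
	struct mm_struct *mm = curr->mm;

	if (curr->vmacache_seqnum != mm->vmacache_seqnum) {
		curr->vmacache_seqnum = mm->vmacache_seqnum;
		memset(curr->vmacache, 0, sizeof(curr->vmacache));
		return 0;
	}
	return 1;
}
```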

> 
> > +#ifndef CONFIG_MMU
> > +struct vm_area_struct *vmacache_find_exact(struct mm_struct *mm,
> > +                                          unsigned long start,
> > +                                          unsigned long end)
> > +{
> > +       int i;
> > +
> > +       if (!vmacache_valid(mm))
> > +               return NULL;
> > +
> > +       for (i = 0; i < VMACACHE_SIZE; i++) {
> > +               struct vm_area_struct *vma = current->vmacache[i];
> > +
> > +               if (vma && vma->vm_start == start && vma->vm_end == end)
> > +                       return vma;
> > +       }
> > +
> > +       return NULL;
> > +}
> > +#endif
> 
> I think the caller could do instead
> vma = vmacache_find(mm, start)
> if (vma && vma->vm_start == start && vma->vm_end == end) {
> }
> 
> I.e. better deal with it at the call site than add a new vmacache
> function for it.

But that would require two vma checks, since vmacache_find() needs to
verify the matching vma before returning. I had also considered passing
a pointer to a matching function that does either the default lookup
(addr within a range) or the exact one, but that seemed like overkill,
so I just added vmacache_find_exact() for !CONFIG_MMU.
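To illustrate the tradeoff with a standalone sketch (mock types, a file-scope cache standing in for current->vmacache; not the patch's actual code): the dedicated helper matches both endpoints inside the loop, whereas Michel's call-site pattern would first do the range check in vmacache_find() and then repeat an exact check on the result:

```c
#include <assert.h>
#include <stddef.h>

#define VMACACHE_SIZE 4

/* Mock stand-in; in the kernel the cache lives in current->vmacache. */
struct vm_area_struct { unsigned long vm_start, vm_end; };

static struct vm_area_struct *vmacache[VMACACHE_SIZE];

/* Default lookup: addr anywhere inside the vma (one range check). */
static struct vm_area_struct *vmacache_find(unsigned long addr)
{
	int i;

	for (i = 0; i < VMACACHE_SIZE; i++) {
		struct vm_area_struct *vma = vmacache[i];

		if (vma && vma->vm_start <= addr && addr < vma->vm_end)
			return vma;
	}
	return NULL;
}

/*
 * Exact lookup for !CONFIG_MMU: matching both endpoints in the loop
 * avoids the second check that the call-site pattern
 * "vma = vmacache_find(start); if (vma && vma->vm_start == start...)"
 * would repeat on the returned vma.
 */
static struct vm_area_struct *vmacache_find_exact(unsigned long start,
						  unsigned long end)
{
	int i;

	for (i = 0; i < VMACACHE_SIZE; i++) {
		struct vm_area_struct *vma = vmacache[i];

		if (vma && vma->vm_start == start && vma->vm_end == end)
			return vma;
	}
	return NULL;
}
```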

Thanks,
Davidlohr

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
