Date:	Tue, 22 Apr 2008 15:06:24 +1000
From:	Rusty Russell <rusty@...tcorp.com.au>
To:	Andrea Arcangeli <andrea@...ranet.com>
Cc:	Christoph Lameter <clameter@....com>, akpm@...ux-foundation.org,
	Nick Piggin <npiggin@...e.de>,
	Steve Wise <swise@...ngridcomputing.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>, linux-mm@...ck.org,
	Kanoj Sarcar <kanojsarcar@...oo.com>,
	Roland Dreier <rdreier@...co.com>,
	Jack Steiner <steiner@....com>, linux-kernel@...r.kernel.org,
	Avi Kivity <avi@...ranet.com>, kvm-devel@...ts.sourceforge.net,
	Robin Holt <holt@....com>, general@...ts.openfabrics.org,
	Hugh Dickins <hugh@...itas.com>
Subject: Re: [PATCH 1 of 9] Lock the entire mm to prevent any mmu related operation to happen

On Wednesday 09 April 2008 01:44:04 Andrea Arcangeli wrote:
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1050,6 +1050,15 @@
>  				   unsigned long addr, unsigned long len,
>  				   unsigned long flags, struct page **pages);
>
> +struct mm_lock_data {
> +	spinlock_t **i_mmap_locks;
> +	spinlock_t **anon_vma_locks;
> +	unsigned long nr_i_mmap_locks;
> +	unsigned long nr_anon_vma_locks;
> +};
> +extern struct mm_lock_data *mm_lock(struct mm_struct * mm);
> +extern void mm_unlock(struct mm_struct *mm, struct mm_lock_data *data);

As far as I can tell you don't actually need to expose this struct at all?
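
Something like this keeps the definition private to mm/mmap.c (untested
sketch):

	/* include/linux/mm.h: callers only ever see the pointer */
	struct mm_lock_data;
	extern struct mm_lock_data *mm_lock(struct mm_struct *mm);
	extern void mm_unlock(struct mm_struct *mm, struct mm_lock_data *data);

	/* mm/mmap.c: the definition moves here, out of the header */
	struct mm_lock_data {
		spinlock_t **i_mmap_locks;
		spinlock_t **anon_vma_locks;
		unsigned long nr_i_mmap_locks;
		unsigned long nr_anon_vma_locks;
	};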

> +		data->i_mmap_locks = vmalloc(nr_i_mmap_locks *
> +					     sizeof(spinlock_t));

This is why non-typesafe allocators suck.  You want 'sizeof(spinlock_t *)' 
here.

> +		data->anon_vma_locks = vmalloc(nr_anon_vma_locks *
> +					       sizeof(spinlock_t));

and here.
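
i.e. the arrays hold pointers, so the element size is that of a pointer
(untested):

	data->i_mmap_locks = vmalloc(nr_i_mmap_locks *
				     sizeof(spinlock_t *));
	data->anon_vma_locks = vmalloc(nr_anon_vma_locks *
				       sizeof(spinlock_t *));

	/* or make the size typesafe by deriving it from the lvalue: */
	data->i_mmap_locks = vmalloc(nr_i_mmap_locks *
				     sizeof(*data->i_mmap_locks));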

> +	err = -EINTR;
> +	i_mmap_lock_last = NULL;
> +	nr_i_mmap_locks = 0;
> +	for (;;) {
> +		spinlock_t *i_mmap_lock = (spinlock_t *) -1UL;
> +		for (vma = mm->mmap; vma; vma = vma->vm_next) {
...
> +		data->i_mmap_locks[nr_i_mmap_locks++] = i_mmap_lock;
> +	}
> +	data->nr_i_mmap_locks = nr_i_mmap_locks;

How about you track your running counter in data->nr_i_mmap_locks, leave 
nr_i_mmap_locks alone, and BUG_ON(data->nr_i_mmap_locks != nr_i_mmap_locks)?

Even nicer would be to wrap this in a "get_sorted_mmap_locks()" function.

Similarly for anon_vma locks.
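
Roughly, for the i_mmap side (untested sketch; inner loop elided as in your
patch, nr_i_mmap_locks being the count computed earlier):

	static void get_sorted_mmap_locks(struct mm_struct *mm,
					  struct mm_lock_data *data,
					  unsigned long nr_i_mmap_locks)
	{
		struct vm_area_struct *vma;

		data->nr_i_mmap_locks = 0;
		for (;;) {
			spinlock_t *i_mmap_lock = (spinlock_t *) -1UL;
			for (vma = mm->mmap; vma; vma = vma->vm_next) {
				/* ... find the next-lowest unseen i_mmap lock ... */
			}
			if (i_mmap_lock == (spinlock_t *) -1UL)
				break;
			data->i_mmap_locks[data->nr_i_mmap_locks++] = i_mmap_lock;
		}
		BUG_ON(data->nr_i_mmap_locks != nr_i_mmap_locks);
	}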

Unfortunately, I just don't think we can fail locking like this.  In your next 
patch unregistering a notifier can fail because of it: that's not usable.

I think it means you need to add a linked list element to the vma for the 
CONFIG_MMU_NOTIFIER case.  Or track the max number of vmas for any mm, and 
keep a pool to handle mm_lock for this number (ie. if you can't enlarge the 
pool, fail the vma allocation).  
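
For the first option, that'd be something like (sketch only; field name
invented):

	/* in struct vm_area_struct: */
	#ifdef CONFIG_MMU_NOTIFIER
		/* pre-allocated with the vma, so mm_lock() has nothing
		 * to allocate and thus nothing to fail */
		struct list_head mm_lock_link;	/* name invented for illustration */
	#endif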

Both have their problems though...
Rusty.