Date:	Wed, 2 Apr 2008 15:34:01 -0700 (PDT)
From:	Christoph Lameter <clameter@....com>
To:	Andrea Arcangeli <andrea@...ranet.com>
cc:	akpm@...ux-foundation.org, Jack Steiner <steiner@....com>,
	Nick Piggin <npiggin@...e.de>, Robin Holt <holt@....com>,
	Avi Kivity <avi@...ranet.com>, Izik Eidus <izike@...ranet.com>,
	kvm-devel@...ts.sourceforge.net,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	general@...ts.openfabrics.org,
	Steve Wise <swise@...ngridcomputing.com>,
	Roland Dreier <rdreier@...co.com>,
	Kanoj Sarcar <kanojsarcar@...oo.com>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 1 of 8] Core of mmu notifiers

On Wed, 2 Apr 2008, Andrea Arcangeli wrote:

> +	void (*invalidate_page)(struct mmu_notifier *mn,
> +				struct mm_struct *mm,
> +				unsigned long address);
> +
> +	void (*invalidate_range_start)(struct mmu_notifier *mn,
> +				       struct mm_struct *mm,
> +				       unsigned long start, unsigned long end);
> +	void (*invalidate_range_end)(struct mmu_notifier *mn,
> +				     struct mm_struct *mm,
> +				     unsigned long start, unsigned long end);

Still two methods for the range invalidation instead of a single callback ...
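
For contrast, here is roughly the single-callback shape, as a rough
sketch only; the enum and the _alt name are invented for illustration
and are not part of this patch:

enum mmu_range_action {
	MMU_RANGE_START,	/* primary ptes are about to be torn down */
	MMU_RANGE_END,		/* primary ptes are consistent again */
};

struct mmu_notifier_ops_alt {
	void (*invalidate_range)(struct mmu_notifier *mn,
				 struct mm_struct *mm,
				 unsigned long start, unsigned long end,
				 enum mmu_range_action action);
};

One method with an action argument keeps the start and end halves from
drifting apart in the API.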

> +void __mmu_notifier_release(struct mm_struct *mm)
> +{
> +	struct mmu_notifier *mn;
> +	unsigned seq;
> +
> +	seq = read_seqbegin(&mm->mmu_notifier_lock);
> +	while (unlikely(!hlist_empty(&mm->mmu_notifier_list))) {
> +		mn = hlist_entry(mm->mmu_notifier_list.first,
> +				 struct mmu_notifier,
> +				 hlist);
> +		hlist_del(&mn->hlist);
> +		if (mn->ops->release)
> +			mn->ops->release(mn, mm);
> +		BUG_ON(read_seqretry(&mm->mmu_notifier_lock, seq));
> +	}
> +}

So the seqlock read side is taken here just to check, via the BUG_ON, that everything is still ok, rather than to retry?
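
For reference, the usual seqlock read side retries instead of
asserting; a generic sketch, not from this patch:

	unsigned seq;

	do {
		seq = read_seqbegin(&mm->mmu_notifier_lock);
		/* read the state protected by the seqlock */
	} while (read_seqretry(&mm->mmu_notifier_lock, seq));

The BUG_ON here turns a reader/writer race into a crash rather than a
retry, which only makes sense if no writer can legitimately run
concurrently with release.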

> +
> +/*
> + * If no young bitflag is supported by the hardware, ->clear_flush_young can
> + * unmap the address and return 1 or 0 depending if the mapping previously
> + * existed or not.
> + */
> +int __mmu_notifier_clear_flush_young(struct mm_struct *mm,
> +					unsigned long address)
> +{
> +	struct mmu_notifier *mn;
> +	struct hlist_node *n;
> +	int young = 0;
> +	unsigned seq;
> +
> +	seq = read_seqbegin(&mm->mmu_notifier_lock);
> +	do {
> +		hlist_for_each_entry_rcu(mn, n, &mm->mmu_notifier_list, hlist) {
> +			if (mn->ops->clear_flush_young)
> +				young |= mn->ops->clear_flush_young(mn, mm,
> +								    address);
> +		}
> +	} while (read_seqretry(&mm->mmu_notifier_lock, seq));
> +

The critical section could run multiple times for one notifier, which 
could result in repeated callbacks clearing the young bit. Since the 
results are OR-ed together, I guess that is not that big of an issue?
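
If it does matter, the driver side can be written so that a rerun is
harmless. A hypothetical sketch (struct my_mmu and the my_* helpers
are invented for illustration):

struct my_mmu {
	struct mmu_notifier mn;
	spinlock_t lock;
	int blocked;		/* used by the range callbacks below */
	/* ... shadow page table state ... */
};

static int my_clear_flush_young(struct mmu_notifier *mn,
				struct mm_struct *mm,
				unsigned long address)
{
	struct my_mmu *m = container_of(mn, struct my_mmu, mn);
	int young;

	spin_lock(&m->lock);
	/* test-and-clear: a rerun finds the bit already clear and
	 * returns 0, so the young |= accumulation stays correct. */
	young = my_test_and_clear_young(m, address);	/* invented helper */
	spin_unlock(&m->lock);
	return young;
}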

> +void __mmu_notifier_invalidate_page(struct mm_struct *mm,
> +					  unsigned long address)
> +{
> +	struct mmu_notifier *mn;
> +	struct hlist_node *n;
> +	unsigned seq;
> +
> +	seq = read_seqbegin(&mm->mmu_notifier_lock);
> +	do {
> +		hlist_for_each_entry_rcu(mn, n, &mm->mmu_notifier_list, hlist) {
> +			if (mn->ops->invalidate_page)
> +				mn->ops->invalidate_page(mn, mm, address);
> +		}
> +	} while (read_seqretry(&mm->mmu_notifier_lock, seq));
> +}

Ok. A retry would invalidate the page a second time, which is not a 
problem unless you drop a refcount or make other state changes that 
must correspond one-to-one with the mapping. I guess this is the reason 
that you stopped adding a refcount?
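
In other words, the callback has to be idempotent. Continuing the
hypothetical my_mmu sketch from above (helpers invented):

static void my_invalidate_page(struct mmu_notifier *mn,
			       struct mm_struct *mm,
			       unsigned long address)
{
	struct my_mmu *m = container_of(mn, struct my_mmu, mn);

	spin_lock(&m->lock);
	if (my_shadow_pte_present(m, address))		/* invented helper */
		my_zap_shadow_pte(m, address);		/* invented helper */
	/* A rerun finds nothing mapped and falls straight through.
	 * No refcount is dropped here, so a repeat cannot unbalance
	 * anything. */
	spin_unlock(&m->lock);
}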

> +void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> +				  unsigned long start, unsigned long end)
> +{
> +	struct mmu_notifier *mn;
> +	struct hlist_node *n;
> +	unsigned seq;
> +
> +	seq = read_seqbegin(&mm->mmu_notifier_lock);
> +	do {
> +		hlist_for_each_entry_rcu(mn, n, &mm->mmu_notifier_list, hlist) {
> +			if (mn->ops->invalidate_range_start)
> +				mn->ops->invalidate_range_start(mn, mm,
> +								start, end);
> +		}
> +	} while (read_seqretry(&mm->mmu_notifier_lock, seq));
> +}

Multiple invalidate_range_start calls on the same range? This means the 
driver needs to be able to deal with that situation and ignore the 
repeated call?
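
Something along these lines would tolerate the repeats, continuing the
hypothetical my_mmu sketch (helpers invented):

static void my_invalidate_range_start(struct mmu_notifier *mn,
				      struct mm_struct *mm,
				      unsigned long start,
				      unsigned long end)
{
	struct my_mmu *m = container_of(mn, struct my_mmu, mn);
	unsigned long addr;

	spin_lock(&m->lock);
	m->blocked = 1;		/* setting an already-set flag is a no-op */
	for (addr = start; addr < end; addr += PAGE_SIZE)
		if (my_shadow_pte_present(m, addr))	/* invented helper */
			my_zap_shadow_pte(m, addr);	/* invented helper */
	spin_unlock(&m->lock);
}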

> +void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> +				  unsigned long start, unsigned long end)
> +{
> +	struct mmu_notifier *mn;
> +	struct hlist_node *n;
> +	unsigned seq;
> +
> +	seq = read_seqbegin(&mm->mmu_notifier_lock);
> +	do {
> +		hlist_for_each_entry_rcu(mn, n, &mm->mmu_notifier_list, hlist) {
> +			if (mn->ops->invalidate_range_end)
> +				mn->ops->invalidate_range_end(mn, mm,
> +							      start, end);
> +		}
> +	} while (read_seqretry(&mm->mmu_notifier_lock, seq));
> +}

A retry can lead to multiple invalidate_range_end callbacks with the 
same parameters? The driver needs to ignore the call if the range is 
already clear?
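
The matching end side, same hypothetical sketch. A plain flag rather
than a nesting counter is deliberate here: an unbalanced replay of
start or end then cannot wedge the driver, at the cost of not
supporting legitimately nested ranges:

static void my_invalidate_range_end(struct mmu_notifier *mn,
				    struct mm_struct *mm,
				    unsigned long start,
				    unsigned long end)
{
	struct my_mmu *m = container_of(mn, struct my_mmu, mn);

	spin_lock(&m->lock);
	m->blocked = 0;		/* clearing an already-clear flag is a no-op */
	spin_unlock(&m->lock);
}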