Date: Thu, 31 Jan 2008 21:52:49 -0600
From: Robin Holt <holt@....com>
To: Christoph Lameter <clameter@....com>
Cc: Andrea Arcangeli <andrea@...ranet.com>, Robin Holt <holt@....com>,
	Avi Kivity <avi@...ranet.com>, Izik Eidus <izike@...ranet.com>,
	kvm-devel@...ts.sourceforge.net,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>, steiner@....com,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	daniel.blueman@...drics.com
Subject: Re: [patch 1/3] mmu_notifier: Core code

> Index: linux-2.6/include/linux/mmu_notifier.h
...
> +struct mmu_notifier_ops {
...
> +	/*
> +	 * invalidate_range_begin() and invalidate_range_end() must be paired.
> +	 * Multiple invalidate_range_begin/ends may be nested or called
> +	 * concurrently. That is legitimate. However, no new external
> +	 * references may be established as long as any invalidate_xxx is
> +	 * running or any invalidate_range_begin() has not been completed
> +	 * through a corresponding call to invalidate_range_end().
> +	 *
> +	 * Locking within the notifier needs to serialize events
> +	 * correspondingly.
> +	 *
> +	 * If all invalidate_xxx notifier calls take a driver lock then it is
> +	 * possible to run follow_page() under the same lock. The lock can
> +	 * then guarantee that no page is removed. That way we can avoid
> +	 * increasing the refcount of the pages.
> +	 *
> +	 * invalidate_range_begin() must clear all references in the range
> +	 * and stop the establishment of new references.
> +	 *
> +	 * invalidate_range_end() reenables the establishment of references.
> +	 *
> +	 * atomic indicates that the function is called in an atomic context.
> +	 * We can sleep if atomic == 0.
> +	 */
> +	void (*invalidate_range_begin)(struct mmu_notifier *mn,
> +				 struct mm_struct *mm,
> +				 unsigned long start, unsigned long end,
> +				 int atomic);
> +
> +	void (*invalidate_range_end)(struct mmu_notifier *mn,
> +				 struct mm_struct *mm, int atomic);

I think we need to pass the same start-end pair in here as well.
Without it, the first invalidate_range_begin() would have to block faulting for all addresses, and faulting would need to remain blocked until the last invalidate_range_end() has completed. While this would work (and will probably be how we implement it in the short term), it is far from ideal.

Thanks,
Robin