Fix the VM_BUG_ON assertion check to actually do what I want, as noted by
Christoph. Also fix an error-path leak where the frozen refcount was not
being unfrozen. Found by review.

In practice this check is very rarely hit, because a page dirtier is likely
to hold the refcount elevated for much longer than it takes to do the check
and the non-racy recheck. So it doesn't pose a big problem for users of -mm,
but of course it needs fixing.

---
Index: linux-2.6/mm/vmscan.c
===================================================================
--- linux-2.6.orig/mm/vmscan.c	2008-06-11 23:36:07.000000000 +1000
+++ linux-2.6/mm/vmscan.c	2008-06-11 23:36:18.000000000 +1000
@@ -415,8 +415,10 @@ static int __remove_mapping(struct addre
 	if (!page_freeze_refs(page, 2))
 		goto cannot_free;
 	/* note: atomic_cmpxchg in page_freeze_refs provides the smp_rmb */
-	if (unlikely(PageDirty(page)))
+	if (unlikely(PageDirty(page))) {
+		page_unfreeze_refs(page, 2);
 		goto cannot_free;
+	}
 
 	if (PageSwapCache(page)) {
 		swp_entry_t swap = { .val = page_private(page) };
Index: linux-2.6/include/linux/pagemap.h
===================================================================
--- linux-2.6.orig/include/linux/pagemap.h	2008-06-11 23:36:07.000000000 +1000
+++ linux-2.6/include/linux/pagemap.h	2008-06-11 23:36:18.000000000 +1000
@@ -165,7 +165,7 @@ static inline int page_cache_get_specula
 		return 0;
 	}
 #endif
-	VM_BUG_ON(PageCompound(page) && (struct page *)page_private(page) != page);
+	VM_BUG_ON(PageTail(page));
 
 	return 1;
 }
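
For reference, a minimal userspace sketch of the freeze/unfreeze pattern the
vmscan.c hunk is fixing. This is illustrative only: struct page_model,
freeze_refs() and unfreeze_refs() are hypothetical stand-ins that model the
page_freeze_refs()/page_unfreeze_refs() semantics described above, not kernel
code.

/* Hypothetical userspace model of the refcount freeze/unfreeze pattern. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct page_model {
	atomic_int refcount;
	bool dirty;
};

/* Freeze succeeds only if the refcount is exactly 'count'; it then drops to 0. */
static bool freeze_refs(struct page_model *p, int count)
{
	int expected = count;
	return atomic_compare_exchange_strong(&p->refcount, &expected, 0);
}

/* Undo a successful freeze by restoring the expected refcount. */
static void unfreeze_refs(struct page_model *p, int count)
{
	atomic_store(&p->refcount, count);
}

static bool try_remove(struct page_model *p)
{
	if (!freeze_refs(p, 2))
		return false;		/* someone else holds an extra reference */
	if (p->dirty) {
		unfreeze_refs(p, 2);	/* the fix: without this the refcount stays 0 */
		return false;
	}
	/* ... the page would be removed from its mapping here ... */
	return true;
}

int main(void)
{
	struct page_model p = { .refcount = 2, .dirty = true };

	printf("removed: %d, refcount after: %d\n",
	       try_remove(&p), atomic_load(&p.refcount));
	return 0;
}

Without the unfreeze on the dirty path, the model leaves the refcount at zero
even though the page is kept; in the kernel case that is the error-path leak
the vmscan.c hunk fixes, since a later put of the remaining legitimate
reference would underflow the count.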