Date: Wed, 23 Apr 2008 19:45:50 +0200
From: Andrea Arcangeli <andrea@...ranet.com>
To: Jack Steiner <steiner@....com>
Cc: Christoph Lameter <clameter@....com>, Nick Piggin <npiggin@...e.de>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>, kvm-devel@...ts.sourceforge.net,
	Kanoj Sarcar <kanojsarcar@...oo.com>, Roland Dreier <rdreier@...co.com>,
	Steve Wise <swise@...ngridcomputing.com>, linux-kernel@...r.kernel.org,
	Avi Kivity <avi@...ranet.com>, linux-mm@...ck.org,
	Robin Holt <holt@....com>, general@...ts.openfabrics.org,
	Hugh Dickins <hugh@...itas.com>, akpm@...ux-foundation.org,
	Rusty Russell <rusty@...tcorp.com.au>
Subject: Re: [PATCH 01 of 12] Core of mmu notifiers

On Wed, Apr 23, 2008 at 12:09:09PM -0500, Jack Steiner wrote:
>
> You may have spotted this already. If so, just ignore this.
>
> It looks like there is a bug in copy_page_range() around line 667.
> It's possible to do a mmu_notifier_invalidate_range_start(), then
> return -ENOMEM w/o doing a corresponding mmu_notifier_invalidate_range_end().

No, I didn't spot it yet. Great catch!! ;) Thanks a lot.

I think we can take example from Jack and use our energy to spot any
bug in the mmu-notifier-core, like with his above auditing effort (I'm
quite certain you didn't reproduce this with a real oom ;), so we get
a rock-solid mmu-notifier implementation in 2.6.26. XPMEM will also
benefit later in 2.6.27, and I hope the last XPMEM internal bugs will
also be fixed by that time.

(For those not going to become mmu-notifier users: nothing to worry
about. Unless you used KVM or GRU actively with mmu-notifiers, this
bug would be entirely harmless with both MMU_NOTIFIER=n and =y, as
previously guaranteed.)

Here is the still-untested fix, for review.

diff --git a/mm/memory.c b/mm/memory.c
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -597,6 +597,7 @@
 	unsigned long next;
 	unsigned long addr = vma->vm_start;
 	unsigned long end = vma->vm_end;
+	int ret;
 
 	/*
 	 * Don't copy ptes where a page fault will fill them correctly.
@@ -604,33 +605,39 @@
 	 * readonly mappings. The tradeoff is that copy_page_range is more
 	 * efficient than faulting.
 	 */
+	ret = 0;
 	if (!(vma->vm_flags & (VM_HUGETLB|VM_NONLINEAR|VM_PFNMAP|VM_INSERTPAGE))) {
 		if (!vma->anon_vma)
-			return 0;
+			goto out;
 	}
 
-	if (is_vm_hugetlb_page(vma))
-		return copy_hugetlb_page_range(dst_mm, src_mm, vma);
+	if (unlikely(is_vm_hugetlb_page(vma))) {
+		ret = copy_hugetlb_page_range(dst_mm, src_mm, vma);
+		goto out;
+	}
 
 	if (is_cow_mapping(vma->vm_flags))
 		mmu_notifier_invalidate_range_start(src_mm, addr, end);
 
+	ret = 0;
 	dst_pgd = pgd_offset(dst_mm, addr);
 	src_pgd = pgd_offset(src_mm, addr);
 	do {
 		next = pgd_addr_end(addr, end);
 		if (pgd_none_or_clear_bad(src_pgd))
 			continue;
-		if (copy_pud_range(dst_mm, src_mm, dst_pgd, src_pgd,
-				   vma, addr, next))
-			return -ENOMEM;
+		if (unlikely(copy_pud_range(dst_mm, src_mm, dst_pgd, src_pgd,
+					    vma, addr, next))) {
+			ret = -ENOMEM;
+			break;
+		}
 	} while (dst_pgd++, src_pgd++, addr = next, addr != end);
 
 	if (is_cow_mapping(vma->vm_flags))
 		mmu_notifier_invalidate_range_end(src_mm,
-						  vma->vm_start, end);
-
-	return 0;
+						  vma->vm_start, end);
+out:
+	return ret;
 }
 
 static unsigned long zap_pte_range(struct mmu_gather *tlb,

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/