Message-ID: <alpine.LFD.2.00.1103151206430.2787@localhost6.localdomain6>
Date: Tue, 15 Mar 2011 14:22:09 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...e.hu>,
Steven Rostedt <rostedt@...dmis.org>,
Linux-mm <linux-mm@...ck.org>,
Arnaldo Carvalho de Melo <acme@...radead.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ananth N Mavinakayanahalli <ananth@...ibm.com>,
Christoph Hellwig <hch@...radead.org>,
Andi Kleen <andi@...stfloor.org>,
Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>,
Oleg Nesterov <oleg@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
Jim Keniston <jkenisto@...ux.vnet.ibm.com>,
Roland McGrath <roland@...k.frob.com>,
SystemTap <systemtap@...rces.redhat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: Re: [PATCH v2 2.6.38-rc8-tip 3/20] 3: uprobes: Background page
	replacement.
On Mon, 14 Mar 2011, Srikar Dronamraju wrote:
> +/*
> + * Called with tsk->mm->mmap_sem held (either for read or write and
> + * with a reference to tsk->mm
Hmm, why is holding it for read sufficient?
> + */
> +static int write_opcode(struct task_struct *tsk, struct uprobe * uprobe,
> + unsigned long vaddr, uprobe_opcode_t opcode)
> +{
> + struct page *old_page, *new_page;
> + void *vaddr_old, *vaddr_new;
> + struct vm_area_struct *vma;
> + spinlock_t *ptl;
> + pte_t *orig_pte;
> + unsigned long addr;
> + int ret = -EINVAL;
That initialization is pointless.
> + /* Read the page with vaddr into memory */
> + ret = get_user_pages(tsk, tsk->mm, vaddr, 1, 1, 1, &old_page, &vma);
> + if (ret <= 0)
> + return -EINVAL;
> + ret = -EINVAL;
> +
> + /*
> + * check if the page we are interested is read-only mapped
> + * Since we are interested in text pages, Our pages of interest
> + * should be mapped read-only.
> + */
> + if ((vma->vm_flags & (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)) ==
> + (VM_READ|VM_EXEC))
> + goto put_out;
IIRC text pages are (VM_READ|VM_EXEC), so this bails out on exactly the
mappings you are interested in.
> + /* Allocate a page */
> + new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vaddr);
> + if (!new_page) {
> + ret = -ENOMEM;
> + goto put_out;
> + }
> +
> + /*
> + * lock page will serialize against do_wp_page()'s
> + * PageAnon() handling
> + */
> + lock_page(old_page);
> + /* copy the page now that we've got it stable */
> + vaddr_old = kmap_atomic(old_page, KM_USER0);
> + vaddr_new = kmap_atomic(new_page, KM_USER1);
> +
> + memcpy(vaddr_new, vaddr_old, PAGE_SIZE);
> + /* poke the new insn in, ASSUMES we don't cross page boundary */
And what makes sure that we don't?
> + addr = vaddr;
> + vaddr &= ~PAGE_MASK;
> + memcpy(vaddr_new + vaddr, &opcode, uprobe_opcode_sz);
> +
> + kunmap_atomic(vaddr_new, KM_USER1);
> + kunmap_atomic(vaddr_old, KM_USER0);
> +
> + orig_pte = page_check_address(old_page, tsk->mm, addr, &ptl, 0);
> + if (!orig_pte)
> + goto unlock_out;
> + pte_unmap_unlock(orig_pte, ptl);
> +
> + lock_page(new_page);
> + if (!anon_vma_prepare(vma))
Why don't you get the error code of anon_vma_prepare()?
> + /* flip pages, do_wp_page() will fail pte_same() and bail */
-ENOPARSE
> + ret = replace_page(vma, old_page, new_page, *orig_pte);
> +
> + unlock_page(new_page);
> + if (ret != 0)
> + page_cache_release(new_page);
> +unlock_out:
> + unlock_page(old_page);
> +
> +put_out:
> + put_page(old_page); /* we did a get_page in the beginning */
> + return ret;
> +}
> +
> +/**
> + * read_opcode - read the opcode at a given virtual address.
> + * @tsk: the probed task.
> + * @vaddr: the virtual address to store the opcode.
> + * @opcode: location to store the read opcode.
> + *
> + * For task @tsk, read the opcode at @vaddr and store it in @opcode.
> + * Return 0 (success) or a negative errno.
Wants to be called with mmap_sem held as well, right?
> + */
> +int __weak read_opcode(struct task_struct *tsk, unsigned long vaddr,
> + uprobe_opcode_t *opcode)
> +{
> + struct vm_area_struct *vma;
> + struct page *page;
> + void *vaddr_new;
> + int ret;
> +
> + ret = get_user_pages(tsk, tsk->mm, vaddr, 1, 0, 0, &page, &vma);
> + if (ret <= 0)
> + return -EFAULT;
> + ret = -EFAULT;
> +
> + /*
> + * check if the page we are interested is read-only mapped
> + * Since we are interested in text pages, Our pages of interest
> + * should be mapped read-only.
> + */
> + if ((vma->vm_flags & (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)) ==
> + (VM_READ|VM_EXEC))
> + goto put_out;
Same as above
> + lock_page(page);
> + vaddr_new = kmap_atomic(page, KM_USER0);
> + vaddr &= ~PAGE_MASK;
> + memcpy(&opcode, vaddr_new + vaddr, uprobe_opcode_sz);
> + kunmap_atomic(vaddr_new, KM_USER0);
> + unlock_page(page);
> + ret = uprobe_opcode_sz;
ret = 0 ?? At least, that's what the comment above says.
> +
> +put_out:
> + put_page(page); /* we did a get_page in the beginning */
> + return ret;
> +}
> +