Message-ID: <20090804092316.GB6451@cr0.nay.redhat.com>
Date:	Tue, 4 Aug 2009 17:23:16 +0800
From:	Amerigo Wang <xiyou.wangcong@...il.com>
To:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc:	Mike Smith <scgtrp@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	bugzilla-daemon@...zilla.kernel.org,
	bugme-daemon@...zilla.kernel.org,
	Amerigo Wang <xiyou.wangcong@...il.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [BUGFIX][PATCH 2/3] kcore: fix vread/vwrite to be aware of
	holes.

On Mon, Aug 03, 2009 at 08:18:45PM +0900, KAMEZAWA Hiroyuki wrote:
>From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
>
>vread/vwrite access the vmalloc area without checking whether a page is
>actually present. In most cases, this works well.
>
>Historically, the only caller of get_vm_area() was IOREMAP, and there was
>no memory hole within a vm_struct's [addr...addr + size - PAGE_SIZE]
>range (-PAGE_SIZE is for a guard page).
>
>After the per-cpu-alloc patch, get_vm_area() is used to reserve a
>continuous virtual address range which is remapped _later_. There can now
>be holes in valid vmalloc areas on the vm_struct list, so skipping such
>holes (unmapped pages) is necessary.
>This patch updates vread()/vwrite() to avoid memory holes.
>
>Routines which access the vmalloc area without knowing what an address
>is used for are:
>  - /proc/kcore
>  - /dev/kmem
>
>kcore checks IOREMAP, /dev/kmem doesn't. After this patch, IOREMAP is
>checked and /dev/kmem will avoid reading/writing it.
>Fixes to /proc/kcore will be in the next patch in this series.
>
>Changelog v2->v3:
> - fixed typos.
> - use kmap. (if not using kmap, we have to add lock here.)


Hmm.. I missed this.


> - fixed PAGE_MASK misuse.
>Changelog v1->v2:
> - enhanced comments.
> - treat IOREMAP as hole always.
> - zero-fill memory holes if [addr...addr+size) includes valid pages.
> - return 0 if [addr...addr+size) includes no valid pages.
>
>Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>


It looks much better now, but it still has a small problem.
Please see my comment below.


>---
> mm/vmalloc.c |  182 +++++++++++++++++++++++++++++++++++++++++++++++++++--------
> 1 file changed, 159 insertions(+), 23 deletions(-)
>

<snip>


>+
>+/**
>+ *	vread() -  read vmalloc area in a safe way.
>+ *	@buf:		buffer for reading data
>+ *	@addr:		vm address.
>+ *	@count:		number of bytes to be read.
>+ *
>+ *	Returns the number of bytes by which addr and buf should be
>+ *	increased (same as count).
>+ *	If [addr...addr+count) doesn't include any valid area, returns 0.


If I read it correctly, your code doesn't do what you describe here:
it doesn't return 0 when there is no valid area.


>+ *
>+ *	This function checks that addr is a valid vmalloc'ed area and
>+ *	copies data from that area to the given buffer. If the given memory
>+ *	range of [addr...addr+count) includes some valid address, data is
>+ *	copied to the proper area of @buf. If there are memory holes, they
>+ *	are zero-filled. An IOREMAP area is treated as a memory hole and
>+ *	no copy is done.
>+ *
>+ *	Note: in usual ops, vread() is never necessary because the caller
>+ *	should know the vmalloc() area is valid and can use memcpy(). This
>+ *	is for routines which have to access the vmalloc area without any
>+ *	information, such as /dev/kmem.
>+ *
>+ *	The caller should guarantee KM_USER1 is not used.
>+ */
>+
> long vread(char *buf, char *addr, unsigned long count)
> {
> 	struct vm_struct *tmp;
> 	char *vaddr, *buf_start = buf;
>+	unsigned long buflen = count;
> 	unsigned long n;
> 
> 	/* Don't allow overflow */
>@@ -1640,7 +1739,7 @@
> 		count = -(unsigned long) addr;
> 
> 	read_lock(&vmlist_lock);
>-	for (tmp = vmlist; tmp; tmp = tmp->next) {
>+	for (tmp = vmlist; count && tmp; tmp = tmp->next) {
> 		vaddr = (char *) tmp->addr;
> 		if (addr >= vaddr + tmp->size - PAGE_SIZE)
> 			continue;
>@@ -1653,32 +1752,66 @@
> 			count--;
> 		}
> 		n = vaddr + tmp->size - PAGE_SIZE - addr;
>-		do {
>-			if (count == 0)
>-				goto finished;
>-			*buf = *addr;
>-			buf++;
>-			addr++;
>-			count--;
>-		} while (--n > 0);
>+		if (n > count)
>+			n = count;
>+		if (!(tmp->flags & VM_IOREMAP))
>+			aligned_vread(buf, addr, n);
>+		else /* IOREMAP area is treated as memory hole */
>+			memset(buf, 0, n);
>+		buf += n;
>+		addr += n;
>+		count -= n;
> 	}
> finished:
> 	read_unlock(&vmlist_lock);
>-	return buf - buf_start;
>+
>+	if (buf == buf_start)
>+		return 0;
>+	/* zero-fill memory holes */
>+	if (buf != buf_start + buflen)
>+		memset(buf, 0, buflen - (buf - buf_start));
>+
>+	return buflen;
> }