Message-Id: <20090804143031.b9769a13.akpm@linux-foundation.org>
Date:	Tue, 4 Aug 2009 14:30:31 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc:	kamezawa.hiroyu@...fujitsu.com, scgtrp@...il.com,
	bugzilla-daemon@...zilla.kernel.org,
	bugme-daemon@...zilla.kernel.org, xiyou.wangcong@...il.com,
	linux-kernel@...r.kernel.org
Subject: Re: [BUGFIX][PATCH 2/3] kcore: fix vread/vwrite to be aware of
 holes.

On Mon, 3 Aug 2009 20:18:45 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> wrote:

> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
> 
> vread() and vwrite() access the vmalloc area without checking whether a
> page is actually mapped there. In most cases, this works well.
> 
> Historically, the only caller of get_vm_area() was IOREMAP, and there were
> no memory holes within a vm_struct's [addr...addr + size - PAGE_SIZE] range
> (the -PAGE_SIZE accounts for the guard page).
> 
> Since the per-cpu-alloc patch, get_vm_area() is also used to reserve a
> contiguous virtual address range that is remapped _later_, so holes can now
> exist in otherwise valid vmalloc areas on the vm_struct list.
> Skipping such holes (unmapped pages) is therefore necessary.
> This patch updates vread()/vwrite() to skip memory holes.
> 
> Routines which access the vmalloc area without knowing what a given
> address is used for are:
>   - /proc/kcore
>   - /dev/kmem
> 
> kcore checks for IOREMAP; /dev/kmem doesn't. After this patch, IOREMAP
> areas are detected and /dev/kmem will avoid reading from or writing to them.
> Fixes to /proc/kcore follow in the next patch in this series.
> 
> Changelog v2->v3:
>  - fixed typos.
>  - use kmap() (without kmap(), locking would have to be added here).
>  - fixed PAGE_MASK misuse.
> Changelog v1->v2:
>  - enhanced comments.
>  - treat IOREMAP as hole always.
>  - zero-fill memory holes if [addr...addr+size) includes valid pages.
>  - return 0 if [addr...addr+size) includes no valid pages.
> 
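
The hole test itself reduces to a vmalloc_to_page() probe. A minimal
sketch (not part of the patch; the helper name is made up here):

	#include <linux/mm.h>	/* vmalloc_to_page() */

	/*
	 * Sketch only: the "hole" test is simply whether
	 * vmalloc_to_page() finds a page behind the address.
	 */
	static bool vmalloc_addr_is_hole(const void *addr)
	{
		return vmalloc_to_page(addr) == NULL;
	}
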
> ...
>
> +static int aligned_vread(char *buf, char *addr, unsigned long count)
> +{
> +	struct page *p;
> +	int copied = 0;
> +
> +	while (count) {
> +		unsigned long offset, length;
> +
> +		offset = (unsigned long)addr & ~PAGE_MASK;
> +		length = PAGE_SIZE - offset;
> +		if (length > count)
> +			length = count;
> +		p = vmalloc_to_page(addr);
> +		/*
> +		 * To access this _mapped_ area safely, we would need a
> +		 * lock. But taking a lock here would add overhead to
> +		 * every vmalloc()/vfree() call for the sake of this
> +		 * rarely used _debug_ interface. Instead, we use
> +		 * kmap() and accept a small overhead in this access
> +		 * function.
> +		 */
> +		if (p) {
> +			/* we can expect KM_USER1 is not in use */

It would be nice if the comment were to explain _why_ KM_USER1 is known
to be free here.
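
For illustration, the shape such a comment could take (wording made up
here; the actual justification is exactly what would need establishing):

	/*
	 * kmap_atomic() slots are per-CPU and per-context.  KM_USER1
	 * is chosen on the assumption that KM_USER0 may already be
	 * held by a caller on this process-context path, and nothing
	 * else on this path touches KM_USER1.
	 */
	void *map = kmap_atomic(p, KM_USER1);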


> +			void *map = kmap_atomic(p, KM_USER1);
> +			memcpy(buf, map + offset, length);
> +			kunmap_atomic(map, KM_USER1);

Can use clear_highpage().
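
For reference, clear_highpage() in <linux/highmem.h> wraps the
kmap_atomic()/clear_page()/kunmap_atomic() sequence; where an entire
(possibly highmem) page is being zeroed through a temporary mapping,
the sequence collapses to:

	clear_highpage(p);	/* kmap + zero the whole page + kunmap */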

> +		} else
> +			memset(buf, 0, length);
> +
> +		addr += length;
> +		buf += length;
> +		copied += length;
> +		count -= length;
> +	}
> +	return copied;
> +}
> +
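
To make the per-page chunking concrete, a worked pass through the
offset/length arithmetic (illustrative numbers only; PAGE_SIZE = 4096,
count = 100, addr ending in 0x1ff0):

	/*
	 * pass 1: offset = 0x1ff0 & ~PAGE_MASK = 0xff0
	 *         length = 4096 - 0xff0 = 16   (stop at page boundary)
	 * pass 2: offset = 0
	 *         length = min(4096, 100 - 16) = 84
	 */
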
> +static int aligned_vwrite(char *buf, char *addr, unsigned long count)
> +{
> +	struct page *p;
> +	int copied = 0;
> +
> +	while (count) {
> +		unsigned long offset, length;
> +
> +		offset = (unsigned long)addr & ~PAGE_MASK;
> +		length = PAGE_SIZE - offset;
> +		if (length > count)
> +			length = count;
> +		p = vmalloc_to_page(addr);
> +		/*
> +		 * To access this _mapped_ area safely, we would need a
> +		 * lock. But taking a lock here would add overhead to
> +		 * every vmalloc()/vfree() call for the sake of this
> +		 * rarely used _debug_ interface. Instead, we use
> +		 * kmap() and accept a small overhead in this access
> +		 * function.
> +		 */
> +		if (p) {
> +			/* we can expect KM_USER1 is not in use */
> +			void *map = kmap_atomic(p, KM_USER1);
> +			memcpy(map + offset, buf, length);
> +			kunmap_atomic(map, KM_USER1);

clear_highpage().

> +		}
> +		addr += length;
> +		buf += length;
> +		copied += length;
> +		count -= length;
> +	}
> +	return copied;
> +}
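
Worth noting the asymmetry that falls out of the two helpers: the read
side zero-fills holes, so the destination buffer always comes back fully
defined, while the write side silently drops bytes aimed at a hole. A
caller-side sketch (hypothetical buffer and address, calling the static
helpers directly, for illustration only):

	char buf[256];

	/* holes read back as zeroes: buf is fully defined afterwards */
	aligned_vread(buf, vaddr, sizeof(buf));

	/* bytes destined for holes are silently discarded */
	aligned_vwrite(buf, vaddr, sizeof(buf));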

