Date:	Thu, 16 Jul 2015 16:47:06 +0800
From:	Baoquan He <bhe@...hat.com>
To:	linux-kernel@...r.kernel.org, ncroxon@...hat.com,
	dyoung@...hat.com, mhuang@...hat.com
Cc:	Baoquan He <bhe@...hat.com>
Subject: [RHEL6.8 Patch 08/19] vmcore: treat memory chunks referenced by PT_LOAD program header entries in page-size boundary in vmcore_list

Resolves: bz1097904
https://bugzilla.redhat.com/show_bug.cgi?id=1097904

This is a direct backport from upstream.

commit 7f614cd1e052ebbddee7ea49c725dc75fee74a5a
Author: HATAYAMA Daisuke <d.hatayama@...fujitsu.com>
Date:   Wed Jul 3 15:02:15 2013 -0700

    vmcore: treat memory chunks referenced by PT_LOAD program header entries in page-size boundary in vmcore_list

    Treat memory chunks referenced by PT_LOAD program header entries in
    page-size boundary in vmcore_list.  Formally, for each range [start,
    end], we set up the corresponding vmcore object in vmcore_list to
    [rounddown(start, PAGE_SIZE), roundup(end, PAGE_SIZE)].

    This change affects layout of /proc/vmcore.  The gaps generated by the
    rearrangement are newly made visible to applications as holes.
    Concretely, they are two ranges [rounddown(start, PAGE_SIZE), start] and
    [end, roundup(end, PAGE_SIZE)].

    Suppose variable m points at a vmcore object in vmcore_list, and
    variable phdr points at the program header of PT_LOAD type the variable
    m corresponds to.  Then, pictorially:

      m->offset                    +---------------+
                                   | hole          |
    phdr->p_offset =               +---------------+
      m->offset + (paddr - start)  |               |\
                                   | kernel memory | phdr->p_memsz
                                   |               |/
                                   +---------------+
                                   | hole          |
      m->offset + m->size          +---------------+

    where m->offset and m->offset + m->size are always page-size aligned.

    Signed-off-by: HATAYAMA Daisuke <d.hatayama@...fujitsu.com>
    Acked-by: Vivek Goyal <vgoyal@...hat.com>
    Cc: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
    Cc: Atsushi Kumagai <kumagai-atsushi@....nes.nec.co.jp>
    Cc: Lisa Mitchell <lisa.mitchell@...com>
    Cc: Zhang Yanfei <zhangyanfei@...fujitsu.com>
    Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
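
To see the arithmetic concretely, below is a minimal user-space sketch
(illustration only, not part of the patch) that mirrors the rounding done in
the hunks further down. PAGE_SIZE, the example addresses and the
rounddown()/roundup() definitions are stand-ins for the kernel macros, and
the variable names follow the patch:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL                               /* assumed for the example */
#define rounddown(x, y) ((x) - ((x) % (y)))
#define roundup(x, y)   ((((x) + (y) - 1) / (y)) * (y))

int main(void)
{
	/* Hypothetical PT_LOAD chunk, unaligned at both ends. */
	uint64_t paddr = 0x1000100;    /* old phdr->p_offset (physical start) */
	uint64_t memsz = 0x2e00;       /* phdr->p_memsz                       */
	uint64_t vmcore_off = 0x3000;  /* running offset inside /proc/vmcore  */

	uint64_t start = rounddown(paddr, PAGE_SIZE);        /* 0x1000000 */
	uint64_t end   = roundup(paddr + memsz, PAGE_SIZE);  /* 0x1003000 */
	uint64_t size  = end - start;                        /* 0x3000    */

	printf("vmcore object: paddr %#llx, size %#llx\n",
	       (unsigned long long)start, (unsigned long long)size);
	printf("new p_offset : %#llx (vmcore_off + (paddr - start))\n",
	       (unsigned long long)(vmcore_off + (paddr - start)));
	printf("front hole   : [%#llx, %#llx)\n",
	       (unsigned long long)start, (unsigned long long)paddr);
	printf("back hole    : [%#llx, %#llx)\n",
	       (unsigned long long)(paddr + memsz), (unsigned long long)end);
	return 0;
}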

Signed-off-by: Baoquan He <bhe@...hat.com>
---
 fs/proc/vmcore.c | 30 ++++++++++++++++++++++--------
 1 file changed, 22 insertions(+), 8 deletions(-)
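
For context on how the relocated p_offset is consumed, here is a rough,
hypothetical sketch of a user-space reader of /proc/vmcore (ELF64 only,
error handling trimmed), just to show that the dump data is still reached
through p_offset/p_memsz while the new padding stays outside that range:

#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	Elf64_Ehdr ehdr;
	int fd = open("/proc/vmcore", O_RDONLY);

	if (fd < 0 || pread(fd, &ehdr, sizeof(ehdr), 0) != (ssize_t)sizeof(ehdr))
		return 1;

	for (int i = 0; i < ehdr.e_phnum; i++) {
		Elf64_Phdr phdr;

		if (pread(fd, &phdr, sizeof(phdr),
			  ehdr.e_phoff + i * sizeof(phdr)) != (ssize_t)sizeof(phdr))
			break;
		if (phdr.p_type != PT_LOAD)
			continue;
		/* The kernel memory spans [p_offset, p_offset + p_memsz),
		 * as in the picture above; the page-size padding lies
		 * outside that range. */
		printf("PT_LOAD %d: paddr %#llx, file offset %#llx, memsz %#llx\n",
		       i, (unsigned long long)phdr.p_paddr,
		       (unsigned long long)phdr.p_offset,
		       (unsigned long long)phdr.p_memsz);
	}
	close(fd);
	return 0;
}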

diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index f7b9aa1..2850906 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -402,20 +402,27 @@ static int __init process_ptload_program_headers_elf64(char *elfptr,
 			phdr_ptr->p_memsz; /* Note sections */
 
 	for (i = 0; i < ehdr_ptr->e_phnum; i++, phdr_ptr++) {
+		u64 paddr, start, end, size;
+
 		if (phdr_ptr->p_type != PT_LOAD)
 			continue;
 
+		paddr = phdr_ptr->p_offset;
+		start = rounddown(paddr, PAGE_SIZE);
+		end = roundup(paddr + phdr_ptr->p_memsz, PAGE_SIZE);
+		size = end - start;
+
 		/* Add this contiguous chunk of memory to vmcore list.*/
 		new = get_new_element();
 		if (!new)
 			return -ENOMEM;
-		new->paddr = phdr_ptr->p_offset;
-		new->size = phdr_ptr->p_memsz;
+		new->paddr = start;
+		new->size = size;
 		list_add_tail(&new->list, vc_list);
 
 		/* Update the program header offset. */
-		phdr_ptr->p_offset = vmcore_off;
-		vmcore_off = vmcore_off + phdr_ptr->p_memsz;
+		phdr_ptr->p_offset = vmcore_off + (paddr - start);
+		vmcore_off = vmcore_off + size;
 	}
 	return 0;
 }
@@ -438,20 +445,27 @@ static int __init process_ptload_program_headers_elf32(char *elfptr,
 			phdr_ptr->p_memsz; /* Note sections */
 
 	for (i = 0; i < ehdr_ptr->e_phnum; i++, phdr_ptr++) {
+		u64 paddr, start, end, size;
+
 		if (phdr_ptr->p_type != PT_LOAD)
 			continue;
 
+		paddr = phdr_ptr->p_offset;
+		start = rounddown(paddr, PAGE_SIZE);
+		end = roundup(paddr + phdr_ptr->p_memsz, PAGE_SIZE);
+		size = end - start;
+
 		/* Add this contiguous chunk of memory to vmcore list.*/
 		new = get_new_element();
 		if (!new)
 			return -ENOMEM;
-		new->paddr = phdr_ptr->p_offset;
-		new->size = phdr_ptr->p_memsz;
+		new->paddr = start;
+		new->size = size;
 		list_add_tail(&new->list, vc_list);
 
 		/* Update the program header offset */
-		phdr_ptr->p_offset = vmcore_off;
-		vmcore_off = vmcore_off + phdr_ptr->p_memsz;
+		phdr_ptr->p_offset = vmcore_off + (paddr - start);
+		vmcore_off = vmcore_off + size;
 	}
 	return 0;
 }
-- 
2.1.0
