Message-ID: <2375c9f90909162302m1fb89414o4f72b6b36e7cbb06@mail.gmail.com>
Date:	Thu, 17 Sep 2009 14:02:39 +0800
From:	Américo Wang <xiyou.wangcong@...il.com>
To:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 2/3][mmotm] showing size of kcore

On Thu, Sep 17, 2009 at 10:44 AM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu@...fujitsu.com> wrote:
> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
>
> Currently, the size of /proc/kcore reported by 'ls -l' is 0,
> which is not the correct value.
>
> This patch makes /proc/kcore report its size, as follows.
>
> On x86-64, ls -l shows
>  ... root root 140737486266368 2009-09-17 10:29 /proc/kcore
> That is 0x7FFFFFE02000, which comes from the vmalloc area's size.
> (*) This shows the "core" size, not the memory size.
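
For reference, the decimal size in the `ls -l` line above is the same value as
the hex number quoted in the description; a trivial check (the variable names
here are mine, purely illustrative):

```c
#include <assert.h>

/* The `ls -l` output above prints the size in decimal; the patch
 * description quotes it in hex. These are the same value written
 * both ways. */
static const long kcore_size_dec = 140737486266368L; /* from ls -l   */
static const long kcore_size_hex = 0x7FFFFFE02000L;  /* same, in hex */
```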
>
> This patch sets the size by updating the "size" field in struct proc_dir_entry.
> Later, the lookup routine will create the inode and fill inode->i_size based
> on this value. However, this approach has a problem:
>
>  - Once the inode is cached, inode->i_size will never be updated.
>
> As a result, this patch is not memory-hotplug-aware.
>
> To update inode->i_size, we would have to know the dentry or inode,
> but there is no way to look them up from inside the kernel. Hmmm....
> The next patch will try to address this.
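
To make the staleness problem above concrete, here is a minimal userspace
sketch; all structure and function names are simplified stand-ins for the
kernel code, not the real API:

```c
#include <assert.h>
#include <stddef.h>

/* Userspace model of the lifetime described above: lookup copies
 * proc_root_kcore->size into inode->i_size once, when the inode is
 * first created, and the cached inode is never refreshed afterwards.
 * All names here are illustrative stand-ins, not kernel structures. */

struct proc_dir_entry { long size; };
struct inode { long i_size; };

static struct proc_dir_entry proc_root_kcore;
static struct inode inode_storage;
static struct inode *cached_inode;

/* Models the proc lookup routine: create and fill the inode on first
 * use, then return the cached copy on every later lookup. */
static struct inode *lookup_kcore(void)
{
        if (!cached_inode) {
                inode_storage.i_size = proc_root_kcore.size;
                cached_inode = &inode_storage;
        }
        return cached_inode;
}
```

Once the first lookup has happened, later changes to proc_root_kcore.size
(e.g. after memory hotplug) are no longer visible through the cached inode,
which is exactly why the patch is not memory-hotplug-aware.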
>
> Cc: WANG Cong <xiyou.wangcong@...il.com>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
> ---
>  fs/proc/kcore.c |    6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> Index: mmotm-2.6.31-Sep14/fs/proc/kcore.c
> ===================================================================
> --- mmotm-2.6.31-Sep14.orig/fs/proc/kcore.c
> +++ mmotm-2.6.31-Sep14/fs/proc/kcore.c
> @@ -107,6 +107,8 @@ static void free_kclist_ents(struct list
>  */
>  static void __kcore_update_ram(struct list_head *list)
>  {
> +       int nphdr;
> +       size_t size;
>        struct kcore_list *tmp, *pos;
>        LIST_HEAD(garbage);
>
> @@ -124,6 +126,7 @@ static void __kcore_update_ram(struct li
>        write_unlock(&kclist_lock);
>
>        free_kclist_ents(&garbage);
> +       proc_root_kcore->size = get_kcore_size(&nphdr, &size);


This makes me wonder whether we have a race condition here:
two processes can open /proc/kcore at the same time...

>  }
>
>
> @@ -429,7 +432,8 @@ read_kcore(struct file *file, char __use
>        unsigned long start;
>
>        read_lock(&kclist_lock);
> -       proc_root_kcore->size = size = get_kcore_size(&nphdr, &elf_buflen);
> +       size = get_kcore_size(&nphdr, &elf_buflen);
> +
>        if (buflen == 0 || *fpos >= size) {
>                read_unlock(&kclist_lock);
>                return 0;
>
>
