Date:	Fri, 04 Oct 2013 16:00:12 +0530
From:	Janani Venkataraman <jananive@...ibm.com>
To:	linux-kernel@...r.kernel.org
Cc:	amwang@...hat.com, rdunlap@...otime.net, andi@...stfloor.org,
	aravinda@...ux.vnet.ibm.com, hch@....de, mhiramat@...hat.com,
	jeremy.fitzhardinge@...rix.com, xemul@...allels.com,
	suzuki@...ux.vnet.ibm.com, kosaki.motohiro@...fujitsu.com,
	adobriyan@...il.com, tarundsk@...ux.vnet.ibm.com,
	vapier@...too.org, roland@...k.frob.com, tj@...nel.org,
	ananth@...ux.vnet.ibm.com, gorcunov@...nvz.org, avagin@...nvz.org,
	oleg@...hat.com, eparis@...hat.com, d.hatayama@...fujitsu.com,
	james.hogan@...tec.com, akpm@...ux-foundation.org,
	torvalds@...ux-foundation.org
Subject: [RFC] [PATCH 00/19] Non disruptive application core dump
 infrastructure using task_work_add()

Hi all,

The following series implements an infrastructure for capturing a core dump
of an application without disrupting its execution.

Ideally, what we are trying to do is export this infrastructure through
/proc/pid/core. Reading the file returns an ELF-format core dump of the
process at that instant, non-disruptively and without sending any signals.

This involves three operations (a user-space usage sketch follows the list):

1) Holding the threads of the process without sending a signal (SIGSTOP). At
this point we can take a snapshot of the register set and gather the other
information required to create the ELF header. This operation would be
initiated by the open() call.

2) Once the ELF header is created, read() returns the core dump data,
including the process memory page by page, based on the fpos (file position).

3) The threads would be released on close().
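
To make the intended usage concrete, here is a minimal user-space sketch. It
assumes the interface behaves exactly as described above; /proc/<pid>/core is
what this series proposes, not an interface that exists in mainline today:

/* Dump the core of <pid> into <outfile> via the proposed interface. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	char path[64], buf[4096];
	ssize_t n;
	int in, out;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <pid> <outfile>\n", argv[0]);
		return 1;
	}

	snprintf(path, sizeof(path), "/proc/%s/core", argv[1]);

	in = open(path, O_RDONLY);	/* open(): target threads are held */
	if (in < 0) {
		perror(path);
		return 1;
	}

	out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0600);
	if (out < 0) {
		perror(argv[2]);
		close(in);
		return 1;
	}

	/* read(): the ELF core data is returned based on the file position */
	while ((n = read(in, buf, sizeof(buf))) > 0) {
		if (write(out, buf, n) != n) {
			perror(argv[2]);
			break;
		}
	}

	close(out);
	close(in);			/* close(): target threads are released */
	return 0;
}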

We discussed various approaches for the implementation in the thread below:
https://lkml.org/lkml/2013/9/3/122

This series is based on the task_work_add() approach. We did not adopt the
CRIU approach for the following reasons:

* It is not upstream yet.

* There are concerns about the security of the dump.

* It would involve a lot of changes, whereas this approach provides a
  UNIX-style interface.

Task work add

task_work_add() is an in-kernel API that queues work for a task; the queued
work is guaranteed to run when the task returns from kernel space to user
space, that is, before user-space code can run again. We use it as follows (a
rough kernel-side sketch follows the list):

* Use task_work_add() to hold the threads as they return to user space.

* Wait until all the threads of the process to be dumped have reached the
  queued work.

* Once all the threads are held, the dump is taken and they are released.
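
Below is a rough kernel-side sketch of this scheme. It is not taken from the
patches: struct gencore_ctx, gencore_hold() and gencore_queue() are made-up
names, and the 3.x task_work_add() signature (task, callback_head, bool
notify) is assumed:

#include <linux/atomic.h>
#include <linux/completion.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/task_work.h>

struct gencore_ctx {
	atomic_t		nr_held;	/* threads parked so far */
	int			nr_threads;	/* threads we queued work on */
	struct completion	all_held;	/* the dumper waits on this */
	struct completion	release;	/* parked threads wait on this */
};

struct gencore_work {
	struct callback_head	twork;
	struct gencore_ctx	*ctx;
};

/* Runs in the context of each target thread just before it returns to
 * user space; the thread parks here until the dump has been written. */
static void gencore_hold(struct callback_head *twork)
{
	struct gencore_work *gw = container_of(twork, struct gencore_work, twork);
	struct gencore_ctx *ctx = gw->ctx;

	/* The register-set snapshot of 'current' would be taken here. */

	if (atomic_inc_return(&ctx->nr_held) == ctx->nr_threads)
		complete(&ctx->all_held);

	wait_for_completion(&ctx->release);	/* completed on close() */
	kfree(gw);
}

/* Called from open() for every thread of the target process. */
static int gencore_queue(struct task_struct *t, struct gencore_ctx *ctx)
{
	struct gencore_work *gw = kmalloc(sizeof(*gw), GFP_KERNEL);
	int err;

	if (!gw)
		return -ENOMEM;
	gw->ctx = ctx;
	init_task_work(&gw->twork, gencore_hold);
	err = task_work_add(t, &gw->twork, true);	/* notify the thread */
	if (err)
		kfree(gw);
	return err;
}

The opener would then wait_for_completion(&ctx->all_held) before building the
ELF header, and close() would complete_all(&ctx->release) to let the held
threads continue. Note that a thread blocked in the kernel never runs the
queued work, which is the second TODO item below.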

TODO:

* A mechanism to know when all the threads have reached the queued task work.

* A way to handle the case where one of the threads of the task to be dumped
  is blocked in the kernel.

* We could also add the infrastructure under a config option, say
  CONFIG_ELF_GENCORE.

* The current implementation does not wait for the threads to reach
  wait_for_completion(), so there is no guarantee that the register set is
  collected reliably. We will address this in the next version; this is a
  prototype implementation posted to gather reviews and comments.

Patches 1 to 8 deal with re-arranging the ELF code so that it can be reused by
the infrastructure.

Patches 9 to 19 implement the infrastructure.

Please let me know your reviews and comments.

Janani Venkataraman (19):
      Create elfcore-common.c for ELF class independent core generation helpers
      Make vma_dump_size() generic
      Make fill_psinfo generic
      Rename compat versions of the reusable core generation routines
      Export the reusable ELF core generation routines
      Define API for reading arch-specific Program Headers for Core
      ia64 implementation for elf_core_copy_extra_phdrs()
      elf_core_copy_extra_phdrs() for UML
      Create /proc/pid/core entry
      Track the core generation requests
      Check if the process is an ELF executable
      Hold the threads using task_work_add
      Create ELF Header
      Create ELF Core notes Data
      Calculate the size of the core file
      Generate the data sections for ELF Core
      Identify the ELF class of the process
      Adding support for compat ELF class data structures
      Compat ELF class core generation support


 arch/ia64/kernel/elfcore.c       |   34 +++
 arch/x86/um/elfcore.c            |   32 +++
 fs/Makefile                      |    1 
 fs/binfmt_elf.c                  |  190 ++--------------
 fs/compat_binfmt_elf.c           |    7 +
 fs/elfcore-common.c              |  169 ++++++++++++++
 fs/proc/Makefile                 |    2 
 fs/proc/base.c                   |    2 
 fs/proc/gencore-compat-elf.c     |   62 +++++
 fs/proc/gencore-elf.c            |  458 ++++++++++++++++++++++++++++++++++++++
 fs/proc/gencore.c                |  262 ++++++++++++++++++++++
 fs/proc/gencore.h                |   74 ++++++
 fs/proc/internal.h               |    1 
 include/linux/elfcore-internal.h |   72 ++++++
 include/linux/elfcore.h          |    3 
 kernel/elfcore.c                 |    6 
 16 files changed, 1209 insertions(+), 166 deletions(-)
 create mode 100644 fs/elfcore-common.c
 create mode 100644 fs/proc/gencore-compat-elf.c
 create mode 100644 fs/proc/gencore-elf.c
 create mode 100644 fs/proc/gencore.c
 create mode 100644 fs/proc/gencore.h
 create mode 100644 include/linux/elfcore-internal.h

-- 
Janani 

