Message-ID: <22183.54899.11116.307229@gargle.gargle.HOWL>
Date:	Tue, 26 Jan 2016 21:26:27 +0100
From:	Mikael Pettersson <mikpelinux@...il.com>
To:	Felix von Leitner <felix-linuxkernel@...e.de>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: fork on processes with lots of memory

Felix von Leitner writes:
 > > Dear Linux kernel devs,
 > 
 > > I talked to someone who uses large Linux-based hardware to run a
 > > process with huge memory requirements (think 4 GB), and he told me that
 > > if they do a fork() syscall on that process, the whole system comes to a
 > > standstill. And not just for a second or two. He said they measured a
 > > 45-minute (!) delay before the system became responsive again.
 > 
 > I'm sorry, I meant 4 TB, not 4 GB.
 > I'm not used to working with that kind of memory size.

Make sure you have well over 4 TB of physical RAM if you're going to fork
from a process with a 4 TB virtual address space.  (I'm assuming the address
space isn't sparse, but is all actually in use.)
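
A rough sanity check before you fork is to compare the process's virtual
size against what the machine actually has.  A minimal sketch (purely
illustrative; MemTotal and VmSize are the /proc field names):

    #include <stdio.h>
    #include <string.h>

    /* Read a "Key:  <value> kB" line from a /proc file. */
    static long read_kb(const char *path, const char *key)
    {
            char line[256];
            long kb = -1;
            FILE *f = fopen(path, "r");
            if (!f)
                    return -1;
            while (fgets(line, sizeof line, f)) {
                    if (strncmp(line, key, strlen(key)) == 0) {
                            sscanf(line + strlen(key), " %ld", &kb);
                            break;
                    }
            }
            fclose(f);
            return kb;
    }

    int main(void)
    {
            /* If VmSize is close to (or above) MemTotal, a fork() where the
             * child actually touches its COW pages can't be backed by RAM. */
            printf("MemTotal: %ld kB\n", read_kb("/proc/meminfo", "MemTotal:"));
            printf("VmSize:   %ld kB\n", read_kb("/proc/self/status", "VmSize:"));
            return 0;
    }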

Disable transparent hugepages (THP).  The internal book-keeping mechanisms
have been known to run amok with large RAM sizes, causing severe performance
issues.  Maybe 4.x kernels are better; I haven't checked.
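
If you can't just write "never" to /sys/kernel/mm/transparent_hugepage/enabled
system-wide, a process can also opt itself out.  A minimal sketch, assuming a
kernel >= 3.15 for PR_SET_THP_DISABLE (the mapping below is only an example):

    #include <stdio.h>
    #include <sys/prctl.h>
    #include <sys/mman.h>

    int main(void)
    {
            /* Disable THP for this whole process; the setting is inherited
             * across fork() and preserved across execve().  Linux >= 3.15. */
            if (prctl(PR_SET_THP_DISABLE, 1, 0, 0, 0) != 0)
                    perror("prctl(PR_SET_THP_DISABLE)");

            /* Or opt a single mapping out of THP instead: */
            size_t len = 1UL << 30;  /* 1 GiB anonymous mapping, for illustration */
            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p != MAP_FAILED && madvise(p, len, MADV_NOHUGEPAGE) != 0)
                    perror("madvise(MADV_NOHUGEPAGE)");

            return 0;
    }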

If you're using explicit hugepages and these kinds of RAM sizes, don't
bother with RHEL 6 or 7 kernels -- they're broken.  Vanilla 4.x kernels work.

We're also in the TB range, though not quite 4TB, and fork()ing from inside
such processes definitely works for us.  We do disable THP since it kills us
otherwise.

 > 
 > > Their working theory is that all the pages need to be marked copy-on-write
 > > in both processes, and if you touch one page, a copy needs to be made,
 > > and that just takes a while if you have a billion pages.
 > 
 > > I was wondering if there is any advice for such situations from the
 > > memory management people on this list.
 > 
 > > In this case the fork was for an execve afterwards, but I was going to
 > > recommend fork to them for something else that cannot be worked around
 > > with vfork.
 > 
 > > Can anyone comment on whether the 45 minute number sounds like it could
 > > be real? When I heard it, I was flabbergasted. But the other person
 > > swore it was real. Can a fork cause this much of a delay? Is there a way
 > > to work around it?
 > 
 > > I was going to recommend the fork to create a boundary between the
 > > processes, so that you can recover from memory corruption in one
 > > process. In fact, after the fork I would want to munmap almost all of
 > > the shared pages anyway, but there is no way to tell fork that.
 > 
 > > Thanks,
 > 
 > > Felix
 > 
 > > PS: Please put me on Cc if you reply, I'm not subscribed to this mailing
 > > list.
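
As for "there is no way to tell fork that": you can get close with
madvise(MADV_DONTFORK), which excludes a mapping from being duplicated into
the child at fork() time, so it never has to be COW-marked at all.  A minimal
sketch (the mapping size is made up for illustration):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
            size_t len = 1UL << 30;  /* stand-in for the huge data region */
            void *big = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (big == MAP_FAILED)
                    return 1;

            /* Tell fork() not to map this region into the child at all;
             * the child must not touch it (it simply isn't mapped there). */
            if (madvise(big, len, MADV_DONTFORK) != 0)
                    perror("madvise(MADV_DONTFORK)");

            if (fork() == 0) {
                    /* child: do the execve() or recovery work here */
                    _exit(0);
            }
            return 0;
    }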

-- 
