Message-ID: <alpine.LSU.2.11.1601271905210.2349@eggly.anvils>
Date:	Wed, 27 Jan 2016 19:09:13 -0800 (PST)
From:	Hugh Dickins <hughd@...gle.com>
To:	Felix von Leitner <felix-linuxkernel@...e.de>
cc:	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: fork on processes with lots of memory

On Tue, 26 Jan 2016, Felix von Leitner wrote:
> > Dear Linux kernel devs,
> 
> > I talked to someone who uses large Linux-based hardware to run a
> > process with huge memory requirements (think 4 GB), and he told me that
> > if they do a fork() syscall on that process, the whole system comes to
> > a standstill. And not just for a second or two. He said they measured a
> > 45-minute (!) delay before the system became responsive again.
> 
> I'm sorry, I meant 4 TB, not 4 GB.
> I'm not used to working with memory sizes of that order.
> 
> > Their working theory is that all the pages need to be marked copy-on-write
> > in both processes, and if you touch one page, a copy needs to be made,
> > and that just takes a while if you have a billion pages.
> 
> > I was wondering if there is any advice for such situations from the
> > memory management people on this list.
> 
> > In this case the fork was for an execve afterwards, but I was going to
> > recommend fork to them for something else that cannot be worked around
> > with vfork.
> 
> > Can anyone comment on whether the 45-minute number sounds like it could
> > be real? When I heard it, I was flabbergasted. But the other person
> > swore it was real. Can a fork cause this much of a delay? Is there a way
> > to work around it?
> 
> > I was going to recommend the fork to create a boundary between the
> > processes, so that you can recover from memory corruption in one
> > process. In fact, after the fork I would want to munmap almost all of
> > the shared pages anyway, but there is no way to tell fork that.

You might find madvise(addr, length, MADV_DONTFORK) helpful:
that tells fork not to duplicate the given range in the child.
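
A minimal sketch of that pattern (not from this thread; the 1 GiB size and
the exec target are illustrative placeholders, and error handling is
abbreviated):

#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/wait.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        size_t len = 1UL << 30;         /* 1 GiB, purely for illustration */
        void *big = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (big == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* Ask the kernel not to duplicate this range into children. */
        if (madvise(big, len, MADV_DONTFORK) != 0) {
                perror("madvise");
                return 1;
        }

        pid_t pid = fork();
        if (pid == 0) {
                /* Child: the range marked above is simply absent here;
                 * touching 'big' would fault.  Typically exec() next. */
                execlp("true", "true", (char *)NULL);
                _exit(127);
        }
        waitpid(pid, NULL, 0);
        return 0;
}

In the child that range is just an unmapped hole, which is roughly what the
munmap-after-fork you describe would give you, except the page tables for it
are never copied in the first place.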

Hugh

> 
> > Thanks,
> 
> > Felix
> 
> > PS: Please put me on Cc if you reply, I'm not subscribed to this mailing
> > list.
