Message-ID: <4usw5eqvmvr7mh35hwulyiwypgyp4symsvjxqsn5afwzsgowvk@pifzkcn46j2g>
Date: Fri, 30 Aug 2024 12:48:41 -0400
From: "Liam R. Howlett" <Liam.Howlett@...cle.com>
To: David Hildenbrand <david@...hat.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
        Petr Spacek <pspacek@....org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, Vlastimil Babka <vbabka@...e.cz>
Subject: Re: [PATCH RFC] mm: mmap: Change DEFAULT_MAX_MAP_COUNT to INT_MAX

* David Hildenbrand <david@...hat.com> [240830 11:24]:
> On 30.08.24 13:41, Lorenzo Stoakes wrote:
> > On Fri, Aug 30, 2024 at 11:56:36AM GMT, Petr Spacek wrote:
> > > From: Petr Spacek <pspacek@....org>
> > > 
> > > Raise default sysctl vm.max_map_count to INT_MAX, which effectively
> > > disables the limit for all sane purposes. The sysctl is kept around in
> > > case there is some use-case for this limit.
> > > 
> > > The old default value of vm.max_map_count=65530 provided compatibility
> > > with ELF format predating year 2000 and with binutils predating 2010. At
> > > the same time the old default caused issues with applications deployed
> > > in 2024.
> > > 
> > > State since 2012: Linux 3.2.0 correctly generates coredump from a
> > > process with 100 000 mmapped files. GDB 7.4.1, binutils 2.22 work with
> > > this coredump fine and can actually read data from the mmaped addresses.
> > > 
> > > Signed-off-by: Petr Spacek <pspacek@....org>
> > 
> > NACK.
> 
> Agreed, I could have sworn I NACKed a similar patch just months ago.

You did [1]; the mm list doesn't seem to have all those emails.

The initial patch isn't there, but I believe the planned change was to
increase the limit to 1048576.

> 
> If you use that many memory mappings, you're doing something very, very
> wrong.

It also caught jemalloc ever-increasing the vma count in 2017 [2] and
2018 [3], and something odd there again in 2023 [4].

It seems like it takes a lot of configuring to get this to behave as one
would expect, so you need to tune the system, the application, or both
here?

Although there are useful reasons to increase the vma limit, most people
are fine with the limit now and it catches bad actors.  Those not fine
with the limit have a way to increase it - and pretty much all of those
people are using their own allocators, it seems.
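
For completeness, the knob is just vm.max_map_count; raising it is
normally a one-liner with sysctl(8) or a drop-in under /etc/sysctl.d/.
A minimal C equivalent, writing the procfs file directly (needs root;
the 1048576 figure is only the example value mentioned above):

/* Sketch only: roughly `sysctl -w vm.max_map_count=1048576`. */
#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/proc/sys/vm/max_map_count", "w");

        if (!f) {
                perror("/proc/sys/vm/max_map_count");
                return 1;
        }
        /* procfs rejects bad writes; check both the write and the flush. */
        if (fprintf(f, "1048576\n") < 0 || fclose(f) != 0) {
                perror("write");
                return 1;
        }
        return 0;
}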


[1]. https://lore.kernel.org/all/1a91e772-4150-4d28-9c67-cb6d0478af79@redhat.com/
[2]. https://github.com/jemalloc/jemalloc/issues/1011
[3]. https://github.com/jemalloc/jemalloc/issues/1328
[4]. https://github.com/jemalloc/jemalloc/issues/2426
