Date:	Mon, 13 Jun 2011 14:31:44 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	aquini@...ux.com
Cc:	Russ Anderson <rja@....com>,
	Andrea Arcangeli <aarcange@...hat.com>,
	linux-mm <linux-mm@...ck.org>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Christoph Lameter <cl@...ux.com>, rja@...ricas.sgi.com
Subject: Re: [PATCH] mm: fix negative commitlimit when gigantic hugepages
 are allocated

On Mon, 13 Jun 2011 18:11:55 -0300
Rafael Aquini <aquini@...ux.com> wrote:

> Howdy Andrew,
> 
> Sorry for the late reply.
> 
> On Thu, Jun 09, 2011 at 04:44:08PM -0700, Andrew Morton wrote:
> > On Thu, 2 Jun 2011 23:55:57 -0300
> > Rafael Aquini <aquini@...ux.com> wrote:
> > 
> > > When 1GB hugepages are allocated on a system, free(1) reports
> > > less available memory than is actually installed in the box.
> > > Also, if the total size of hugepages allocated on a system is
> > > over half of the total memory size, CommitLimit becomes
> > > a negative number.
> > > 
> > > The problem is that gigantic hugepages (order > MAX_ORDER)
> > > can only be allocated at boot with bootmem, thus their frames
> > > are not accounted to 'totalram_pages'. However, they are
> > > accounted to hugetlb_total_pages().
> > > 
> > > What turns CommitLimit into a negative number is this
> > > calculation in fs/proc/meminfo.c:
> > > 
> > >         allowed = ((totalram_pages - hugetlb_total_pages())
> > >                 * sysctl_overcommit_ratio / 100) + total_swap_pages;
> > > 
> > > A similar calculation occurs in __vm_enough_memory() in mm/mmap.c.
> > > 
> > > Also, every vm statistic that depends on 'totalram_pages' will
> > > render confusing values, as if the system were 'missing' part
> > > of its memory.
> > 
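
To make the failure mode concrete, here is a minimal userspace sketch
of the calculation quoted above, using hypothetical sizes (a 16 GB box
where 10 GB of 1GB gigantic hugepages were allocated at boot); it is an
illustration, not the kernel code:

#include <stdio.h>

int main(void)
{
	/* all counts are in 4 KiB pages; the sizes are hypothetical */
	unsigned long totalram_pages   = 6UL << 18;  /* 16 GB box minus 10 GB of bootmem hugepages */
	unsigned long hugetlb_pages    = 10UL << 18; /* what hugetlb_total_pages() would report */
	unsigned long total_swap_pages = 0;
	unsigned long overcommit_ratio = 50;         /* default sysctl_overcommit_ratio */

	unsigned long allowed = (totalram_pages - hugetlb_pages)
		* overcommit_ratio / 100 + total_swap_pages;

	/*
	 * The unsigned subtraction wraps around, so 'allowed' comes out
	 * as a huge bogus page count; this is the value that surfaces
	 * as a negative CommitLimit in /proc/meminfo.
	 */
	printf("allowed = %lu pages\n", allowed);
	return 0;
}
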
> > Is this bug serious enough to justify backporting the fix into -stable
> > kernels?
> 
> Despite not having tested it, I can think of the following scenario as
> troublesome: when gigantic hugepages are allocated and
> sysctl_overcommit_memory == OVERCOMMIT_NEVER, __vm_enough_memory() goes
> through the mentioned 'allowed' calculation and might end up mistakenly
> returning -ENOMEM, thus forcing the system to start reclaiming pages
> earlier than usual, and this could have a detrimental impact on overall
> system performance, depending on the workload.
> 
> Besides the aforementioned scenario, I can only think of this causing annoyances
> with memory reports from /proc/meminfo and free(1).
> 
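
The scenario above concerns the OVERCOMMIT_NEVER path of
__vm_enough_memory(). The following rough sketch shows the kind of
check whose outcome a corrupted 'allowed' distorts; the helper name and
all numbers are made up, and the real kernel logic is more involved:

#include <errno.h>
#include <stdio.h>

/*
 * Simplified stand-in for the OVERCOMMIT_NEVER path of
 * __vm_enough_memory(): a new charge of 'pages' is admitted only if it
 * fits under 'allowed'.  If 'allowed' is computed from bogus inputs,
 * this check can refuse (or admit) allocations it should not.
 */
static int vm_enough_memory_sketch(unsigned long pages,
				   unsigned long committed,
				   unsigned long allowed)
{
	if (committed + pages > allowed)
		return -ENOMEM;	/* what callers such as mmap() propagate */
	return 0;
}

int main(void)
{
	/* hypothetical numbers, in pages */
	printf("sane limit:  %d\n", vm_enough_memory_sketch(1000, 500000, 786432));
	printf("bogus limit: %d\n", vm_enough_memory_sketch(1000, 500000, 100));
	return 0;
}
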

hm, OK, thanks.  That sounds a bit thin, but the patch is really simple
so I stuck the cc:stable onto its changelog.

