Message-ID: <0101018584d0b5a3-ea0e4d67-b00f-4254-8e1c-767fcafbec31-000000@us-west-2.amazonses.com>
Date: Fri, 6 Jan 2023 02:02:28 +0000
From: Aaron Thompson <dev@...ont.org>
To: Ingo Molnar <mingo@...nel.org>
Cc: Mike Rapoport <rppt@...nel.org>, linux-mm@...ck.org,
"H. Peter Anvin" <hpa@...or.com>,
Alexander Potapenko <glider@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Andy Shevchenko <andy@...radead.org>,
Ard Biesheuvel <ardb@...nel.org>,
Borislav Petkov <bp@...en8.de>,
Darren Hart <dvhart@...radead.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
David Rientjes <rientjes@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Ingo Molnar <mingo@...hat.com>, Marco Elver <elver@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>,
kasan-dev@...glegroups.com, linux-efi@...r.kernel.org,
linux-kernel@...r.kernel.org, platform-driver-x86@...r.kernel.org,
x86@...nel.org
Subject: Re: [PATCH v2 1/1] mm: Always release pages to the buddy allocator in
memblock_free_late().

On 2023-01-05 02:48, Ingo Molnar wrote:
> * Aaron Thompson <dev@...ont.org> wrote:
>
>> For example, on an Amazon EC2 t3.micro VM (1 GB) booting via EFI:
>>
>> v6.2-rc2:
>> # grep -E 'Node|spanned|present|managed' /proc/zoneinfo
>> Node 0, zone DMA
>> spanned 4095
>> present 3999
>> managed 3840
>> Node 0, zone DMA32
>> spanned 246652
>> present 245868
>> managed 178867
>>
>> v6.2-rc2 + patch:
>> # grep -E 'Node|spanned|present|managed' /proc/zoneinfo
>> Node 0, zone DMA
>> spanned 4095
>> present 3999
>> managed 3840
>> Node 0, zone DMA32
>> spanned 246652
>> present 245868
>> managed 222816 # +43,949 pages
>
> [ Note the annotation I added to the output - might be useful in the
> changelog too. ]
>
> So this patch adds around +17% of RAM to this 1 GB virtual system? That
> looks rather significant ...
>
> Thanks,
>
> Ingo

It is significant, but I wouldn't describe it as being added. I would
say that the system is currently losing 17% of RAM due to a bug, and
this patch fixes that bug.
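
For the record, the arithmetic behind that figure, worked from the
"managed" counts in the zoneinfo output above (a quick sketch in
Python; the 4 KiB x86 page size is assumed):

    # Sanity check of the ~17% figure (assumes 4 KiB x86 pages):
    delta_pages = 222816 - 178867         # DMA32 "managed", patched vs. unpatched
    mib = delta_pages * 4096 / (1 << 20)  # ~171.7 MiB reclaimed
    print(f"{delta_pages} pages = {mib:.1f} MiB = {mib / 1024:.1%} of 1 GiB")

That prints "43949 pages = 171.7 MiB = 16.8% of 1 GiB", which is where
the roughly 17% comes from.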

The actual numbers depend on the memory map provided by the EFI
firmware, so they're largely out of our control. As an example, similar
VMs that I run with the OVMF EFI lose about 3%. I couldn't say for sure
which is the outlier, but my point is that the specific values are not
really the focus; this is just an example showing that the issue can be
encountered in the wild, with real impact. I know I'll be happy to get
that memory back, whether it is 3% or 17% :)
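
For anyone who wants to compare two boots themselves, here is a small
sketch that totals the "managed" counts across all zones in
/proc/zoneinfo (Python purely for convenience; a 4 KiB page size is
assumed):

    # Sum the per-zone "managed" counts and report the total in MiB.
    total = 0
    with open("/proc/zoneinfo") as f:
        for line in f:
            fields = line.split()
            if len(fields) == 2 and fields[0] == "managed":
                total += int(fields[1])
    print(f"{total} pages ({total * 4096 / (1 << 20):.1f} MiB managed)")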

Thanks,

-- Aaron