Message-ID: <20241014175335.10447-B-hca@linux.ibm.com>
Date: Mon, 14 Oct 2024 19:53:35 +0200
From: Heiko Carstens <hca@...ux.ibm.com>
To: David Hildenbrand <david@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        linux-s390@...r.kernel.org, virtualization@...ts.linux.dev,
        linux-doc@...r.kernel.org, kvm@...r.kernel.org,
        Vasily Gorbik <gor@...ux.ibm.com>,
        Alexander Gordeev <agordeev@...ux.ibm.com>,
        Christian Borntraeger <borntraeger@...ux.ibm.com>,
        Sven Schnelle <svens@...ux.ibm.com>, Thomas Huth <thuth@...hat.com>,
        Cornelia Huck <cohuck@...hat.com>,
        Janosch Frank <frankja@...ux.ibm.com>,
        Claudio Imbrenda <imbrenda@...ux.ibm.com>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Jason Wang <jasowang@...hat.com>,
        Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
        Eugenio Pérez <eperezma@...hat.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Jonathan Corbet <corbet@....net>, Mario Casquero <mcasquer@...hat.com>
Subject: Re: [PATCH v2 7/7] s390/sparsemem: reduce section size to 128 MiB

On Mon, Oct 14, 2024 at 04:46:19PM +0200, David Hildenbrand wrote:
> Ever since commit 421c175c4d609 ("[S390] Add support for memory hot-add.")
> we've been using a section size of 256 MiB on s390x and 32 MiB on s390.
> Before that, we were using a section size of 32 MiB on both
> architectures.
> 
> Likely the reason was that we expected a storage increment size of
> 256 MiB under z/VM back then. As we didn't support memory blocks spanning
> multiple memory sections, we would have had to handle having multiple
> memory blocks for a single storage increment, which complicates things.
> Although that issue reappeared with even bigger storage increment sizes
> later, nowadays we have memory blocks that can span multiple memory
> sections and we avoid any such issue completely.

I doubt that z/VM had support for memory hotplug back then; and the sclp
memory hotplug code was always written in a way that it could handle
increment sizes smaller than, larger than, or equal to the section size.
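
For illustration only (a standalone sketch, not the actual sclp code;
the names are made up): mapping an increment to the sections it touches
is plain shift arithmetic, so any size ratio falls out naturally.

    /* Hypothetical example; not kernel code. */
    #include <stdio.h>

    #define SECTION_SIZE_BITS 27    /* 128 MiB sections */

    static void map_increment(unsigned long addr, unsigned long size)
    {
        unsigned long first = addr >> SECTION_SIZE_BITS;
        unsigned long last = (addr + size - 1) >> SECTION_SIZE_BITS;

        printf("%4lu MiB increment -> sections %lu..%lu\n",
               size >> 20, first, last);
    }

    int main(void)
    {
        map_increment(0, 64UL << 20);    /* smaller than a section */
        map_increment(0, 128UL << 20);   /* equal to a section */
        map_increment(0, 256UL << 20);   /* spans two sections */
        return 0;
    }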

If I remember correctly, the section size was also used to represent each
piece of memory in sysfs (aka a memory block). So the different sizes were
chosen to avoid an excessive number of sysfs entries on 64 bit.

This problem went away later with the introduction of memory_block_size.
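
Back-of-the-envelope numbers (all sizes assumed for illustration) show
why that mattered: with one sysfs entry per section, a 1 TiB machine
ends up with thousands of entries, while a larger memory block size
keeps the count sane.

    /* Illustrative arithmetic only; the sizes are assumptions. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long mem = 1ULL << 40;        /* 1 TiB of memory */
        unsigned long long section = 128ULL << 20;  /* 128 MiB section */
        unsigned long long block = 1ULL << 30;      /* 1 GiB memory block */

        printf("entries at section granularity: %llu\n", mem / section); /* 8192 */
        printf("entries at block granularity:   %llu\n", mem / block);   /* 1024 */
        return 0;
    }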

Even further back in time, I think there were static arrays with
2^(MAX_PHYSMEM_BITS - SECTION_SIZE_BITS) elements.

I just gave it a try and, as expected nowadays, bloat-o-meter doesn't
indicate anything like that anymore.
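
Presumably that's because CONFIG_SPARSEMEM_EXTREME allocates the
mem_section lookup at runtime these days. The old static footprint is
easy to reconstruct (assuming MAX_PHYSMEM_BITS == 46 on s390):

    /* Worked example for 2^(MAX_PHYSMEM_BITS - SECTION_SIZE_BITS). */
    #include <stdio.h>

    int main(void)
    {
        int max_physmem_bits = 46;    /* assumption for s390 */

        /* 256 MiB sections -> SECTION_SIZE_BITS == 28 */
        printf("2^(46-28) = %lu sections\n", 1UL << (max_physmem_bits - 28));
        /* 128 MiB sections -> SECTION_SIZE_BITS == 27 */
        printf("2^(46-27) = %lu sections\n", 1UL << (max_physmem_bits - 27));
        return 0;
    }

With one pointer per element that would have been a couple of MiB of
static data, doubling each time the section size is halved.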

> 128 MiB has been used by x86-64 since the very beginning. arm64 with 4k
> base pages switched to 128 MiB as well: it's just big enough on these
> architectures to allow for using a huge page (2 MiB) in the vmemmap in
> sane setups with sizeof(struct page) == 64 bytes and a huge page mapping
> in the direct mapping, while still allowing for small hot(un)plug
> granularity.
> 
> For s390, we could even switch to a 64 MiB section size, as our huge page
> size is 1 MiB: but the smaller the section size, the more sections we'll
> have to manage especially on bigger machines. Making it consistent with
> x86-64 and arm64 feels like the right thing for now.

That's fine with me.
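
The arithmetic behind that (using the sizes from the quoted text) checks
out nicely: one 128 MiB section needs exactly one 2 MiB vmemmap chunk.

    /* Sketch of the vmemmap-per-section arithmetic, assuming 4 KiB base
     * pages and sizeof(struct page) == 64 as stated above. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long section = 128UL << 20;   /* 128 MiB section */
        unsigned long page_size = 4096;        /* 4 KiB base pages */
        unsigned long struct_page = 64;        /* sizeof(struct page) */

        unsigned long pages = section / page_size;      /* 32768 */
        unsigned long vmemmap = pages * struct_page;    /* 2 MiB */

        printf("vmemmap per section: %lu MiB\n", vmemmap >> 20);
        return 0;
    }

By the same math, a 64 MiB section would need 1 MiB of vmemmap, which is
why the 1 MiB huge page size would permit going down to 64 MiB on s390.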

Acked-by: Heiko Carstens <hca@...ux.ibm.com>
