Message-ID: <20220831034909.GA16092@sophie>
Date:   Tue, 30 Aug 2022 22:49:09 -0500
From:   Rebecca Mckeever <remckee0@...il.com>
To:     David Hildenbrand <david@...hat.com>
Cc:     Mike Rapoport <rppt@...nel.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/4] memblock tests: add simulation of physical memory
 with multiple NUMA nodes

On Tue, Aug 30, 2022 at 01:17:56PM +0200, David Hildenbrand wrote:
> On 19.08.22 11:05, Rebecca Mckeever wrote:
> > Add functions setup_numa_memblock_generic() and setup_numa_memblock()
> > for setting up a memory layout with multiple NUMA nodes in a previously
> > allocated dummy physical memory. These functions can be used in place of
> > setup_memblock() in tests that need to simulate a NUMA system.
> > 
> > setup_numa_memblock_generic():
> > - allows for setting up a custom memory layout by specifying the amount
> >   of memory in each node, the number of nodes, and a factor that will be
> >   used to scale the memory in each node
> > 
> > setup_numa_memblock():
> > - allows for setting up a default memory layout
> > 
> > Introduce constant MEM_FACTOR, which is used to scale the default memory
> > layout based on MEM_SIZE.
> > 
> > Set CONFIG_NODES_SHIFT to 4 when building with NUMA=1 to allow for up to
> > 16 NUMA nodes.
> > 
> > Signed-off-by: Rebecca Mckeever <remckee0@...il.com>
> > ---
> >  .../testing/memblock/scripts/Makefile.include |  2 +-
> >  tools/testing/memblock/tests/common.c         | 38 +++++++++++++++++++
> >  tools/testing/memblock/tests/common.h         |  9 ++++-
> >  3 files changed, 47 insertions(+), 2 deletions(-)
> > 
> > diff --git a/tools/testing/memblock/scripts/Makefile.include b/tools/testing/memblock/scripts/Makefile.include
> > index aa6d82d56a23..998281723590 100644
> > --- a/tools/testing/memblock/scripts/Makefile.include
> > +++ b/tools/testing/memblock/scripts/Makefile.include
> > @@ -3,7 +3,7 @@
> >  
> >  # Simulate CONFIG_NUMA=y
> >  ifeq ($(NUMA), 1)
> > -	CFLAGS += -D CONFIG_NUMA
> > +	CFLAGS += -D CONFIG_NUMA -D CONFIG_NODES_SHIFT=4
> >  endif
> >  
> >  # Use 32 bit physical addresses.
> > diff --git a/tools/testing/memblock/tests/common.c b/tools/testing/memblock/tests/common.c
> > index eec6901081af..15d8767dc70c 100644
> > --- a/tools/testing/memblock/tests/common.c
> > +++ b/tools/testing/memblock/tests/common.c
> > @@ -34,6 +34,10 @@ static const char * const help_opts[] = {
> >  
> >  static int verbose;
> >  
> > +static const phys_addr_t node_sizes[] = {
> > +	SZ_4K, SZ_1K, SZ_2K, SZ_2K, SZ_1K, SZ_1K, SZ_4K, SZ_1K
> > +};
> > +
> >  /* sets global variable returned by movable_node_is_enabled() stub */
> >  bool movable_node_enabled;
> >  
> > @@ -72,6 +76,40 @@ void setup_memblock(void)
> >  	fill_memblock();
> >  }
> >  
> > +/**
> > + * setup_numa_memblock_generic:
> > + * Set up a memory layout with multiple NUMA nodes in a previously allocated
> > + * dummy physical memory.
> > + * @nodes: an array containing the amount of memory in each node
> > + * @node_cnt: the size of @nodes
> > + * @factor: a factor that will be used to scale the memory in each node
> > + *
> > + * The nids will be set to 0 through node_cnt - 1.
> > + */
> > +void setup_numa_memblock_generic(const phys_addr_t nodes[],
> > +				 int node_cnt, int factor)
> > +{
> > +	phys_addr_t base;
> > +	int flags;
> > +
> > +	reset_memblock_regions();
> > +	base = (phys_addr_t)memory_block.base;
> > +	flags = (movable_node_is_enabled()) ? MEMBLOCK_NONE : MEMBLOCK_HOTPLUG;
> > +
> > +	for (int i = 0; i < node_cnt; i++) {
> > +		phys_addr_t size = factor * nodes[i];
> 
> I'm a bit lost why we need the factor if we already provide sizes in the
> array.
> 
> Can you enlighten me? :)
> 
> Why can't we just stick to the sizes in the array?
> 
Without the factor, some of the tests would break if we increase MEM_SIZE
in the future (which we may need to do). I could rewrite them so that the
factor is not needed, but I thought that would over-complicate the code.
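
For illustration, a minimal sketch (not part of the patch) of how the
factor ties the default layout to MEM_SIZE; it assumes MEM_FACTOR is
defined as MEM_SIZE / SZ_16K, since the default node_sizes[] above sum
to SZ_16K:

	/*
	 * Hypothetical example: if MEM_SIZE were doubled to SZ_32K,
	 * MEM_FACTOR would become 2 and every default node would double
	 * in size, so the default layout would still cover the whole
	 * dummy physical memory.
	 */
	#define MEM_FACTOR	(MEM_SIZE / SZ_16K)

	void setup_numa_memblock(void)
	{
		setup_numa_memblock_generic(node_sizes, ARRAY_SIZE(node_sizes),
					    MEM_FACTOR);
	}

That way only the factor changes if the dummy memory grows, and the
relative node sizes in the default layout stay the same.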

> -- 
> Thanks,
> 
> David / dhildenb
> 
Thanks,
Rebecca
