Date:	Thu, 10 Jul 2008 18:39:25 -0700
From:	Mike Travis <travis@....com>
To:	"Eric W. Biederman" <ebiederm@...ssion.com>
CC:	"H. Peter Anvin" <hpa@...or.com>,
	Christoph Lameter <cl@...ux-foundation.org>,
	Jeremy Fitzhardinge <jeremy@...p.org>,
	Ingo Molnar <mingo@...e.hu>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Jack Steiner <steiner@....com>, linux-kernel@...r.kernel.org,
	Arjan van de Ven <arjan@...radead.org>
Subject: Re: [RFC 00/15] x86_64: Optimize percpu accesses

Eric W. Biederman wrote:
> Mike Travis <travis@....com> writes:
> 
> 
>> The biggest growth came from moving all the xxx[NR_CPUS] arrays into
>> the per cpu area.  So you free up a huge amount of unused memory when
>> the NR_CPUS count starts getting into the ozone layer.  4k now, 16k
>> real soon now, ??? future?
> 
> Hmm.  Do you know how big a role kernel_stat plays?
> 
> It is a per cpu structure that is sized via NR_IRQS.  NR_IRQS is sized by NR_CPUS.
> So ultimately the amount of memory taken up is NR_CPUS*NR_CPUS*32 or so.
> 
> I have a patch I wrote long ago that addresses that specific nasty configuration
> by moving the per cpu irq counters into a pointer available from struct irq_desc.
> 
> The next step, which I did not get to (but which is interesting from a scaling
> perspective), was to start dynamically allocating the irq structures.
> 
> Eric

If you could dig that up, that would be great.  Another engineer here at SGI
took that task off my hands, and he's been able to do a few things to reduce
the "# irqs", but irq_desc is still one of the bigger static arrays (>256k).

(There was some discussion a while back on this very subject.)

The top data users are:

====== Data (-l 500)
    1 - ingo-test-0701-256
    2 - 4k-defconfig
    3 - ingo-test-0701

      .1.      .2.      .3.    ..final..
  1048576  -917504  +917504 1048576      .  __log_buf(.bss)
   262144  -262144  +262144  262144      .  gl_hash_table(.bss)
   122360  -122360  +122360  122360      .  g_bitstream(.data)
   119756  -119756  +119756  119756      .  init_data(.rodata)
    89760   -89760   +89760   89760      .  o2net_nodes(.bss)
    76800   -76800  +614400  614400  +700%  early_node_map(.data)
    44548   -44548   +44548   44548      .  typhoon_firmware_image(.rodata)
    43008  +215040        .  258048  +500%  irq_desc(.data.cacheline_aligned)
    42768   -42768   +42768   42768      .  s_firmLoad(.data)
    41184   -41184   +41184   41184      .  saa7134_boards(.data)
    38912   -38912   +38912   38912      .  dabusb(.bss)
    34804   -34804   +34804   34804      .  g_Firmware(.data)
    32768   -32768   +32768   32768      .  read_buffers(.bss)
    19968   -19968  +159744  159744  +700%  initkmem_list3(.init.data)
    18041   -18041   +18041   18041      .  OperationalCodeImage_GEN1(.data)
    16507   -16507   +16507   16507      .  OperationalCodeImage_GEN2(.data)
    16464   -16464   +16464   16464      .  ipw_geos(.rodata)
    16388  +114688  -114688   16388      .  map_pid_to_cmdline(.bss)
    16384   -16384   +16384   16384      .  gl_hash_locks(.bss)
    16384  +245760        .  262144 +1500%  boot_pageset(.bss)
    16128  +215040        .  231168 +1333%  irq_cfg(.data.read_mostly)

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
