Message-ID: <49625145.7070402@sgi.com>
Date:	Mon, 05 Jan 2009 10:28:21 -0800
From:	Mike Travis <travis@....com>
To:	Ingo Molnar <mingo@...e.hu>
CC:	Ingo Molnar <mingo@...hat.com>,
	Rusty Russell <rusty@...tcorp.com.au>,
	"H. Peter Anvin" <hpa@...or.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Jack Steiner <steiner@....com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 00/11] x86: cpumask: some more cpumask cleanups

Ingo Molnar wrote:
> * Mike Travis <travis@....com> wrote:
> 
>> Here's some more cpumask cleanups.
>>
>>     ia64: cpumask fix for is_affinity_mask_valid()
>>     cpumask: update local_cpus_show to use new cpumask API
>>     cpumask: update pci_bus_show_cpuaffinity to use new cpumask API
>>     x86: cleanup remaining cpumask_t ops in smpboot code
>>     x86: clean up speedstep-centrino and reduce cpumask_t usage
>>     cpumask: Replace CPUMASK_ALLOC etc with cpumask_var_t.
>>     cpumask: convert struct cpufreq_policy to cpumask_var_t.
>>     cpumask: use work_on_cpu in acpi/cstate.c
>>     cpumask: use cpumask_var_t in acpi-cpufreq.c
>>     cpumask: use work_on_cpu in acpi-cpufreq.c for drv_read and drv_write
>>     cpumask: use work_on_cpu in acpi-cpufreq.c for read_measured_perf_ctrs
>>
>> This version basically splits out the changes to make it more 
>> bisectable, and has been patch-wise compile/boot tested.  Updated stats 
>> are below.
> 
> ok, i've picked them up into tip/cpus4096:

Thanks Ingo!

> 
> 1d1a70e: cpumask: use work_on_cpu in acpi-cpufreq.c for read_measured_perf_ctrs
> 4d30e6b: cpumask: use work_on_cpu in acpi-cpufreq.c for drv_read and drv_write
> 0771cd4: cpumask: use cpumask_var_t in acpi-cpufreq.c
> 9fa9864: cpumask: use work_on_cpu in acpi/cstate.c
> a2a8809: cpumask: convert struct cpufreq_policy to cpumask_var_t
> ee557bd: cpumask: replace CPUMASK_ALLOC etc with cpumask_var_t
> 3744123: x86: clean up speedstep-centrino and reduce cpumask_t usage
> c2d1cec: x86: cleanup remaining cpumask_t ops in smpboot code
> 588235b: cpumask: update pci_bus_show_cpuaffinity to use new cpumask API
> 3be8305: cpumask: update local_cpus_show to use new cpumask API
> d3b66bf: ia64: cpumask fix for is_affinity_mask_valid()
> 
> ( Sidenote, your mail scripts have a bug that do this to the Subject line:
> 
>     Subject: [PATCH 05/11] x86: clean up speedstep-centrino and reduce 
>     cpumask_t usage From: Rusty Russell <rusty@...tcorp.com.au>

The bug is in quilt mail (even in the latest version), but since it's just a
script, I'll see about fixing it myself.

> 
>   i've fixed them up manually so that Rusty is in the Author field. )
> 
> 
>> The number of stack hogs have been significantly reduced:
>>
>> ====== Stack (-l 500)
>>     1 - allyesconfig-128
>>     2 - allyesconfig-4k
>>
>>   .1.    .2.    ..final..
>>     0  +1032   1032      .  flush_tlb_page
>>     0  +1024   1024      .  kvm_reload_remote_mmus
>>     0  +1024   1024      .  kvm_flush_remote_tlbs
>>     0  +1024   1024      .  flush_tlb_mm
>>     0  +1024   1024      .  flush_tlb_current_task
> 
> Quite good! Can we fix those TLB flush cpumask uses too?

I've looked at the TLB ones, and they're hairy.  But we now have a few more
facilities in place, so I'll revisit them.

> 
>> And the overall memory usage is becoming quite less affected by changing
>> NR_CPUS from 128 to 4096:
> [...]
>>         .1.       .2.    ..final..
>>    11436936  +4167424    15604360   +36%  .bss
> 
> .bss seems to account for ~80% of the increase. Are these static cpumasks, 
> or do we still have NR_CPUS arrays around?

There are 72 arrays still using NR_CPUS (though some legitimately), 14 static
cpumask_t's, and 11 "DECLARE_BITMAP(..., NR_CPUS)" declarations.

There are also about 5 patches left in my queue that need further testing with
the latest tip code.

Thanks,
Mike
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
