Message-ID: <20160825114506.GA11762@nazgul.tnic>
Date: Thu, 25 Aug 2016 13:45:06 +0200
From: Borislav Petkov <bp@...e.de>
To: "H. Peter Anvin" <hpa@...or.com>
Cc: "Huang, Ying" <ying.huang@...el.com>,
Denys Vlasenko <dvlasenk@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Brian Gerst <brgerst@...il.com>,
LKML <linux-kernel@...r.kernel.org>,
Andy Lutomirski <luto@...capital.net>, lkp@...org,
Thomas Gleixner <tglx@...utronix.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...nel.org>,
Ville Syrjälä <ville.syrjala@...ux.intel.com>
Subject: Re: [LKP] [lkp] [x86/hweight] 65ea11ec6a: will-it-scale.per_process_ops 9.3% improvement
On Thu, Aug 25, 2016 at 03:05:19AM -0700, H. Peter Anvin wrote:
> I'm wondering if one of those 23 invocations sets up some kind of
> corrupt data that continues to get used.
That could be one plausible explanation. Look at what calls
__sw_hweight64:
initmem_init
numa_policy_init
page_writeback_init
paging_init
pcpu_embed_first_chunk
pcpu_setup_first_chunk
sched_init
set_rq_online.part.46
setup_arch
setup_per_cpu_areas
update_sysctl
x86_64_start_kernel
x86_64_start_reservations
x86_numa_init
zone_sizes_init
I could very well imagine per-CPU areas or some sched structure or
whatever getting silently corrupted.
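For reference, a software hweight64 is just a population count over a
64-bit word, so any register it clobbers beyond its return value would
go unnoticed by those early-boot callers. A minimal sketch of what such
a routine boils down to (illustration only, not the kernel's actual
lib/hweight.c and not the asm version the commit touches):

	#include <stdint.h>

	/* Illustrative SWAR popcount: count set bits two, four and
	 * eight at a time, then sum the per-byte counts. */
	static unsigned int sw_hweight64_sketch(uint64_t w)
	{
		w -= (w >> 1) & 0x5555555555555555ULL;
		w  = (w & 0x3333333333333333ULL) +
		     ((w >> 2) & 0x3333333333333333ULL);
		w  = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0fULL;
		return (w * 0x0101010101010101ULL) >> 56;
	}

The routine itself is trivial; the interesting part is the calling
convention around it, which is why corruption in one of those early
callers would be hard to spot.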
Thanks.
--
Regards/Gruss,
Boris.
ECO tip #101: Trim your mails when you reply.
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)