Message-ID: <CB500F69-F2BB-41B2-BDE5-3F2AEA5BE168@zytor.com>
Date: Thu, 25 Aug 2016 13:07:04 -0700
From: "H. Peter Anvin" <hpa@...or.com>
To: Borislav Petkov <bp@...e.de>
CC: "Huang, Ying" <ying.huang@...el.com>,
Denys Vlasenko <dvlasenk@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Brian Gerst <brgerst@...il.com>,
LKML <linux-kernel@...r.kernel.org>,
Andy Lutomirski <luto@...capital.net>, lkp@...org,
Thomas Gleixner <tglx@...utronix.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...nel.org>,
Ville Syrjälä <ville.syrjala@...ux.intel.com>
Subject: Re: [LKP] [lkp] [x86/hweight] 65ea11ec6a: will-it-scale.per_process_ops 9.3% improvement
On August 25, 2016 4:45:06 AM PDT, Borislav Petkov <bp@...e.de> wrote:
>On Thu, Aug 25, 2016 at 03:05:19AM -0700, H. Peter Anvin wrote:
>> I'm wondering if one of those 23 invocations sets up some kind of
>> corrupt data that continues to get used.
>
>That could be one plausible explanation. Look at what calls
>__sw_hweight64:
>
>initmem_init
>numa_policy_init
>page_writeback_init
>paging_init
>pcpu_embed_first_chunk
>pcpu_setup_first_chunk
>sched_init
>set_rq_online.part.46
>setup_arch
>setup_per_cpu_areas
>update_sysctl
>x86_64_start_kernel
>x86_64_start_reservations
>x86_numa_init
>zone_sizes_init
>
>I could very well imagine per CPU areas or some sched structure or
>whatever getting silently corrupted.
>
>Thanks.
Either way, I think we can conclude that we probably did catch a real problem.
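
(For reference, __sw_hweight64 is the software popcount fallback that runs
until the popcnt alternative gets patched in. A minimal sketch of such a
fallback, using the standard parallel bit-counting trick rather than the
kernel's exact code, and with a made-up function name, would be roughly:

	static inline unsigned long sw_hweight64_sketch(unsigned long long w)
	{
		/* fold pairs, then nibbles, then bytes of set bits together */
		w -= (w >> 1) & 0x5555555555555555ULL;
		w  = (w & 0x3333333333333333ULL) + ((w >> 2) & 0x3333333333333333ULL);
		w  = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0fULL;
		/* sum the per-byte counts into the top byte and shift it down */
		return (w * 0x0101010101010101ULL) >> 56;
	}

A wrong result returned to any of the early callers listed above could end
up baked into boot-time data structures, which is what the silent-corruption
theory amounts to.)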
--
Sent from my Android device with K-9 Mail. Please excuse brevity and formatting.