Message-ID: <CA+55aFy4oYis6HTu7o4YwiFawRtDOPO=87v8oHZdTFS+BjnA8g@mail.gmail.com>
Date: Sat, 11 Jun 2016 18:02:57 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: "Huang, Ying" <ying.huang@...el.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Rik van Riel <riel@...hat.com>,
Michal Hocko <mhocko@...e.com>,
LKML <linux-kernel@...r.kernel.org>,
Michal Hocko <mhocko@...nel.org>,
Minchan Kim <minchan@...nel.org>,
Vinayak Menon <vinmenon@...eaurora.org>,
Mel Gorman <mgorman@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>, LKP <lkp@...org>
Subject: Re: [LKP] [lkp] [mm] 5c0a85fad9: unixbench.score -6.3% regression
On Sat, Jun 11, 2016 at 5:49 PM, Huang, Ying <ying.huang@...el.com> wrote:
>
> From the perf profile, the time spent in page_fault and its children
> functions is almost the same (7.85% vs 7.81%). So the time spent in the
> page fault and page table operations themselves didn't change much. Do
> you mean the CPU may be slower to load the page table entry into the
> TLB if the accessed bit is not set?
So the CPU does take a microfault internally when it needs to set the
accessed/dirty bit. It's not architecturally visible, but you can see
it when you do timing loops.
I've timed it at over a thousand cycles on at least some CPUs, but
that's still peanuts compared to a real page fault. It shouldn't be
*that* noticeable, i.e. no way it's a 6% regression on its own.
Linus