Message-ID: <CALCETrVQ_NxcnDr4N-VqROrMJ2hUzMKgmxjxAZu9TFbznqSDcg@mail.gmail.com>
Date: Sat, 9 Jan 2016 09:57:25 -0800
From: Andy Lutomirski <luto@...capital.net>
To: Tony Luck <tony.luck@...il.com>
Cc: linux-nvdimm <linux-nvdimm@...1.01.org>,
Dan Williams <dan.j.williams@...el.com>,
Borislav Petkov <bp@...en8.de>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Robert <elliott@....com>, Ingo Molnar <mingo@...nel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>, X86 ML <x86@...nel.org>
Subject: Re: [PATCH v8 3/3] x86, mce: Add __mcsafe_copy()
On Sat, Jan 9, 2016 at 9:48 AM, Tony Luck <tony.luck@...il.com> wrote:
> On Fri, Jan 8, 2016 at 5:49 PM, Andy Lutomirski <luto@...capital.net> wrote:
>> On Jan 8, 2016 4:19 PM, "Tony Luck" <tony.luck@...el.com> wrote:
>>>
>>> Make use of the EXTABLE_FAULT exception table entries. This routine
>>> returns a structure to indicate the result of the copy:
>>
>> Perhaps this is silly, but could we make this feature depend on ERMS
>> and thus make the code a lot simpler?
>
> ERMS?
It's the fast string extension, aka Enhanced REP MOVSB/STOSB (the
ERMS CPUID feature bit). On CPUs with that feature (and not disabled
via MSR), plain ol' rep movs is the fastest way to copy bytes. I
think this includes all Intel CPUs from Sandy Bridge onwards.
--Andy
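
To make the suggestion concrete, here is a minimal userspace sketch of
the idea: probe the ERMS CPUID bit (leaf 7, subleaf 0, EBX bit 9) and,
when it is set, copy with a bare rep movsb. The struct mcsafe_ret
layout and the erms_copy() helper below are assumptions for
illustration only; the actual __mcsafe_copy() patch is kernel assembly
that additionally uses EXTABLE_FAULT exception-table entries to report
the trap number and the bytes left uncopied, which a userspace demo
cannot emulate.

/*
 * Hedged sketch, not the posted patch: check for ERMS and copy with a
 * plain "rep movsb".  The mcsafe_ret layout here is hypothetical.
 */
#include <cpuid.h>
#include <stdint.h>
#include <stdio.h>

struct mcsafe_ret {		/* hypothetical result layout */
	uint64_t trapnr;	/* always 0 here; userspace can't see #MC */
	uint64_t remain;	/* bytes left uncopied */
};

static int cpu_has_erms(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* CPUID.(EAX=7,ECX=0):EBX bit 9 is the ERMS flag. */
	if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
		return 0;
	return (ebx >> 9) & 1;
}

static struct mcsafe_ret erms_copy(void *dst, const void *src, size_t len)
{
	struct mcsafe_ret ret = { 0, 0 };

	/* With ERMS, a bare rep movsb is the fast path for any length. */
	asm volatile("rep movsb"
		     : "+D" (dst), "+S" (src), "+c" (len)
		     : : "memory");
	ret.remain = len;	/* rcx is 0 if the whole copy completed */
	return ret;
}

int main(void)
{
	char src[64] = "enhanced rep movsb demo";
	char dst[64] = { 0 };

	if (!cpu_has_erms()) {
		puts("no ERMS; a real implementation needs a fallback");
		return 0;
	}

	struct mcsafe_ret r = erms_copy(dst, src, sizeof(src));
	printf("remain=%llu dst=\"%s\"\n",
	       (unsigned long long)r.remain, dst);
	return 0;
}

The reason this could make the code "a lot simpler" is that with ERMS
the CPU handles alignment and short tails internally, so the
hand-unrolled word-by-word loops in the posted assembly could collapse
into a single rep movsb gated on the feature flag, much like the
kernel's existing copy_user_enhanced_fast_string path.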