Message-ID: <AANLkTil-PxC3C8cuPsZG6hKNzzq9MmKNtcY1IhnDy0OU@mail.gmail.com>
Date: Tue, 18 May 2010 13:54:12 -0700
From: Venkatesh Pallipadi <venkatesh.pallipadi@...il.com>
To: "H. Peter Anvin" <h.peter.anvin@...el.com>
Cc: Dave Airlie <airlied@...il.com>,
Pauli Nieminen <suokkos@...il.com>,
LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
Suresh Siddha <suresh.b.siddha@...el.com>
Subject: Re: [PATCH 5/7] arch/x86: Add array variants for setting memory to wc caching.

On Tue, May 18, 2010 at 9:43 AM, H. Peter Anvin <h.peter.anvin@...el.com> wrote:
> On 05/18/2010 02:34 AM, Dave Airlie wrote:
>> On Thu, Apr 1, 2010 at 10:45 PM, Pauli Nieminen <suokkos@...il.com> wrote:
>>> Setting single memory pages to wc one at a time spends a lot of time in cache flushes. To
>>> reduce the number of cache flushes, set_pages_array_wc and set_memory_array_wc can be
>>> used to set multiple pages to WC with a single cache flush.
>>>
>>> This improves allocation performance for wc cached pages in drm/ttm.
>>>
>>
>> I've had this in drm-next for quite a while and almost forgot about
>> it. I'm meant to be on holidays and I'd really like to just have Linus
>> pull my tree.
>>
>> I had only one issue with this, as we had some problems with doing it
>> before, but it looks like they've since been fixed in the x86 PAT code
>> a kernel or two ago, so this patch should be fine now.
>>
>> It's been well tested in drm-next on AGP machines by the author.
>>
>> any objections to this?
>>
>> Dave.
>
> Acked-by: H. Peter Anvin <hpa@...or.com>
>
> Go ahead and push it; the patch is straightforward, and the author
> (Venki) is reliable.
>
> -hpa
>
> P.S. Please Cc: all the x86 maintainers, not just Ingo.
>
The patch is actually from Pauli.
Looks good.
Acked-by: Venkatesh Pallipadi <venki@...gle.com>
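
For anyone reading along who hasn't seen the interface: the patch adds
set_pages_array_wc() alongside the existing set_pages_array_uc()/_wb()
helpers, so a driver can convert a whole page array to write-combining
with one cache flush instead of flushing per page. A rough before/after
sketch follows (my own illustration, not code from the patch; the
example_* names are made up, and it assumes the signature
set_pages_array_wc(struct page **, int) the patch adds):

#include <linux/mm.h>
#include <asm/cacheflush.h>

/* Old path: one set_memory_wc() call, and one cache flush, per page. */
static int example_pages_to_wc_slow(struct page **pages, int npages)
{
	int i, ret;

	for (i = 0; i < npages; i++) {
		ret = set_memory_wc((unsigned long)page_address(pages[i]), 1);
		if (ret)
			return ret;
	}
	return 0;
}

/* New path: convert the whole array with a single cache flush. */
static int example_pages_to_wc_fast(struct page **pages, int npages)
{
	return set_pages_array_wc(pages, npages);
}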