Message-ID: <48AC8F69.4050201@keyaccess.nl>
Date: Wed, 20 Aug 2008 23:40:57 +0200
From: Rene Herman <rene.herman@...access.nl>
To: Venki Pallipadi <venkatesh.pallipadi@...el.com>
CC: Ingo Molnar <mingo@...e.hu>, Dave Airlie <airlied@...il.com>,
"Li, Shaohua" <shaohua.li@...el.com>,
Yinghai Lu <yhlu.kernel@...il.com>,
Andreas Herrmann <andreas.herrmann3@....com>,
Arjan van de Ven <arjan@...radead.org>,
Linux Kernel <linux-kernel@...r.kernel.org>,
"Siddha, Suresh B" <suresh.b.siddha@...el.com>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>,
Dave Jones <davej@...emonkey.org.uk>
Subject: Re: AGP and PAT (induced?) problem (on AMD family 6)
On 20-08-08 21:41, Venki Pallipadi wrote:
> OK. I have reproduced this list size issue locally and this order 1
> allocation and set_memory_uc on that allocation is actually coming
> from agp_allocate_memory() -> agp_generic_alloc_page() ->
> map_page_into_agp(). agp_allocate_memory() breaks higher order page
> requests into order 1 allocs.
>
> On my system I see multiple agp_allocate_memory requests for nrpages
> 8841, 1020, 16, 2160, 2160, 8192. Together they end up resulting in
> more than 22K entries in PAT pages.
Okay, thanks for the confirmation.
Now, how to fix...
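To restate the problem in code, the per-page pattern behind
agp_generic_alloc_page() looks roughly like this (a simplified sketch
from memory, not the literal drivers/char/agp code; on x86
map_page_into_agp() ends up doing a per-page set_pages_uc()):

#include <linux/gfp.h>
#include <linux/mm.h>
#include <asm/agp.h>	/* map_page_into_agp() */

/*
 * Sketch only: every single page gets its own uncached attribute
 * change, so a request for nrpages pages leaves nrpages separate
 * small entries on the PAT list instead of one large region.
 */
static struct page *sketch_agp_alloc_one_page(void)
{
	struct page *page = alloc_page(GFP_KERNEL | GFP_DMA32 | __GFP_ZERO);

	if (!page)
		return NULL;

	map_page_into_agp(page);	/* one PAT entry per page */
	get_page(page);
	return page;
}

Repeat that for the 8841 + 1020 + 16 + 2160 + 2160 + 8192 pages you
saw and the 22K+ PAT entries follow directly.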
Firstly, it seems we can conclude that any expectation of a short PAT
list is simply destroyed by AGP. I believe the best thing might be to
look into "fixing" AGP rather than PAT for now?
In a sense the entire purpose of the AGP GART is collecting
non-contiguous pages, but given that in practice it's generally still
just one or at most a few regions, going to multi-page allocs sounds
most appetising to me.
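Very roughly, and purely as a sketch of the direction (the helper name
and the fallback policy are made up here; set_pages_uc() is the
existing x86 interface), a multi-page alloc would let PAT track one
region per chunk instead of one per page:

#include <linux/gfp.h>
#include <asm/cacheflush.h>	/* set_pages_uc() */

/*
 * Hypothetical sketch: grab a whole order-N chunk and flip it to
 * uncached in a single call, so the PAT code only has to track one
 * region for the chunk.
 */
static struct page *sketch_agp_alloc_pages(unsigned int order)
{
	struct page *page = alloc_pages(GFP_KERNEL | GFP_DMA32 | __GFP_ZERO,
					order);

	if (!page)
		return NULL;

	set_pages_uc(page, 1 << order);	/* one PAT entry per chunk */
	return page;
}

Falling back to single pages when the higher-order allocation fails
would keep the current behaviour as the worst case.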
All in-tree AGP drivers except sgi-agp use agp_generic_alloc_page(),
with ali using it via m1541_alloc_page() and i460 via
i460_alloc_page().
Rene.