Message-ID: <20080819232801.GA24610@linux-os.sc.intel.com>
Date: Tue, 19 Aug 2008 16:28:01 -0700
From: Venki Pallipadi <venkatesh.pallipadi@...el.com>
To: Rene Herman <rene.herman@...access.nl>
Cc: "Pallipadi, Venkatesh" <venkatesh.pallipadi@...el.com>,
Ingo Molnar <mingo@...e.hu>, Dave Airlie <airlied@...il.com>,
"Li, Shaohua" <shaohua.li@...el.com>,
Yinghai Lu <yhlu.kernel@...il.com>,
Andreas Herrmann <andreas.herrmann3@....com>,
Arjan van de Ven <arjan@...radead.org>,
Linux Kernel <linux-kernel@...r.kernel.org>,
"Siddha, Suresh B" <suresh.b.siddha@...el.com>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>
Subject: Re: AGP and PAT (induced?) problem (on AMD family 6)
On Tue, Aug 19, 2008 at 12:22:10PM -0700, Rene Herman wrote:
> On 19-08-08 21:07, Venki Pallipadi wrote:
>
> > On Tue, Aug 19, 2008 at 07:19:44AM -0700, Rene Herman wrote:
>
> >> I believe the 14 seconds for first shutdown to 5 later might be
> >> telling. Sounds like something might have fixed up uncached
> >> entries.
> >>
> >> I'd really like a reply from the AGP or PAT side right about now.
> >
> > Hmm. Looks like there are more than 16000 entries in the PAT list!
> >
> > This delay may be due to the overhead of parsing this linked list
> > every time for a new entry, rather than any problem with the cache
> > setting itself.
> >
> > I am working on a patch to optimize this pat list parsing for the
> > simple case. Should be able to send it out later today, for testing.
>
> Thanks for the reply. It's with 64MB of AGP memory, which I guess is at
> the low end these days. Would your reply mean that basically everyone on
> 2.6.27 should now be experiencing this?
>
> I noticed it was PAT related due to Shaohua Li's:
>
> http://marc.info/?l=linux-kernel&m=121783222306075&w=2
>
> which lists very different times (patch there did not help any).
>
> As another aside: probably not surprising, but I earlier also tried
> both unmounting and completely compiling out debugfs, just in case I
> was seeing a debugging-related symptom. No help either.
>
> It's evening here so I'll probably not be able to test until tomorrow.
>
Below is the patch I am testing. Let me know if it helps.
Thanks,
Venki
Test patch. Adds a cached_entry to the list-add routine, in order to speed up
the lookup for sequential reserve_memtype calls.
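For illustration only, here is a minimal user-space sketch of the cached_entry
idea, assuming a sorted range list like the one in pat.c (the names struct
range, insert_range and is_sorted are my own, not taken from the kernel
source): the cache remembers the last insertion point, so sequential
reserve_memtype-style calls resume scanning there instead of from the list
head.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for the kernel's struct memtype: an address
 * range [start, end) kept on a circular doubly linked list that is
 * sorted by start address. */
struct range {
	unsigned long long start, end;
	struct range *prev, *next;
};

static struct range head = { 0, 0, &head, &head };	/* list sentinel */
static struct range *cached;			/* last inserted entry */
static unsigned long long cached_start;

/* Insert a new range, resuming the scan from the cached entry when the
 * new start is at or past it.  For sequential insertions (the common
 * AGP/ioremap pattern) this turns an O(n) walk into an O(1) append. */
static void insert_range(unsigned long long start, unsigned long long end)
{
	struct range *pos = (cached && start >= cached_start) ? cached : &head;
	struct range *n = malloc(sizeof(*n));

	if (!n)
		return;
	n->start = start;
	n->end = end;

	/* advance until the following entry starts at or after us */
	while (pos->next != &head && pos->next->start < start)
		pos = pos->next;

	n->prev = pos;
	n->next = pos->next;
	pos->next->prev = n;
	pos->next = n;

	cached = n;		/* valid resume point: list stays sorted */
	cached_start = start;
}

/* Sanity check: the list must remain sorted by start address. */
static int is_sorted(void)
{
	for (struct range *r = head.next; r->next != &head; r = r->next)
		if (r->start > r->next->start)
			return 0;
	return 1;
}
```

Note the sketch omits the cache invalidation that the kernel patch does in
free_memtype() when the cached entry is deleted.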
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@...el.com>
---
arch/x86/mm/pat.c | 33 +++++++++++++++++++++++++++++++--
1 file changed, 31 insertions(+), 2 deletions(-)
Index: linux-2.6/arch/x86/mm/pat.c
===================================================================
--- linux-2.6.orig/arch/x86/mm/pat.c 2008-08-19 15:21:07.000000000 -0700
+++ linux-2.6/arch/x86/mm/pat.c 2008-08-19 16:00:52.000000000 -0700
@@ -207,6 +207,9 @@ static int chk_conflict(struct memtype *
return -EBUSY;
}
+static struct memtype *cached_entry;
+static u64 cached_start;
+
/*
* req_type typically has one of the:
* - _PAGE_CACHE_WB
@@ -280,11 +283,17 @@ int reserve_memtype(u64 start, u64 end,
spin_lock(&memtype_lock);
+ if (cached_entry && start >= cached_start)
+ entry = cached_entry;
+ else
+ entry = list_entry(&memtype_list, struct memtype, nd);
+
/* Search for existing mapping that overlaps the current range */
where = NULL;
- list_for_each_entry(entry, &memtype_list, nd) {
+ list_for_each_entry_continue(entry, &memtype_list, nd) {
if (end <= entry->start) {
where = entry->nd.prev;
+ cached_entry = list_entry(where, struct memtype, nd);
break;
} else if (start <= entry->start) { /* end > entry->start */
err = chk_conflict(new, entry, new_type);
@@ -292,6 +301,8 @@ int reserve_memtype(u64 start, u64 end,
dprintk("Overlap at 0x%Lx-0x%Lx\n",
entry->start, entry->end);
where = entry->nd.prev;
+ cached_entry = list_entry(where,
+ struct memtype, nd);
}
break;
} else if (start < entry->end) { /* start > entry->start */
@@ -299,7 +310,20 @@ int reserve_memtype(u64 start, u64 end,
if (!err) {
dprintk("Overlap at 0x%Lx-0x%Lx\n",
entry->start, entry->end);
- where = &entry->nd;
+ cached_entry = list_entry(entry->nd.prev,
+ struct memtype, nd);
+
+ /*
+ * Move to right position in the linked
+ * list to add this new entry
+ */
+ list_for_each_entry_continue(entry,
+ &memtype_list, nd) {
+ if (start <= entry->start) {
+ where = entry->nd.prev;
+ break;
+ }
+ }
}
break;
}
@@ -314,6 +338,8 @@ int reserve_memtype(u64 start, u64 end,
return err;
}
+ cached_start = start;
+
if (where)
list_add(&new->nd, where);
else
@@ -343,6 +369,9 @@ int free_memtype(u64 start, u64 end)
spin_lock(&memtype_lock);
list_for_each_entry(entry, &memtype_list, nd) {
if (entry->start == start && entry->end == end) {
+ if (cached_entry == entry || cached_start == start)
+ cached_entry = NULL;
+
list_del(&entry->nd);
kfree(entry);
err = 0;
--