Message-Id: <1268439313.2793.148.camel@sbs-t61.sc.intel.com>
Date: Fri, 12 Mar 2010 16:15:13 -0800
From: Suresh Siddha <suresh.b.siddha@...el.com>
To: Robin Holt <holt@....com>
Cc: Ingo Molnar <mingo@...hat.com>, "H. Peter Anvin" <hpa@...or.com>,
Thomas Gleixner <tglx@...utronix.de>,
"Pallipadi, Venkatesh" <venkatesh.pallipadi@...il.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"x86@...nel.org" <x86@...nel.org>
Subject: Re: [Patch] x86,pat Update the page flags for memtype atomically
instead of using memtype_lock.
On Thu, 2010-03-11 at 08:17 -0800, Robin Holt wrote:
> While testing an application using the xpmem (out of kernel) driver, we
> noticed a significantly lower page fault rate on x86_64 than on ia64.
> For one test running with 32 cpus, one thread per cpu, it took 01:08
> for each of the threads to vm_insert_pfn 2GB worth of pages. For the
> same test running on 256 cpus, one thread per cpu, it took 14:48 to
> vm_insert_pfn 2GB worth of pages.
>
> The slowdown was tracked down to lookup_memtype(), which acquires the
> memtype_lock spinlock. This heavily contended lock was slowing down
> vm_insert_pfn().
>
> With the cmpxchg on page->flags method, both the 32 cpu and 256 cpu
> cases take approx 00:01.3 seconds to complete.
>
>
> To: Ingo Molnar <mingo@...hat.com>
> To: H. Peter Anvin <hpa@...or.com>
> To: Thomas Gleixner <tglx@...utronix.de>
> Signed-off-by: Robin Holt <holt@....com>
> Cc: Venkatesh Pallipadi <venkatesh.pallipadi@...el.com>
> Cc: Suresh Siddha <suresh.b.siddha@...el.com>
> Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
> Cc: x86@...nel.org
>
> ---
>
> Changes since -V1:
> 1) Introduce atomically setting and clearing the page flags and not
> using the global memtype_lock to protect page->flags.
>
> 2) This gave me the opportunity to convert the rwlock back into a
> spinlock without affecting _MY_ tests' performance, as all the pages my
> test was using are tracked by struct pages.
Can you also send that spinlock-to-rwlock conversion, which can still
help for non-RAM pages, as a second patch?
Also, this patch doesn't apply to tip/master because of recent rbtree
changes in tip. Can you please send an updated patch?
> +#define _PGMT_DEFAULT 0
> +#define _PGMT_WC PG_arch_1
> +#define _PGMT_UC_MINUS PG_uncached
> +#define _PGMT_WB (PG_uncached | PG_arch_1)
> +#define _PGMT_MASK (~(PG_uncached | PG_arch_1))
> +
> static inline unsigned long get_page_memtype(struct page *pg)
> {
> - if (!PageUncached(pg) && !PageWC(pg))
> + unsigned long pg_flags = pg->flags & (PG_uncached | PG_arch_1);
> +
> + if (pg_flags == _PGMT_DEFAULT)
> return -1;
> - else if (!PageUncached(pg) && PageWC(pg))
> + else if (pg_flags == _PGMT_WC)
> return _PAGE_CACHE_WC;
> - else if (PageUncached(pg) && !PageWC(pg))
> + else if (pg_flags == _PGMT_UC_MINUS)
> return _PAGE_CACHE_UC_MINUS;
> else
> return _PAGE_CACHE_WB;
> @@ -72,25 +76,26 @@ static inline unsigned long get_page_mem
>
> static inline void set_page_memtype(struct page *pg, unsigned long memtype)
> {
> + unsigned long memtype_flags = _PGMT_DEFAULT;
> + unsigned long old_flags;
> + unsigned long new_flags;
> +
> switch (memtype) {
> case _PAGE_CACHE_WC:
> - ClearPageUncached(pg);
> - SetPageWC(pg);
> + memtype_flags = _PGMT_WC;
> break;
> case _PAGE_CACHE_UC_MINUS:
> - SetPageUncached(pg);
> - ClearPageWC(pg);
> + memtype_flags = _PGMT_UC_MINUS;
> break;
> case _PAGE_CACHE_WB:
> - SetPageUncached(pg);
> - SetPageWC(pg);
> - break;
> - default:
> - case -1:
> - ClearPageUncached(pg);
> - ClearPageWC(pg);
> + memtype_flags = _PGMT_WB;
For the WB case it should be _PGMT_WB, and for the "-1" case it should
be _PGMT_DEFAULT: when a page is freed we mark it _PGMT_DEFAULT, and
only an explicit request to mark it WB sets _PGMT_WB.
Other than that it looks good to me.
thanks,
suresh