Message-Id: <20091204100029.b703eaa0.kamezawa.hiroyu@jp.fujitsu.com>
Date: Fri, 4 Dec 2009 10:00:29 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Minchan Kim <minchan.kim@...il.com>
Cc: "linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
cl@...ux-foundation.org,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
yanmin_zhang@...ux.intel.com
Subject: Re: [RFC][mmotm][PATCH] percpu mm struct counter cache
On Fri, 4 Dec 2009 09:49:17 +0900
Minchan Kim <minchan.kim@...il.com> wrote:
> On Fri, Dec 4, 2009 at 9:18 AM, KAMEZAWA Hiroyuki
> <kamezawa.hiroyu@...fujitsu.com> wrote:
> > Making the read side of this counter slower means making ps or top slower.
> > IMO, ps and top are too slow already, and making them even slower is very bad.
>
> Also, we don't want to cause a regression on no-split-ptl-lock systems.
> Now, the tick update cost is zero on a no-split-ptl-lock system,
yes.
> but task-switching cost is slightly increased because of the compare instruction.
Ah,
+#ifdef USE_SPLIT_PTLOCKS
+extern void prepare_mm_switch(struct task_struct *prev,
+ struct task_struct *next);
+#else
+static inline void prepare_mm_switch(struct task_struct *prev,
+ struct task_struct *next)
+{
+}
+#endif
makes the cost zero there.
> As you know, task-switching is rather costly function.
yes.
> I mind additional overhead in the no-split-ptl lock system.
yes, that's the point.
> I think we can remove the overhead completely.
>
I have another version of this patch, which switches curr_mmc.mm
lazily in the page fault path. But it requires some complicated rules.
I'll try that again rather than adding hooks to the context switch.
BTW, I'm wondering whether to export "curr_mmc" to other files. Maybe
there is some more information that would be nice to cache per cpu+mm.
Thanks,
-Kame
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/