Message-Id: <1258450465.11321.36.camel@localhost>
Date: Tue, 17 Nov 2009 17:34:25 +0800
From: "Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
To: Christoph Lameter <cl@...ux-foundation.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
"hugh.dickins@...cali.co.uk" <hugh.dickins@...cali.co.uk>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
akpm@...ux-foundation.org, Tejun Heo <tj@...nel.org>,
Andi Kleen <andi@...stfloor.org>
Subject: Re: [MM] Make mm counters per cpu instead of atomic
On Tue, 2009-11-17 at 15:31 +0800, Zhang, Yanmin wrote:
> On Tue, 2009-11-17 at 14:48 +0800, Zhang, Yanmin wrote:
> > On Wed, 2009-11-04 at 14:14 -0500, Christoph Lameter wrote:
> > > From: Christoph Lameter <cl@...ux-foundation.org>
> > > Subject: Make mm counters per cpu
> > >
> > > Changing the mm counters to per cpu counters is possible after the introduction
> > > of the generic per cpu operations (currently in percpu and -next).
> > >
> > > With that the contention on the counters in mm_struct can be avoided. The
> > > USE_SPLIT_PTLOCKS case distinction can go away. Larger SMP systems do not
> > > need to perform atomic updates to mm counters anymore. Various code paths
> > > can be simplified since per cpu counter updates are fast and batching
> > > of counter updates is no longer needed.
> > >
> > > One price to pay for these improvements is the need to scan over all percpu
> > > counters when the actual count values are needed.
> > >
> > > Signed-off-by: Christoph Lameter <cl@...ux-foundation.org>
> > >
> > > ---
> > > fs/proc/task_mmu.c | 14 +++++++++-
> > > include/linux/mm_types.h | 16 ++++--------
> > > include/linux/sched.h | 61 ++++++++++++++++++++---------------------------
> > > kernel/fork.c | 25 ++++++++++++++-----
> > > mm/filemap_xip.c | 2 -
> > > mm/fremap.c | 2 -
> > > mm/init-mm.c | 3 ++
> > > mm/memory.c | 20 +++++++--------
> > > mm/rmap.c | 10 +++----
> > > mm/swapfile.c | 2 -
> > > 10 files changed, 84 insertions(+), 71 deletions(-)
> > >
> > > Index: linux-2.6/include/linux/mm_types.h
> > > ===================================================================
> > > --- linux-2.6.orig/include/linux/mm_types.h 2009-11-04 13:08:33.000000000 -0600
> > > +++ linux-2.6/include/linux/mm_types.h 2009-11-04 13:13:42.000000000 -0600
> > > @@ -24,11 +24,10 @@ struct address_space;
> >
> > > Index: linux-2.6/kernel/fork.c
> > > ===================================================================
> > > --- linux-2.6.orig/kernel/fork.c 2009-11-04 13:08:33.000000000 -0600
> > > +++ linux-2.6/kernel/fork.c 2009-11-04 13:14:19.000000000 -0600
> > > @@ -444,6 +444,8 @@ static void mm_init_aio(struct mm_struct
> > >
> > > static struct mm_struct * mm_init(struct mm_struct * mm, struct task_struct *p)
> > > {
> > > + int cpu;
> > > +
> > > atomic_set(&mm->mm_users, 1);
> > > atomic_set(&mm->mm_count, 1);
> > > init_rwsem(&mm->mmap_sem);
> > > @@ -452,8 +454,11 @@ static struct mm_struct * mm_init(struct
> > > (current->mm->flags & MMF_INIT_MASK) : default_dump_filter;
> > > mm->core_state = NULL;
> > > mm->nr_ptes = 0;
> > > - set_mm_counter(mm, file_rss, 0);
> > > - set_mm_counter(mm, anon_rss, 0);
> > > + for_each_possible_cpu(cpu) {
> > > + struct mm_counter *m;
> > > +
> > > + memset(m, sizeof(struct mm_counter), 0);
> > The memset above is wrong:
> > 1) m isn't initialized;
> > 2) the 2nd and 3rd parameters should be interchanged.
> Changing it as below fixes the command hang issue.
>
> for_each_possible_cpu(cpu) {
> struct mm_counter *m = per_cpu(mm->rss->readers, cpu);
>
> memset(m, 0, sizeof(struct mm_counter));
> }
Sorry, I was too optimistic: I had booted another kernel by mistake.
The right change above should be:
	struct mm_counter *m = per_cpu_ptr(mm->rss, cpu);
The compiler doesn't report an error or warning whichever member I use.
Even with this change, 'make oldconfig' and a boot command still hang.
Yanmin