Message-Id: <20091216084859.a93c9727.minchan.kim@barrios-desktop>
Date:	Wed, 16 Dec 2009 08:48:59 +0900
From:	Minchan Kim <minchan.kim@...il.com>
To:	Christoph Lameter <cl@...ux-foundation.org>
Cc:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
	minchan.kim@...il.com, Lee.Schermerhorn@...com
Subject: Re: [mmotm][PATCH 2/5] mm : avoid  false sharing on mm_counter

Hi, Christoph. 

On Tue, 15 Dec 2009 09:25:01 -0600 (CST)
Christoph Lameter <cl@...ux-foundation.org> wrote:

> On Tue, 15 Dec 2009, KAMEZAWA Hiroyuki wrote:
> 
> >  #if USE_SPLIT_PTLOCKS
> > +#define SPLIT_RSS_COUNTING
> >  struct mm_rss_stat {
> >  	atomic_long_t count[NR_MM_COUNTERS];
> >  };
> > +/* per-thread cached information, */
> > +struct task_rss_stat {
> > +	int events;	/* for synchronization threshold */
> 
> Why count events? Just always increment the task counters and fold them
> at appropriate points into mm_struct. Or get rid of the mm_struct counters
> and only sum them up on the fly if needed?

We are struggling to find the "appropriate points" you mention.
That's because we want to remove the read-side overhead without any regression.
I think that's why Kame removed the schedule-time update hook.

Although the hook adds almost no overhead, I don't want the mm counters to
become stale, because their accuracy would then depend on hitting a schedule point.
In an extreme case, if a process takes many faults within its time slice and
is never preempted (e.g. an RT task), we could show stale counters.
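
Just to illustrate what I mean, a schedule-time fold would look roughly
like the hypothetical sketch below (not Kame's actual patch; it assumes
task_rss_stat also carries a count[NR_MM_COUNTERS] array next to 'events'):

/*
 * Hypothetical sketch: fold the per-thread cached deltas into mm_struct
 * at a schedule point.  If a task runs for a long time without reaching
 * this point (e.g. an RT task that is never preempted), the mm-wide
 * counters stay stale for that whole time.
 */
static void fold_task_rss(struct task_struct *task)
{
	struct mm_struct *mm = task->mm;
	int i;

	if (!mm)
		return;

	for (i = 0; i < NR_MM_COUNTERS; i++) {
		if (task->rss_stat.count[i]) {
			atomic_long_add(task->rss_stat.count[i],
					&mm->rss_stat.count[i]);
			task->rss_stat.count[i] = 0;
		}
	}
}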

But with the events counter, merging the counters is consistent and bounded:
in the worst case the cached values lag by 64 events.
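
Roughly, the fault-side fast path could look like the hypothetical sketch
below (names such as TASK_RSS_EVENTS_THRESH and add_rss_counter_fast are
illustrative only; the real patch may differ in detail):

/* Sync the per-thread cache into mm_struct at most every 64 events. */
#define TASK_RSS_EVENTS_THRESH	64

static void add_rss_counter_fast(struct mm_struct *mm, int member, int val)
{
	struct task_struct *task = current;

	if (likely(task->mm == mm)) {
		task->rss_stat.count[member] += val;
		if (unlikely(++task->rss_stat.events >= TASK_RSS_EVENTS_THRESH)) {
			fold_task_rss(task);	/* same fold as above */
			task->rss_stat.events = 0;
		}
	} else {
		/* Touching another task's mm: update mm_struct directly. */
		atomic_long_add(val, &mm->rss_stat.count[member]);
	}
}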

In this respect, I like this idea.

-- 
Kind regards,
Minchan Kim
