Date:	Fri, 11 Dec 2009 10:26:29 +0900
From:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To:	Minchan Kim <minchan.kim@...il.com>
Cc:	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	cl@...ux-foundation.org,
	"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
	mingo@...e.hu
Subject: Re: [RFC mm][PATCH 2/5] percpu cached mm counter

On Fri, 11 Dec 2009 10:25:03 +0900
Minchan Kim <minchan.kim@...il.com> wrote:

> On Fri, Dec 11, 2009 at 9:51 AM, KAMEZAWA Hiroyuki
> <kamezawa.hiroyu@...fujitsu.com> wrote:
> > On Fri, 11 Dec 2009 09:40:07 +0900
> > Minchan Kim <minchan.kim@...il.com> wrote:
> >> > static inline unsigned long get_mm_counter(struct mm_struct *mm, int member)
> >> >  {
> >> > -       return (unsigned long)atomic_long_read(&(mm)->counters[member]);
> >> > +       long ret;
> >> > +       /*
> >> > +        * Because this counter is loosely synchronized with percpu cached
> >> > +        * information, it's possible that value gets to be minus. For user's
> >> > +        * convenience/sanity, avoid returning minus.
> >> > +        */
> >> > +       ret = atomic_long_read(&(mm)->counters[member]);
> >> > +       if (unlikely(ret < 0))
> >> > +               return 0;
> >> > +       return (unsigned long)ret;
> >> >  }
> >>
> >> Now, your sync point is only at task switch time.
> >> So we can't show the exact number if a lot of mm counting happens
> >> in a short time (i.e., before a context switch).
> >> Doesn't that matter?
> >>
> > I think it's not a problem, for two reasons.
> >
> > 1. Consider servers which require continuous memory usage monitoring
> > via ps/top: when there are 2000 processes, "ps -elf" takes 0.8 sec.
> > Because system admins know that gathering process information consumes
> > some amount of cpu resource, they will not do it very frequently. (I hope.)
> >
> > 2. When chains of page faults occur continuously over a period, a memory
> > usage monitor only sees a snapshot of the current numbers, and which moment
> > that snapshot reflects is always random. No one can get a precise number in
> > that kind of situation.
> >
> 
> Yes. I understand that.
> 
> But we have been doing rss updates in batches until now, so it was
> already stale. Your patch just makes the stale period longer.
> Hmm. I hope people don't expect the mm counters to be precise.
> 
I hope so, too...
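
To make the timing concrete, below is a rough userspace sketch of the scheme
(the names mm_counter_cache, add_mm_counter_fast and flush_mm_counters are
made up for illustration here, not the real kernel symbols): each task
accumulates deltas in a local cache and folds them into the shared atomics
only at the sync point, so a reader can transiently see a negative sum,
which is why get_mm_counter() clamps to 0 in the hunk quoted above.

#include <stdatomic.h>
#include <stdio.h>

#define MM_FILEPAGES	0
#define NR_MM_COUNTERS	1

struct mm_struct {
	atomic_long counters[NR_MM_COUNTERS];	/* shared, "official" counters */
};

/* Per-task cache: deltas accumulated without touching the shared atomics. */
struct mm_counter_cache {
	long delta[NR_MM_COUNTERS];
};

/* Fast path (page fault): bump only the task-local cache. */
static void add_mm_counter_fast(struct mm_counter_cache *cache, int member, long v)
{
	cache->delta[member] += v;
}

/* Sync point (context switch in the patch): fold the cached deltas in. */
static void flush_mm_counters(struct mm_struct *mm, struct mm_counter_cache *cache)
{
	int i;

	for (i = 0; i < NR_MM_COUNTERS; i++) {
		if (cache->delta[i]) {
			atomic_fetch_add(&mm->counters[i], cache->delta[i]);
			cache->delta[i] = 0;
		}
	}
}

/* Reader: the loosely synchronized sum can be negative; hide that from users. */
static unsigned long get_mm_counter(struct mm_struct *mm, int member)
{
	long ret = atomic_load(&mm->counters[member]);

	return ret < 0 ? 0 : (unsigned long)ret;
}

int main(void)
{
	struct mm_struct mm;
	struct mm_counter_cache cache = { .delta = { 0 } };
	int i;

	for (i = 0; i < NR_MM_COUNTERS; i++)
		atomic_init(&mm.counters[i], 0);

	add_mm_counter_fast(&cache, MM_FILEPAGES, 3);	/* charge, cached only */
	printf("before flush: %lu\n", get_mm_counter(&mm, MM_FILEPAGES));

	flush_mm_counters(&mm, &cache);			/* the "context switch" */
	printf("after flush:  %lu\n", get_mm_counter(&mm, MM_FILEPAGES));

	/*
	 * Another task's already-flushed "uncharge" can land while our own
	 * "charge" is still cached, driving the shared sum negative.
	 */
	atomic_fetch_sub(&mm.counters[MM_FILEPAGES], 5);
	printf("negative sum, clamped: %lu\n", get_mm_counter(&mm, MM_FILEPAGES));
	return 0;
}

Built with gcc -std=c11, this should print 0 before the flush, 3 after it,
and 0 again for the clamped negative case; the window between the fast-path
update and the flush is exactly the staleness we are talking about.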

> I have seen many people on embedded systems believe that a snapshot of
> the mm counters is the real value.
> They want to know the exact memory usage of the system.
> Maybe embedded systems don't use SPLIT_LOCK, so there is no regression there.
> 
> At least, I would like to add a comment like "This is not a precise value."
> to statm's Documentation.

Ok, I will do that.

> Of course, it's off topic.  :)
> 
> Thanks for commenting. Kame.

Thank you for review.

Regards,
-Kame

