Message-Id: <20090910101051.9CCC.A69D9226@jp.fujitsu.com>
Date:	Thu, 10 Sep 2009 10:15:07 +0900 (JST)
From:	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
To:	Minchan Kim <minchan.kim@...il.com>
Cc:	kosaki.motohiro@...fujitsu.com,
	Christoph Lameter <cl@...ux-foundation.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Mike Galbraith <efault@....de>, Ingo Molnar <mingo@...e.hu>,
	linux-mm <linux-mm@...ck.org>,
	Oleg Nesterov <onestero@...hat.com>,
	lkml <linux-kernel@...r.kernel.org>
Subject: Re: [rfc] lru_add_drain_all() vs isolation

> On Thu, 10 Sep 2009 08:58:20 +0900 (JST)
> KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com> wrote:
> 
> > > On Wed, Sep 9, 2009 at 1:27 PM, KOSAKI Motohiro
> > > <kosaki.motohiro@...fujitsu.com> wrote:
> > > >> The usefulness of a scheme like this requires:
> > > >>
> > > >> 1. There are cpus that continually execute user space code
> > > >>    without system interaction.
> > > >>
> > > >> 2. There are repeated VM activities that require page isolation /
> > > >>    migration.
> > > >>
> > > >> The first page isolation activity will then clear the lru caches of the
> > > >> processes doing number crunching in user space (and therefore the first
> > > >> isolation will still interrupt). The second and following isolation will
> > > >> then no longer interrupt the processes.
> > > >>
> > > >> 2. is rare. So the question is if the additional code in the LRU handling
> > > >> can be justified. If lru handling is not time sensitive then yes.
> > > >
> > > > Christoph, I'd like to discuss a slightly related (and almost unrelated) thing.
> > > > I think page migration doesn't need lru_add_drain_all() to be synchronous,
> > > > because page migration retries up to 10 times.
> > > >
> > > > Then an asynchronous lru_add_drain_all() causes:
> > > >
> > > >  - if the system isn't under heavy pressure, the retry succeeds.
> > > >  - if the system is under heavy pressure, or an RT thread is spinning in a
> > > >    busy loop, the retry fails.
> > > >
> > > > I don't think this is problematic behavior. Also, mlock can use an
> > > > asynchronous lru drain.
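
To be concrete, a minimal sketch of what I mean by an asynchronous drain.
lru_add_drain_all_async() is a made-up name; it follows today's
lru_add_drain_all(), minus the wait in schedule_on_each_cpu(), and it
ignores the race of re-arming a still-pending work item:

static void lru_drain_work_fn(struct work_struct *dummy)
{
	/* drain this cpu's lru_add_pvecs / lru_rotate_pvecs */
	lru_add_drain();
}

static DEFINE_PER_CPU(struct work_struct, lru_drain_work);

int lru_add_drain_all_async(void)
{
	int cpu;

	get_online_cpus();
	for_each_online_cpu(cpu) {
		struct work_struct *work = &per_cpu(lru_drain_work, cpu);

		INIT_WORK(work, lru_drain_work_fn);
		/*
		 * Queue the drain but don't flush_work(): callers such
		 * as migration retry anyway if a page was missed.
		 */
		schedule_work_on(cpu, work);
	}
	put_online_cpus();
	return 0;
}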
> > > 
> > > I think, more precisely, we don't have to drain lru pages for mlocking
> > > at all. Mlocked pages will move to the unevictable lru via try_to_unmap
> > > when lru shrinking happens.
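
(For reference, the cull path Minchan means, paraphrased and simplified
from mm/vmscan.c::shrink_page_list(); not a verbatim quote:)

	/*
	 * A freshly mlocked page left in some cpu's pagevec later
	 * drains to the regular lru.  When reclaim scans it there,
	 * try_to_unmap() sees the VM_LOCKED vma and reports
	 * SWAP_MLOCK, so the page is culled to the unevictable
	 * list instead of being reclaimed.
	 */
	switch (try_to_unmap(page, 0)) {
	case SWAP_MLOCK:
		goto cull_mlocked;	/* putback_lru_page() -> unevictable */
	/* other return values omitted */
	}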
> > 
> > Right.
> > 
> > > How about removing the drain in the mlock case?
> > 
> > Umm, I don't like this, because never draining at all often makes for
> > strange test results. I mean, /proc/meminfo::Mlocked might show an
> > unexpected value. It is not a leak, only a lazy cull, but many testers
> > and administrators will think it's a bug... ;)
> 
> I agree. I have no objection to your approach. :)
> 
> > Practically, lru_add_drain_all() is nearly zero cost, because mlock's page
> > fault is a very costly operation and it hides the drain cost. Right now we
> > only want to treat a corner-case issue; I don't want a dramatic change.
> 
> Another problem is as follows.
> 
> Although some CPUs have nothing to drain, we schedule the drain on them anyway.
> HPC guys don't want to burn CPU cycles there, as Christoph pointed out.
> I liked Peter's idea with regard to this.
> My approach can solve it, too.
> But I agree it would be a dramatic change.

Is the combination of Peter's approach and mine bad?

It means:

  - if the cpu an RT thread is bound to is not holding the page in its pagevec
	-> mlock succeeds thanks to Peter's improvement
  - if the cpu an RT thread is bound to is holding the page
	-> mlock succeeds with my approach;
	   the page is culled later.
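
In code, something like the sketch below. Both function names are made
up; cpu_has_lru_pages() is a hypothetical helper that would live in
mm/swap.c next to lru_add_pvecs and lru_rotate_pvecs, and the work item
and handler are the ones from the async sketch above:

static bool cpu_has_lru_pages(int cpu)
{
	struct pagevec *pvecs = per_cpu(lru_add_pvecs, cpu);
	enum lru_list lru;

	for_each_lru(lru) {
		if (pagevec_count(&pvecs[lru]))
			return true;
	}
	return pagevec_count(&per_cpu(lru_rotate_pvecs, cpu)) != 0;
}

int lru_add_drain_all_lazy(void)
{
	int cpu;

	get_online_cpus();
	for_each_online_cpu(cpu) {
		struct work_struct *work = &per_cpu(lru_drain_work, cpu);

		/* Peter's part: don't disturb cpus that hold no pages */
		if (!cpu_has_lru_pages(cpu))
			continue;
		INIT_WORK(work, lru_drain_work_fn);
		/* my part: don't wait; a missed page is culled later */
		schedule_work_on(cpu, work);
	}
	put_online_cpus();
	return 0;
}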



