Date:	Thu, 07 Aug 2008 17:45:10 +0900 (JST)
From:	Hirokazu Takahashi <taka@...inux.co.jp>
To:	kamezawa.hiroyu@...fujitsu.com
Cc:	balbir@...ux.vnet.ibm.com, ryov@...inux.co.jp,
	xen-devel@...ts.xensource.com,
	containers@...ts.linux-foundation.org,
	linux-kernel@...r.kernel.org,
	virtualization@...ts.linux-foundation.org, dm-devel@...hat.com,
	agk@...rceware.org
Subject: Re: [PATCH 4/7] bio-cgroup: Split the cgroup memory subsystem into
 two parts

Hi,

> > > >I've just noticed that most of the overhead comes from the spin-locks
> > > >when reclaiming the pages inside mem_cgroups and the spin-locks to
> > > >protect the links between pages and page_cgroups.
> > > Overhead of the page <-> page_cgroup lock cannot be caught by
> > > lock_stat now. Do you have numbers?
> > > But OK, there are too many locks ;(
> > 
> > The problem is that every time the lock is held, the associated
> > cache line is flushed.
> I think "page" and "page_cgroup" are not so heavily shared objects in the fast path.
> Footprint is also important here.
> (Anyway, I'd like to remove lock_page_cgroup() when I find a chance.)

OK.
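
For reference, one common way to implement the kind of per-page lock being
discussed is a bit-spinlock kept in a flags word of page_cgroup, which also
shows why every charge/uncharge touches a shared cache line. The sketch
below only illustrates that pattern; the field and flag names are
illustrative, not the exact ones in the tree.

#include <linux/bit_spinlock.h>

/* Illustrative sketch of a lock_page_cgroup()-style bit-spinlock. */
struct page_cgroup {
	unsigned long flags;		/* the lock bit lives in here */
	struct mem_cgroup *mem_cgroup;
	struct page *page;
	struct list_head lru;
};

#define PCG_LOCK	0		/* bit used as the per-page lock */

static inline void lock_page_cgroup(struct page_cgroup *pc)
{
	/*
	 * Every charge/uncharge takes this lock, so the cache line
	 * holding pc->flags bounces between all CPUs touching the page.
	 */
	bit_spin_lock(PCG_LOCK, &pc->flags);
}

static inline void unlock_page_cgroup(struct page_cgroup *pc)
{
	bit_spin_unlock(PCG_LOCK, &pc->flags);
}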

> > > >The latter overhead comes from the policy your team has chosen
> > > >that page_cgroup structures are allocated on demand. I still feel
> > > >this approach doesn't make any sense because the linux kernel tries to
> > > >make use of as many pages as it can, so most of them have to be
> > > >assigned their related page_cgroups. It would make us happy
> > > >if page_cgroups were allocated at boot time.
> > > >
> > > Now, multi-sized-page-cache has been discussed for a long time. If it's our
> > > direction, on-demand page_cgroup makes sense.
> > 
> > I don't think I can agree to this.
> > When multi-sized-page-cache is introduced, some data structures will be
> > allocated to manage multi-sized pages.
> Maybe not; it will be encoded into struct page.

It will be nice and simple if that is the case.

> > I think page_cgroups should be allocated at the same time.
> > This approach will make things simple.
> yes, of course.
> 
> > 
> > It seems like the on-demand allocation approach leads not only to
> > overhead but also to complexity and a lot of race conditions.
> > If you allocate page_cgroups when allocating page structures,
> > you can get rid of most of the locks, and you don't have to care about
> > allocation errors of page_cgroups anymore.
> > 
> > And it will also give us the flexibility to refer to and update
> > memcg-related data inside critical sections.
> > 
> But it's not good for the systems with small "NORMAL" pages.

Even when it happens to be a system with small "NORMAL" pages, if you
want to use the memcg feature, you have to allocate page_cgroups for most
of the pages in the system. It's impossible to avoid that allocation as
long as you use memcg.
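
To put the boot-time alternative concretely: it could be as simple as a
flat per-node array of page_cgroup indexed by pfn, so the page ->
page_cgroup lookup needs no lock and can never fail at charge time. This
is only a rough sketch (it reuses the page_cgroup structure sketched
above, and the function names are made up for illustration), not the code
I'm proposing:

#include <linux/bootmem.h>
#include <linux/mm.h>
#include <linux/mmzone.h>

static struct page_cgroup *node_page_cgroup[MAX_NUMNODES];

/* Called once per node at boot; allocates one page_cgroup per page frame. */
static int __init page_cgroup_boot_init(int nid)
{
	unsigned long pages = NODE_DATA(nid)->node_spanned_pages;

	node_page_cgroup[nid] = alloc_bootmem_node(NODE_DATA(nid),
				pages * sizeof(struct page_cgroup));
	return node_page_cgroup[nid] ? 0 : -ENOMEM;
}

/* Constant-time, lock-free page -> page_cgroup lookup. */
static inline struct page_cgroup *lookup_page_cgroup(struct page *page)
{
	int nid = page_to_nid(page);
	unsigned long offset = page_to_pfn(page) -
				NODE_DATA(nid)->node_start_pfn;

	return node_page_cgroup[nid] + offset;
}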

> This discussion should be done again when more users of page_cgroup appear and
> its overhead is obvious.

Thanks,
Hirokazu Takahashi.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
