Message-Id: <20080621180530.E82D.KOSAKI.MOTOHIRO@jp.fujitsu.com>
Date: Sat, 21 Jun 2008 18:10:39 +0900
From: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
To: balbir@...ux.vnet.ibm.com
Cc: kosaki.motohiro@...fujitsu.com, Paul Menage <menage@...gle.com>,
Pavel Emelianov <xemul@...nvz.org>, containers@...ts.osdl.org,
LKML <linux-kernel@...r.kernel.org>,
Li Zefan <lizf@...fujitsu.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] introduce task cgroup v2
> > Bad performance on the charge/uncharge?
> >
> > The only difference I can see is that res_counter uses
> > spin_lock_irqsave()/spin_unlock_irqrestore(), and you're using plain
> > spin_lock()/spin_unlock().
> >
> > Is the overhead of a pushf/cli/popf really going to matter compared
> > with the overhead of forking/exiting a task?
> >
> > Or approaching this from the other side, does res_counter really need
> > irq-safe locking, or is it just being cautious?
>
> We really need irq-safe locking. We can end up uncharging from reclaim context
> (called under zone->lru_lock and mem->zone->lru_lock - held with interrupts
> disabled)
>
> I am going to convert the spin lock to a reader-writer lock, so that reads from
> user space do not cause contention. I'll experiment and look at the overhead.
Sorry for the late response.
I have been busy fixing regressions in the current -mm tree recently ;)
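
For reference on the irq-safe point above: a simplified sketch of the
pattern res_counter uses (structure trimmed down here; the real code
also tracks limit and failcnt), showing why it takes the counter lock
with spin_lock_irqsave():

struct res_counter {
	unsigned long long usage;	/* current consumption */
	spinlock_t lock;		/* protects usage */
};

static void res_counter_uncharge_sketch(struct res_counter *cnt,
					unsigned long val)
{
	unsigned long flags;

	/*
	 * spin_lock_irqsave() disables interrupts and remembers
	 * whether they were already off; spin_unlock_irqrestore()
	 * puts the irq state back exactly as it was.  That makes
	 * this safe to call from reclaim context, where
	 * zone->lru_lock is held with interrupts disabled.
	 */
	spin_lock_irqsave(&cnt->lock, flags);
	if (cnt->usage >= val)
		cnt->usage -= val;
	spin_unlock_irqrestore(&cnt->lock, flags);
}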
Note:
I am going to convert the spinlock in the task limit cgroup to an
atomic_t (rough sketch below). The task limit cgroup has the following
characteristics:
  - many writes (fork, exit)
  - few reads
  - fork() is a performance-sensitive system call;
    if fork overhead increases, overall system performance degrades.
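
Roughly the shape I have in mind (hypothetical helper names, not the
actual patch): atomic_add_unless() lets the fork() fast path pay a
single atomic op instead of a lock round-trip.

struct task_cgroup {
	atomic_t count;		/* current number of tasks in the group */
	int limit;		/* maximum number of tasks allowed */
};

/* fork() fast path: one atomic op instead of a lock round-trip */
static int task_cgroup_charge(struct task_cgroup *tc)
{
	/*
	 * atomic_add_unless() adds 1 to count unless it already
	 * equals limit, and returns 0 on failure, so hitting the
	 * limit fails the fork without ever taking a lock.
	 */
	if (!atomic_add_unless(&tc->count, 1, tc->limit))
		return -EAGAIN;
	return 0;
}

/* exit() path */
static void task_cgroup_uncharge(struct task_cgroup *tc)
{
	atomic_dec(&tc->count);
}

Reading the count from user space then becomes a plain atomic_read(),
so the few reads never contend with fork()/exit() at all.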