Message-ID: <20110704153644.GB21350@albatros>
Date: Mon, 4 Jul 2011 19:36:44 +0400
From: Vasiliy Kulikov <segoon@...nwall.com>
To: Oleg Nesterov <oleg@...hat.com>
Cc: kernel-hardening@...ts.openwall.com,
Randy Dunlap <rdunlap@...otime.net>,
Andrew Morton <akpm@...ux-foundation.org>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
"Serge E. Hallyn" <serge.hallyn@...onical.com>,
Daniel Lezcano <daniel.lezcano@...e.fr>,
Tejun Heo <tj@...nel.org>, Ingo Molnar <mingo@...e.hu>,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-security-module@...r.kernel.org
Subject: Re: [RFC] ipc: introduce shm_rmid_forced sysctl
On Mon, Jul 04, 2011 at 17:08 +0200, Oleg Nesterov wrote:
> On 06/22, Vasiliy Kulikov wrote:
> >
> > +void exit_shm(struct task_struct *task)
> > +{
> > + struct nsproxy *nsp = task->nsproxy;
> > + struct ipc_namespace *ns;
> > +
> > + if (!nsp)
> > + return;
> > + ns = nsp->ipc_ns;
> > + if (!ns || !ns->shm_rmid_forced)
>
> This looks confusing, imho. How is it possible that ->nsproxy or
> ->ipc_ns is NULL?
I spotted the same checking logic in other places. I don't know whether
it is redundant; I guess it can happen when the namespace is dying.
It probably cannot happen inside the task's own do_exit(), only for
external observers.
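
To illustrate what I mean by "external observers" (a purely hypothetical
helper, not part of the patch): anyone sampling another task's ipc
namespace has to take task_lock() and be prepared for ->nsproxy to be
NULL once the target has passed exit_task_namespaces(); the exiting task
itself still sees a valid pointer at the point exit_shm() runs.

/*
 * Hypothetical helper, for illustration only: how an *external*
 * observer would have to sample another task's ipc namespace.
 * task->nsproxy may become NULL once the target has gone through
 * exit_task_namespaces().
 */
static struct ipc_namespace *get_task_ipc_ns(struct task_struct *task)
{
	struct ipc_namespace *ns = NULL;

	task_lock(task);
	if (task->nsproxy)
		ns = get_ipc_ns(task->nsproxy->ipc_ns);
	task_unlock(task);

	return ns;	/* caller drops the reference with put_ipc_ns() */
}
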
> > + return;
> > +
> > + /* Destroy all already created segments, but not mapped yet */
> > + down_write(&shm_ids(ns).rw_mutex);
> > + idr_for_each(&shm_ids(ns).ipcs_idr, &shm_try_destroy_current, ns);
> > up_write(&shm_ids(ns).rw_mutex);
>
> Again, I do not pretend I understand ipc/, but it seems we can check
> ns->ipc_ids[].in_use != 0 before the slow path, no?
Looks like you're right. Given that this runs in do_exit(), the speedup
is significant.
I'll send a patch for this and for the locking part.
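
Roughly what I have in mind (an untested sketch only, not the actual
patch I'll send; it assumes shm_ids(ns).in_use is the counter you are
referring to):

void exit_shm(struct task_struct *task)
{
	struct nsproxy *nsp = task->nsproxy;
	struct ipc_namespace *ns;

	if (!nsp)
		return;
	ns = nsp->ipc_ns;
	if (!ns || !ns->shm_rmid_forced)
		return;

	/*
	 * Fast path: no segments in this namespace, so don't bother
	 * taking rw_mutex and walking the idr.  The unlocked read is
	 * only a heuristic; a race with a concurrent shmget() is
	 * harmless here, since such a segment cannot belong to the
	 * exiting task anyway.
	 */
	if (shm_ids(ns).in_use == 0)
		return;

	/* Destroy all already created segments, but not mapped yet */
	down_write(&shm_ids(ns).rw_mutex);
	idr_for_each(&shm_ids(ns).ipcs_idr, &shm_try_destroy_current, ns);
	up_write(&shm_ids(ns).rw_mutex);
}
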
Thank you!
--
Vasiliy Kulikov
http://www.openwall.com - bringing security into open computing environments