Message-ID: <20110803191639.GA6306@albatros>
Date: Wed, 3 Aug 2011 23:16:39 +0400
From: Vasiliy Kulikov <segoon@...nwall.com>
To: Manuel Lauss <manuel.lauss@...glemail.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Oleg Nesterov <oleg@...hat.com>,
Richard Weinberger <richard@....at>,
Marc Zyngier <marc.zyngier@....com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] shm: optimize exit_shm()
Hi Manuel,
On Wed, Aug 03, 2011 at 21:08 +0200, Manuel Lauss wrote:
> On Wed, Aug 3, 2011 at 8:28 PM, Vasiliy Kulikov <segoon@...nwall.com> wrote:
> > We may check .in_use == 0 without holding the rw_mutex, as .in_use is an int
> > and reads of ints are atomic. Since .in_use may be changed to zero while the
> > current process was sleeping in down_write(), we should check .in_use once
> > again after down_write().
[...]
> > + if (shm_ids(ns).in_use == 0)
> > + return;
> > +
> > /* Destroy all already created segments, but not mapped yet */
> > down_write(&shm_ids(ns).rw_mutex);
> > if (shm_ids(ns).in_use)
>
> This check here is now unnecessary, yes?
No, as I said in the comment above, another task may be holding the mutex and
deleting the last shm segment. So the current task will see in_use == 1
before down_write(), but == 0 after it.
> And this also fixes the oops.
Yes, but it only hides the real problem: the tasks' dependency on an
initialized init_*_ns.
Thanks,
--
Vasiliy Kulikov
http://www.openwall.com - bringing security into open computing environments