Message-ID: <20090812040756.GA5330@wotan.suse.de>
Date:	Wed, 12 Aug 2009 06:07:56 +0200
From:	Nick Piggin <npiggin@...e.de>
To:	Zach Brown <zach.brown@...cle.com>
Cc:	Manfred Spraul <manfred@...orfullife.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Nadia Derbey <Nadia.Derbey@...l.net>,
	Pierre Peiffer <peifferp@...il.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [patch 4/4] ipc: sem optimise simple operations

On Tue, Aug 11, 2009 at 11:23:36AM -0700, Zach Brown wrote:
> Manfred Spraul wrote:
> > On 08/11/2009 01:09 PM, npiggin@...e.de wrote:
> >> Index: linux-2.6/include/linux/sem.h
> >> ===================================================================
> >> --- linux-2.6.orig/include/linux/sem.h
> >> +++ linux-2.6/include/linux/sem.h
> >> @@ -86,6 +86,8 @@ struct task_struct;
> >>   struct sem {
> >>       int    semval;        /* current value */
> >>       int    sempid;        /* pid of last operation */
> >> +    struct list_head    negv_pending;
> >> +    struct list_head    zero_pending;
> >>   };
> >>    
> > struct sem is increased from 8 to 24 bytes.
> 
> And larger still with 64bit pointers.

Yes, it is a significant growth. To answer Manfred's question: I don't
know whether there are applications using large numbers of semaphores
per set. A Google search for increasing SEMMSL mostly turns up Oracle
documentation, which recommends 250 (which is our current default).

A semaphore set with 250 semaphores used about 2K before and will use
about 10K afterward. I don't know that that is a huge amount really,
given that the semaphores presumably have to be *protecting* stuff.

We can convert them to hlists (I was going to send a patch to do
everything in hlists, but hlists are missing some _rcu variants...
maybe I should just convert the pending lists to start with).

 
> If it's a problem, this can be scaled back.  You can have pointers to
> lists and you can have fewer lists.
> 
> Hopefully it won't be a problem, though.  We can close our eyes and
> pretend that the size of the semaphore sets scale with the size of the
> system and that it's such a relatively small consumer of memory that no
> one will notice :).

The other thing is that using semaphores as sets really won't scale
well at all. It will scale better now that there are per-sem lists,
but there is still a per-set lock. They really should be discouraged.

It's not trivial to remove shared cachelines completely. I think it's
possible, but it would further increase complexity without a proven
need at this point.

