Date:	Sat, 15 Jun 2013 09:30:07 +0200
From:	Mike Galbraith <efault@....de>
To:	Manfred Spraul <manfred@...orfullife.com>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Rik van Riel <riel@...hat.com>,
	Davidlohr Bueso <davidlohr.bueso@...com>, hhuang@...hat.com,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH 0/6] ipc/sem.c: performance improvements, FIFO

On Sat, 2013-06-15 at 07:48 +0200, Mike Galbraith wrote: 
> On Sat, 2013-06-15 at 07:27 +0200, Manfred Spraul wrote:
> 
> > Assume there is one op (semctl(), whatever) that acquires the global 
> > lock - and a continuous stream of simple ops.
> > - spin_is_locked() returns true due to the semctl().
> > - then simple ops will switch to spin_lock(&sma->sem_perm.lock).
> > - since the spinlock is acquired, the next operation will get true from 
> > spin_is_locked().
> > 
> > It will stay that way as long as there is at least one op
> > waiting for sma->sem_perm.lock.
> > With enough cpus, it will stay like this forever.
> 
> Yup, pondered that yesterday, scratching my head over how to do better.
> Hints highly welcome.  Maybe if I figure out how to scratch dual lock
> thingy properly for -rt, non-rt will start acting sane too, as that spot
> seems to be itchy in both kernels.
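
(For illustration only - a sketch, not code from the patch series -
the pattern Manfred describes is roughly the one below: every simple
op that observes the global lock held falls back to taking the global
lock itself, so followers keep observing it held.)

        /*
         * Hypothetical sketch of the livelock, not the actual patch:
         * a semctl() holds sma->sem_perm.lock, so a simple op sees
         * spin_is_locked() == true and falls back to the global lock.
         * While it holds or queues on the global lock, the next simple
         * op's spin_is_locked() check is also true, and so on; with
         * enough cpus the array never returns to per-semaphore mode.
         */
        if (nsops == 1 && !sma->complex_count &&
            !spin_is_locked(&sma->sem_perm.lock)) {
                spin_lock(&sem->lock);          /* per-semaphore fast path */
        } else {
                spin_lock(&sma->sem_perm.lock); /* global fallback keeps the
                                                   check above true for every
                                                   follower */
        }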

Gee, just flipping back to single-semaphore lock mode whenever we had
to do the global wait thing fixed up -rt.  10 consecutive sem-waitzero
5 8 64 runs with the 3.8-rt9 kernel went like so, which is one hell of
an improvement.

Result matrix:
  Thread   0: 20209311
  Thread   1: 20255372
  Thread   2: 20082611
...
  Thread  61: 20162924
  Thread  62: 20048995
  Thread  63: 20142689

I must have screwed up something :)

static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
                              int nsops)
{
        struct sem *sem;
        int locknum;

        if (nsops == 1 && !sma->complex_count) {
                sem = sma->sem_base + sops->sem_num;

                /*
                 * Another process is holding the global lock on the
                 * sem_array; we cannot enter our critical section,
                 * but have to wait for the global lock to be released.
                 */
                if (unlikely(spin_is_locked(&sma->sem_perm.lock))) {
                        spin_lock(&sma->sem_perm.lock);
                        if (sma->complex_count)
                                goto wait_array;

                        /*
                         * Acquiring our sem->lock under the global lock
                         * forces new complex operations to wait for us
                         * to exit our critical section.
                         */
                        spin_lock(&sem->lock);
                        spin_unlock(&sma->sem_perm.lock);
                } else {
                        /* Lock just the semaphore we are interested in. */
                        spin_lock(&sem->lock);

                        /*
                         * If sma->complex_count was set prior to acquisition,
                         * we must fall back to the global array lock.
                         */
                        if (unlikely(sma->complex_count)) {
                                spin_unlock(&sem->lock);
                                goto lock_array;
                        }
                }

                locknum = sops->sem_num;
        } else {
                int i;
                /*
                 * Lock the semaphore array, and wait for all of the
                 * individual semaphore locks to go away.  The code
                 * above ensures no new single-lock holders will enter
                 * their critical section while the array lock is held.
                 */
 lock_array:
                spin_lock(&sma->sem_perm.lock);
 wait_array:
                for (i = 0; i < sma->sem_nsems; i++) {
                        sem = sma->sem_base + i;
#ifdef CONFIG_PREEMPT_RT_BASE
                        /*
                         * On -rt, spin_unlock_wait() is comparatively
                         * expensive, so only wait if the per-semaphore
                         * lock is actually held.
                         */
                        if (spin_is_locked(&sem->lock))
#endif
                        spin_unlock_wait(&sem->lock);
                }
                locknum = -1;

                if (nsops == 1 && !sma->complex_count) {
                        sem = sma->sem_base + sops->sem_num;
                        spin_lock(&sem->lock);
                        spin_unlock(&sma->sem_perm.lock);
                        locknum = sops->sem_num;
                }
        }
        return locknum;
}

Not very pretty, but it works markedly better.
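
For reference, the unlock side keys off the returned locknum, roughly
like mainline's sem_unlock() of that era (a sketch for context, not
part of this patch):

static inline void sem_unlock(struct sem_array *sma, int locknum)
{
        if (locknum == -1) {
                /* We held the global array lock. */
                spin_unlock(&sma->sem_perm.lock);
        } else {
                /* We held only one per-semaphore lock. */
                struct sem *sem = sma->sem_base + locknum;

                spin_unlock(&sem->lock);
        }
}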
