Date:	Mon, 17 Nov 2008 17:21:19 +0100
From:	Eric Dumazet <dada1@...mosbay.com>
To:	Linux Netdev List <netdev@...r.kernel.org>,
	"David S. Miller" <davem@...emloft.net>
Subject: [RFC] question about inet_listen_lock(), and [PATCH] net: listening_hash
 get a spinlock per bucket

Hi all

I was looking at converting the TCP ehash/listening_hash rwlocks to spinlocks,
and found the inet_listen_lock()/unlock() definitions:

/*
 * - We may sleep inside this lock.
 * - If sleeping is not required (or called from BH),
 *   use plain read_(un)lock(&inet_hashinfo.lhash_lock).
 */
static inline void inet_listen_lock(struct inet_hashinfo *hashinfo)
{
        /* read_lock synchronizes with candidate writers */
        read_lock(&hashinfo->lhash_lock);
        atomic_inc(&hashinfo->lhash_users);
        read_unlock(&hashinfo->lhash_lock);
}

static inline void inet_listen_unlock(struct inet_hashinfo *hashinfo)
{
        if (atomic_dec_and_test(&hashinfo->lhash_users))
                wake_up(&hashinfo->lhash_wait);
}
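
For context, the reader side is a chain walk like those in inet_diag_dump()
or tcp_get_idx(). A rough sketch (variable and bucket names are mine, not
from the kernel source):

struct sock *sk;
struct hlist_node *node;

inet_listen_lock(hashinfo);
sk_for_each(sk, node, &hashinfo->listening_hash[bucket]) {
	/* inspect/dump the listening socket 'sk'; note that no
	 * rwlock is held here, only the elevated lhash_users
	 * count pins the table
	 */
}
inet_listen_unlock(hashinfo);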

Then, for the writer part:
---------------------------

void inet_listen_wlock(struct inet_hashinfo *hashinfo)
        __acquires(hashinfo->lhash_lock)
{
        write_lock(&hashinfo->lhash_lock);

        if (atomic_read(&hashinfo->lhash_users)) {
                DEFINE_WAIT(wait);

                for (;;) {
                        prepare_to_wait_exclusive(&hashinfo->lhash_wait,
                                                  &wait, TASK_UNINTERRUPTIBLE);
                        if (!atomic_read(&hashinfo->lhash_users))
                                break;
                        write_unlock_bh(&hashinfo->lhash_lock);
                        schedule();
                        write_lock_bh(&hashinfo->lhash_lock);
                }

                finish_wait(&hashinfo->lhash_wait, &wait);
        }
}

Other than BH protection, I don't understand why we would be allowed to
sleep while "holding" lhash_lock, inside a section protected by:

inet_listen_lock();
...
inet_listen_unlock();

If this were true, then we should have the same problem when holding an
inet_ehash_lockp() lock as well, no?
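
For reference, the ehash side already uses one lock per bucket; its
accessor in include/net/inet_hashtables.h is roughly:

static inline rwlock_t *inet_ehash_lockp(struct inet_hashinfo *hashinfo,
					  unsigned int hash)
{
	return &hashinfo->ehash_locks[hash & hashinfo->ehash_locks_mask];
}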

The users of inet_listen_lock() are inet_diag_dump() and tcp_get_idx(),
and I fail to see where they could ever sleep.

IFF it's just about BH being allowed, this seems quite a complex/unusual
locking scheme just to avoid holding read_lock_bh() for too long while
reading /proc/net/tcp or the inet_diag equivalent.

Using 32 spinlocks (one per chain) should help reduce hold times and
improve writer concurrency. With RCU lookup, these spinlocks won't be
taken very often.
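
Something along these lines (a rough sketch only; the struct and field
names here are illustrative, the real change is in the attached patch):

#define INET_LHTABLE_SIZE	32

/* one spinlock per listening chain, instead of one rwlock
 * protecting the whole table
 */
struct inet_listen_hashbucket {
	spinlock_t		lock;
	struct hlist_head	head;
};

struct inet_hashinfo {
	/* ... other members unchanged ... */
	struct inet_listen_hashbucket	listening_hash[INET_LHTABLE_SIZE];
};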

Thank you

[PATCH] net: listening_hash get a spinlock per bucket

This patch prepares the RCU migration of the listening_hash table for
the TCP/DCCP protocols.

The listening_hash table being small (32 slots per protocol), we add
a spinlock for each slot, instead of a single rwlock for the whole table.

This should reduce reader hold times and improve writer concurrency.
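
With per-bucket locks, adding a socket to the listening hash becomes a
short spin_lock section. A minimal sketch of the writer side (local
variable names are illustrative; see the attached patch for the real code):

struct inet_listen_hashbucket *ilb;

ilb = &hashinfo->listening_hash[inet_sk_listen_hashfn(sk)];

spin_lock(&ilb->lock);
__sk_add_node(sk, &ilb->head);
spin_unlock(&ilb->lock);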

Signed-off-by: Eric Dumazet <dada1@...mosbay.com>
---
 include/net/inet_hashtables.h |   45 +++++-----------
 net/dccp/proto.c              |    8 --
 net/ipv4/inet_diag.c          |   12 ++--
 net/ipv4/inet_hashtables.c    |   86 +++++++++++---------------------
 net/ipv4/tcp_ipv4.c           |   24 ++++----
 net/ipv6/inet6_hashtables.c   |   23 ++++----
 6 files changed, 79 insertions(+), 119 deletions(-)


View attachment "listening_lock_per_chain.patch" of type "text/plain" (14525 bytes)
