Date:	Mon, 22 Nov 2010 13:39:50 +0800
From:	Cypher Wu <cypher.w@...il.com>
To:	Chris Metcalf <cmetcalf@...era.com>
Cc:	linux-kernel@...r.kernel.org,
	Américo Wang <xiyou.wangcong@...il.com>,
	Eric Dumazet <eric.dumazet@...il.com>,
	netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH] arch/tile: fix rwlock so would-be write lockers don't
 block new readers

2010/11/15 Chris Metcalf <cmetcalf@...era.com>:
> This avoids a deadlock in the IGMP code where one core gets a read
> lock, another core starts trying to get a write lock (thus blocking
> new readers), and then the first core tries to recursively re-acquire
> the read lock.
>
> We still try to preserve some degree of balance by giving priority
> to additional write lockers that come along while the lock is held
> for write, so they can all complete quickly and return the lock to
> the readers.
>
> Signed-off-by: Chris Metcalf <cmetcalf@...era.com>
> ---
> This should apply relatively cleanly to 2.6.26.7 source code too.
>
>  arch/tile/lib/spinlock_32.c |   29 ++++++++++++++++++-----------
>  1 files changed, 18 insertions(+), 11 deletions(-)
>
> diff --git a/arch/tile/lib/spinlock_32.c b/arch/tile/lib/spinlock_32.c
> index 485e24d..5cd1c40 100644
> --- a/arch/tile/lib/spinlock_32.c
> +++ b/arch/tile/lib/spinlock_32.c
> @@ -167,23 +167,30 @@ void arch_write_lock_slow(arch_rwlock_t *rwlock, u32 val)
>         * when we compare them.
>         */
>        u32 my_ticket_;
> +       u32 iterations = 0;
>
> -       /* Take out the next ticket; this will also stop would-be readers. */
> -       if (val & 1)
> -               val = get_rwlock(rwlock);
> -       rwlock->lock = __insn_addb(val, 1 << WR_NEXT_SHIFT);
> +       /*
> +        * Wait until there are no readers, then bump up the next
> +        * field and capture the ticket value.
> +        */
> +       for (;;) {
> +               if (!(val & 1)) {
> +                       if ((val >> RD_COUNT_SHIFT) == 0)
> +                               break;
> +                       rwlock->lock = val;
> +               }
> +               delay_backoff(iterations++);
> +               val = __insn_tns((int *)&rwlock->lock);
> +       }
>
> -       /* Extract my ticket value from the original word. */
> +       /* Take out the next ticket and extract my ticket value. */
> +       rwlock->lock = __insn_addb(val, 1 << WR_NEXT_SHIFT);
>        my_ticket_ = val >> WR_NEXT_SHIFT;
>
> -       /*
> -        * Wait until the "current" field matches our ticket, and
> -        * there are no remaining readers.
> -        */
> +       /* Wait until the "current" field matches our ticket. */
>        for (;;) {
>                u32 curr_ = val >> WR_CURR_SHIFT;
> -               u32 readers = val >> RD_COUNT_SHIFT;
> -               u32 delta = ((my_ticket_ - curr_) & WR_MASK) + !!readers;
> +               u32 delta = ((my_ticket_ - curr_) & WR_MASK);
>                if (likely(delta == 0))
>                        break;
>
> --
> 1.6.5.2
>
>
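
To make the deadlock described above concrete, here is a minimal
single-threaded sketch; this is not the real arch/tile code, and the
names and layout are invented purely for illustration.  Under the old
behaviour a would-be writer announces itself immediately and new read
acquisitions are then refused, so a core that already holds the read
lock and tries to re-acquire it recursively can never succeed:

#include <stdio.h>
#include <stdbool.h>

struct toy_rwlock {
	unsigned int readers;	/* active readers */
	unsigned int wr_next;	/* next writer ticket to hand out */
	unsigned int wr_curr;	/* ticket currently allowed to write */
};

/* Old policy: refuse new readers as soon as any writer ticket is pending. */
static bool read_trylock_old(struct toy_rwlock *l)
{
	if (l->wr_next != l->wr_curr)
		return false;
	l->readers++;
	return true;
}

/* A would-be writer announces itself by taking a ticket right away. */
static void write_lock_announce(struct toy_rwlock *l)
{
	l->wr_next++;		/* from now on, new readers are blocked */
}

int main(void)
{
	struct toy_rwlock l = { 0, 0, 0 };

	/* Core 0 takes the read lock (e.g. in the IGMP code). */
	printf("first read_lock: %s\n",
	       read_trylock_old(&l) ? "ok" : "blocked");

	/* Core 1 starts waiting for the write lock. */
	write_lock_announce(&l);

	/* Core 0 recursively re-acquires the read lock and spins forever. */
	printf("recursive read_lock: %s\n",
	       read_trylock_old(&l) ? "ok" : "blocked -> deadlock");
	return 0;
}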


I've finished my business trip and have now tested that patch for more
than an hour; it works. The test is still running now.

But there still seems to be a potential problem: write_lock() uses a
ticket lock, and if many write_lock() calls pile up, are 256 tickets
enough for 64 or even more cores to avoid overflow? After all, when we
write_unlock() while another write_lock() is already waiting, we only
advance the "current" ticket.
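
To make the overflow question concrete, here is a small sketch of the
ticket comparison from the patch, done on an 8-bit field (WR_MASK is
assumed to be 0xff here purely for illustration; the real value lives
in the tile spinlock headers).  The delta is computed modulo 256, so
the counters can wrap freely as long as fewer than 256 tickets are
outstanding at once; whether that bound is always guaranteed is
exactly the question above.

#include <stdio.h>

#define WR_MASK 0xffu	/* assumed 8-bit ticket field, for illustration */

static unsigned int ticket_delta(unsigned int my_ticket, unsigned int curr)
{
	return (my_ticket - curr) & WR_MASK;
}

int main(void)
{
	/* No wraparound: I hold ticket 7, ticket 5 is being served. */
	printf("delta = %u\n", ticket_delta(7, 5));	/* prints 2 */

	/* Wraparound: the counter wrapped and I hold ticket 1 while
	 * ticket 254 is being served; the subtraction still works. */
	printf("delta = %u\n", ticket_delta(1, 254));	/* prints 3 */

	return 0;
}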


-- 
Cyberman Wu
http://www.meganovo.com