Message-ID: <1289832730.2607.87.camel@edumazet-laptop>
Date:	Mon, 15 Nov 2010 15:52:10 +0100
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Chris Metcalf <cmetcalf@...era.com>
Cc:	linux-kernel@...r.kernel.org,
	Américo Wang <xiyou.wangcong@...il.com>,
	netdev <netdev@...r.kernel.org>, Cypher Wu <cypher.w@...il.com>
Subject: Re: [PATCH] arch/tile: fix rwlock so would-be write lockers don't
 block new readers

On Monday, 15 November 2010 at 09:18 -0500, Chris Metcalf wrote:
> This avoids a deadlock in the IGMP code where one core gets a read
> lock, another core starts trying to get a write lock (thus blocking
> new readers), and then the first core tries to recursively re-acquire
> the read lock.
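
For illustration, the deadlock described above can be sketched with the
generic rwlock API roughly as follows (the lock and function names are
made up for the example, not the actual IGMP call chain):

#include <linux/spinlock.h>

/* Illustrative lock only; the real one lives in the IGMP code. */
static DEFINE_RWLOCK(mc_list_lock);

/* Core 0: read-side path that later re-enters the read lock. */
static void core0_reader(void)
{
	read_lock(&mc_list_lock);	/* outer read lock: succeeds */
	/*
	 * Core 1 now calls write_lock() and starts spinning; with the
	 * old tile rwlock this also blocks *new* readers.
	 */
	read_lock(&mc_list_lock);	/* recursive read lock: stuck behind
					 * the waiting writer => deadlock */
	read_unlock(&mc_list_lock);
	read_unlock(&mc_list_lock);
}

/* Core 1: writer arriving between core 0's two read_lock() calls. */
static void core1_writer(void)
{
	write_lock(&mc_list_lock);	/* spins until core 0 drops its
					 * outer read lock */
	write_unlock(&mc_list_lock);
}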
> 
> We still try to preserve some degree of balance by giving priority
> to additional write lockers that come along while the lock is held
> for write, so they can all complete quickly and return the lock to
> the readers.
> 
> Signed-off-by: Chris Metcalf <cmetcalf@...era.com>
> ---
> This should apply relatively cleanly to 2.6.36 source code too.
> 
>  arch/tile/lib/spinlock_32.c |   29 ++++++++++++++++++-----------
>  1 files changed, 18 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/tile/lib/spinlock_32.c b/arch/tile/lib/spinlock_32.c
> index 485e24d..5cd1c40 100644
> --- a/arch/tile/lib/spinlock_32.c
> +++ b/arch/tile/lib/spinlock_32.c
> @@ -167,23 +167,30 @@ void arch_write_lock_slow(arch_rwlock_t *rwlock, u32 val)
>  	 * when we compare them.
>  	 */
>  	u32 my_ticket_;
> +	u32 iterations = 0;
>  
> -	/* Take out the next ticket; this will also stop would-be readers. */
> -	if (val & 1)
> -		val = get_rwlock(rwlock);
> -	rwlock->lock = __insn_addb(val, 1 << WR_NEXT_SHIFT);
> +	/*
> +	 * Wait until there are no readers, then bump up the next
> +	 * field and capture the ticket value.
> +	 */
> +	for (;;) {
> +		if (!(val & 1)) {
> +			if ((val >> RD_COUNT_SHIFT) == 0)
> +				break;
> +			rwlock->lock = val;
> +		}
> +		delay_backoff(iterations++);

Are you sure a writer should have a growing delay_backoff()?

It seems to me this only allows new readers to come in (adding more
unfairness to the rwlock, which already favors readers over writers).

Maybe allow one CPU to spin, and have the other would-be writers queued?
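
To make that concrete, one very rough way writers could queue instead
of backing off is sketched below; the struct, field names and helpers
are invented for illustration (the read side and memory ordering are
omitted) and are not the tile implementation:

#include <linux/atomic.h>
#include <asm/processor.h>	/* cpu_relax() */

struct sketch_rwlock {
	atomic_t readers;	/* active reader count */
	atomic_t wr_next;	/* next write ticket to hand out */
	atomic_t wr_curr;	/* write ticket currently being served */
};

static void sketch_write_lock(struct sketch_rwlock *lock)
{
	/* Join the writer queue right away instead of backing off. */
	int ticket = atomic_inc_return(&lock->wr_next) - 1;

	/* Writers behind the head just wait for their ticket... */
	while (atomic_read(&lock->wr_curr) != ticket)
		cpu_relax();

	/* ...so only one writer at a time spins on the reader count. */
	while (atomic_read(&lock->readers) != 0)
		cpu_relax();
}

static void sketch_write_unlock(struct sketch_rwlock *lock)
{
	/* Hand the lock straight to the next queued writer, if any. */
	atomic_inc(&lock->wr_curr);
}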

> +		val = __insn_tns((int *)&rwlock->lock);
> +	}
>  
> -	/* Extract my ticket value from the original word. */
> +	/* Take out the next ticket and extract my ticket value. */
> +	rwlock->lock = __insn_addb(val, 1 << WR_NEXT_SHIFT);
>  	my_ticket_ = val >> WR_NEXT_SHIFT;
>  
> -	/*
> -	 * Wait until the "current" field matches our ticket, and
> -	 * there are no remaining readers.
> -	 */
> +	/* Wait until the "current" field matches our ticket. */
>  	for (;;) {
>  		u32 curr_ = val >> WR_CURR_SHIFT;
> -		u32 readers = val >> RD_COUNT_SHIFT;
> -		u32 delta = ((my_ticket_ - curr_) & WR_MASK) + !!readers;
> +		u32 delta = ((my_ticket_ - curr_) & WR_MASK);
>  		if (likely(delta == 0))
>  			break;
>  

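For contrast, the patched slow path quoted above boils down to the
opposite ordering: drain the readers first (with the growing backoff),
then take a write ticket. A rough paraphrase, reusing the invented
sketch_rwlock fields from the sketch above, with delay_backoff()
standing in for the tile helper of the same name; the real code does
the reader check and the ticket bump atomically under a tns of the
lock word, which is omitted here:

static void sketch_patched_write_lock(struct sketch_rwlock *lock)
{
	unsigned int iterations = 0;
	int ticket;

	/*
	 * Phase 1: back off until the reader count drains to zero.
	 * This is the loop questioned above: every backoff gives
	 * fresh readers another chance to get in first.
	 */
	while (atomic_read(&lock->readers) != 0)
		delay_backoff(iterations++);

	/*
	 * Phase 2: take a ticket so concurrent writers are serialized
	 * and complete back to back before readers get the lock again.
	 */
	ticket = atomic_inc_return(&lock->wr_next) - 1;
	while (atomic_read(&lock->wr_curr) != ticket)
		cpu_relax();
}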

