Message-ID: <4CEA71AD.5010606@tilera.com>
Date: Mon, 22 Nov 2010 08:35:41 -0500
From: Chris Metcalf <cmetcalf@...era.com>
To: Cypher Wu <cypher.w@...il.com>
CC: <linux-kernel@...r.kernel.org>,
Américo Wang <xiyou.wangcong@...il.com>,
Eric Dumazet <eric.dumazet@...il.com>,
netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH] arch/tile: fix rwlock so would-be write lockers don't
block new readers

On 11/22/2010 12:39 AM, Cypher Wu wrote:
> 2010/11/15 Chris Metcalf <cmetcalf@...era.com>:
>> This avoids a deadlock in the IGMP code where one core gets a read
>> lock, another core starts trying to get a write lock (thus blocking
>> new readers), and then the first core tries to recursively re-acquire
>> the read lock.
>>
>> We still try to preserve some degree of balance by giving priority
>> to additional write lockers that come along while the lock is held
>> for write, so they can all complete quickly and return the lock to
>> the readers.
>>
>> Signed-off-by: Chris Metcalf <cmetcalf@...era.com>
>> ---
>> This should apply relatively cleanly to 2.6.26.7 source code too.
>> [...]
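
(To spell out the interleaving the patch fixes: one core holds the read
lock; a second core spins in write_lock() and, under the old arbitration,
blocks new readers as well; the first core then re-takes the read lock
recursively and spins forever. Below is a toy, single-threaded C model
contrasting the two arbitration rules; the names are invented and it is
not the actual spinlock_32.c or IGMP code. It models a waiting writer
only; a writer that already holds the lock would rightly exclude readers
under either rule.)

/* Toy, single-threaded model of the two read_lock() arbitration rules.
 * Illustrative only; not the actual spinlock_32.c or IGMP code. */
#include <stdbool.h>
#include <stdio.h>

struct rwstate {
	int  readers;        /* readers currently holding the lock */
	bool writer_waiting; /* a write_lock() is spinning for the lock */
};

/* Old rule: a waiting writer blocks new readers too. */
static bool read_lock_ok_old(const struct rwstate *s)
{
	return !s->writer_waiting;
}

/* New rule: new readers may still enter while a writer merely waits. */
static bool read_lock_ok_new(const struct rwstate *s)
{
	(void)s;
	return true;
}

int main(void)
{
	/* The IGMP interleaving: core 0 holds a read lock, core 1 is
	 * spinning in write_lock(), then core 0 re-takes the read lock. */
	struct rwstate s = { .readers = 1, .writer_waiting = true };

	printf("recursive read_lock, old rule: %s\n",
	       read_lock_ok_old(&s) ? "granted" : "spins forever (deadlock)");
	printf("recursive read_lock, new rule: %s\n",
	       read_lock_ok_new(&s) ? "granted" : "spins forever (deadlock)");
	return 0;
}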
>
> I've finished my business trip and tested the patch for more than an
> hour, and it works. The test is still running now.
>
> But it seems there is still a potential problem: we use a ticket lock
> for write_lock(), and if many write_lock() calls occur, are 256
> tickets enough for 64 or even more cores to avoid overflow? Since if
> we try to write_unlock() and there is already a write_lock() waiting,
> we only advance the current ticket.
This is OK, since each core can issue at most one (blocking) write_lock(),
and we have only 64 cores. Future >256-core machines will be based on
TILE-Gx anyway, which doesn't have the 256-core limit since it doesn't use
the spinlock_32.c implementation.
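
To make the arithmetic concrete: the ticket counters are 8-bit values
compared only for equality, so wraparound is harmless as long as fewer
than 256 write lockers are ever outstanding at once, and with at most
one blocking write_lock() per core that bound is 64 here. A stand-alone
toy model of the hand-out (plain C with invented names, not the actual
spinlock_32.c code):

/* Toy, single-threaded model of an 8-bit write-ticket gate.
 * Illustrative only; not the kernel's spinlock_32.c implementation. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

struct wticket {
	uint8_t next;    /* next ticket to hand out */
	uint8_t current; /* ticket currently being served */
};

static uint8_t take_ticket(struct wticket *t)
{
	return t->next++;        /* wraps mod 256, by design */
}

static int my_turn(const struct wticket *t, uint8_t me)
{
	return t->current == me; /* equality only; wrap is harmless */
}

static void release(struct wticket *t)
{
	t->current++;            /* pass the lock to the next ticket */
}

int main(void)
{
	struct wticket t = { .next = 250, .current = 250 }; /* start near wrap */
	uint8_t me[64];

	/* At most one blocking write_lock() per core: 64 outstanding tickets. */
	for (int i = 0; i < 64; i++) {
		me[i] = take_ticket(&t);
		assert(my_turn(&t, me[i]) == (i == 0)); /* only the first enters */
	}

	/* Serve all 64 in FIFO order, straight across the 255 -> 0 wrap. */
	for (int i = 0; i < 64; i++) {
		assert(my_turn(&t, me[i]));
		release(&t);
	}

	printf("drained: next=%d current=%d\n", t.next, t.current);
	return 0;
}

The asserts pass and the final print shows next == current == 58, i.e.
the queue drains cleanly even though the tickets wrapped from 255 to 0
partway through.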
--
Chris Metcalf, Tilera Corp.
http://www.tilera.com
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html