[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-ID: <4B7D0A9E.9020507@redhat.com>
Date: Thu, 18 Feb 2010 09:38:38 +0000
From: Christine Caulfield <ccaulfie@...hat.com>
To: David Teigland <teigland@...hat.com>
CC: Steven Whitehouse <swhiteho@...hat.com>, cluster-devel@...hat.com,
linux-kernel@...r.kernel.org
Subject: Re: [Cluster-devel] dlm: Remove/bypass astd
On 17/02/10 20:29, David Teigland wrote:
> On Wed, Feb 17, 2010 at 01:23:39PM +0000, Steven Whitehouse wrote:
>>
>> While investigating Red Hat bug #537010 I started looking at the dlm's astd
>> thread. The way in which the "cast" and "bast" requests are queued looked
>> as if it might cause reordering since the "bast" requests are always
>> delivered after any pending "cast" requests which is not always the
>> correct ordering. This patch doesn't fix that bug, but it will prevent any
>> races in that bit of code, and the performance benefits are also well
>> worth having.
>>
>> I noticed that astd seems to be extraneous to requirements. The notifications
>> to astd are already running in process context, so they could be delivered
>> directly. That should improve smp performance since all the notifications
>> would no longer be funneled through a single thread.
>>
>> Also, the only other function of astd seemed to be stopping the delivery
>> of these notifications during recovery. Since, however, the notifications
>> which are intercepted at recovery time are neither modified, nor filtered
>> in any way, the only effect is to delay notifications for no obvious reason.
>>
>> I thought that probably removing the astd thread and delivering the "cast"
>> and "bast" notifications directly would improve performance due to the
>> elimination of a scheduling delay. I wrote a small test module which
>> creates a dlm lock space, and does 100,000 NL -> EX -> NL lock conversions.
>>
>> Having run this test 10 times each on a 2.6.33-rc8 kernel and then the modified
>> kernel including this patch, I got the following results:
>>
>> Original: Avg time 24.62 us per conversion (NL -> EX -> NL)
>> Modified: Avg time 9.93 us per conversion
>>
>> Which is a fairly dramatic speed up. Please consider applying this patch.
>> I've tested it in both clustered and single node GFS2 configurations. The test
>> figures are from a single node configuration which was a deliberate choice
>> in order to avoid any effects of network latency.
>
> Wow, there's no chance I'm going to even consider something like this.
> This would be a huge change in how the dlm has always operated, and would
> surely introduce very serious and hard to identify bugs (and ones that may
> not appear for a long time afterward). Given that there's *no problem* with
> the current method that has worked well for years, any change would be
> completely crazy.
I think total dismissal of the idea is unreasonable. At least give it a
chance, and test it for a while. Your argument is valid for not
including it in a stable release as yet, but not for ignoring it
completely.
Here we have a chance to remove some code, a kernel thread AND improve
performance hugely. Just saying "it's always worked like that and we're
not changing it" is the crazy argument IMHO.
Chrissie
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists