Message-ID: <20100217202920.GB29576@redhat.com>
Date:	Wed, 17 Feb 2010 15:29:20 -0500
From:	David Teigland <teigland@...hat.com>
To:	Steven Whitehouse <swhiteho@...hat.com>
Cc:	cluster-devel@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: dlm: Remove/bypass astd

On Wed, Feb 17, 2010 at 01:23:39PM +0000, Steven Whitehouse wrote:
> 
> While investigating Red Hat bug #537010, I started looking at the dlm's astd
> thread. The way in which the "cast" and "bast" requests are queued looked
> as if it might cause reordering, since the "bast" requests are always
> delivered after any pending "cast" requests, which is not always the
> correct ordering. This patch doesn't fix that bug, but it will prevent any
> races in that bit of code, and the performance benefits are also well
> worth having.
> 
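As a toy illustration of the hazard described above (plain C, not the
dlm's actual code; all names are invented for the example): if completion
notifications ("casts") and blocking notifications ("basts") sit on
separate queues and the delivery loop always drains the cast queue first,
a "bast" that arrived earlier is handed out after a later "cast".

#include <stdio.h>

#define MAX_EVENTS 8

struct queue {
        const char *ev[MAX_EVENTS];
        int head, tail;
};

static void enqueue(struct queue *q, const char *e)
{
        q->ev[q->tail++] = e;
}

static void drain(struct queue *q)
{
        while (q->head < q->tail)
                printf("deliver %s\n", q->ev[q->head++]);
}

int main(void)
{
        struct queue casts = { 0 }, basts = { 0 };

        /* Arrival order: a bast for lock A, then a cast for lock B. */
        enqueue(&basts, "bast(A)");
        enqueue(&casts, "cast(B)");

        /* Delivery policy: casts before basts, so arrival order is inverted. */
        drain(&casts);          /* prints "deliver cast(B)" */
        drain(&basts);          /* prints "deliver bast(A)" */
        return 0;
}
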
> I noticed that astd seems to be unnecessary. The code that queues
> notifications for astd already runs in process context, so the
> notifications could be delivered directly. That should improve SMP
> performance, since the notifications would no longer be funneled through
> a single thread.
> 
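To make the single-thread funnel concrete, here is a rough userspace
sketch (pthreads; not kernel code, and everything in it is invented for
the example) that compares handing each notification to one delivery
thread with calling the callback directly in the requester's context:

/* Build with: cc -O2 -pthread notify_demo.c */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define N 100000

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int pending;             /* notifications queued for the thread */
static int delivered;           /* notifications the thread has run    */
static int done;
static volatile long calls;     /* keeps the empty callback from vanishing */

static void notify(void)        /* stands in for a cast/bast callback */
{
        calls++;
}

/* The single delivery thread: drains the queue, then sleeps until woken. */
static void *delivery_thread(void *arg)
{
        pthread_mutex_lock(&lock);
        while (!done || delivered < pending) {
                while (delivered < pending) {
                        notify();
                        delivered++;
                }
                pthread_cond_broadcast(&cond);
                if (!done)
                        pthread_cond_wait(&cond, &lock);
        }
        pthread_mutex_unlock(&lock);
        return NULL;
}

static double elapsed_us(struct timespec a, struct timespec b)
{
        return (b.tv_sec - a.tv_sec) * 1e6 + (b.tv_nsec - a.tv_nsec) / 1e3;
}

int main(void)
{
        struct timespec t0, t1;
        pthread_t tid;
        int i;

        /* Path 1: queue each notification and wait for the thread to run it. */
        pthread_create(&tid, NULL, delivery_thread, NULL);
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < N; i++) {
                pthread_mutex_lock(&lock);
                pending++;
                pthread_cond_broadcast(&cond);
                while (delivered < pending)
                        pthread_cond_wait(&cond, &lock);
                pthread_mutex_unlock(&lock);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("via delivery thread: %.3f us per notification\n",
               elapsed_us(t0, t1) / N);

        pthread_mutex_lock(&lock);
        done = 1;
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&lock);
        pthread_join(tid, NULL);

        /* Path 2: call the callback directly in the requester's context. */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < N; i++)
                notify();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("direct delivery:     %.3f us per notification\n",
               elapsed_us(t0, t1) / N);
        return 0;
}

The queued path pays for a wakeup and a context switch per notification;
the direct path is just a function call in the caller's own context.
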
> Also, the only other function of astd seemed to be stopping delivery of
> these notifications during recovery. However, since the notifications
> intercepted at recovery time are neither modified nor filtered in any
> way, the only effect is to delay them for no obvious reason.
> 
> I expected that removing the astd thread and delivering the "cast" and
> "bast" notifications directly would improve performance by eliminating a
> scheduling delay. I wrote a small test module which creates a dlm
> lockspace and does 100,000 NL -> EX -> NL lock conversions.
> 
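A minimal sketch of the kind of test module described above (hypothetical
code, not the poster's actual module; it assumes the 2.6.33-era in-kernel
DLM API from <linux/dlm.h>, including the five-argument
dlm_new_lockspace(), and a node that is already a configured cluster
member so that lock requests are accepted):

#include <linux/module.h>
#include <linux/completion.h>
#include <linux/ktime.h>
#include <linux/hrtimer.h>
#include <linux/math64.h>
#include <linux/string.h>
#include <linux/dlm.h>

#define CONVERSIONS 100000

static dlm_lockspace_t *ls;
static struct dlm_lksb lksb;
static DECLARE_COMPLETION(ast_done);

/* Completion AST ("cast"): runs when a request, conversion or unlock has
 * completed.  Each wait_for_completion() below pairs with one complete(). */
static void test_ast(void *arg)
{
        complete(&ast_done);
}

/* Issue one lock request or conversion and wait for its completion AST.
 * No blocking AST is registered: nothing else contends for this resource. */
static int lock_and_wait(int mode, uint32_t flags)
{
        int ret;

        ret = dlm_lock(ls, mode, &lksb, flags, "ast_test_res",
                       strlen("ast_test_res"), 0, test_ast, NULL, NULL);
        if (ret)
                return ret;
        wait_for_completion(&ast_done);
        return lksb.sb_status;
}

static int __init ast_timing_init(void)
{
        ktime_t start, end;
        s64 ns;
        int i, ret;

        ret = dlm_new_lockspace("ast_timing", strlen("ast_timing"),
                                &ls, 0, 0);
        if (ret)
                return ret;

        /* Take the lock in NL; the loop below cycles it NL -> EX -> NL. */
        ret = lock_and_wait(DLM_LOCK_NL, 0);
        if (ret)
                goto out;

        start = ktime_get();
        for (i = 0; i < CONVERSIONS && !ret; i++) {
                ret = lock_and_wait(DLM_LOCK_EX, DLM_LKF_CONVERT);
                if (!ret)
                        ret = lock_and_wait(DLM_LOCK_NL, DLM_LKF_CONVERT);
        }
        end = ktime_get();

        ns = ktime_to_ns(ktime_sub(end, start));
        pr_info("ast_timing: %lld ns total, %lld ns per NL->EX->NL cycle\n",
                ns, div_s64(ns, CONVERSIONS));

        /* The unlock's completion AST is also delivered through test_ast(). */
        if (!dlm_unlock(ls, lksb.sb_lkid, 0, &lksb, NULL))
                wait_for_completion(&ast_done);
out:
        dlm_release_lockspace(ls, 1);
        return ret ? ret : -ENODEV;     /* never stay loaded; the work is done */
}

module_init(ast_timing_init);
MODULE_LICENSE("GPL");

Averaging the printed per-cycle figure over several runs gives numbers
directly comparable to the ones quoted below.
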
> Having run this test 10 times each on an unmodified 2.6.33-rc8 kernel and
> on the same kernel with this patch applied, I got the following results:
> 
> Original: Avg time 24.62 us per conversion (NL -> EX -> NL)
> Modified: Avg time 9.93 us per conversion
> 
> That is a fairly dramatic speed-up, so please consider applying this
> patch. I've tested it in both clustered and single-node GFS2
> configurations. The test figures are from a single-node configuration,
> a deliberate choice to avoid any effects of network latency.

Wow, there's no chance I'm even going to consider something like this.
This would be a huge change in how the dlm has always operated, and would
surely introduce very serious and hard-to-identify bugs (ones that may
not appear for a long time afterward).  Given that there's *no problem*
with the current method, which has worked well for years, any change would
be completely crazy.

Dave
