Message-ID: <alpine.DEB.2.20.1709051534010.1900@nanos>
Date:   Tue, 5 Sep 2017 15:36:11 +0200 (CEST)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Peter Zijlstra <peterz@...radead.org>
cc:     mingo@...nel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/2] smp/hotplug annotations

On Tue, 5 Sep 2017, Thomas Gleixner wrote:
> On Tue, 5 Sep 2017, Peter Zijlstra wrote:
> 
> > These two patches appear to make hotplug work again without tripping lockdep.
> 
> They cover the case where the plug/unplug succeeds, but they will not work
> when a plug/unplug operation fails. After a failure the machinery rolls back
> automatically, so if UP fails, the CPU goes down again, but the initiator
> side still waits on the 'UP' completion. The same issue exists on down.
> 
> I think the extra lockdep magic can be avoided completely by splitting the
> completions into an 'up' and a 'down' completion, but that only solves part
> of the problem. The current failure handling does an automated rollback, so
> if UP fails somewhere, the AP rolls back, which means it invokes the down
> callbacks. DOWN works the other way round.
> 
> We can solve that by changing how rollback is handled so that it does not
> roll back automatically:
> 
>     if (callback() < 0) {
>         store_state();
>         complete(UP);
>         wait_for_being_kicked_again();
>     }
> 
> and on the control side have
> 
>     wait_for_completion(UP);
> 
>     if (UP->failed) {
>         kick(DOWN);
>         wait_for_completion(DOWN);
>     }
> 
> It's not entirely trivial, but I haven't seen a real problem with it yet.
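
As an aside, the split-completion / explicit-rollback protocol quoted above
can be modelled in user space. The following is a minimal sketch only, using
pthread condition variables in place of kernel completions; the names (up,
down, kick, up_failed) and the structure are hypothetical illustrations, not
the actual kernel/cpu.c code:

    /* Minimal user-space model of the proposed protocol (hypothetical names). */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct completion {
        pthread_mutex_t lock;
        pthread_cond_t cond;
        bool done;
    };

    static void complete(struct completion *c)
    {
        pthread_mutex_lock(&c->lock);
        c->done = true;
        pthread_cond_signal(&c->cond);
        pthread_mutex_unlock(&c->lock);
    }

    static void wait_for_completion(struct completion *c)
    {
        pthread_mutex_lock(&c->lock);
        while (!c->done)
            pthread_cond_wait(&c->cond, &c->lock);
        c->done = false;
        pthread_mutex_unlock(&c->lock);
    }

    #define COMPLETION_INIT { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, false }

    static struct completion up = COMPLETION_INIT;   /* AP -> control: bringup done  */
    static struct completion down = COMPLETION_INIT; /* AP -> control: rollback done */
    static struct completion kick = COMPLETION_INIT; /* control -> AP: start rollback */
    static bool up_failed;

    static int callback(void)
    {
        return -1;    /* force the failure path for the demo */
    }

    /* AP side: no automatic rollback; report failure and wait to be kicked. */
    static void *ap_thread(void *arg)
    {
        (void)arg;
        if (callback() < 0) {
            up_failed = true;             /* store_state() */
            complete(&up);                /* unblock the initiator */
            wait_for_completion(&kick);
            /* ... invoke the down/teardown callbacks here ... */
            complete(&down);
            return NULL;
        }
        complete(&up);                    /* successful bringup */
        return NULL;
    }

    int main(void)
    {
        pthread_t ap;

        pthread_create(&ap, NULL, ap_thread, NULL);

        /* Control side: every wait pairs with exactly one completion. */
        wait_for_completion(&up);
        if (up_failed) {
            complete(&kick);              /* kick(DOWN) */
            wait_for_completion(&down);
            printf("bringup failed, explicit rollback completed\n");
        }
        pthread_join(ap, NULL);
        return 0;
    }

The point of the split is that each waiter pairs with a completion that is
only ever completed in one well-defined phase: 'up' only after the bringup
attempt, 'down' only after an explicitly requested rollback, so no side ever
waits on a completion that fires for the wrong reason.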

Now I found one: the multi-instance rollback. This is a nested rollback
mechanism deep in the call chain. Separating that one out is going to be a
major pain.
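
For reference, a rough sketch of why the multi-instance case nests: a single
state's bringup callback iterates over per-instance entries, and a failure
part way through undoes the already-installed instances right there in the
call chain. The names and structure below are illustrative only, not the
kernel/cpu.c implementation:

    #include <stdio.h>

    /* Hypothetical per-instance callbacks for one hotplug state. */
    struct instance { int id; };

    static int inst_up(struct instance *in)
    {
        return in->id == 2 ? -1 : 0;    /* fail on the third instance */
    }

    static void inst_down(struct instance *in)
    {
        printf("rolling back instance %d\n", in->id);
    }

    /*
     * The state's bringup callback walks all instances; a failure part
     * way through must undo the instances already brought up, nested
     * inside this one callback invocation.
     */
    static int state_up(struct instance *inst, int nr)
    {
        int i, ret;

        for (i = 0; i < nr; i++) {
            ret = inst_up(&inst[i]);
            if (ret) {
                while (--i >= 0)
                    inst_down(&inst[i]);
                return ret;
            }
        }
        return 0;
    }

    int main(void)
    {
        struct instance inst[] = { {0}, {1}, {2} };

        return state_up(inst, 3) ? 1 : 0;
    }

That inline teardown loop is what does not fit the park-and-wait protocol
above: the rollback happens deep inside one callback, not at the top level
where the AP could complete(UP) and wait to be kicked.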

Thanks,

	tglx
