Message-ID: <6099af78-0fd8-77de-fe50-be40b239f06e@arm.com>
Date:   Fri, 3 Jul 2020 15:42:31 +0100
From:   Robin Murphy <robin.murphy@....com>
To:     Will Deacon <will@...nel.org>
Cc:     mark.rutland@....com, tuanphan@...amperecomputing.com,
        john.garry@...wei.com, linux-kernel@...r.kernel.org,
        shameerali.kolothum.thodi@...wei.com, harb@...erecomputing.com,
        linux-arm-kernel@...ts.infradead.org
Subject: Re: [RFC PATCH] perf/smmuv3: Fix shared interrupt handling

On 2020-07-03 14:42, Will Deacon wrote:
> On Wed, Jun 24, 2020 at 02:08:30PM +0100, Robin Murphy wrote:
>> On 2020-06-24 13:50, Will Deacon wrote:
>>> On Wed, Jun 24, 2020 at 12:48:14PM +0100, Robin Murphy wrote:
>>>> On 2020-04-08 17:49, Robin Murphy wrote:
>>>>> IRQF_SHARED is dangerous, since it allows other agents to retarget the
>>>>> IRQ's affinity without migrating PMU contexts to match, breaking the way
>>>>> in which perf manages mutual exclusion for accessing events. Although
>>>>> this means it's not realistically possible to support PMU IRQs being
>>>>> shared with other drivers, we *can* handle sharing between multiple PMU
>>>>> instances with some explicit affinity bookkeeping and manual interrupt
>>>>> multiplexing.
>>>>>
>>>>> RCU helps us handle interrupts efficiently without having to worry about
>>>>> fine-grained locking for relatively-theoretical race conditions with the
>>>>> probe/remove/CPU hotplug slow paths. The resulting machinery ends up
>>>>> looking largely generic, so it should be feasible to factor out with a
>>>>> "system PMU" base class for similar multi-instance drivers.
>>>>>
>>>>> Signed-off-by: Robin Murphy <robin.murphy@....com>
>>>>> ---
>>>>>
>>>>> RFC because I don't have the means to test it, and if the general
>>>>> approach passes muster then I'd want to tackle the aforementioned
>>>>> factoring-out before merging anything anyway.
>>>>
>>>> Any comments on whether it's worth pursuing this?
>>>
>>> Sorry, I don't really get the problem that it's solving. Is there a crash
>>> log somewhere I can look at? If all the users of the IRQ are managed by
>>> this driver, why is IRQF_SHARED dangerous?
>>
>> Because as-is, multiple PMU instances may make different choices about which
>> CPU they associate with, change the shared IRQ affinity behind each others'
>> backs, and break the "IRQ handler runs on event->cpu" assumption that perf
>> core relies on for correctness. I'm not sure how likely it would be to
>> actually crash rather than just lead to subtle nastiness, but either way
>> it's not good, and since people seem to be tempted to wire up system PMU
>> instances this way we could do with a general approach for dealing with it.
> 
> Ok, thanks for the explanation. If we're just talking about multiple
> instances of the same driver, why is it not sufficient to have a static
> atomic_t initialised to -1 which tracks the current affinity and then just
> CAS that during probe()? Hotplug notifiers can just check whether or not
> it points to an online CPU.

Yeah, forcing *all* PMUs owned by a driver to be affine to the same CPU 
is another way to go about it; however, it slightly penalises systems 
that are wired up sensibly and *would* otherwise be able to distribute 
non-shared affinities around in a balanced manner (optimising the 
initial pmu->cpu selection in the face of NUMA is an exercise still on 
the table in some cases).
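
For the record, here's roughly what I take that suggestion to be; a 
sketch only, untested, and the identifiers are invented:

    /* All instances race to elect a single shared CPU at probe time */
    static atomic_t smmu_pmu_cpu = ATOMIC_INIT(-1);

    static int smmu_pmu_get_shared_cpu(void)
    {
            int cpu = raw_smp_processor_id();
            int old = atomic_cmpxchg(&smmu_pmu_cpu, -1, cpu);

            /* cmpxchg returns the old value; -1 means we won the race */
            return old == -1 ? cpu : old;
    }

Probe-time election is the easy part, though.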

And we'd still need all the "has another instance already requested 
this IRQ yet?" logic (the general condition is "1 <= number of IRQs 
<= number of PMUs"), plus some way to migrate all the PMU contexts 
and IRQs at once when the global affinity changes, in a controlled 
and race-free manner, so things wouldn't be *massively* simpler even 
then.
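
For comparison, the manual multiplexing in the RFC is roughly this 
shape (heavily simplified, and the helper names here are illustrative 
rather than what the patch actually calls them):

    /*
     * One handler fans out to every PMU instance registered against
     * the same IRQ; RCU protects the instance list against the
     * probe/remove/hotplug slow paths.
     */
    static irqreturn_t smmu_pmu_shared_handler(int irq, void *data)
    {
            struct smmu_pmu_irq *pmu_irq = data;
            struct smmu_pmu *pmu;
            irqreturn_t ret = IRQ_NONE;

            rcu_read_lock();
            list_for_each_entry_rcu(pmu, &pmu_irq->pmus, irq_node)
                    ret |= smmu_pmu_handle_irq(pmu);
            rcu_read_unlock();

            return ret;
    }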

Robin.
