Message-ID: <878ryiju8l.mognet@arm.com>
Date: Sun, 24 Oct 2021 16:52:10 +0100
From: Valentin Schneider <valentin.schneider@....com>
To: Marc Zyngier <maz@...nel.org>
Cc: linux-kernel@...r.kernel.org, linux-rt-users@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org,
Will Deacon <will@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Thomas Gleixner <tglx@...utronix.de>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Ard Biesheuvel <ardb@...nel.org>
Subject: Re: [PATCH 3/3] irqchip/gic-v3-its: Limit memreserve cpuhp state lifetime
On 23/10/21 11:37, Marc Zyngier wrote:
> On Fri, 22 Oct 2021 11:33:07 +0100,
> Valentin Schneider <valentin.schneider@....com> wrote:
>> @@ -5234,6 +5243,11 @@ static int its_cpu_memreserve_lpi(unsigned int cpu)
>> paddr = page_to_phys(pend_page);
>> WARN_ON(gic_reserve_range(paddr, LPI_PENDBASE_SZ));
>>
>> +out:
>> + /* This only needs to run once per CPU */
>> + if (cpumask_equal(&cpus_booted_once_mask, cpu_possible_mask))
>> + schedule_work(&rdist_memreserve_cpuhp_cleanup_work);
>
> Which makes me wonder. Do we actually need any flag at all if all we
> need to check is whether the CPU has been through the callback at
> least once? I have the strong feeling that we are tracking the same
> state multiple times here.
>
Agreed, cf. my reply on 2/3.
> Also, could the cpuhp callbacks ever run concurrently? If they could,
> two CPUs could schedule the cleanup work in parallel, with interesting
> results. You'd need a cmpxchg on the cpuhp state in the workfn.
>
So I think the cpuhp callbacks may indeed run concurrently, but at a quick
glance it seems we can't get two instances of the same work executing
concurrently: schedule_work()->queue_work() doesn't re-queue a work item
that is already pending, and __queue_work() checks the work's previous pool
in case it might still be running there.
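
To illustrate (generic workqueue API, nothing ITS-specific here, names are
made up): repeated schedule_work() calls on a still-pending work item are
no-ops, the later callers just get false back:

  #include <linux/workqueue.h>

  static void cleanup_workfn(struct work_struct *work)
  {
  	/* Runs once per successful queueing, in process context */
  }

  static DECLARE_WORK(cleanup_work, cleanup_workfn);

  static void racy_callers(void)
  {
  	/*
  	 * If both calls happen before the workfn has started, only the
  	 * first one actually queues the work (and returns true); the
  	 * second sees the PENDING bit set and returns false, so we
  	 * don't end up with two instances of the workfn in flight.
  	 */
  	bool first  = schedule_work(&cleanup_work);
  	bool second = schedule_work(&cleanup_work);

  	(void)first;
  	(void)second;
  }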
Regardless, that's one less thing to worry about if we make the cpuhp
callback body run at most once on each CPU (only a single CPU will be able
to queue the removal work).
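
FWIW, here's a rough sketch of what I have in mind (purely illustrative,
the flag/variable names below aren't from the actual patch): the callback
bails out early on CPUs that have already run it, and the workfn does an
xchg on the saved cpuhp state so only one invocation can remove it, along
the lines of your cmpxchg suggestion.

  #include <linux/atomic.h>
  #include <linux/cpuhotplug.h>
  #include <linux/cpumask.h>
  #include <linux/percpu.h>
  #include <linux/workqueue.h>

  /* Illustrative names, not the ones from the patch */
  static int memreserve_cpuhp_state = CPUHP_INVALID;
  static DEFINE_PER_CPU(bool, memreserve_done);

  static void rdist_memreserve_cpuhp_cleanup_workfn(struct work_struct *work)
  {
  	int state = xchg(&memreserve_cpuhp_state, CPUHP_INVALID);

  	/* Only the first invocation sees a valid state and removes it */
  	if (state != CPUHP_INVALID)
  		cpuhp_remove_state_nocalls(state);
  }

  static DECLARE_WORK(rdist_memreserve_cpuhp_cleanup_work,
  		      rdist_memreserve_cpuhp_cleanup_workfn);

  static int its_cpu_memreserve_lpi(unsigned int cpu)
  {
  	/* The body runs at most once per CPU */
  	if (this_cpu_read(memreserve_done))
  		return 0;
  	this_cpu_write(memreserve_done, true);

  	/* ... reserve this CPU's pending table ... */

  	/* Only the last CPU to be brought up can queue the removal */
  	if (cpumask_equal(&cpus_booted_once_mask, cpu_possible_mask))
  		schedule_work(&rdist_memreserve_cpuhp_cleanup_work);

  	return 0;
  }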