Message-ID: <9120047c-be2d-4b31-aff9-b5bdbbc5d37d@oracle.com>
Date: Tue, 28 Jan 2025 14:36:03 +1100
From: imran.f.khan@...cle.com
To: Thomas Gleixner <tglx@...utronix.de>, anna-maria@...utronix.de,
        frederic@...nel.org
Cc: linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] timers: introduce timer_try_add_on_cpu.

Hello Thomas,

Below, I have tried to explain further the reasoning
behind using cpus_read_trylock in timer_try_add_on_cpu.

Say CPU X is being offlined and CPU Y is checking its
online status before issuing add_timer_on(X). CPU Y is
not the bootstrap processor (BP) here; it is executing
something else that does:

    if (cpu_online(X))
        add_timer_on(timer, X);
If, at the time of checking cpu_online(X), the hotplug thread
of CPU X (i.e. cpuhp/X) has not yet done __cpu_disable, CPU Y
will see CPU X as online and issue add_timer_on (as in the above
snippet).
In this case, whether the timer ends up on an offlined CPU
depends on who gets the per-CPU timer_base.lock first.
If the bootstrap processor, offlining CPU X, gets this lock
first (in timers_dead_cpu), it will migrate all the timers from
CPU X and then release timer_base.lock. CPU Y (add_timer_on)
will then get this lock and add the timer to CPU X's timer_base,
but since CPU X's timers have already been migrated, this newly
added timer will be left on an offlined CPU.
On the other hand, if CPU Y (add_timer_on) wins the race, it will
have already added the timer to CPU X's timer_base before the
BP (timers_dead_cpu) gets timer_base.lock and migrates all
timers (including the one just added) to the bootstrap processor,
so the timer will not be left on an offlined CPU.

Could you please let me know if you see any problems/mistakes
in the above reasoning?

From your previous reply I could not understand whether you are
entirely against using cpus_read_trylock (because it may not
be needed here and I am wrongly seeing a need for it), or whether
you are only against using cpus_read_trylock inside
timer_try_add_on_cpu (i.e. the caller of timer_try_add_on_cpu
should take this lock).
So I have tried to explain my reasoning further and would like to
know your thoughts.

Thanks,
Imran
On 16/1/2025 4:00 am, imran.f.khan@...cle.com wrote:
> Hello Thomas,
> Thanks for taking a look and your feedback.
> On 16/1/2025 3:04 am, Thomas Gleixner wrote:
>> On Thu, Jan 16 2025 at 00:41, Imran Khan wrote:
>>> + * Return:
>>> + * * %true  - If timer was started on an online cpu
>>> + * * %false - If the specified cpu was offline or if its online status
>>> + *	      could not be ensured due to unavailability of hotplug lock.
>>> + */
>>> +bool timer_try_add_on_cpu(struct timer_list *timer, int cpu)
>>> +{
>>> +	bool ret = true;
>>> +
>>> +	if (unlikely(!cpu_online(cpu)))
>>> +		ret = false;
>>> +	else if (cpus_read_trylock()) {
>>> +		if (likely(cpu_online(cpu)))
>>> +			add_timer_on(timer, cpu);
>>> +		else
>>> +			ret = false;
>>> +		cpus_read_unlock();
>>> +	} else
>>> +		ret = false;
>>> +
>>> +	return ret;
>>
>> Aside of the horrible coding style, that cpus_read_trylock() part does
>> not make any sense.
>>
>> It's perfectly valid to queue a timer on a online CPU when the CPU
>> hotplug lock is held write, which can have tons of reasons even
>> unrelated to an actual CPU hotplug operation.
>>
>> Even during a hotplug operation adding a timer on a particular CPU is
>> valid, whether that's the CPU which is actually plugged or not is
>> irrelevant.
>>
>> So if we add such a function, then it needs to have very precisely
>> defined semantics, which have to be independent of the CPU hotplug lock.
>>
> 
> The hotplug lock is being used to avoid the scenario where cpu_online
> reports @cpu as online but @cpu goes offline before add_timer_on can
> actually add the timer to @cpu's timer base.
> Are you saying that this can't happen, or do you mean by "defined
> semantics" that a @cpu indicated as online by cpu_online should not go
> offline in the middle of this function?
> 
>> The only way I can imagine is that the state is part of the per CPU
>> timer base, but then I have to ask the question what is actually tried
>> to solve here.
>>
>> As far as I understood that there is an issue in the RDS code, queueing
>> a delayed work on a offline CPU, but that should have triggered at least
>> the warning in __queue_delayed_work(), right?
>>
> 
> I guess you are referring to the warning of [1]. This was added only a
> few days back, but the timer of a delayed_work can still end up on an
> offlined cpu.
> 
>> So the question is whether this try() interface is solving any of this
>> and not papering over the CPU hotplug related issues in the RDS code in
>> some way.
>>
> 
> The RDS code that I referred to in my query is an in-house change, and
> there may be some scope for updating the cached-cpu information there
> via cpu hotplug callbacks. But we also wanted to see whether something
> could be done on the timer side to address the possibility of a timer
> ending up on an offlined cpu. That's why I asked earlier if you see any
> merit in having a try() interface.
> 
> As of now I don't have any more cases running into this problem
> (putting timer-wheel timers on offlined cpus). Maybe with the warning
> in __queue_delayed_work and (if it gets added) in add_timer_on we will
> see more such cases.
> 
> But if you agree, a try() interface could still be added, albeit
> without the hotplug lock.
> 
> Thanks,
> Imran
> 
> [1]: https://github.com/torvalds/linux/blob/master/kernel/workqueue.c#L2511
>> Thanks,
>>
>>         tglx
>>
>>
> 

