Message-ID: <87y2tiy1p3.fsf@nanos.tec.linutronix.de>
Date: Tue, 04 Feb 2020 12:30:32 +0000
From: Thomas Gleixner <tglx@...utronix.de>
To: Matija Glavinic Pecotic <matija.glavinic-pecotic.ext@...ia.com>,
linux-kernel@...r.kernel.org
Cc: "Sverdlin\, Alexander \(Nokia - DE\/Ulm\)"
<alexander.sverdlin@...ia.com>
Subject: Re: [PATCH RESEND] cpu/hotplug: Wait for cpu_hotplug to be enabled in cpu_up/down

Matija,

Matija Glavinic Pecotic <matija.glavinic-pecotic.ext@...ia.com> writes:

> On 02/03/2020 07:08 PM, Thomas Gleixner wrote:
> EBUSY existing and being commonly used doesn't justify it in every
> situation. We do not have this problem only in userspace, but in the
> kernel as well; no user of cpu_up/down takes into account possible
> temporary unavailability.

There is no point in caring about this, simply because if you look at
the 5 callsites on x86:

- Three are debug stuff which handle the error return and do not care
  whether it's EBUSY or not.
- One is the boot time CPU onlining, which ignores any error code on
  purpose because there are enough reasons why this can fail and we
  want to at least get up to init.
- The last one is the sysfs interface, which you are tripping over.
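
Just to illustrate what tripping over it looks like, roughly (untested
sketch; cpu1 and the paths are only examples):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Example only: try to offline CPU1 via the sysfs interface. */
	int fd = open("/sys/devices/system/cpu/cpu1/online", O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/*
	 * While hotplug is temporarily disabled the write fails and
	 * the caller sees errno == EBUSY.
	 */
	if (write(fd, "0", 1) < 0)
		fprintf(stderr, "offline failed: %s\n", strerror(errno));

	close(fd);
	return 0;
}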

> Going to the extreme, we could start returning EBUSY whenever we have
> a resource/facility taken, which would make every interface a
> candidate for returning it. As I see it, EBUSY has its place in
> nonblocking APIs. Others should try (hard) not to return it. Handling
> it is a further topic of its own. How large should the timeout be
> before giving up? Let's say that we know that for the cpu it is the
> 10 seconds which I proposed. Passing the responsibility of selecting
> the timeout to the users will spread that policy out to each
> subsystem, yielding situations where it works for some and not for
> others, depending on the timeout chosen.

We don't know what the proper timeout is for everyone. You picked one
and it fits your expectation, but it's bound to break other people's
expectations.

That's the exact reason why this kind of heuristic is bad and horrible
and should be avoided in general.
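
If a particular caller really needs to wait, it can put its own policy
on top of EBUSY in user space easily enough, along these lines
(untested sketch; the path, the poll interval and the 10 second
timeout are just examples):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

/* Retry the sysfs write with a caller-chosen timeout instead of
 * having the kernel impose one. */
static int cpu_online_write(const char *path, const char *val, int timeout_sec)
{
	struct timespec delay = { .tv_sec = 0, .tv_nsec = 100 * 1000 * 1000 };
	int i;

	for (i = 0; i < timeout_sec * 10; i++) {
		int fd = open(path, O_WRONLY);
		ssize_t ret;

		if (fd < 0)
			return -1;

		ret = write(fd, val, strlen(val));
		close(fd);

		if (ret >= 0)
			return 0;
		if (errno != EBUSY)
			return -1;

		/* Hotplug temporarily disabled: back off and retry. */
		nanosleep(&delay, NULL);
	}

	errno = EBUSY;
	return -1;
}

int main(void)
{
	if (cpu_online_write("/sys/devices/system/cpu/cpu1/online", "0", 10))
		perror("cpu1 offline");
	return 0;
}
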
>> I have no idea why you need to offline/online CPUs to partition a
>> system. There are surely more sensible ways to do that, but that's not
>> part of this discussion.
>
> I'd be happy to make it part.
>
> We are using partrt from
> https://github.com/OpenEneaLinux/rt-tools/tree/master/partrt;
> cpu_up/down is part of it. AFAIK it is there to force timer migration
> and doesn't have any other (known to me) usage. In the meantime, since
> we started with core isolation, we changed how we treat isolated
> cores. We are now starting with isolcpus=cpu-list nohz_full=cpu-list
> rcu_nocbs=cpu-list, and we are atm at Linux 4.19. Earlier we had a
> different setup where we wanted to use the cores during startup and
> partition later, however that turned out to be problematic and not in
> line with how things are going in this area.
>
> Do you think we do not need to toggle them under these conditions?

If you have those isolation thingies on the kernel command line, there
is no point in doing the cpu up/down dance. It's not providing you
anything useful except steering interrupts away, which you can do on
the kernel command line as well with 'irqaffinity=...'.
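
For runtime changes the same thing is available through the
smp_affinity files under /proc/irq/, roughly like this (untested
sketch; the cpumask value is just an example):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/*
	 * Writing a cpumask to /proc/irq/default_smp_affinity changes
	 * the default affinity of interrupts requested afterwards;
	 * per-interrupt masks live in /proc/irq/<nr>/smp_affinity.
	 * The mask "3" (CPUs 0-1) is purely an example.
	 */
	int fd = open("/proc/irq/default_smp_affinity", O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, "3", 1) < 0)
		perror("write");
	close(fd);
	return 0;
}
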
Thanks,
tglx