lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date: Wed, 07 Feb 2024 10:37:00 +0100
From: Björn Töpel <bjorn@...nel.org>
To: Anup Patel <apatel@...tanamicro.com>
Cc: Anup Patel <anup@...infault.org>, Palmer Dabbelt <palmer@...belt.com>,
 Paul Walmsley <paul.walmsley@...ive.com>, Thomas Gleixner
 <tglx@...utronix.de>, Rob Herring <robh+dt@...nel.org>, Krzysztof
 Kozlowski <krzysztof.kozlowski+dt@...aro.org>, Frank Rowand
 <frowand.list@...il.com>, Conor Dooley <conor+dt@...nel.org>,
 devicetree@...r.kernel.org, Saravana Kannan <saravanak@...gle.com>, Marc
 Zyngier <maz@...nel.org>, linux-kernel@...r.kernel.org, Atish Patra
 <atishp@...shpatra.org>, linux-riscv@...ts.infradead.org,
 linux-arm-kernel@...ts.infradead.org, Andrew Jones
 <ajones@...tanamicro.com>
Subject: Re: [PATCH v12 00/25] Linux RISC-V AIA Support

Anup Patel <apatel@...tanamicro.com> writes:

>> Nope. Same mechanics on x86 -- the cleanup has to be done one the
>> originating core. What I asked was "what about using a timer instead of
>> an IPI". I think this was up in the last rev as well?
>>
>> Check out commit bdc1dad299bb ("x86/vector: Replace
>> IRQ_MOVE_CLEANUP_VECTOR with a timer callback") Specifically, the
>> comment about lost interrupts, and the rational for keeping the original
>> target active until there's a new interrupt on the new cpu.
>
> Trying timer interrupt is still TBD on my side because with v12
> my goal was to implement per-device MSI domains. Let me
> explore timer interrupts for v13.

OK!

>> >> I wonder if this clean up is less intrusive, and you just need to
>> >> perform what's in the per-list instead of dealing with the
>> >> ids_enabled_bitmap? Maybe we can even remove that bitmap as well. The
>> >> chip_data/desc has that information. This would mean that
>> >> imsic_local_priv() would only have the local vectors (chip_data), and
>> >> a cleanup list/timer.
>> >>
>> >> My general comment is that instead of having these global id-tracking
>> >> structures, use the matrix together with some desc/chip_data local
>> >> data, which should be sufficient.
>> >
>> > The "ids_enabled_bitmap", "dummy hwirqs" and private imsic_vectors
>> > are required since the matrix allocator only manages allocation of
>> > per-CPU IDs.
>>
>> The information in ids_enabled_bitmap is/could be inherent in
>> imsic_local_priv.vectors (guess what x86 does... ;-)).
>>
>> Dummy hwirqs could be replaced with the virq.
>>
>> Hmm, seems like we're talking past each other, or at least I get the
>> feeling I can't get my opinions out right. I'll try to do a quick PoC,
>> to show you what I mean. That's probably easier than just talking about
>> it. ...and maybe I'll come realizing I'm all wrong!
>
> I suggest to wait for my v13 and try something on top of that
> otherwise we might duplicate efforts.

OK!

>> > I did not see any PCIe or platform device requiring this kind of
>> > reservation. Any examples ?
>>
>> It's not a requirement. Some devices allocate a gazillion interrupts
>> (NICs with many QoS queues, e.g.), but only activate a subset (via
>> request_irq()). A system using these kind of devices might run out of
>> interrupts.
>
> I don't see how this is not possible currently.

Again, this is something we can improve on later. But, this
implementation activates the interrupt at allocation time, no?

>> Problems you run into once you leave the embedded world, pretty much.
>>
>> >> * Handle managed interrupts
>> >
>> > Any examples of managed interrupts in the RISC-V world ?
>>
>> E.g. all nvme drives: nvme_setup_irqs(), and I'd assume contemporary
>> netdev drivers would use it. Typically devices with per-cpu queues.
>
> We have tested with NVMe devices, e1000e, VirtIO-net, etc and I did
> not see any issue.
>
> We can always add new features as separate incremental series as long
> as there is clear use-cause backed by real-world devices.

Agreed. Let's not feature creep.


Björn

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ