Message-ID: <20240904204531.154290-1-edmund.raile@protonmail.com>
Date: Wed, 04 Sep 2024 20:45:51 +0000
From: Edmund Raile <edmund.raile@...tonmail.com>
To: o-takashi@...amocchi.jp
Cc: apais@...ux.microsoft.com, edmund.raile@...tonmail.com, linux-kernel@...r.kernel.org, linux-media@...r.kernel.org, linux-sound@...r.kernel.org, linux1394-devel@...ts.sourceforge.net, netdev@...r.kernel.org
Subject: firewire: use sleepable workqueue to handle 1394 OHCI IT/IR context events: test 1
Hello Sakamoto-san,

I very much appreciate the idea and the effort to take on the tasklet conversion in small steps instead of all at once!
Thank you also for the CC; I'd like to be the testing canary in the coal mine of firewire ALSA with the RME Fireface!
The ALSA mailing list is a bit overwhelming and I'll likely unsubscribe, so a direct CC for anything I can test is a good idea.
Trying to apply patch 1 of 5 to mainline, I found that your kernel tree appears to be out of sync with it:
it was missing b171e20 from 2009 and a7ecbe9 from 2022.
I hope nothing else important is missing!
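For reference, one way to check whether a given commit is already in a tree (just a sketch; it assumes the abbreviated hashes resolve uniquely and that HEAD is the branch the series is based on):

  git merge-base --is-ancestor b171e20 HEAD && echo present || echo missing
  git merge-base --is-ancestor a7ecbe9 HEAD && echo present || echo missing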
Since ret is tested to be 0 in fw_card_add(), we'd need an else there instead. Is this correct?
I edited these functions of patch 1; now everything applies just fine:
@@ -571,11 +571,28 @@ void fw_card_initialize(struct fw_card *card,
 }
 EXPORT_SYMBOL(fw_card_initialize);
 
-int fw_card_add(struct fw_card *card,
-                u32 max_receive, u32 link_speed, u64 guid)
+int fw_card_add(struct fw_card *card, u32 max_receive, u32 link_speed, u64 guid,
+                unsigned int supported_isoc_contexts)
 {
+        struct workqueue_struct *isoc_wq;
         int ret;
 
+        // This workqueue should be:
+        //  * != WQ_BH                  Sleepable.
+        //  * == WQ_UNBOUND             Any core can process data for isoc context. The
+        //                              implementation of unit protocol could consume the core
+        //                              longer somehow.
+        //  * != WQ_MEM_RECLAIM         Not used for any backend of block device.
+        //  * == WQ_HIGHPRI             High priority to process semi-realtime timestamped data.
+        //  * == WQ_SYSFS               Parameters are available via sysfs.
+        //  * max_active == n_it + n_ir A hardIRQ could notify events for multiple isochronous
+        //                              contexts if they are scheduled to the same cycle.
+        isoc_wq = alloc_workqueue("firewire-isoc-card%u",
+                                  WQ_UNBOUND | WQ_HIGHPRI | WQ_SYSFS,
+                                  supported_isoc_contexts, card->index);
+        if (!isoc_wq)
+                return -ENOMEM;
+
         card->max_receive = max_receive;
         card->link_speed = link_speed;
         card->guid = guid;
@@ -584,9 +601,13 @@ int fw_card_add(struct fw_card *card,
         generate_config_rom(card, tmp_config_rom);
         ret = card->driver->enable(card, tmp_config_rom, config_rom_length);
         if (ret == 0)
                 list_add_tail(&card->link, &card_list);
+        else
+                destroy_workqueue(isoc_wq);
+
+        card->isoc_wq = isoc_wq;
         mutex_unlock(&card_mutex);
 
         return ret;
@@ -709,7 +729,9 @@ void fw_core_remove_card(struct fw_card *card)
 {
         struct fw_card_driver dummy_driver = dummy_driver_template;
         unsigned long flags;
 
+        might_sleep();
+
         card->driver->update_phy_reg(card, 4,
                                      PHY_LINK_ACTIVE | PHY_CONTENDER, 0);
         fw_schedule_bus_reset(card, false, true);
@@ -719,6 +741,7 @@ void fw_core_remove_card(struct fw_card *card)
         dummy_driver.free_iso_context = card->driver->free_iso_context;
         dummy_driver.stop_iso = card->driver->stop_iso;
         card->driver = &dummy_driver;
+        drain_workqueue(card->isoc_wq);
 
         spin_lock_irqsave(&card->lock, flags);
         fw_destroy_nodes(card);
Building a kernel with the patch applied produced 6.11.0-rc6-1-mainline-00019-g67784a74e258-dirty.
Testing it with a TI XIO2213B and an RME Fireface 800 for over an hour so far has revealed no issues at all.
ALSA streaming works fine:
mpv --audio-device=alsa/sysdefault:CARD=Fireface800 Spor-Ignition.flac
Though I haven't the faintest clue how to measure the CPU usage impact of the patch, it looks like it would be negligible.
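If a rough comparison would be useful, one thing I could try (just a sketch; the thread-name filter below is my guess at what is relevant, not something taken from the patch, and it assumes the sysstat tools are installed) is to watch per-thread kernel CPU time and system-wide softirq time while streaming, with and without the patch:

  # per-thread CPU usage of kernel worker threads while mpv streams (sysstat)
  pidstat -t 1 60 | grep -E 'kworker|ksoftirqd|firewire'
  # system-wide %soft / %sys over the same interval (sysstat)
  mpstat -P ALL 1 60

Before the conversion the IR/IT handling should show up mostly as softirq time, afterwards as time in the workqueue's kworker threads.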
As I was finishing this message, I noticed you had posted [2] https://lore.kernel.org/lkml/20240904125155.461886-1-o-takashi@sakamocchi.jp/T/
I'll get around to testing that one too, but tomorrow at the earliest.
Kind regards,
Edmund Raile.
Signed-off-by: Edmund Raile <edmund.raile@...tonmail.com>
Tested-by: Edmund Raile <edmund.raile@...tonmail.com>