Message-ID: <CAAo+4rW_rTsY=TpxZwO8yHB5gFkRKyTvy6kQ-eeiY0vg4+fuYg@mail.gmail.com>
Date: Thu, 27 Jul 2023 14:48:52 +0800
From: Chengfeng Ye <dg573847474@...il.com>
To: Logan Gunthorpe <logang@...tatee.com>
Cc: Christophe JAILLET <christophe.jaillet@...adoo.fr>,
vkoul@...nel.org, Yunbo Yu <yuyunbo519@...il.com>,
dmaengine@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] dmaengine: plx_dma: Fix potential deadlock on &plxdev->ring_lock
Hi Logan and Christophe,
Thanks very much for the reply and the reminder, and yes, spin_lock_bh()
should be better.
When I wrote the patch I thought that spin_lock_bh() could not be nested,
and I was afraid that if some outside caller invoked the .dma_tx_status()
callback with softirqs already disabled, the spin_unlock_bh() would
unintentionally re-enable softirqs. spin_lock_irqsave() is always safe
regardless of context, so I used it.
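
To illustrate why spin_lock_irqsave() felt safer, here is a minimal
sketch (the lock name, function name, and body are hypothetical, not
the actual plx_dma code): it saves the current interrupt state and
restores exactly that state, so it can never re-enable anything the
caller had disabled.

#include <linux/dmaengine.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_ring_lock);	/* hypothetical lock */

static enum dma_status demo_tx_status(void)
{
	unsigned long flags;
	enum dma_status ret = DMA_IN_PROGRESS;

	spin_lock_irqsave(&demo_ring_lock, flags);
	/* ... check the descriptor ring for completion ... */
	spin_unlock_irqrestore(&demo_ring_lock, flags);

	/* interrupt state is exactly as the caller left it */
	return ret;
}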
But I just checked the documentation [1] on these APIs and found that
the _bh() variants can be nested. So using spin_lock_bh() should be
better for performance.
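
A rough sketch of the nesting behaviour described in [1] (the depth
comments are simplified; the kernel tracks this in the per-CPU preempt
count, and the names below are made up for illustration):

#include <linux/bottom_half.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);	/* hypothetical lock */

static void bh_nesting_demo(void)
{
	local_bh_disable();		/* softirq-disable depth 0 -> 1 */

	spin_lock_bh(&demo_lock);	/* depth 1 -> 2 */
	/* critical section: softirqs are still disabled */
	spin_unlock_bh(&demo_lock);	/* depth 2 -> 1, softirqs stay off */

	/* the caller's softirq-disabled section is still intact here */
	local_bh_enable();		/* depth 1 -> 0, softirqs re-enabled */
}

So the inner spin_unlock_bh() only drops its own level and does not
re-enable softirqs for the outer section.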
> So perhaps we should just revert 1d05a0bdb420?
Then for this one I think reverting 1d05a0bdb420 should be enough. May I
ask: to revert that patch, should I do anything further (like sending
a new patch)?
> as explained in another reply [1], would spin_lock_bh() be enough in
> such a case?
For the other one [2], I will send a v2 patch changing it to spin_lock_bh().
[1] http://books.gigatux.nl/mirror/kerneldevelopment/0672327201/ch07lev1sec6.html
[2] https://lore.kernel.org/all/5125e39b-0faf-63fc-0c51-982b2a567e21@wanadoo.fr/
Thanks again,
Chengfeng