Message-ID: <e86748a9-6b72-4404-9042-c9b6308a9bc1@intel.com>
Date: Mon, 30 Sep 2024 13:30:58 +0200
From: Przemek Kitszel <przemyslaw.kitszel@...el.com>
To: Dan Carpenter <dan.carpenter@...aro.org>, Dmitry Torokhov
<dmitry.torokhov@...il.com>
CC: <linux-kernel@...r.kernel.org>, Peter Zijlstra <peterz@...radead.org>,
<amadeuszx.slawinski@...ux.intel.com>, Tony Nguyen
<anthony.l.nguyen@...el.com>, <nex.sw.ncis.osdt.itp.upstreaming@...el.com>,
<netdev@...r.kernel.org>, Andy Shevchenko <andriy.shevchenko@...el.com>
Subject: Re: [RFC PATCH] cleanup: make scoped_guard() to be return-friendly
On 9/30/24 13:08, Dan Carpenter wrote:
> On Mon, Sep 30, 2024 at 12:21:44PM +0200, Przemek Kitszel wrote:
>>
>> Most of the time it is just easier to bend your driver than change or
>> extend the core of the kernel.
>>
>> There is actually scoped_cond_guard() which is a trylock variant.
>>
>> The scoped_guard(mutex_try, &ts->mutex) you have found is semantically
>> wrong and must be fixed.
>
> What? I'm so puzzled by this conversation.
There are two variants of scoped_guard(), and you have found a place
where the wrong one is used.
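
To illustrate the difference (a minimal sketch inside a function
returning int; the -EBUSY return is just a placeholder, the real fix
may pick a different failure action):

	/*
	 * Trylock via scoped_guard(): if mutex_trylock() fails, the
	 * body is silently skipped and execution just continues --
	 * there is no error path, which is rarely what a driver wants:
	 */
	scoped_guard(mutex_try, &ts->mutex) {
		/* critical section, runs only if the lock was taken */
	}

	/*
	 * Trylock via scoped_cond_guard(): the second argument is the
	 * statement executed when the lock cannot be taken:
	 */
	scoped_cond_guard(mutex_try, return -EBUSY, &ts->mutex) {
		/* critical section, runs only if the lock was taken */
	}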
>
> Anyway, I don't have a problem with your goal, but your macro is wrong and will
> need to be re-written. You will need to update any drivers which use the
> scoped_guard() for try locks. I don't care how you do that. Use
> scoped_cond_guard() if you want or invent a new macro. But that work always
> falls on the person changing the API. Plus, it's only the one tsc200x-core.c
> driver so I don't understand why you're making a big deal about it.
Apologies for upsetting you.

I will send the next iteration of this series with additional patches
fixing the current code (thank you for finding this one for me!).

I didn't say so in my previous mail, to leave you the option of sending
the fix for the usage bug you reported; I only confirmed it. But by all
means, I'm happy to fix the current code myself.
> but your macro is wrong and will need to be re-written
Could you please elaborate here?