Message-ID: <a0fedaae-7245-a5fa-b29e-5fb036d7d147@linaro.org>
Date: Mon, 14 Nov 2022 11:18:25 +0100
From: Krzysztof Kozlowski <krzysztof.kozlowski@...aro.org>
To: Dmitry Vyukov <dvyukov@...gle.com>, bongsu.jeon@...sung.com
Cc: "leon@...nel.org" <leon@...nel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"syzkaller@...glegroups.com" <syzkaller@...glegroups.com>
Subject: Re: [PATCH net-next v3] nfc: Allow to create multiple virtual nci
devices

On 09/11/2022 01:42, Dmitry Vyukov wrote:
> On Tue, 8 Nov 2022 at 16:35, Bongsu Jeon <bongsu.jeon@...sung.com> wrote:
>>>>>> On Sat, Nov 5, 2022 at 2:04 AM Dmitry Vyukov <dvyukov@...gle.com> wrote:
>>>>>>> The current virtual nci driver is great for testing and fuzzing.
>>>>>>> But it allows creating at most one "global" device, which prevents
>>>>>>> running tests in parallel and harms fuzzing isolation and reproducibility.
>>>>>>> Restructure the driver to allow creation of multiple independent devices.
>>>>>>> This should be backwards compatible with existing tests.
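>>>>>>>
>>>>>>> (For illustration only, a simplified sketch of the per-open idea; the
>>>>>>> struct, field and macro names below are placeholders for this sketch,
>>>>>>> not necessarily what the patch uses. Each open() allocates its own
>>>>>>> device state instead of touching globals:)
>>>>>>>
>>>>>>> static int virtual_ncidev_open(struct inode *inode, struct file *file)
>>>>>>> {
>>>>>>>         struct virtual_nci_dev *vdev;
>>>>>>>
>>>>>>>         vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
>>>>>>>         if (!vdev)
>>>>>>>                 return -ENOMEM;
>>>>>>>
>>>>>>>         vdev->ndev = nci_allocate_device(&virtual_nci_ops,
>>>>>>>                                          VIRTUAL_NFC_PROTOCOLS, 0, 0);
>>>>>>>         if (!vdev->ndev) {
>>>>>>>                 kfree(vdev);
>>>>>>>                 return -ENOMEM;
>>>>>>>         }
>>>>>>>
>>>>>>>         nci_set_drvdata(vdev->ndev, vdev);
>>>>>>>         if (nci_register_device(vdev->ndev)) {
>>>>>>>                 nci_free_device(vdev->ndev);
>>>>>>>                 kfree(vdev);
>>>>>>>                 return -ENODEV;
>>>>>>>         }
>>>>>>>
>>>>>>>         /* per-device state travels with the file descriptor */
>>>>>>>         file->private_data = vdev;
>>>>>>>         return 0;
>>>>>>> }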
>>>>>>
>>>>>> I totally agree with you about parallel tests and the better design.
>>>>>> Thanks for the good idea.
>>>>>> But please check this abnormal situation:
>>>>>> for example, the virtual device app closes the device file first
>>>>>> (virtual_ncidev_close), and then the nci core, driven by the nci app,
>>>>>> calls virtual_nci_send or virtual_nci_close.
>>>>>> (There would be a problem in virtual_nci_send because the mutex has
>>>>>> already been destroyed.)
>>>>>> Before this patch, the driver used the virtual_ncidev_mode state and
>>>>>> nci_mutex, which is never destroyed.
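>>>>>>
>>>>>> (What I mean, as a simplified sketch; this is roughly the shape of the
>>>>>> old guard, not the exact source. The send callback bailed out under
>>>>>> nci_mutex when the mode flag said the device file was closed, and
>>>>>> neither the mutex nor the flag was ever freed:)
>>>>>>
>>>>>> static struct mutex nci_mutex;               /* global, never destroyed */
>>>>>> static int state = virtual_ncidev_disabled;  /* virtual_ncidev_mode */
>>>>>>
>>>>>> static int virtual_nci_send(struct nci_dev *ndev, struct sk_buff *skb)
>>>>>> {
>>>>>>         mutex_lock(&nci_mutex);
>>>>>>         if (state == virtual_ncidev_disabled) {
>>>>>>                 /* device file already closed: just drop the packet */
>>>>>>                 mutex_unlock(&nci_mutex);
>>>>>>                 kfree_skb(skb);
>>>>>>                 return 0;
>>>>>>         }
>>>>>>
>>>>>>         /* ... queue the packet for the reader side ... */
>>>>>>         mutex_unlock(&nci_mutex);
>>>>>>         return 0;
>>>>>> }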
>>>>>
>>>>> I assumed nci core must stop calling into a driver at some point
>>>>> during the driver destruction. And I assumed that point is return from
>>>>> nci_unregister_device(). Basically when nci_unregister_device()
>>>>> returns, no new calls into the driver must be made. Calling into a
>>>>> driver after nci_unregister_device() looks like a bug in nci core.
>>>>>
>>>>> If this is not true, how do real drivers handle this? They don't use
>>>>> global vars. So they should either have the same use-after-free bugs
>>>>> you described, or they handle shutdown differently. We just need to do
>>>>> the same thing that real drivers do.
>>>>>
>>>>> As far as I can see, they do the same thing I did in this patch:
>>>>> https://elixir.bootlin.com/linux/v6.1-rc4/source/drivers/nfc/fdp/i2c.c#L343
>>>>> https://elixir.bootlin.com/linux/v6.1-rc4/source/drivers/nfc/nfcmrvl/usb.c#L354
>>>>>
>>>>> They call nci_unregister_device() and then free all resources:
>>>>> https://elixir.bootlin.com/linux/v6.1-rc4/source/drivers/nfc/nfcmrvl/main.c#L186
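>>>>>
>>>>> (Roughly the shape of that teardown path, trimmed for illustration; the
>>>>> real function also tears down firmware-download state and GPIOs:)
>>>>>
>>>>> void nfcmrvl_nci_unregister_dev(struct nfcmrvl_private *priv)
>>>>> {
>>>>>         struct nci_dev *ndev = priv->ndev;
>>>>>
>>>>>         /* once this returns, nci core is expected not to call back
>>>>>          * into the driver anymore
>>>>>          */
>>>>>         nci_unregister_device(ndev);
>>>>>         nci_free_device(ndev);
>>>>>         kfree(priv);
>>>>> }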
>>>>>
>>>>> What am I missing here?
>>>>
>>>> I'm not sure, but I think they are a little different.
>>>> nfcmrvl uses the usb_driver disconnect callback and fdp's i2c uses the
>>>> i2c_driver remove callback to unregister the device.
>>>> But virtual_ncidev uses a plain file operation (the close function) that is
>>>> not tied to driver removal, so the NCI simulation app can call close at any time.
>>>> If the scheduler interrupts the nci core right after it calls virtual_nci_send,
>>>> and then another process or thread calls virtual_ncidev's close function,
>>>> we need to handle this problem in the virtual nci driver.
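>>>>
>>>> (Schematically, the interleaving I worry about:)
>>>>
>>>> /*
>>>>  *  nci core (kernel context)           simulation app (user thread)
>>>>  *  -------------------------           ----------------------------
>>>>  *  enters ops->send, i.e.
>>>>  *  virtual_nci_send(), is about
>>>>  *  to take the per-device lock
>>>>  *      ... preempted ...               close(fd)
>>>>  *                                        -> virtual_ncidev_close()
>>>>  *                                           nci_unregister_device()
>>>>  *                                           frees the per-device state
>>>>  *  resumes and touches the freed
>>>>  *  state -> use-after-free
>>>>  */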
>>>
>>> Won't the same issue happen if the nci send callback runs concurrently
>>> with a USB/I2C driver disconnect?
>>>
>>> I mean, something internal to the USB subsystem cannot affect what the nci
>>> subsystem is doing, unless the USB driver calls into nci and somehow
>>> notifies it that it's about to destroy the driver.
>>>
>>> Is there anything the USB/I2C drivers do, besides calling
>>> nci_unregister_device(), to ensure that there are no pending nci send
>>> calls? If yes, then we should do the same in the virtual driver. If
>>> not, then all other drivers are subject to the same use-after-free
>>> bug.
>>>
>>> But I assumed that nci_unregister_device() ensures that there are no
>>> in-flight send calls and no future send calls will be issued after the
>>> function returns.
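>>>
>>> (If that assumption holds, the virtual driver's release path could simply
>>> mirror the real drivers. A sketch, with made-up struct/field names rather
>>> than the ones from the actual patch:)
>>>
>>> static int virtual_ncidev_close(struct inode *inode, struct file *file)
>>> {
>>>         struct virtual_nci_dev *vdev = file->private_data;
>>>
>>>         /* assumption under discussion: once nci_unregister_device()
>>>          * returns, nci core makes no further ops->send calls, so it is
>>>          * safe to free the per-device state right after it
>>>          */
>>>         nci_unregister_device(vdev->ndev);
>>>         nci_free_device(vdev->ndev);
>>>         kfree(vdev);
>>>
>>>         return 0;
>>> }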
>>
>> Ok, I understand your point. You mean that nci_unregister_device() should
>> prevent the issue, using a device lock or some other mechanism, right?
>
> Yes.
>
>> It would be better to handle the issue in the nci core, if there is one.
>
> And yes.
>
> Krzysztof, can you confirm this is the case (the nci core won't call the
> ops->send callback after nci_unregister_device() returns)?

You ask me as if I would know. :) I took over the NFC subsystem to bring it
a bit into shape, but I did not write any of this code, so I don't
actually know - not until I analyze the code, just as anyone else would...

Best regards,
Krzysztof