Date:   Thu, 17 Mar 2022 11:08:35 -0400
From:   Felix Kuehling <felix.kuehling@....com>
To:     Lee Jones <lee.jones@...aro.org>
Cc:     David Airlie <airlied@...ux.ie>,
        "Pan, Xinhui" <Xinhui.Pan@....com>, linux-kernel@...r.kernel.org,
        dri-devel@...ts.freedesktop.org, amd-gfx@...ts.freedesktop.org,
        Alex Deucher <alexander.deucher@....com>,
        Christian König <christian.koenig@....com>
Subject: Re: [PATCH 1/1] drm/amdkfd: Protect the Client whilst it is being
 operated on


On 2022-03-17 at 11:00, Lee Jones wrote:
> Good afternoon Felix,
>
> Thanks for your review.
>
>> On 2022-03-17 at 09:16, Lee Jones wrote:
>>> Presently the Client can be freed whilst still in use.
>>>
>>> Use the already provided lock to prevent this.
>>>
>>> Cc: Felix Kuehling <Felix.Kuehling@....com>
>>> Cc: Alex Deucher <alexander.deucher@....com>
>>> Cc: "Christian König" <christian.koenig@....com>
>>> Cc: "Pan, Xinhui" <Xinhui.Pan@....com>
>>> Cc: David Airlie <airlied@...ux.ie>
>>> Cc: Daniel Vetter <daniel@...ll.ch>
>>> Cc: amd-gfx@...ts.freedesktop.org
>>> Cc: dri-devel@...ts.freedesktop.org
>>> Signed-off-by: Lee Jones <lee.jones@...aro.org>
>>> ---
>>>    drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c | 6 ++++++
>>>    1 file changed, 6 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
>>> index e4beebb1c80a2..3b9ac1e87231f 100644
>>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
>>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
>>> @@ -145,8 +145,11 @@ static int kfd_smi_ev_release(struct inode *inode, struct file *filep)
>>>    	spin_unlock(&dev->smi_lock);
>>>    	synchronize_rcu();
>>> +
>>> +	spin_lock(&client->lock);
>>>    	kfifo_free(&client->fifo);
>>>    	kfree(client);
>>> +	spin_unlock(&client->lock);
>> The spin_unlock comes after the structure containing the spinlock has
>> already been freed.
> Good point.
>
> If we go forward with this approach the unlock should perhaps be moved
> to just before the kfree().
>
>> There
>> should be no concurrent users here, since we are freeing the data structure.
>> If there still are concurrent users at this point, they will crash anyway.
>> So the locking is unnecessary.
> The users may well crash, as does the kernel unfortunately.
We only get to kfd_smi_ev_release when the file descriptor is closed.
At that point user mode has no way to use the client any more. This
function also removes the client from the dev->smi_clients list, so no
more events will be added to it. Therefore it is safe to free the
client.

If any of the above were not true, it would not be safe to kfree(client).

But if it is safe to kfree(client), then there is no need for the locking.
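
For reference, the release path looks roughly like this (a sketch only;
the list removal and the field names are inferred from the surrounding
code in kfd_smi_events.c, not quoted from the patch hunk):

    static int kfd_smi_ev_release(struct inode *inode, struct file *filep)
    {
    	struct kfd_smi_client *client = filep->private_data;
    	struct kfd_dev *dev = client->dev;

    	/* Unlink the client so no new events can be posted to it. */
    	spin_lock(&dev->smi_lock);
    	list_del_rcu(&client->list);
    	spin_unlock(&dev->smi_lock);

    	/* Wait for RCU readers already walking the client list to finish. */
    	synchronize_rcu();

    	/* Nothing can reach the client any more; free it without locking. */
    	kfifo_free(&client->fifo);
    	kfree(client);

    	return 0;
    }

The ordering (unlink under dev->smi_lock, then synchronize_rcu(), then
free) is what guarantees there are no concurrent users left by the time
kfree() runs.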

Regards,
   Felix



>
>>>    	return 0;
>>>    }
>>> @@ -247,11 +250,13 @@ int kfd_smi_event_open(struct kfd_dev *dev, uint32_t *fd)
>>>    		return ret;
>>>    	}
>>> +	spin_lock(&client->lock);
>> The client was just allocated, and it wasn't added to the client list or
>> given to user mode yet. So there can be no concurrent users at this point.
>> The locking is unnecessary.
>>
>> There could be potential issues if someone uses the file descriptor by dumb
>> luck before this function returns. So maybe we need to move the
>> anon_inode_getfd to the end of the function (just before list_add_rcu) so
>> that we only create the file descriptor after the client structure is fully
>> initialized.
> Bingo.  Well done. :)
>
> I can move the anon_inode_getfd() call as suggested if that is the best
> route forward?
>
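
The reordering suggested above would look roughly like the following. This
is a sketch, not the actual patch; the initialization details, the error
paths, and names such as MAX_KFIFO_SIZE and kfd_smi_name are approximated
from kfd_smi_events.c:

    int kfd_smi_event_open(struct kfd_dev *dev, uint32_t *fd)
    {
    	struct kfd_smi_client *client;
    	int ret;

    	client = kzalloc(sizeof(*client), GFP_KERNEL);
    	if (!client)
    		return -ENOMEM;
    	INIT_LIST_HEAD(&client->list);

    	ret = kfifo_alloc(&client->fifo, MAX_KFIFO_SIZE, GFP_KERNEL);
    	if (ret) {
    		kfree(client);
    		return ret;
    	}

    	/* ... finish initializing the client (waitqueue, client->lock, dev) ... */

    	/*
    	 * Create the file descriptor only once the client is fully set up,
    	 * so a process racing on the new fd never sees a half-built client.
    	 */
    	ret = anon_inode_getfd(kfd_smi_name, &kfd_smi_ev_fops, client, O_RDWR);
    	if (ret < 0) {
    		kfifo_free(&client->fifo);
    		kfree(client);
    		return ret;
    	}
    	*fd = ret;

    	spin_lock(&dev->smi_lock);
    	list_add_rcu(&client->list, &dev->smi_clients);
    	spin_unlock(&dev->smi_lock);

    	return 0;
    }

With the fd created only after initialization, any read or poll on it sees
a fully initialized client, so the extra client->lock in this path becomes
unnecessary.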
