Message-ID: <d0ef9b45-9730-c9d5-a57e-9b7860d84a13@suse.com>
Date: Thu, 25 Aug 2022 12:22:37 +0200
From: Jan Beulich <jbeulich@...e.com>
To: Juergen Gross <jgross@...e.com>
Cc: Stefano Stabellini <sstabellini@...nel.org>,
Oleksandr Tyshchenko <oleksandr_tyshchenko@...m.com>,
stable@...r.kernel.org,
Rustam Subkhankulov <subkhankulov@...ras.ru>,
xen-devel@...ts.xenproject.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] xen/privcmd: fix error exit of privcmd_ioctl_dm_op()
On 25.08.2022 12:13, Juergen Gross wrote:
> On 25.08.22 11:50, Jan Beulich wrote:
>> On 25.08.2022 11:26, Juergen Gross wrote:
>>> --- a/drivers/xen/privcmd.c
>>> +++ b/drivers/xen/privcmd.c
>>> @@ -602,6 +602,10 @@ static int lock_pages(
>>>  		*pinned += page_count;
>>>  		nr_pages -= page_count;
>>>  		pages += page_count;
>>> +
>>> +		/* Exact reason isn't known, EFAULT is one possibility. */
>>> +		if (page_count < requested)
>>> +			return -EFAULT;
>>>  	}
>>
>> I don't really know the inner workings of pin_user_pages_fast()
>> nor what future plans there are with it. To be as independent of
>> its behavior as possible, how about bailing here only when
>> page_count actually is zero (i.e. no forward progress)?
>
> This would require reworking the loop in lock_pages() to be able to
> handle only a partial buffer.
Oh, I see - I've misread the code as if the loop was capping each
iteration's count to the capacity of some internal buffer (as iirc
is being done elsewhere). So ...
> This would add some complexity, but OTOH I'd get an exact error code
> back in case of failure.
... perhaps not worth it then, ...
> I'll have a try and see what the result would look like.
... unless you think this might be relevant in certain cases.
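
To illustrate what I had in mind (an untested sketch only, not meant
as the actual patch; the off-within-buffer bookkeeping is merely one
possible way of doing it), a loop failing only on zero forward
progress might look like:

	unsigned int i, off = 0;

	for (i = 0; i < num; ) {
		unsigned int requested;
		int page_count;

		requested = DIV_ROUND_UP(
			offset_in_page(kbufs[i].uptr) + kbufs[i].size,
			PAGE_SIZE) - off;
		if (requested > nr_pages)
			return -ENOSPC;

		page_count = pin_user_pages_fast(
			(unsigned long)kbufs[i].uptr + off * PAGE_SIZE,
			requested, FOLL_WRITE, pages);
		/* Bail only when no forward progress was made. */
		if (page_count <= 0)
			return page_count ? : -EFAULT;

		*pinned += page_count;
		nr_pages -= page_count;
		pages += page_count;

		/* Stay within the current buffer, or move to the next. */
		off = (page_count == requested) ? 0 : off + page_count;
		i += !off;
	}
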
Jan