Message-ID: <b8b752c6-4d91-4849-8a71-e3f43a827a42@linux.ibm.com>
Date: Mon, 23 Oct 2023 22:52:15 +0200
From: Wenjia Zhang <wenjia@...ux.ibm.com>
To: "D. Wythe" <alibuda@...ux.alibaba.com>, kgraul@...ux.ibm.com,
        jaka@...ux.ibm.com, wintera@...ux.ibm.com
Cc: kuba@...nel.org, davem@...emloft.net, netdev@...r.kernel.org,
        linux-s390@...r.kernel.org, linux-rdma@...r.kernel.org
Subject: Re: [PATCH net 5/5] net/smc: put sk reference if close work was
 canceled



On 23.10.23 14:18, D. Wythe wrote:
> 
> 
> On 10/23/23 6:28 PM, Wenjia Zhang wrote:
>>
>>
>> On 23.10.23 10:52, D. Wythe wrote:
>>>
>>>
>>> On 10/23/23 4:19 PM, Wenjia Zhang wrote:
>>>>
>>>>
>>>> On 20.10.23 04:41, D. Wythe wrote:
>>>>>
>>>>>
>>>>> On 10/20/23 1:40 AM, Wenjia Zhang wrote:
>>>>>>
>>>>>>
>>>>>> On 19.10.23 09:33, D. Wythe wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 10/19/23 4:26 AM, Wenjia Zhang wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 17.10.23 04:06, D. Wythe wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 10/13/23 3:04 AM, Wenjia Zhang wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 11.10.23 09:33, D. Wythe wrote:
>>>>>>>>>>> From: "D. Wythe" <alibuda@...ux.alibaba.com>
>>>>>>>>>>>
>>>>>>>>>>> Note that we always hold a reference to sock when attempting
>>>>>>>>>>> to submit close_work. 
>>>>>>>>>> yes
>>>>>>>>>>> Therefore, if we have successfully
>>>>>>>>>>> canceled close_work from pending, we MUST release that reference
>>>>>>>>>>> to avoid potential leaks.
>>>>>>>>>>>
>>>>>>>>>> Isn't the corresponding reference already released inside the 
>>>>>>>>>> smc_close_passive_work()?
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Hi Wenjia,
>>>>>>>>>
>>>>>>>>> If we successfully cancel the close work from the pending state,
>>>>>>>>> it means that smc_close_passive_work() has never been executed.
>>>>>>>>>
>>>>>>>>> You can find more details here.
>>>>>>>>>
>>>>>>>>> /**
>>>>>>>>> * cancel_work_sync - cancel a work and wait for it to finish
>>>>>>>>> * @work: the work to cancel
>>>>>>>>> *
>>>>>>>>> * Cancel @work and wait for its execution to finish. This function
>>>>>>>>> * can be used even if the work re-queues itself or migrates to
>>>>>>>>> * another workqueue. On return from this function, @work is
>>>>>>>>> * guaranteed to be not pending or executing on any CPU.
>>>>>>>>> *
>>>>>>>>> * cancel_work_sync(&delayed_work->work) must not be used for
>>>>>>>>> * delayed_work's. Use cancel_delayed_work_sync() instead.
>>>>>>>>> *
>>>>>>>>> * The caller must ensure that the workqueue on which @work was last
>>>>>>>>> * queued can't be destroyed before this function returns.
>>>>>>>>> *
>>>>>>>>> * Return:
>>>>>>>>> * %true if @work was pending, %false otherwise.
>>>>>>>>> */
>>>>>>>>> bool cancel_work_sync(struct work_struct *work)
>>>>>>>>> {
>>>>>>>>>         return __cancel_work_timer(work, false);
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> Best wishes,
>>>>>>>>> D. Wythe
>>>>>>>> As I understand, queue_work() would wake up the work if the work
>>>>>>>> is not already on the queue. And the sock_hold() is just prior to
>>>>>>>> the queue_work(). That means cancel_work_sync() would cancel the
>>>>>>>> work either before or after its execution. If your fix refers to
>>>>>>>> the former case, at this moment, I don't think the reference can
>>>>>>>> be held, thus it is unnecessary to put it.
>>>>>>>>>
>>>>>>>
>>>>>>> I am quite confused about why you think that when we cancel the
>>>>>>> work before its execution, the reference cannot be held?
>>>>>>>
>>>>>>>
>>>>>>> Perhaps the following diagram can describe the problem in a
>>>>>>> better way:
>>>>>>>
>>>>>>> smc_close_cancel_work            smc_cdc_msg_recv_action
>>>>>>> (process context)                (softirq)
>>>>>>>
>>>>>>>                                  sock_hold()
>>>>>>>                                  queue_work()
>>>>>>>
>>>>>>> if (cancel_work_sync())  // successfully canceled before execution
>>>>>>>     sock_put()           // need to put it, since we already held
>>>>>>>                          // a ref before queue_work()
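>>>>>>>
>>>>>>> To make the ordering concrete, a minimal sketch of the pattern
>>>>>>> (simplified; the exact function bodies in net/smc differ, this
>>>>>>> only shows the refcount ordering):
>>>>>>>
>>>>>>> /* softirq side, e.g. smc_cdc_msg_recv_action(): */
>>>>>>> sock_hold(&smc->sk);            /* ref for the queued work */
>>>>>>> if (!queue_work(smc_close_wq, &conn->close_work))
>>>>>>>         sock_put(&smc->sk);     /* work was already queued */
>>>>>>>
>>>>>>> /* process-context side, e.g. smc_close_cancel_work(): */
>>>>>>> if (cancel_work_sync(&conn->close_work))
>>>>>>>         sock_put(&smc->sk);     /* work will never run; drop the
>>>>>>>                                  * ref taken before queue_work() */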
>>>>>> Ha, I already thought you might ask such a question :P
>>>>>>
>>>>>> I think two problems need to be clarified here:
>>>>>>
>>>>>> 1) Do you think the bh_lock_sock/bh_unlock_sock in
>>>>>> smc_cdc_msg_recv does not protect smc_cdc_msg_recv_action() from
>>>>>> cancel_work_sync()? Maybe that goes back to the discussion in the
>>>>>> other patch about the behavior of the locks.
>>>>>>
>>>>>
>>>>> Yes. bh_lock_sock/bh_unlock_sock cannot block code execution
>>>>> protected by lock_sock()/release_sock(). That is to say, they are
>>>>> not mutually exclusive.
>>>>>
>>>> No, the logic of that inference is very vague to me. My
>>>> understanding is completely different; it is what I read from the
>>>> kernel code. They are not *completely* exclusive: while the bottom
>>>> half context holds the lock, i.e. bh_lock_sock, the process context
>>>> cannot get the lock by lock_sock. (This is actually the main point
>>>> of my argument for these fixes, and I didn't see any clarification
>>>> from you.) However, while the process context holds the lock by
>>>> lock_sock, the bottom half context can still get it by bh_lock_sock;
>>>> this is just like what you showed in the code of lock_sock. Once it
>>>> gets ownership, it releases the spinlock.
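>>>>
>>>> For reference, a simplified sketch of what lock_sock() does, from my
>>>> reading of net/core/sock.c (details such as lockdep annotations
>>>> trimmed):
>>>>
>>>> void lock_sock_nested(struct sock *sk, int subclass)
>>>> {
>>>>         spin_lock_bh(&sk->sk_lock.slock);
>>>>         if (sk->sk_lock.owned)
>>>>                 __lock_sock(sk);  /* sleep until the owner releases */
>>>>         sk->sk_lock.owned = 1;
>>>>         /* the spinlock is dropped once ownership is taken: */
>>>>         spin_unlock_bh(&sk->sk_lock.slock);
>>>> }
>>>>
>>>> /* while bh_lock_sock() only takes the spinlock and never checks
>>>>  * 'owned': */
>>>> #define bh_lock_sock(__sk) spin_lock(&((__sk)->sk_lock.slock))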
>>>>
>>>
“While the process context holds the lock by lock_sock, the bottom 
>>> half context can still get it by bh_lock_sock.”
>>>
>>> You already got that, so why do you think sock_set_flag(DONE) and
>>> sock_set_flag(DEAD) cannot happen concurrently?
>>>
>>
>> Then I'd ask how you understand this sentence I wrote: "while the
>> bottom half context holds the lock, i.e. bh_lock_sock, the process
>> context cannot get the lock by lock_sock."
>>>
> 
> That's also true. I have no questions on it. They are asymmetrical.
> 
> But we cannot guarantee that the interrupt context always holds the
> lock before the process context; that's why I think
> sock_set_flag(DONE) and sock_set_flag(DEAD) can run concurrently.
> 
OK, I have to agree with that. I focused too much on this case :(
So I think the approach of the 1st patch is also appropriate. Thank you
for taking the time to clear this up for me!

>>>>> We can use a very simple example to infer it: since bh_lock_sock is
>>>>> a spinlock, if bh_lock_sock/bh_unlock_sock could block
>>>>> lock_sock()/release_sock(), then lock_sock()/release_sock() could
>>>>> also block bh_lock_sock/bh_unlock_sock.
>>>>>
>>>>> If that were true, once the process context had called lock_sock(),
>>>>> the interrupt context would have to wait for the process to call
>>>>> release_sock(). Obviously, that is very unreasonable.
>>>>>
>>>>>
>>>>>> 2) If queue_work returns true, as I said in the last mail, the
>>>>>> work should be (being) executed. How could cancel_work_sync()
>>>>>> successfully cancel the work before execution?
>>>>>
>>>>> No, that's not true. In fact, if queue_work returns true, it simply
>>>>> means that we have added the task to the queue and may schedule a
>>>>> worker to execute it, but it does not guarantee that the task will
>>>>> be executed or is being executed when it returns true; the task
>>>>> might still be in the list, waiting for some worker to execute it.
>>>>>
>>>>> We can make a simple inference:
>>>>>
>>>>> 1. A known fact: if no special flag (WORK_UNBOUND) is given, tasks
>>>>> submitted will eventually be executed on the CPU where they were
>>>>> submitted.
>>>>>
>>>>> 2. If queue_work returns true, the work has been or is being
>>>>> executed.
>>>>>
>>>>> If both of the above were true, then when we invoke queue_work in an
>>>>> interrupt context, would it not mean that the submitted task is
>>>>> executed in the interrupt context?
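>>>>>
>>>>> A minimal illustration of the point (just a sketch; my_work_fn and
>>>>> example() are made-up names):
>>>>>
>>>>> static void my_work_fn(struct work_struct *w) { /* ... */ }
>>>>> static DECLARE_WORK(my_work, my_work_fn);
>>>>>
>>>>> void example(void)
>>>>> {
>>>>>         /* true: my_work is now *pending*; a worker may not have
>>>>>          * dequeued it yet */
>>>>>         bool queued = queue_work(system_wq, &my_work);
>>>>>
>>>>>         /* if it is still pending, this removes it and returns
>>>>>          * true, and my_work_fn() never runs */
>>>>>         bool was_pending = cancel_work_sync(&my_work);
>>>>> }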
>>>>>
>>>>>
>>>>> Best wishes,
>>>>> D. Wythe
>>>>>
>>>> If you say the thread is not guaranteed to be woken up in
>>>> queue_work to execute the work, please explain what the kick_pool
>>>> function does.
>>>
>>> I never said that.
>>>
>> What is your understanding of the kick_pool there?
> 
> I think this simple logic-code graph can explain my point of view
> clearly.
> 
> My key point is that queue_work cannot guarantee that work_1 has been
> executed or is being executed; work_1 might still be in the list
> (i.e., not yet executed).
> 
> kick_pool() might wake up the 'a_idle_worker' from schedule(), and
> then work_1 can be executed soon. But we cannot say that work_1 has
> already been executed or is being executed.
> 
> In fact, we can invoke cancel_work_sync() to delete work_1 from the
> list so that it is never executed, as long as a_idle_worker_main has
> not yet deleted (popped) work_1.
> 
> Besides, there is an upper limit on the number of idle workers. If the
> current number of work_x being executed exceeds this number, work_1
> must wait until an idle worker becomes available. In that case, we
> cannot say that work_1 has already been executed or is being executed
> either.
> 
I do agree with this explanation. My thought was that cancel_work_sync() 
deleting work_1 from the list before it is executed would rarely 
happen, as I was focusing on the scenario above. Since we have 
agreement on the locks now, I agree that it can happen.
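
To summarize the case we now agree on, an illustrative timeline
(simplified names, work_1 as in your graph):

  softirq                           worker pool
  -------                           -----------
  sock_hold(sk);
  queue_work(wq, &work_1);          /* work_1 put on the list */
    kick_pool();                    /* wakes an idle worker, which has
                                       not dequeued work_1 yet */
  process context
  ---------------
  cancel_work_sync(&work_1);        /* true: removed before execution */
  sock_put(sk);                     /* drop the ref taken for work_1 */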

Thanks again!
Here you are:
Reviewed-by: Wenjia Zhang <wenjia@...ux.ibm.com>
