Message-ID: <4a9f657b-1cf1-94ff-daf8-c928b644e044@oracle.com>
Date: Wed, 11 Jan 2017 10:29:01 -0500
From: Boris Ostrovsky <boris.ostrovsky@...cle.com>
To: Juergen Gross <jgross@...e.com>, linux-kernel@...r.kernel.org,
xen-devel@...ts.xenproject.org
Subject: Re: [PATCH 3/3] xen: optimize xenbus driver for multiple concurrent
xenstore accesses
>>> +
>>> +
>>> +static bool test_reply(struct xb_req_data *req)
>>> +{
>>> + if (req->state == xb_req_state_got_reply || !xenbus_ok())
>>> + return true;
>>> +
>>> + /* Make sure to reread req->state each time. */
>>> + cpu_relax();
>> I don't think I understand why this is needed.
> I need a compiler barrier. Otherwise the compiler would read req->state
> only once outside the while loop.
Then barrier() looks like the right primitive to use here. cpu_relax(),
while it does what you want, is intended for other purposes. (See the
sketch below the quoted hunk.)
>
>>> +
>>> + return false;
>>> +}
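Something along these lines, i.e. the same function as quoted above with
only the primitive swapped (just a sketch, not a replacement patch):

static bool test_reply(struct xb_req_data *req)
{
	if (req->state == xb_req_state_got_reply || !xenbus_ok())
		return true;

	/* Compiler barrier only: make sure req->state is reread each call. */
	barrier();

	return false;
}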
>>> +
>>
>>> +static void xs_send(struct xb_req_data *req, struct xsd_sockmsg *msg)
>>> {
>>> - mutex_lock(&xs_state.transaction_mutex);
>>> - atomic_inc(&xs_state.transaction_count);
>>> - mutex_unlock(&xs_state.transaction_mutex);
>>> -}
>>> + bool notify;
>>>
>>> -static void transaction_end(void)
>>> -{
>>> - if (atomic_dec_and_test(&xs_state.transaction_count))
>>> - wake_up(&xs_state.transaction_wq);
>>> -}
>>> + req->msg = *msg;
>>> + req->err = 0;
>>> + req->state = xb_req_state_queued;
>>> + init_waitqueue_head(&req->wq);
>>>
>>> -static void transaction_suspend(void)
>>> -{
>>> - mutex_lock(&xs_state.transaction_mutex);
>>> - wait_event(xs_state.transaction_wq,
>>> - atomic_read(&xs_state.transaction_count) == 0);
>>> -}
>>> + xs_request_enter(req);
>>>
>>> -static void transaction_resume(void)
>>> -{
>>> - mutex_unlock(&xs_state.transaction_mutex);
>>> + req->msg.req_id = xs_request_id++;
>> Is it safe to do this without a lock?
> You are right: I should move this to xs_request_enter() inside the
> lock. I think I'll have xs_request_enter() return the request id.
Then please move xs_request_id's declaration close to xs_state_lock's
declaration (just like you are going to move the two other state
variables).
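Something like this, perhaps (just a sketch, not the actual patch -- it
assumes xs_state_lock is a spinlock guarding that state and elides the
rest of xs_request_enter()'s body):

static uint32_t xs_request_enter(struct xb_req_data *req)
{
	uint32_t rq_id;

	spin_lock(&xs_state_lock);
	/* ... existing entry bookkeeping under the lock ... */
	rq_id = xs_request_id++;
	spin_unlock(&xs_state_lock);

	return rq_id;
}

and then in xs_send():

	req->msg.req_id = xs_request_enter(req);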
>
>>> +static int xs_reboot_notify(struct notifier_block *nb,
>>> + unsigned long code, void *unused)
>>> {
>>> - struct xs_stored_msg *msg;
>>
>>
>>> + struct xb_req_data *req;
>>> +
>>> + mutex_lock(&xb_write_mutex);
>>> + list_for_each_entry(req, &xs_reply_list, list)
>>> + wake_up(&req->wq);
>>> + list_for_each_entry(req, &xb_write_list, list)
>>> + wake_up(&req->wq);
>> We are waking up waiters here but there is no guarantee that waiting
>> threads will have a chance to run, is there?
> You are right. But this isn't the point. We want to avoid blocking a
> reboot due to some needed thread waiting for xenstore. And this task
> is being accomplished here.
I think it's worth adding a comment mentioning this.
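Roughly along these lines (the wording of the comment is only a
suggestion, and I'm assuming the unlock follows right after the loops):

	mutex_lock(&xb_write_mutex);
	/*
	 * Only make sure no thread is left blocked on xenstore across the
	 * reboot; there is no guarantee the woken waiters actually run
	 * before the reboot proceeds, and that is fine.
	 */
	list_for_each_entry(req, &xs_reply_list, list)
		wake_up(&req->wq);
	list_for_each_entry(req, &xb_write_list, list)
		wake_up(&req->wq);
	mutex_unlock(&xb_write_mutex);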
-boris