Message-ID: <46b54be4-297a-af8f-aac2-f2a080752034@oracle.com>
Date: Mon, 23 Jan 2017 13:59:34 -0500
From: Boris Ostrovsky <boris.ostrovsky@...cle.com>
To: Juergen Gross <jgross@...e.com>, linux-kernel@...r.kernel.org,
xen-devel@...ts.xenproject.org
Subject: Re: [PATCH v3 3/3] xen: optimize xenbus driver for multiple
concurrent xenstore accesses
On 01/23/2017 05:09 AM, Juergen Gross wrote:
> Handling of multiple concurrent Xenstore accesses through the xenbus
> driver, either from the kernel or from user land, is rather limited
> today: xenbus can have only one access active at any point in time.
>
> Rewrite xenbus to handle multiple requests concurrently by making use
> of the request id of the Xenstore protocol. This requires the
> following changes:
>
> - Instead of blocking inside xb_read() when trying to read data from
> the xenstore ring buffer, block only in the main loop of
> xenbus_thread().
>
> - Instead of doing writes to the xenstore ring buffer in the context of
> the caller, just queue the request and do the write in the dedicated
> xenbus thread.
>
> - Instead of just forwarding the request id specified by the caller of
> xenbus to xenstore, use a xenbus-internal unique request id. This
> allows multiple outstanding requests.
>
> - Modify the locking scheme in order to allow multiple requests to be
> active in parallel.
>
> - Instead of waiting for the reply to a user's xenstore request after
> writing the request to the xenstore ring buffer, return directly to
> the caller and do the waiting in the read path.
>
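For readers of the archive, the request id handling described in the
list above boils down to something like the sketch below. All names
here (xb_req, pending, queue_request, wait_reply) are made up for
illustration and are not necessarily those used in the patch.

  /* Illustrative sketch only, not the patch itself. */
  #include <linux/list.h>
  #include <linux/spinlock.h>
  #include <linux/types.h>
  #include <linux/wait.h>

  struct xb_req {                     /* one outstanding xenstore request */
          struct list_head list;      /* entry in the pending list */
          u32 id;                     /* xenbus-internal unique request id */
          bool done;                  /* reply has arrived */
          wait_queue_head_t wq;       /* requester sleeps here */
  };

  static LIST_HEAD(pending);          /* requests waiting for a reply */
  static DEFINE_SPINLOCK(pending_lock);
  static u32 next_id;

  /* Caller side: just queue the request; the dedicated xenbus thread
   * does the actual write to the ring buffer. */
  static void queue_request(struct xb_req *req)
  {
          init_waitqueue_head(&req->wq);
          req->done = false;

          spin_lock(&pending_lock);
          req->id = ++next_id;        /* replaces the caller-supplied id */
          list_add_tail(&req->list, &pending);
          spin_unlock(&pending_lock);
  }

  /* Read path: the requester waits here until the xenbus thread has
   * matched a reply to req->id and set req->done. */
  static int wait_reply(struct xb_req *req)
  {
          return wait_event_interruptible(req->wq, req->done);
  }

The xenbus thread would then look up a reply's id in the pending list,
store the payload, set done and wake the corresponding wait queue.
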
> Additionally, the signaling was optimized by not waking up the xenbus
> thread or sending an event to Xenstore when the addressed entity is
> already known to be running.
>
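The "avoid sending an event" part is essentially the following pattern
(again just an illustration; xs_running is a made-up flag, not taken
from the patch):

  #include <linux/atomic.h>
  #include <xen/events.h>             /* notify_remote_via_evtchn() */
  #include <xen/xenbus.h>             /* xen_store_evtchn */

  static atomic_t xs_running;         /* made-up flag: nonzero while Xenstore
                                         is known to be processing requests */

  static void maybe_kick_xenstore(void)
  {
          /* Only send the event channel notification if the other end
           * may be idle; a running Xenstore will see the new request
           * anyway. */
          if (!atomic_read(&xs_running))
                  notify_remote_via_evtchn(xen_store_evtchn);
  }
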
> As a result, communication with Xenstore is sped up by a factor of up
> to 5: depending on the request type (read or write) and the amount of
> data transferred, the gain was at least 20% (small reads) and went up
> to a factor of 5 for large writes.
>
> While at it, some more rough edges of xenbus have been smoothed out:
>
> - Handling of memory shortage when reading from the xenstore ring
> buffer in the xenbus driver was not optimal: the driver was busy
> looping, issuing a warning on each iteration.
>
> - In case xenstore is not running in dom0 but in a stubdom, we ended up
> with two xenbus threads running, because the xenbus initialization
> done for dom0 (which expects a local xenstored) is redone later when
> connecting to the xenstore domain. Up to now this was not a problem,
> as locking prevented the two xenbus threads from interfering with
> each other, but it was a waste of kernel resources.
>
> - An out-of-memory situation while writing to or reading from the
> xenstore ring buffer will no longer lead to a possible loss of
> synchronization with xenstore.
>
> - The user read and write paths are now interruptible by signals.
>
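On the busy-loop item above: the idea is to rate-limit the warning and
back off between allocation attempts, roughly like the sketch below
(simplified; the allocation flags and the delay value are illustrative,
not taken from the patch):

  #include <linux/delay.h>
  #include <linux/printk.h>
  #include <linux/slab.h>
  #include <linux/types.h>

  /* Sketch: back off instead of spinning, and do not warn on every
   * failed attempt. */
  static void *alloc_reply_buffer(size_t len)
  {
          void *buf;

          while (!(buf = kmalloc(len, GFP_NOIO | __GFP_HIGH))) {
                  pr_warn_ratelimited("xenbus: allocation of %zu bytes deferred\n",
                                      len);
                  msleep(10);         /* give the allocator a chance */
          }
          return buf;
  }
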
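And for the last item above, an interruptible wait in the user
read/write paths looks roughly like this (simplified sketch, names
illustrative):

  #include <linux/errno.h>
  #include <linux/types.h>
  #include <linux/wait.h>

  /* Sketch: wait interruptibly for data; a pending signal makes the
   * read()/write() return -ERESTARTSYS instead of sleeping forever. */
  static int wait_for_reply_data(wait_queue_head_t *wq, bool *data_ready)
  {
          if (wait_event_interruptible(*wq, *data_ready))
                  return -ERESTARTSYS;
          return 0;
  }
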
> Signed-off-by: Juergen Gross <jgross@...e.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@...cle.com>