Message-ID: <CAPa8GCCBQfR-nUOmwz6oSdCao63tHUH1jfGs2XcgCe_+g3grCQ@mail.gmail.com>
Date:	Thu, 3 May 2012 20:05:15 +1000
From:	Nick Piggin <npiggin@...il.com>
To:	Doug Ledford <dledford@...hat.com>
Cc:	linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
	sfr@...b.auug.org.au
Subject: Re: [Patch 1/4] ipc/mqueue: improve performance of send/recv

On 2 May 2012 03:50, Doug Ledford <dledford@...hat.com> wrote:
>                            old implementation         new implementation
> Avg time to send/recv (in nanoseconds per message)
>  when queue empty            305/288                    349/318
>  when queue full (65528 messages)
>    constant priority      526589/823                    362/314
>    increasing priority    403105/916                    495/445
>    decreasing priority     73420/594                    482/409
>    random priority        280147/920                    546/436
>
> Time to fill/drain queue (65528 messages, in seconds)
>  constant priority         17.37/.12                    .13/.12
>  increasing priority        4.14/.14                    .21/.18
>  decreasing priority       12.93/.13                    .21/.18
>  random priority            8.88/.16                    .22/.17
>
> So, I think the results speak for themselves.  It's possible this
> implementation could be improved by caching at least one priority
> level in the node tree (that would bring the empty-queue performance
> more in line with the old implementation), but this works and is *so*
> much better than what we had, especially for the common case of a
> single priority in use, that further refinements can be in follow-on
> patches.

Nice work!  Yeah, I think if you cache the last unused entry, that
should mostly solve the empty-queue regression.

I would imagine most users won't have huge queues, so the empty
case is important too.
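
To make the caching idea concrete, here is a minimal userspace sketch
(not the kernel patch itself): keep the last freed per-priority node
as a one-entry spare instead of freeing it, so the common
empty -> one message -> empty cycle avoids a malloc/free pair on every
send/recv.  The real queue keys its priority nodes in a tree inside the
mqueue inode; this stand-in uses a plain sorted list for brevity, and
all names here (struct pq, pq_node, pq_send, pq_recv) are invented for
illustration.

/*
 * Simplified userspace sketch of a one-node spare cache for a
 * priority-bucketed message queue.  All names are invented for the
 * example; the kernel implementation differs.
 */
#include <stdio.h>
#include <stdlib.h>

struct msg {
	struct msg *next;
	char text[64];
};

struct pq_node {			/* one node per priority level */
	struct pq_node *next;		/* sorted, highest priority first */
	int prio;
	struct msg *head, *tail;	/* FIFO of messages at this prio */
};

struct pq {
	struct pq_node *levels;		/* sorted list of priority levels */
	struct pq_node *cache;		/* one spare node, avoids malloc */
};

static int pq_send(struct pq *q, int prio, struct msg *m)
{
	struct pq_node **pp = &q->levels, *n;

	while (*pp && (*pp)->prio > prio)
		pp = &(*pp)->next;
	if (*pp && (*pp)->prio == prio) {
		n = *pp;
	} else {
		if (q->cache) {			/* reuse the spare node */
			n = q->cache;
			q->cache = NULL;
		} else {
			n = malloc(sizeof(*n));
			if (!n)
				return -1;
		}
		n->prio = prio;
		n->head = n->tail = NULL;
		n->next = *pp;
		*pp = n;
	}
	m->next = NULL;
	if (n->tail)
		n->tail->next = m;
	else
		n->head = m;
	n->tail = m;
	return 0;
}

static struct msg *pq_recv(struct pq *q)
{
	struct pq_node *n = q->levels;		/* highest priority level */
	struct msg *m;

	if (!n)
		return NULL;
	m = n->head;
	n->head = m->next;
	if (!n->head) {				/* level is now empty */
		q->levels = n->next;
		if (!q->cache)			/* keep it as the spare */
			q->cache = n;
		else
			free(n);
	}
	return m;
}

int main(void)
{
	struct pq q = { NULL, NULL };
	struct msg a = { .text = "low prio" };
	struct msg b = { .text = "high prio" };
	struct msg *m;

	pq_send(&q, 1, &a);
	pq_send(&q, 5, &b);
	while ((m = pq_recv(&q)))
		printf("%s\n", m->text);	/* "high prio", then "low prio" */
	free(q.cache);				/* drop the spare, if any */
	return 0;
}

The cache only touches two spots: pq_send() tries q->cache before
calling malloc(), and pq_recv() parks an emptied priority node in
q->cache instead of freeing it.  Everything else is an ordinary
priority-bucketed FIFO.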