Message-Id: <1264163348.3032.240.camel@localhost>
Date: Fri, 22 Jan 2010 14:29:08 +0200
From: Artem Bityutskiy <dedekind@...radead.org>
To: Andi Kleen <andi@...stfloor.org>
Cc: Don Mullis <don.mullis@...il.com>, linux-kernel@...r.kernel.org,
airlied@...hat.com, david@...morbit.com
Subject: Re: [PATCH 1/2] lib: more scalable list_sort()
On Fri, 2010-01-22 at 11:43 +0100, Andi Kleen wrote:
> Don Mullis <don.mullis@...il.com> writes:
> >
> > Being just a dumb library routine, list_sort() has no idea what context
> > it's been called in, how long a list a particular client could pass in,
> > nor how expensive the client's cmp() callback might be.
> >
> > The cmp() callback already passes back a client-private pointer.
> > Hanging off of this could be a count of calls, or timing information,
> > maintained by the client. Whenever some threshold is reached, the
> > client's cmp() could do whatever good CPU-sharing citizenship required.
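(Illustration only, not code from the patch: a client-side cmp() along
these lines could keep such a count. The struct, the 1024-call
threshold, and do_compare() below are all made up for the sketch.)

    #include <linux/list.h>
    #include <linux/sched.h>	/* cond_resched() */

    /* Hypothetical client-private state threaded through list_sort() */
    struct sort_state {
            unsigned long ncmp;	/* comparisons performed so far */
    };

    static int counting_cmp(void *priv, struct list_head *a,
                            struct list_head *b)
    {
            struct sort_state *s = priv;

            /* Every 1024 calls, yield the CPU if a reschedule is
             * pending.  Safe only when the caller may sleep. */
            if (!(++s->ncmp & 1023))
                    cond_resched();

            return do_compare(a, b);	/* client's real ordering */
    }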
>
> need_resched() does all the timing/thresholding (it checks the
> reschedule flag set by the timer interrupt). You just have to call it.
> But preferably not in the inner loop, but in an outer one. It's
> not hyper-expensive, but it's not free either.
>
> The drawback is that if it's called the context always has to
> allow sleeping, so it might need to be optional.
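(Again just a sketch, not the actual lib/list_sort.c: one way to read
this is a check once per merge pass, made optional for atomic callers.
merge_pass() and the may_sleep flag are invented for the example.)

    /* Hypothetical outer loop of a bottom-up merge sort.  merge_pass()
     * is a made-up helper that merges adjacent sublists and returns
     * how many sublists remain. */
    static void merge_all(void *priv,
                          int (*cmp)(void *, struct list_head *,
                                     struct list_head *),
                          struct list_head **parts, int n,
                          bool may_sleep)
    {
            while (n > 1) {
                    n = merge_pass(priv, cmp, parts, n);

                    /* One check per pass, not per element;
                     * cond_resched() itself tests need_resched().
                     * Skip it when the caller cannot sleep. */
                    if (may_sleep)
                            cond_resched();
            }
    }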
>
> Anyway, a better fix might be simply to ensure in the caller
> that lists never get so long that they become a scheduling
> hazard. But you indicated that ubifs would pass very long lists?
> Perhaps ubifs (and other callers that might have that problem) simply
> needs to be fixed.
No, they are not very long: a hundred or so, I guess, and rarely. We
still need to check what the worst case really is, but it should not be
too many.
--
Best Regards,
Artem Bityutskiy (Артём Битюцкий)