Message-ID: <20150605222706.6470.qmail@ns.horizon.com>
Date: 5 Jun 2015 18:27:06 -0400
From: "George Spelvin" <linux@...izon.com>
To: tglx@...utronix.de
Cc: linux@...izon.com, linux-kernel@...r.kernel.org,
viresh.kumar@...aro.org
Subject: Re: [patch 2/7] timer: Remove FIFO guarantee
Two thoughts:
1) It's not clear that timer slack has violated the FIFO guarantee.
Remember, the guarantee only applies when two timers have the same
(absolute) timeout; i.e. are requested for the same time.
For two entries with the same timeout, applying slack will produce
the same adjusted timeout. That's the whole point of slack; to
force the timeouts to collide more.
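To make the point concrete, here is a minimal sketch of slack-style rounding: clear the low bits of the expiry below a power-of-two window derived from the slack, so any two timers whose expiries land in the same window get the same adjusted expiry. This is modeled loosely on what the kernel's slack logic does; the function name and details here are illustrative, not the real implementation.

```c
#include <assert.h>

/* Illustrative sketch (not the kernel's actual apply_slack()):
 * round the expiry down to a power-of-two window no larger than
 * the slack.  Identical inputs trivially produce identical
 * outputs, and nearby expiries collapse onto the same value. */
static unsigned long adjust_for_slack(unsigned long expires,
                                      unsigned long slack)
{
	unsigned long mask;

	if (slack == 0)
		return expires;

	/* find the largest power-of-two window <= slack */
	mask = 1;
	while ((mask << 1) <= slack)
		mask <<= 1;
	mask -= 1;

	return expires & ~mask;
}
```

With slack 8 this clears the low 3 bits, so expiries 104 and 105 both adjust to 104 and fire together, while two timers requested for the same absolute time always stay together.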
Because the unadjusted timeouts aren't stored, timeout ordering
can be violated; i.e. timers for time t and t+1, which round to the
same time after slack is applied, will expire in FIFO (insertion)
order rather than timeout order. But that's a different issue;
in the case where the FIFO guarantee used to exist (identical
requested timeouts), I don't think timer slack broke it.
I'm not disagreeing with the change, but it's not clear to me that
it's as safe as you think.
2) If you want to do some more in that area, one thing I've been meaning
to get around to is eliminating the whole round_jiffies() system.
It does basically the same thing as the slack system, although with
less flexibility, and it would be wonderful to rip it out of
the kernel completely.
Additional rambling you should ignore. It exists because I haven't
figured out why it's impractical yet.
An interesting variant on the slack system would be to apply slack in the
wakeup loop rather than before sleeping. It might permit more bundling
and thus fewer wakeups.
You have the timers in original order. But computing the next expiration
is more complex. Each timer has a minimum and maximum wakeup time, and
they're sorted by minimum time. You scan the list of pending timers,
accumulating a "batch" which can all be woken up together.
You keep intersecting each new timer's wakeup interval with the pending
batch, until the batch maximum is less than the next timer's minimum.
The final result is a range of permissible timeouts, from which one is
chosen as the processor wakeup time. I'd be inclined to err on the low
side, but whatever works.
(It's also permissible to limit the size of a batch arbitrarily.
After scanning N timers, just pick a wakeup time no later than
the N+1st timer's minimum time and stop.)
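The scan above can be sketched in a few lines. This is just an illustration of the interval-intersection idea under the stated assumptions (timers sorted by minimum time, each carrying a [min, max] window); the struct and function names are made up.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical timer with a permissible wakeup window. */
struct win_timer {
	unsigned long min;	/* earliest acceptable expiry */
	unsigned long max;	/* latest acceptable expiry (min + slack) */
};

/* Walk timers sorted by ->min, intersecting windows into a batch.
 * Stop when the running maximum falls below the next timer's
 * minimum, or after 'limit' timers (the arbitrary batch cap).
 * Returns the batch size; *lo..*hi is the permissible wakeup range. */
static size_t build_batch(const struct win_timer *t, size_t n,
			  size_t limit,
			  unsigned long *lo, unsigned long *hi)
{
	size_t i;

	if (n == 0)
		return 0;

	*lo = t[0].min;
	*hi = t[0].max;

	for (i = 1; i < n && i < limit; i++) {
		if (t[i].min > *hi)
			break;		/* next timer can't join */
		if (t[i].min > *lo)
			*lo = t[i].min;	/* intersect: raise minimum */
		if (t[i].max < *hi)
			*hi = t[i].max;	/* intersect: lower maximum */
	}
	return i;
}
```

For example, windows [10,20], [12,25], [18,22], [30,40] batch the first three together with permissible wakeup range [18,20]; the fourth timer's minimum (30) exceeds the batch maximum (20), so the scan stops there.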
When a new timer is added, there are three cases:
1) (Common) Its minimum timeout is later than the current pending wakeup.
Add it to the pending queue normally.
2) Its wakeup range includes the pending time. Add it to the batch.
3) Its maximum wakeup time is less than the pending time. Reschedule
the hardware timer. When it goes off, scan the batch for additional
timers that can be fired now, before building the new batch.
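The three cases reduce to comparing the new timer's window against the currently scheduled wakeup range. A minimal sketch, with made-up names, and approximating "the pending time" by the batch's permissible range [batch_lo, batch_hi]:

```c
#include <assert.h>

enum add_action {
	ADD_QUEUE,	/* case 1: min is after the pending wakeup */
	ADD_JOIN_BATCH,	/* case 2: window overlaps the pending wakeup */
	ADD_REPROGRAM,	/* case 3: max is before the pending wakeup */
};

/* Classify a newly-added timer's [min, max] wakeup window against the
 * scheduled batch range.  (In case 2 the batch range would also be
 * narrowed by intersection; omitted here for brevity.) */
static enum add_action classify_new_timer(unsigned long min,
					  unsigned long max,
					  unsigned long batch_lo,
					  unsigned long batch_hi)
{
	if (min > batch_hi)
		return ADD_QUEUE;
	if (max < batch_lo)
		return ADD_REPROGRAM;
	return ADD_JOIN_BATCH;
}
```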
The idea is that each timer would be scanned twice: once when
building the batch, and a second time when running it.
Except that's not true if the third case happens frequently, and
batches keep getting preempted and re-scanned.
But there's an easy workaround to avoid O(n^2): at the expense of
possibly non-optimal wakeups, if what's left of the old batch is larger
than reasonable to rescan, just remember the old wakeup range (the
minimum will be accurate, because it's the minimum of the last timer
in the batch, which is still pending, but the maximum might be wrong)
and extend the batch without recomputing the maximum.
There's probably some reason this can't be made to work, but I wanted
to do a brain dump.
--