Message-ID: <3B6D69C3A9EBCA4BA5DA60D9130274290413207C@dlee13.ent.ti.com>
Date:	Sun, 20 Apr 2008 09:09:12 -0500
From:	"Woodruff, Richard" <r-woodruff2@...com>
To:	"Arjan van de Ven" <arjan@...radead.org>
Cc:	"Thomas Gleixner" <tglx@...utronix.de>,
	"Ingo Molnar" <mingo@...e.hu>, <linux-kernel@...r.kernel.org>,
	<linux-pm@...ts.linux-foundation.org>
Subject: RE: Higher latency with dynamic tick (need for an io-ondemand governor?)

On Sun, 20 Apr 2008 01:20 -0500
"Arjan van de Ven" <arjan@...radead.org> wrote:

> So right now we have the pmqos framework (and before that we had a
> simpler version of this);
> so if your realtime (or realtime-like) system cannot deal with latency
> longer than X usec,
> you can just tell the kernel, and the deeper power states that have
> this latency just won't get used.

Yes.  We're already using the older version today (sparingly) to set worst acceptable latency before failure.  This does work.
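For reference, the userspace side of that framework is just a 32-bit microsecond bound written to the PM QoS device and held open. A minimal sketch (Python for brevity; the 50 usec bound is an example value, and on a real system the default path is /dev/cpu_dma_latency):

```python
import os
import struct

def hold_cpu_latency(max_usec, dev="/dev/cpu_dma_latency"):
    """Register a worst-case wakeup latency (in usec) with PM QoS.

    The kernel honours the bound for as long as the returned fd stays
    open; closing it drops the constraint again.
    """
    fd = os.open(dev, os.O_WRONLY)
    os.write(fd, struct.pack("=I", max_usec))  # native-endian 32-bit usec
    return fd

# fd = hold_cpu_latency(50)   # deep C-states above 50 usec stay off
# ... latency-sensitive work ...
# os.close(fd)                # constraint released
```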

> What you're mentioning is sort-of-kinda different. It's the "most of the
> time go as deep as you can,
> but when I do IO, it hurts throughput".

That's 100% correct.

> There's two approaches to that in principle
> 1) Work based on historic behavior, and go less deep when there's lots
> of activity in the (recent) past
>    A few folks at Intel are working on something like this

Any data to share here? 

Interrupt frequency seemed like a good pivot; I thought I might start experimenting there.  Opinion?
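The heuristic I had in mind could look roughly like this (a hypothetical sketch, not an existing governor; the state names and rate thresholds are invented for illustration):

```python
# Walk the states from shallow to deep; at or above each threshold
# (interrupts/sec) we refuse to go deeper than that state, so a busy
# system stays shallow and an idle one is allowed all the way down.
IDLE_STATES = [("C1", 1000), ("C2", 100), ("C3", 0)]

def pick_idle_state(irqs_per_sec):
    """Return the idle state appropriate for the recent interrupt rate."""
    for name, min_rate in IDLE_STATES:
        if irqs_per_sec >= min_rate:
            return name
    return IDLE_STATES[-1][0]
```

A real implementation would feed this from a decaying average of the per-CPU interrupt count rather than an instantaneous rate.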

> 2) You have the IO layer tell the kernel "heads up, something coming
> down soon"
>    This is more involved, especially since it's harder to predict when
> the disk will be done.
>    (it could be a 10msec seek, but it could also be in the disks cache
> memory, or it could be an SSD or,
>    the disk may have to read the sector 5 times because of weak
> magnetics... it's all over the map)
>    Another complication is that we need to only do this for
> "synchronous" IO's.. which is known at higher layers
>    in the block stack, but I think gets lost towards the bottom.

Not so many phones have a disk, so the device example above matters less there.  However, I've been wondering whether mmap() of non-file-backed devices, together with the fadvise() hint, might be useful for other classes of devices.  We have some media devices which do special mappings.
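The fadvise() hint mentioned above is the standard posix_fadvise() call; POSIX_FADV_WILLNEED is the closest existing userspace way to give the kernel the "heads up, I/O coming" signal discussed here (today it triggers readahead; using it to gate idle depth would be new). A minimal sketch:

```python
import os

def prefetch(path, offset=0, length=0):
    """Hint that the given byte range will be read soon.

    length == 0 means "to end of file" per POSIX semantics.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        os.posix_fadvise(fd, offset, length, os.POSIX_FADV_WILLNEED)
    finally:
        os.close(fd)
```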

For UMPCs, is the disk the main problem device slowing down the system?

Some of the slowdowns I was talking about were happening in HS-USB devices.

> There's another problem with 2); in a multicore world, all packages
> EXCEPT for the one which will get the irq can go to a deeper state
> anyway....
> but it might be hard to predict which CPU will get the completion irq.

I see.  I expect there are some interesting problems there.  I would hope some affinity would keep an IRQ on the same CPU most of the time.  But, as you say, if it's offline or in a very high latency state, you would have to re-route that IRQ to a CPU capable of handling it.
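The re-routing itself is already exposed through the standard /proc/irq/<n>/smp_affinity interface; the open question is deciding which CPU is cheap to wake. A sketch of the mechanism (the bitmask helper is the testable part; the write needs root and a real IRQ number, so it is illustrative only):

```python
def cpu_mask(cpus):
    """Hex affinity bitmask for a set of CPU numbers, e.g. {0, 2} -> '5'."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")

def pin_irq(irq, cpus):
    """Steer interrupt `irq` to the given CPUs via procfs (root only)."""
    with open("/proc/irq/%d/smp_affinity" % irq, "w") as f:
        f.write(cpu_mask(cpus))
```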

I know of some IRQ IP blocks with QoS features; this seems like a place to use them.

Regards,
Richard W.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
