Message-ID: <3a028a6583c7fac372c8711930fc1874.squirrel@www.codeaurora.org>
Date:	Tue, 15 May 2012 21:24:21 +0300 (IDT)
From:	"Konstantin Dorfman" <kdorfman@...eaurora.org>
To:	"Venkatraman S" <svenkatr@...com>
Cc:	linux-kernel@...r.kernel.org, linux-mmc@...r.kernel.org,
	arnd.bergmann@...aro.org, cjb@...top.org, alex.limberg@...disk.com,
	ilan.smith@...disk.com, lporzio@...ron.com,
	"Venkatraman S" <svenkatr@...com>
Subject: Re: [RFC PATCH 00/11] [FS, MM, block,
      MMC]: eMMC High Priority Interrupt Feature

Hello,

On Wed, April 18, 2012 9:25 am, Venkatraman S wrote:

> a) At the top level, some policy decisions have to be made on what is
> worth preempting for.
> 	This implementation uses demand paging requests and swap read
> requests as potential reads worth preempting an ongoing long write.
> 	This is expected to provide improved responsiveness for smartphones
> with multitasking capabilities - an example would be launching an email
> application while a video capture session (which causes long writes) is
> ongoing.
> b) At the block handler, the higher priority request should be queued
>    ahead of the pending requests in the elevator.
> c) At the MMC block and core level, transactions have to be executed to
> enforce the rules of the MMC spec and to make a reasonable tradeoff on
> whether the ongoing command is really worth preempting (for example, is
> it too close to completing already?).
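
Just to check that I read (c) correctly - the tradeoff could be modelled
roughly like the toy user-space sketch below (made-up numbers and names,
obviously nothing to do with the actual kernel code):

def worth_preempting(bytes_remaining, write_bw_bytes_per_s,
                     preempt_overhead_s=0.002):
    # Time the ongoing write still needs if it is left alone.
    time_remaining = bytes_remaining / float(write_bw_bytes_per_s)
    # If the write is about to finish anyway, aborting it only adds the
    # stop/resume overhead without helping the waiting read.
    return time_remaining > 2 * preempt_overhead_s

# 1 MB left of a long write at 10 MB/s -> ~100 ms remaining -> preempt
print(worth_preempting(1 << 20, 10 << 20))   # True
# 16 KB left -> ~1.6 ms remaining -> let it finish
print(worth_preempting(16 << 10, 10 << 20))  # False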

Do you have any profiling data (from real or synthetic scenarios) that
shows the read latency improvement of your design?

It could be useful to use the blktrace engine with some post-processing
to get per-request (or per-block) latency.
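
For example, something along the lines of the sketch below could summarize
Q-to-C (queue-to-complete) latency per request type. This is only a rough
illustration: it assumes blkparse's default text output, ignores merged
requests, the script name is arbitrary, and the field positions may need
adjusting.

#!/usr/bin/env python
# Rough sketch: per-request Q-to-C latency from blkparse text output, e.g.
#   blktrace -d /dev/mmcblk0 -o - | blkparse -i - | python q2c_latency.py
import sys
from collections import defaultdict

queued = {}                    # (device, sector) -> timestamp of the Q event
latencies = defaultdict(list)  # RWBS string -> list of Q-to-C latencies (s)

for line in sys.stdin:
    fields = line.split()
    if len(fields) < 8:
        continue
    dev, ts, action, rwbs, sector = (fields[0], fields[3], fields[5],
                                     fields[6], fields[7])
    try:
        ts = float(ts)
        key = (dev, int(sector))
    except ValueError:
        continue
    if action == 'Q':
        queued[key] = ts
    elif action == 'C' and key in queued:
        latencies[rwbs].append(ts - queued.pop(key))

for rwbs, vals in sorted(latencies.items()):
    vals.sort()
    print("%-4s count=%6d avg=%7.3f ms p95=%7.3f ms max=%7.3f ms" % (
        rwbs, len(vals),
        1000.0 * sum(vals) / len(vals),
        1000.0 * vals[int(0.95 * (len(vals) - 1))],
        1000.0 * vals[-1]))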

You could also gather some statistics on how often demand paging and swap
read requests occur during typical user scenarios.
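
One cheap way to get a first estimate is to sample the pgmajfault
(demand paging from storage) and pswpin (swap-in) counters from
/proc/vmstat around a scenario. A minimal sketch, assuming the scenario
is started by a command passed on the command line (the script and
scenario names below are made up):

#!/usr/bin/env python
# Rough sketch: delta of /proc/vmstat counters around a user scenario, e.g.
#   python vmstat_delta.py ./launch_email_during_video_capture.sh
import subprocess
import sys

COUNTERS = ('pgmajfault', 'pswpin')   # demand-paging reads, swap-in reads

def read_vmstat():
    stats = {}
    with open('/proc/vmstat') as f:
        for line in f:
            name, value = line.split()
            if name in COUNTERS:
                stats[name] = int(value)
    return stats

if len(sys.argv) < 2:
    sys.exit("usage: vmstat_delta.py <scenario command...>")

before = read_vmstat()
subprocess.call(sys.argv[1:])          # run the scenario under test
after = read_vmstat()

for name in COUNTERS:
    print("%-12s +%d" % (name, after[name] - before[name]))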

Without such statistical analysis we risk doing hard work for zero
benefit.
Does this make sense?

Thanks,
Kostya
Consultant for Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum



