Date:	Wed, 16 Mar 2016 09:45:13 -0700
From:	Mark Salyzyn <salyzyn@...roid.com>
To:	Ulf Hansson <ulf.hansson@...aro.org>
Cc:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Jonathan Corbet <corbet@....net>,
	Adrian Hunter <adrian.hunter@...el.com>,
	Yangbo Lu <yangbo.lu@...escale.com>,
	Tomas Winkler <tomas.winkler@...el.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	James Bottomley <JBottomley@...n.com>,
	Kuninori Morimoto <kuninori.morimoto.gx@...esas.com>,
	Grant Grundler <grundler@...omium.org>,
	Jon Hunter <jonathanh@...dia.com>,
	Luca Porzio <lporzio@...ron.com>,
	Yunpeng Gao <yunpeng.gao@...el.com>,
	Chuanxiao Dong <chuanxiao.dong@...el.com>,
	"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
	linux-mmc <linux-mmc@...r.kernel.org>
Subject: Re: [PATCH v2] mmc: Add CONFIG_MMC_SIMULATE_MAX_SPEED

On 03/16/2016 06:03 AM, Ulf Hansson wrote:
> On 22 February 2016 at 18:18, Mark Salyzyn <salyzyn@...roid.com> wrote:
>> When CONFIG_MMC_SIMULATE_MAX_SPEED is enabled, expose the
>> max_read_speed, max_write_speed and cache_size sysfs controls to
>> simulate a slow eMMC device. The boot defaults, should one wish to
>> set this behavior right from kernel start, are:
>>
>> CONFIG_MMC_SIMULATE_MAX_READ_SPEED
>> CONFIG_MMC_SIMULATE_MAX_WRITE_SPEED
>> CONFIG_MMC_SIMULATE_CACHE_SIZE
>>
>> respectively; if not defined, they default to 0 (off), 0 (off) and
>> 4 MB, respectively.
> So this changelog doesn't really tell me *why* this feature is nice to
> have. Could you elaborate on this, and thus also extend the
> information in the changelog, please?

Will do. The "why" is certainly missing ;-}

Basically we have three choices for determining how a system may behave 
when it has an aged-out eMMC:

1) wait until we can acquire a device with an old eMMC.
2) increase the temperature on the device and run I/O activity at a 
controlled level until the pool of available erase blocks shrinks, or 
the physical device itself slows.
3) adjust the driver to behave in a similar manner, but backed by a 
healthy (or rather, healthier) eMMC.

#3 is just plain faster and cheaper.

I have one other duty for this driver: to switch out the default config 
parameters for module (kernel command line) parameters. Alas, I have 
been swamped for the past little while.

> Moreover, I have briefly reviewed the code, but I don't want to go
> into the details yet... Instead, what I am trying to understand is
> whether this is something that is useful and specific to the MMC
> subsystem, or if it's something that belongs in the upper generic
> BLOCK layer. Perhaps you can comment on this as well?

A feature much like this can be useful in the upper generic block 
layer; in fact, I have done so in past lives for spinning media and 
RAID systems, for private/proprietary/development needs. However, each 
type of system has a different set of characteristics and tunables 
needed to simulate its behavior accurately. Simulation is far more 
complex for a device that allows more than one outstanding command, 
which is why it is dead simple to add this to the eMMC driver.

This change starts out with some of the basics, but device cache 
behavior is certainly different among eMMC, RAID and spinning media 
(eMMC is the simplest to emulate). And if/when we feel the need to 
expand the simulation to incorporate a limited pool of erase blocks, 
due to aging or a lack of recent fstrim, we will certainly enter 
device-specific territory. It will be easier to build additional 
precision into the simulation if we keep this inside the eMMC driver.

Spinning media, for instance, would need its own simulation of drive 
head, track and sector position in order to model the latencies; 
however, I have found that adding an average latency works well enough 
in most scenarios. For RAID, _all_ component drives would need their 
own mechanical tracking if we wanted to add precision. If I put 
something like this in the block layer, I would be signing up for a 
quagmire were I to aid the additional development. Do not get me 
started on solid state drives ...

Sadly, I am only passionate about eMMC _today_, since this could work 
on any of the 1.6 billion such devices on the planet right now, and it 
is a tiny and KISS cut-in ;-} (it merges cleanly from Linux 3.4 to 
current).

>
> Kind regards
> Uffe
>
