Date:	Fri, 27 Sep 2013 15:36:05 -0700
From:	Grant Grundler <grundler@...omium.org>
To:	Ulf Hansson <ulf.hansson@...aro.org>
Cc:	Grant Grundler <grundler@...omium.org>,
	Chris Ball <cjb@...top.org>,
	Seungwon Jeon <tgih.jun@...sung.com>,
	linux-mmc <linux-mmc@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mmc: core: remove issue_fn indirect function call

On Fri, Sep 27, 2013 at 2:04 AM, Ulf Hansson <ulf.hansson@...aro.org> wrote:
...
>> Two ways to handle this: I can
>> 1) add a local function prototype of mmc_blk_issue_rq() to queue.c
>> 2) move mmc_init_queue() and mmc_queue_thread() from queue.c to block.c
>>
>> (2) actually makes sense since both functions are block IO specific.
>>
>> Thoughts? Preference? Other ideas?
>
> Hi Grant,
>
> Generally I am in favour of cleaning up messy code. But in this case
> it now seems a bit overworked.
>
> How about actually leaving it as is?

Hi Ulf,
Sure - I can leave it.

I'll point out that mmcqd uses about half a CPU core when busy driving
eMMC devices behind the dw_mmc interface. That's pretty bad. I haven't
worked out the exact "service demand" (average CPU cycles to service
one IO), but that is the right metric to look at when evaluating
design changes.

My feeling is there are too many layers in this subsystem. I count six
layers of data structures starting from struct mmc_blk_data. I've
attached my drawing (scanned PDF). "Boxes" are host memory footprint;
arrows are pointers to other host memory.

I thought the block layer already provides sufficient IO request
queueing, and I can't justify why this code needs to manage its own
array (of two) struct mmc_queue_req entries. Given how much faster
CPUs have always been than block storage, the latency difference
(nano- or microseconds at best) doesn't seem worth managing our own
local queues - especially since I'm convinced this code has SMP bugs
that don't exist in the block layer.

cheers,
grant

Download attachment "20130927141112165.pdf" of type "application/pdf" (202478 bytes)
