Message-ID: <CAKLKtzeo1x+0Kxd0+29yqvXh4FBiS+p=T7b1-BWb6HXH5e1hhQ@mail.gmail.com>
Date:	Tue, 12 Jun 2012 17:51:22 +0530
From:	Saugata Das <saugata.das@...aro.org>
To:	"Ted Ts'o" <tytso@....edu>
Cc:	Artem Bityutskiy <dedekind1@...il.com>,
	Saugata Das <saugata.das@...ricsson.com>,
	linux-ext4@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	linux-mmc@...r.kernel.org, patches@...aro.org, venkat@...aro.org,
	Arnd Bergmann <arnd.bergmann@...aro.org>
Subject: Re: [PATCH 2/3] ext4: Context support

On 11 June 2012 17:57, Ted Ts'o <tytso@....edu> wrote:
> On Mon, Jun 11, 2012 at 02:41:31PM +0300, Artem Bityutskiy wrote:
>>
>> Word "context" is very generic and it is widely used various things, and
>> I believe we should try to avoid overloading this term and obfuscating
>> the I/O stack with various functions and other identifiers like
>> "get_context()". This would hurt readability. It is fine to use it
>> withing the UFS-specific code, but not globally withing the kernel code.
>>
>> I do not really have good name candidates, but even "ufscontext" is
>> already better than just "context". Or "iocontext" ? Or just "ufsdata" ?
>
> Before we try naming it, can we get some more details about exactly
> how contexts in the eMMC case work?
>
> It appears to be a way of grouping related writes together (yes?) but
> at what granularity?  What are the restrictions at the device level?
>

Yes, the idea is to group the read and write requests for a file into a
common context so that the MMC device can optimize performance.

There is no restriction on the number of blocks which can be added to
a context. However, MMC restricts the number of contexts to 15. So,
potentially, multiple file system contexts will map to a single MMC
context.
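To make that folding concrete, here is a minimal sketch (my own illustration, not code from the patch; the helper name and the modulo scheme are assumptions) of how an unbounded per-file context space could be reduced to the 15 context IDs an eMMC device exposes, reserving 0 for "no context":

#define MMC_NUM_CONTEXTS	15	/* device limit mentioned above */

/*
 * Hypothetical helper: fold a per-file context (e.g. the inode number)
 * into one of the 15 MMC context IDs (1..15).  ID 0 is kept for
 * requests that do not belong to any context.
 */
static unsigned int fs_to_mmc_context(unsigned long fs_context)
{
	if (fs_context == 0)
		return 0;	/* no grouping requested */
	return (unsigned int)(fs_context % MMC_NUM_CONTEXTS) + 1;
}

With a scheme like this, all requests for one file always land in the
same MMC context, while unrelated files may end up sharing one of the
15 IDs.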

> The proof-of-concept patches seem to use the inode number as a way of
> trying to group related writes, but what about at a larger level than
> that?  For example, if we install an RPM or deb package where all of
> the files will likely be replaced together, should that be given the
> same context?

In this patch, the context is assigned at the file level, based on the
inode number. So, in the above example, multiple contexts will be used
for the directory and file updates during an RPM installation.
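To illustrate the RPM case (again only a sketch under the same
assumptions, reusing the hypothetical folding above; the inode numbers
below are made up), each file touched by the installation gets its own
file-system context keyed on its inode number, and those contexts may
or may not collide once folded into the 15 MMC context IDs:

#include <stdio.h>

int main(void)
{
	/* made-up inode numbers for files written by one package install */
	unsigned long inodes[] = { 131074, 131089, 262145, 262160 };
	unsigned int i;

	for (i = 0; i < sizeof(inodes) / sizeof(inodes[0]); i++) {
		/* same folding as the sketch above: IDs 1..15, 0 reserved */
		unsigned int ctx = (unsigned int)(inodes[i] % 15) + 1;
		printf("inode %lu -> MMC context %u\n", inodes[i], ctx);
	}
	return 0;
}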

>
> How likely does it have to be that related blocks written under the
> same context must be deleted at the same time for this concept to be
> helpful?

There is no restriction that related blocks within the same MMC context
need to be deleted together.

> If we have a case where the context assumption does not hold
> (example: a database where you have a random access read/write
> pattern with blocks updated in place), how harmful will it be to
> the device format if those blocks are written under the same
> context?
>

The MMC context allows data blocks to be overwritten or accessed randomly.

> The next set of questions we need to ask is how generalizable this
> concept is to devices that might be more sophisticated than simple
> eMMC devices.  If we're going to expose something all the way out to
> the file system layer, it would be nice if it worked not only on
> low-end flash devices but also on more sophisticated devices.
>

This context mechanism will be used on both UFS and MMC devices. If
there are alternate suggestions on what can be used as the context
from the file system perspective, then please suggest them.


> Regards,
>
>                                        - Ted
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
