Message-ID: <4E1E6052.6030002@vflare.org>
Date: Wed, 13 Jul 2011 20:19:46 -0700
From: Nitin Gupta <ngupta@...are.org>
To: Jerome Marchand <jmarchan@...hat.com>
CC: Greg Kroah-Hartman <gregkh@...e.de>,
Linux Kernel List <linux-kernel@...r.kernel.org>,
Robert Jennings <rcj@...ux.vnet.ibm.com>,
Jeff Moyer <jmoyer@...hat.com>
Subject: Re: [PATCH 3/4] Staging: zram: allow partial page operations
On 07/01/2011 02:47 AM, Jerome Marchand wrote:
> On 06/10/2011 06:41 PM, Nitin Gupta wrote:
>> On 06/10/2011 06:28 AM, Jerome Marchand wrote:
>>> Commit 7b19b8d45b216ff3186f066b31937bdbde066f08 (zram: Prevent overflow
>>> in logical block size) introduced the ZRAM_LOGICAL_BLOCK_SIZE constant to
>>> prevent overflow of the logical block size on 64k page kernels.
>>> However, the current implementation of zram only allows operations on
>>> blocks of the same size as a page. That makes theoretically legitimate 4k
>>> requests fail on 64k page kernels.
>>>
>>> This patch makes zram allow operations on partial pages. Basically, it
>>> means we still do operations on full pages internally, but only copy the
>>> relevant segments from/to the user memory.
>>>
>>
>> Couldn't we just change the struct queue_limits.logical_block_size type to
>> unsigned int or something so it could hold a value of 64K? Then we could
>> avoid making all these changes to handle partial page requests.
>
> I've finally done some tests. At least FAT filesystems are unable to cope
> with 64k logical blocks. Probably some other filesystems are affected too.
> If we want to support them, zram needs to handle operations on partial pages.
>
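To recap why 64K cannot simply be used as the logical block size today:
queue_limits.logical_block_size is an unsigned short, so a value of 65536
wraps to 0, which is the overflow that commit 7b19b8d45b21 was guarding
against. A trivial userspace illustration of the truncation (just for
clarity, not kernel code):

#include <stdio.h>

int main(void)
{
        unsigned int page_size = 64 * 1024;             /* 64k page kernel */
        unsigned short logical_block_size = page_size;  /* 16-bit field */

        /* 65536 mod 2^16 == 0: the block layer would see a zero-size block */
        printf("logical_block_size = %u\n", logical_block_size);
        return 0;
}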
Sorry for the late reply.
If this is the case, we surely need partial page operations. I also
looked into these patches and they look good (though I have not
tested them).
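
For anyone skimming the thread, the core of the approach is: keep
compressing and decompressing whole pages internally, but copy only the
segment that the bio actually asked for. Roughly along these lines -- a
sketch only, the helper name and error handling are illustrative rather
than lifted from the patch:

#include <linux/slab.h>
#include <linux/highmem.h>
#include <linux/bio.h>
#include "zram_drv.h"

/* Read bvec->bv_len bytes starting at 'offset' within page 'index'. */
static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
                          u32 index, int offset)
{
        unsigned char *uncmem, *user_mem;
        int ret;

        /* scratch buffer for the whole decompressed page */
        uncmem = kmalloc(PAGE_SIZE, GFP_NOIO);
        if (!uncmem)
                return -ENOMEM;

        /* decompress the full page as before (illustrative helper name) */
        ret = zram_decompress_page(zram, uncmem, index);
        if (!ret) {
                /* ...but copy only the relevant segment to the caller */
                user_mem = kmap(bvec->bv_page);
                memcpy(user_mem + bvec->bv_offset, uncmem + offset,
                       bvec->bv_len);
                kunmap(bvec->bv_page);
        }

        kfree(uncmem);
        return ret;
}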
Thanks,
Nitin