Date:	Tue, 15 Sep 2009 11:30:16 +0300
From:	Pekka Enberg <penberg@...helsinki.fi>
To:	Nitin Gupta <ngupta@...are.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Hugh Dickins <hugh.dickins@...cali.co.uk>,
	Ed Tomlinson <edt@....ca>, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org, linux-mm-cc@...top.org,
	Ingo Molnar <mingo@...e.hu>,
	Frédéric Weisbecker <fweisbec@...il.com>,
	Steven Rostedt <rostedt@...dmis.org>, Greg KH <greg@...ah.com>
Subject: Re: [PATCH 2/4] virtual block device driver (ramzswap)

On Tue, Sep 15, 2009 at 11:21 AM, Nitin Gupta <ngupta@...are.org> wrote:
> I don't want to ponder too much about this point now. If you are all okay
> with keeping this function buried in the driver, I will do so. I'm almost
> tired of maintaining this compcache thing outside of mainline.

Yup, whatever makes most sense to you.

>> Then make ramzswap depend on !CONFIG_ARM. In any case, CONFIG_ARM bits
>> really don't belong into drivers/block.
>
> ARM is an extremely important user of compcache -- it's currently being
> tested (unofficially) on Android, Nokia devices, etc.

That's not a technical argument for keeping CONFIG_ARM in the driver.

>>>>> +
>>>>> +       trace_mark(ramzswap_lock_wait, "ramzswap_lock_wait");
>>>>> +       mutex_lock(&rzs->lock);
>>>>> +       trace_mark(ramzswap_lock_acquired, "ramzswap_lock_acquired");
>>>>
>>>> Hmm? What's this? I don't think you should be doing ad hoc
>>>> trace_mark() in driver code.
>>>
>>> This is not ad hoc. It is there to see contention on this lock, which I
>>> believe is a major bottleneck even on dual cores. I need to keep it to
>>> measure improvements as I gradually make the locking more fine-grained
>>> (using per-cpu buffers, etc.).
>>
>> It is ad hoc. Talk to the ftrace folks how to do it properly. I'd keep
>> those bits out-of-tree until the issue is resolved, really.
>
> /me is speechless.

That's fine, I CC'd the ftrace folks. Hopefully they'll be able to help you.
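
For the record, a minimal sketch of what a dedicated tracepoint could look
like in place of the ad hoc trace_mark() calls. The header name and the
event payload below are purely illustrative, not something from the patch:

    /* include/trace/events/ramzswap.h (hypothetical) */
    #undef TRACE_SYSTEM
    #define TRACE_SYSTEM ramzswap

    #if !defined(_TRACE_RAMZSWAP_H) || defined(TRACE_HEADER_MULTI_READ)
    #define _TRACE_RAMZSWAP_H

    #include <linux/tracepoint.h>

    /* Fired just before taking rzs->lock; pairing it with a matching
     * "lock_acquired" event lets lock wait time be derived from the
     * trace timestamps. */
    TRACE_EVENT(ramzswap_lock_wait,
            TP_PROTO(int index),
            TP_ARGS(index),
            TP_STRUCT__entry(
                    __field(int, index)
            ),
            TP_fast_assign(
                    __entry->index = index;
            ),
            TP_printk("index=%d", __entry->index)
    );

    #endif /* _TRACE_RAMZSWAP_H */

    /* This part must be outside protection */
    #include <trace/define_trace.h>

One .c file in the driver defines CREATE_TRACE_POINTS before including the
header, and the call sites become trace_ramzswap_lock_wait(index); the
events then show up under /sys/kernel/debug/tracing/events/ramzswap/. The
ftrace folks can say whether this is the right shape for what you want to
measure.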

>
>>>>> +       rzs->compress_buffer = kzalloc(2 * PAGE_SIZE, GFP_KERNEL);
>>>>
>>>> Use alloc_pages(__GFP_ZERO) here?
>>>
>>> Allocate pages and then map them (i.e. vmalloc)? What would we gain? With
>>> vmalloc, the pages might not be physically contiguous, which might hurt
>>> performance as the compressor runs over this buffer.
>>>
>>> So, use kzalloc().
>>
>> I don't know what you're talking about. kzalloc() calls
>> __get_free_pages() directly for your allocation. You probably should
>> use that directly.
>
> What is wrong with kzalloc? I'm wholly, totally stumped.
> I respect your time reviewing the code, but this really goes over my head.
> We can continue arguing about get_pages vs kzalloc, but I doubt we will
> gain anything from it.

The slab allocator needs metadata for the allocation, so you're wasting
memory. If you really want *two pages*, why don't you simply use the page
allocator for that?

Btw, Nitin, why are you targeting drivers/block and not
drivers/staging at this point? It seems obvious enough that there are
still some issues that need to be ironed out (like the CONFIG_ARM
thing) so submitting the driver for inclusion in drivers/staging and
fixing it up there incrementally would likely save you from a lot of
trouble. Greg, does ramzswap sound like something that you'd be
willing to take?

                        Pekka
