Date:	Tue, 30 Apr 2013 18:32:54 -0400
From:	Robert Love <rlove@...gle.com>
To:	Shankar Brahadeeswaran <shankoo77@...il.com>
Cc:	Dan Carpenter <dan.carpenter@...cle.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Bjorn Bringert <bringert@...gle.com>,
	Al Viro <viro@...iv.linux.org.uk>, devel@...verdev.osuosl.org,
	Hugh Dickins <hughd@...gle.com>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Anjana V Kumar <anjanavk12@...il.com>
Subject: Re: [BUG] staging: android: ashmem: Deadlock during ashmem_shrink

On Tue, Apr 30, 2013 at 9:29 AM, Shankar Brahadeeswaran
<shankoo77@...il.com> wrote:

> Question:
> On occasions when we return because the lock is unavailable, what is
> the worst-case number of ashmem pages left unfreed (lru_count)? Could
> it be very large, and would that have side effects?

On that VM shrink path, all of them, but they'll be freed on the next
pass. Even if they weren't, that would be fine: the ashmem cache
functionality is advisory. User-space doesn't even know when VM
pressure will occur, so it can't possibly care exactly when its
unpinned pages get purged.
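
For anyone following along, the early-return behaviour under discussion
looks roughly like the sketch below. This is only an illustration of
the idea, not the exact patch; the names (ashmem_mutex, ashmem_lru_list,
lru_count, struct shrink_control) follow the staging ashmem driver of
this era, and the loop body is elided:

static int ashmem_shrink(struct shrinker *shrink, struct shrink_control *sc)
{
	struct ashmem_range *range, *next;

	/* Query pass: just report how many unpinned pages are on the LRU. */
	if (!sc->nr_to_scan)
		return lru_count;

	/*
	 * The task that triggered reclaim may already hold ashmem_mutex
	 * (it entered the allocator from inside ashmem).  Backing off
	 * instead of blocking avoids the deadlock; the unpinned ranges
	 * simply stay on the LRU until a later shrink pass.
	 */
	if (!mutex_trylock(&ashmem_mutex))
		return -1;

	list_for_each_entry_safe(range, next, &ashmem_lru_list, lru) {
		/*
		 * Punch a hole in the backing file, drop the range from
		 * the LRU, and decrement sc->nr_to_scan, as the driver
		 * already does (elided here).
		 */
	}

	mutex_unlock(&ashmem_mutex);
	return lru_count;
}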

> To answer this question, I added some instrumentation code to the
> ashmem_shrink function on top of the patch. I ran Android monkey
> tests with a lot of memory-hungry applications so as to hit the
> low-memory situation more frequently. After running this for almost a
> day, I did not see a single case where the shrinker failed to get the
> mutex. In fact, what I found is that (in this use case at least) most
> of the time lru_count is zero, which means the applications have not
> unpinned any pages. So the shrinker has no work to do (basically,
> shrink_slab does not call ashmem_shrink a second time). Worst case,
> if we do hit a scenario where the shrinker is called, I'm sure
> lru_count would be very low. So even if the shrinker returns without
> freeing those pages (because the lock is unavailable), it's not going
> to be costly.

That is expected. This race window is very, very small.
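
(Aside: instrumentation along the lines described above can be as small
as a counter and a debug print on the trylock-failure path. This is a
hypothetical sketch, not the code actually used for the experiment;
ashmem_shrink_skipped is an invented name, while ashmem_mutex and
lru_count are the driver's own:)

	if (!mutex_trylock(&ashmem_mutex)) {
		ashmem_shrink_skipped++;	/* hypothetical static counter */
		pr_debug("ashmem_shrink: mutex busy, skipping pass, lru_count=%u\n",
			 lru_count);
		return -1;
	}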

> After this experiment, I too think that this patch (returning from
> ashmem_shrink if the lock is not available) is good enough and does
> not seem to have any major side effects.
>
> PS: Any plans of submitting this patch formally?

Sure. Greg? :)

       Robert
