Date:	Wed, 13 Jun 2012 13:42:25 +0900
From:	Minchan Kim <minchan@...nel.org>
To:	John Stultz <john.stultz@...aro.org>
CC:	KOSAKI Motohiro <kosaki.motohiro@...il.com>,
	Dave Hansen <dave@...ux.vnet.ibm.com>,
	Dmitry Adamushko <dmitry.adamushko@...il.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Android Kernel Team <kernel-team@...roid.com>,
	Robert Love <rlove@...gle.com>, Mel Gorman <mel@....ul.ie>,
	Hugh Dickins <hughd@...gle.com>,
	Rik van Riel <riel@...hat.com>,
	Dave Chinner <david@...morbit.com>, Neil Brown <neilb@...e.de>,
	Andrea Righi <andrea@...terlinux.com>,
	"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
	Taras Glek <tgek@...illa.com>, Mike Hommey <mh@...ndium.org>,
	Jan Kara <jack@...e.cz>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [PATCH 3/3] [RFC] tmpfs: Add FALLOC_FL_MARK_VOLATILE/UNMARK_VOLATILE
 handlers

On 06/13/2012 10:21 AM, John Stultz wrote:

> On 06/12/2012 05:10 PM, Minchan Kim wrote:
>> On 06/13/2012 04:35 AM, John Stultz wrote:
>>
>>> On 06/12/2012 12:16 AM, Minchan Kim wrote:
>>>> Please Cc linux-mm.
>>>>
>>>> On 06/09/2012 12:45 PM, John Stultz wrote:
>>>>
>>>>
>>>>> volatile.  Since we assume ranges are untouched while volatile, that
>>>>> should preserve LRU purging behavior on single-node systems, and on
>>>>> multi-node systems it will approximate it fairly closely.
>>>>>
>>>>> My main concern with this approach is that marking and unmarking
>>>>> volatile ranges needs to be fast, so I'm worried about the additional
>>>>> overhead of activating each of the contained pages on mark_volatile.
>>>> Yes, it could be a problem if the range is very large and already
>>>> populated.
>>>> Why can't we add new hooks?
>>>>
>>>> Just a concept to show my intention:
>>>>
>>>> +int shrink_volatile_pages(struct zone *zone)
>>>> +{
>>>> +       int ret = 0;
>>>> +
>>>> +       if (zone_page_state(zone, NR_ZONE_VOLATILE))
>>>> +               ret = shmem_purge_one_volatile_range(zone);
>>>> +       return ret;
>>>> +}
>>>> +
>>>>    static void shrink_zone(struct zone *zone, struct scan_control *sc)
>>>>    {
>>>>           struct mem_cgroup *root = sc->target_mem_cgroup;
>>>> @@ -1827,6 +1835,18 @@ static void shrink_zone(struct zone *zone, struct scan_control *sc)
>>>>                   .priority = sc->priority,
>>>>           };
>>>>           struct mem_cgroup *memcg;
>>>> +       int ret;
>>>> +
>>>> +       /*
>>>> +        * Before we dive into the troublemaker, let's look at easily-
>>>> +        * reclaimable pages first and avoid costly reclaim if possible.
>>>> +        */
>>>> +       do {
>>>> +               ret = shrink_volatile_pages(zone);
>>>> +               if (ret && zone_watermark_ok(zone, sc->order, xxx))
>>>> +                       return;
>>>> +       } while (ret);
>>> Hmm. I'm confused.
>>> This doesn't seem that different from the shrinker approach.
>>
>> The shrinker is called after shrink_list, which means normal pages can
>> be reclaimed before we reclaim volatile pages. We shouldn't do that.
> 
> 
> Ah. Ok. Maybe that's a reasonable compromise between the shrinker
> approach and the more complex approach I just posted to lkml?
> (Forgive me for forgetting to CC you and linux-mm with my latest post!)


NP.

> 
>>> How does this resolve the NUMA-unawareness issue that Kosaki-san
>>> brought up?
>> Basically, I think your shrink function should be smarter.
>>
>> When fallocate is called, we can get the mempolicy from
>> shmem_inode_info and pass it to the volatile_range so that the
>> volatile_range can keep the NUMA information.
> Hrm.. That sounds reasonable. I'll look into the mem_policy bits and try
> to learn more.
> 
>> When shmem_purge_one_volatile_range is called, it receives zone
>> information. So shmem_purge_one_volatile_range should find a range
>> that matches the NUMA policy and the passed zone.
>>
>> Assumption:
>>    A range contains pages from the same node/zone where possible.
>>
>> I am not familiar with the NUMA handling code, so KOSAKI/Rik can point
>> out if I am wrong.
> Right, the range may cross nodes/zones, but maybe that's not a huge
> deal? The only bit I'd worry about is the LRU scanning being
> non-constant as we search for a range that matches the node we want to
> free from. I guess we could have per-node/zone LRUs.


Good.
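
Something like the following is what I have in mind on the purge side.
Again, just a sketch: volatile_lru_list and range_matches_zone are
made-up names, and the locking is omitted:

/* Purge one volatile range whose pages can live in the given zone. */
static int shmem_purge_one_volatile_range(struct zone *zone)
{
        struct volatile_range *range, *next;

        list_for_each_entry_safe(range, next, &volatile_lru_list, lru) {
                /* skip ranges whose policy cannot place pages in this zone */
                if (!range_matches_zone(range, zone))
                        continue;

                shmem_truncate_range(range->inode,
                                     (loff_t)range->start << PAGE_SHIFT,
                                     ((loff_t)(range->end + 1) << PAGE_SHIFT) - 1);
                list_del(&range->lru);
                kfree(range);
                return 1;       /* purged one range */
        }
        return 0;
}

With per-node/zone LRUs of ranges, as you say, the scan above would not
have to walk ranges that belong to other nodes.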

> 
> 
>>>>> The other question I have with this approach is that if we're on a
>>>>> system that doesn't have swap, it *seems* (not totally sure I
>>>>> understand it yet) the tmpfs file pages will be skipped over when we
>>>>> call shrink_lruvec.  So it seems we may need to add a new lru_list
>>>>> enum and nr[] entry (maybe LRU_VOLATILE?).  So then it may be that
>>>>> when we mark a range as volatile, instead of just activating it, we
>>>>> move it to the volatile LRU, and then when we shrink from that list,
>>>>> we call back to the filesystem to trigger the entire range purging.
>>>> Adding a new LRU might make fallocate(VOLATILE) very slow, so I hope
>>>> we can avoid that if possible.
>>> Indeed. This is a major concern. I'm currently prototyping it out so I
>>> have a concrete sense of the performance cost.
>> If the performance loss isn't big, that would be a workable approach!
> I've not had a chance to measure it yet, as I wanted to get my very
> rough patches out for discussion first. But if folks don't NACK it
> outright, I'll provide some data there. The hard part is that range
> creation would have a cost linear in the number of pages in the range,
> which at some point will be a pain.


That's right, so IMHO my suggestion could be a solution.
I looked through your new patchset [5/6]. I understand your intention,
but the code still has problems. I haven't commented on them yet:
before a detailed review, I would like to hear opinions from others,
and I am curious whether you will change your approach or not.
It could save our precious time. :)
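
Just to illustrate my worry about fallocate(VOLATILE) speed with a new
LRU: marking would have to visit every populated page in the range,
roughly like this (a sketch only; putback_volatile_page is an invented
helper for moving a page to the hypothetical LRU_VOLATILE list):

static void shmem_mark_range_lru_volatile(struct address_space *mapping,
                                          pgoff_t start, pgoff_t end)
{
        pgoff_t index;

        for (index = start; index <= end; index++) {
                struct page *page = find_get_page(mapping, index);

                if (!page)
                        continue;
                /* isolate_lru_page() returns 0 on success */
                if (!isolate_lru_page(page))
                        putback_volatile_page(page);
                page_cache_release(page);
        }
}

So the cost grows linearly with the number of pages already present in
the range, which is exactly the case a large, populated range hits.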

> 
> Thanks again for your input!
> -john


Thanks for your effort!

-- 
Kind regards,
Minchan Kim
