Message-ID: <BANLkTikA+ugFNS95Zs_o6QqG2u4r2g93=Q@mail.gmail.com>
Date:	Fri, 3 Jun 2011 08:01:44 +0900
From:	Minchan Kim <minchan.kim@...il.com>
To:	Andrea Arcangeli <aarcange@...hat.com>
Cc:	Mel Gorman <mgorman@...e.de>, Mel Gorman <mel@....ul.ie>,
	akpm@...ux-foundation.org, Ury Stankevich <urykhy@...il.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] mm: compaction: Abort compaction if too many pages are
 isolated and caller is asynchronous

On Fri, Jun 3, 2011 at 7:32 AM, Andrea Arcangeli <aarcange@...hat.com> wrote:
> On Fri, Jun 03, 2011 at 07:23:48AM +0900, Minchan Kim wrote:
>> I mean we have more tail pages than head pages, so I think we are likely to
>> meet tail pages. Of course, compared to all pages (page cache, anon, and
>> so on), compound pages would be a very small percentage.
>
> Yes, that's my point: since they're a small percentage, it's no big deal
> to break the loop early.

Indeed.
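
(A worked example of those percentages, assuming x86-64 with 2MB THP:
each THP contributes 1 head page and 511 tail pages, so among THP
subpages the tails dominate; but if THP covers, say, 10% of LRU memory,
tail pages are still only about 10% of all the pages the scanner meets.)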

>
>> > isolated the head and it's useless to insist on more tail pages (at
>> > least for large page size like on x86). Plus we've compaction so
>>
>> I can't understand your point. Could you elaborate?
>
> What I meant is that if we already isolated the head page of the THP,
> we don't need to insist on the tail pages; breaking the loop early
> will still give us a chance to free a whole 2M because we isolated
> the head page (it'll involve some work and swapping, but if it was a
> compound trans page we're OK to break the loop, and we're not making
> the logic any worse). Provided the PMD_SIZE is quite large, like
> 2/4M...

Is this what you want? (It's almost pseudo-code.)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7a4469b..9d7609f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1017,7 +1017,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
        for (scan = 0; scan < nr_to_scan && !list_empty(src); scan++) {
                struct page *page;
                unsigned long pfn;
-               unsigned long end_pfn;
+               unsigned long start_pfn, end_pfn;
                unsigned long page_pfn;
                int zone_id;

@@ -1057,9 +1057,9 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
                 */
                zone_id = page_zone_id(page);
                page_pfn = page_to_pfn(page);
-               pfn = page_pfn & ~((1 << order) - 1);
+               start_pfn = pfn = page_pfn & ~((1 << order) - 1);
                end_pfn = pfn + (1 << order);
-               for (; pfn < end_pfn; pfn++) {
+               while (pfn < end_pfn) {
                        struct page *cursor_page;

                        /* The target page is in the block, ignore it. */
@@ -1086,17 +1086,25 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
                                break;

                        if (__isolate_lru_page(cursor_page, mode, file) == 0) {
+                               int isolated_pages;
                                list_move(&cursor_page->lru, dst);
                                mem_cgroup_del_lru(cursor_page);
-                               nr_taken += hpage_nr_pages(page);
+                               isolated_pages = hpage_nr_pages(cursor_page);
+                               nr_taken += isolated_pages;
+                               /* If we have isolated enough pages, break early. */
+                               if (nr_taken >= end_pfn - start_pfn)
+                                       break;
+                               pfn += isolated_pages;
                                nr_lumpy_taken++;
                                if (PageDirty(cursor_page))
                                        nr_lumpy_dirty++;
                                scan++;
                        } else {
                                /* the page is freed already. */
-                               if (!page_count(cursor_page))
+                               if (!page_count(cursor_page)) {
+                                       pfn++;
                                        continue;
+                               }
                                break;
                        }
                }
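
To see the intended control flow outside the diff context, here is a
self-contained sketch of the block loop (isolate_block_sketch is a
made-up name; the target-page skip, zone checks and memcg accounting
are omitted; __isolate_lru_page(), hpage_nr_pages(), pfn_to_page() and
page_count() are the real kernel helpers):

static unsigned long isolate_block_sketch(unsigned long start_pfn,
					  unsigned int order,
					  struct list_head *dst,
					  int mode, int file)
{
	unsigned long pfn = start_pfn;
	unsigned long end_pfn = start_pfn + (1UL << order);
	unsigned long nr_taken = 0;

	while (pfn < end_pfn) {
		struct page *cursor_page = pfn_to_page(pfn);

		if (__isolate_lru_page(cursor_page, mode, file) == 0) {
			/* A THP head accounts for all of its tail pages. */
			int isolated_pages = hpage_nr_pages(cursor_page);

			list_move(&cursor_page->lru, dst);
			nr_taken += isolated_pages;
			/*
			 * Advancing past the tails makes the loop
			 * condition break early once the block is covered.
			 */
			pfn += isolated_pages;
		} else if (!page_count(cursor_page)) {
			/* Already freed to the buddy: skip this pfn. */
			pfn++;
		} else {
			/* An unisolatable page: give up on the block. */
			break;
		}
	}
	return nr_taken;
}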

>
> The only way this patch makes things worse is for slub order 3 in the
> process of being freed. But tail pages aren't generally free anyway,
> so I doubt this really makes any difference; plus the tail flag is
> getting cleared as soon as the page reaches the buddy, so it's probably

Okay. Considering that PG_tail is cleared as soon as the slub order-3
page is freed, it would be a very rare case.
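
(An illustrative sketch of why that window is tiny, loosely modeled on
and simplified from the compound-page teardown in the buddy free path;
treat the exact helper names as assumptions, since they vary across
kernel versions:)

static void destroy_compound_sketch(struct page *head, unsigned int order)
{
	unsigned long i, nr_pages = 1UL << order;

	/* After this, the head is an ordinary buddy page again. */
	__ClearPageHead(head);
	for (i = 1; i < nr_pages; i++) {
		struct page *tail = head + i;

		/*
		 * From here on PageTail(tail) is false, so a scanner can
		 * only observe a stale tail flag in the narrow window
		 * before the freed page reaches this point.
		 */
		__ClearPageTail(tail);
	}
}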

> unnoticeable, as this then makes a difference only during a race (plus
> the tail page can't be isolated; only the head page can be part of the
> LRUs, and only if it's THP).
>
>> > insisting and screwing up LRU ordering isn't worth it; better to be
>> > permissive and abort... in fact I wouldn't mind removing the
>> > entire lumpy logic when COMPACTION_BUILD is true, but that alters the
>> > trace too...
>>
>> AFAIK, that's the final destination, since compaction will not break
>> LRU ordering once my patch (inorder-putback) is merged.
>
> Agreed. I like your patchset; sorry for not having reviewed it in
> detail yet, but other issues kept popping up over the last few
> days.

No problem. It's more urgent than mine. :)

>
>> >> get_page(cursor_page)
>> >> /* The page is freed already */
>> >> if (1 == page_count(cursor_page)) {
>> >>       put_page(cursor_page)
>> >>       continue;
>> >> }
>> >> put_page(cursor_page);
>> >
>> > We can't call get_page on a tail page or we break split_huge_page,
>>
>> Why can't we call get_page on a tail page if the tail page isn't free?
>> Maybe I need to investigate split_huge_page.
>
> Yes, it's split_huge_page: only gup is allowed to increase the tail
> page count, because we're guaranteed that while gup_fast does it,
> __split_huge_page_refcount isn't running yet, as the pmd wasn't
> set as splitting and irqs were disabled (or, in the gup slow version,
> we'd be holding the page_table_lock after checking again that the pmd
> wasn't splitting, so __split_huge_page_refcount will wait).
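
(A hedged sketch to make that rule concrete; the probe function itself
is hypothetical, while PageTail(), get_page() and put_page() are the
real helpers:)

static bool can_probe_refcount(struct page *page)
{
	/*
	 * Unsafe on tails: between the PageTail() check and get_page(),
	 * __split_huge_page_refcount() may run and redistribute
	 * references from the head to the tail pages, so a reference
	 * taken here could be miscounted.  Only gup may take tail
	 * references: gup_fast runs with irqs disabled, and the gup
	 * slow path holds page_table_lock after re-checking that the
	 * pmd is not splitting, so __split_huge_page_refcount() waits.
	 */
	if (PageTail(page))
		return false;

	/* e.g. under zone->lru_lock with page_count(page) > 0 */
	get_page(page);
	put_page(page);
	return true;
}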

Thanks. I will take some time to understand your point by reviewing
split_huge_page and this comment of yours.

You convinced me and made me think about things I hadn't considered,
which are good points.
Thanks, Andrea.

>
> Thanks,
> Andrea
>



-- 
Kind regards,
Minchan Kim