Message-ID: <CAKgT0UcAqGX26pcQLzFUevHsLu-CtiyOYe15uG3bkhGZ5BJKAg@mail.gmail.com>
Date:   Thu, 7 Mar 2019 13:32:28 -0800
From:   Alexander Duyck <alexander.duyck@...il.com>
To:     David Hildenbrand <david@...hat.com>
Cc:     Nitesh Narayan Lal <nitesh@...hat.com>,
        kvm list <kvm@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        linux-mm <linux-mm@...ck.org>,
        Paolo Bonzini <pbonzini@...hat.com>, lcapitulino@...hat.com,
        pagupta@...hat.com, wei.w.wang@...el.com,
        Yang Zhang <yang.zhang.wz@...il.com>,
        Rik van Riel <riel@...riel.com>,
        "Michael S. Tsirkin" <mst@...hat.com>, dodgen@...gle.com,
        Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
        dhildenb@...hat.com, Andrea Arcangeli <aarcange@...hat.com>
Subject: Re: [RFC][Patch v9 2/6] KVM: Enables the kernel to isolate guest free pages

On Thu, Mar 7, 2019 at 11:30 AM David Hildenbrand <david@...hat.com> wrote:
>
> On 07.03.19 20:23, Nitesh Narayan Lal wrote:
> >
> > On 3/7/19 1:30 PM, Alexander Duyck wrote:
> >> On Wed, Mar 6, 2019 at 7:51 AM Nitesh Narayan Lal <nitesh@...hat.com> wrote:
> >>> This patch enables the kernel to scan, via
> >>> guest_free_page_hinting(), the per-cpu array which carries head
> >>> pages of order FREE_PAGE_HINTING_MIN_ORDER (MAX_ORDER - 1) taken
> >>> from the buddy free list.
> >>> guest_free_page_hinting() scans the entire per-cpu array while
> >>> acquiring the zone lock corresponding to the pages being scanned.
> >>> If a page is still free and present in the buddy free list, it
> >>> tries to isolate the page and add it to a dynamically allocated
> >>> array.
> >>>
> >>> Once this scanning process is complete, and if any isolated pages
> >>> were added to the dynamically allocated array,
> >>> guest_free_page_report() is invoked. Before that, however, the
> >>> per-cpu array index is reset so that it can continue capturing
> >>> pages from the buddy free list.
> >>>
> >>> In this patch, guest_free_page_report() simply releases the pages
> >>> back to the buddy allocator using __free_one_page().
> >>>
> >>> Signed-off-by: Nitesh Narayan Lal <nitesh@...hat.com>
> >> I'm pretty sure this code is not thread-safe and has various issues.
> >>
> >>> ---
> >>>  include/linux/page_hinting.h |   5 ++
> >>>  mm/page_alloc.c              |   2 +-
> >>>  virt/kvm/page_hinting.c      | 154 +++++++++++++++++++++++++++++++++++
> >>>  3 files changed, 160 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/include/linux/page_hinting.h b/include/linux/page_hinting.h
> >>> index 90254c582789..d554a2581826 100644
> >>> --- a/include/linux/page_hinting.h
> >>> +++ b/include/linux/page_hinting.h
> >>> @@ -13,3 +13,8 @@
> >>>
> >>>  void guest_free_page_enqueue(struct page *page, int order);
> >>>  void guest_free_page_try_hinting(void);
> >>> +extern int __isolate_free_page(struct page *page, unsigned int order);
> >>> +extern void __free_one_page(struct page *page, unsigned long pfn,
> >>> +                           struct zone *zone, unsigned int order,
> >>> +                           int migratetype);
> >>> +void release_buddy_pages(void *obj_to_free, int entries);
> >>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >>> index 684d047f33ee..d38b7eea207b 100644
> >>> --- a/mm/page_alloc.c
> >>> +++ b/mm/page_alloc.c
> >>> @@ -814,7 +814,7 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
> >>>   * -- nyc
> >>>   */
> >>>
> >>> -static inline void __free_one_page(struct page *page,
> >>> +inline void __free_one_page(struct page *page,
> >>>                 unsigned long pfn,
> >>>                 struct zone *zone, unsigned int order,
> >>>                 int migratetype)
> >>> diff --git a/virt/kvm/page_hinting.c b/virt/kvm/page_hinting.c
> >>> index 48b4b5e796b0..9885b372b5a9 100644
> >>> --- a/virt/kvm/page_hinting.c
> >>> +++ b/virt/kvm/page_hinting.c
> >>> @@ -1,5 +1,9 @@
> >>>  #include <linux/mm.h>
> >>>  #include <linux/page_hinting.h>
> >>> +#include <linux/page_ref.h>
> >>> +#include <linux/kvm_host.h>
> >>> +#include <linux/kernel.h>
> >>> +#include <linux/sort.h>
> >>>
> >>>  /*
> >>>   * struct guest_free_pages- holds array of guest freed PFN's along with an
> >>> @@ -16,6 +20,54 @@ struct guest_free_pages {
> >>>
> >>>  DEFINE_PER_CPU(struct guest_free_pages, free_pages_obj);
> >>>
> >>> +/*
> >>> + * struct guest_isolated_pages- holds the buddy isolated pages which are
> >>> + * supposed to be freed by the host.
> >>> + * @pfn: page frame number for the isolated page.
> >>> + * @order: order of the isolated page.
> >>> + */
> >>> +struct guest_isolated_pages {
> >>> +       unsigned long pfn;
> >>> +       unsigned int order;
> >>> +};
> >>> +
> >>> +void release_buddy_pages(void *obj_to_free, int entries)
> >>> +{
> >>> +       int i = 0;
> >>> +       int mt = 0;
> >>> +       struct guest_isolated_pages *isolated_pages_obj = obj_to_free;
> >>> +
> >>> +       while (i < entries) {
> >>> +               struct page *page = pfn_to_page(isolated_pages_obj[i].pfn);
> >>> +
> >>> +               mt = get_pageblock_migratetype(page);
> >>> +               __free_one_page(page, page_to_pfn(page), page_zone(page),
> >>> +                               isolated_pages_obj[i].order, mt);
> >>> +               i++;
> >>> +       }
> >>> +       kfree(isolated_pages_obj);
> >>> +}
> >> You shouldn't be accessing __free_one_page without holding the zone
> >> lock for the page. You might consider confining yourself to one
> >> zone's worth of hints at a time. Then you can acquire the lock once
> >> and return all of the memory you have freed under it.
> > That is correct.
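
To spell that out, here is roughly what I had in mind (completely
untested sketch, reusing the names from the patch above, and assuming
the isolated array is sorted so that same-zone entries are adjacent):

        void release_buddy_pages(void *obj_to_free, int entries)
        {
                struct guest_isolated_pages *isolated_pages_obj = obj_to_free;
                unsigned long flags;
                int i = 0;

                while (i < entries) {
                        struct zone *zone =
                                page_zone(pfn_to_page(isolated_pages_obj[i].pfn));

                        /*
                         * Take the zone lock once per run of same-zone
                         * entries, instead of calling __free_one_page()
                         * with no lock held at all.
                         */
                        spin_lock_irqsave(&zone->lock, flags);
                        while (i < entries) {
                                struct page *page =
                                        pfn_to_page(isolated_pages_obj[i].pfn);

                                if (page_zone(page) != zone)
                                        break;
                                __free_one_page(page, page_to_pfn(page), zone,
                                                isolated_pages_obj[i].order,
                                                get_pageblock_migratetype(page));
                                i++;
                        }
                        spin_unlock_irqrestore(&zone->lock, flags);
                }
                kfree(isolated_pages_obj);
        }
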
> >>
> >> This is one of the reasons why I am thinking that setting a bit in
> >> the page, and then spinning on that bit in arch_alloc_page, might be
> >> a nice way to get around this. Then you only have to take the zone
> >> lock when you are finding the pages you want to hint on and setting
> >> the bit indicating they are mid-hint. Otherwise you have to take the
> >> zone lock both to pull pages out and to put them back in, and the
> >> likelihood of a lock collision is much higher.
> > Do you think adding a new flag to the page structure will be acceptable?
>
> My lesson learned: forget it. If anything, reuse some other flag that
> might be safe in that context. Hard to tell if that is even possible,
> and whether it would be accepted upstream.

I was thinking we could probably just resort to reuse. Essentially
what we are looking at doing is idle page tracking, so my thought is to
see if we can just reuse those bits in the buddy allocator. Then we
would essentially have three states: young, "hinting", and idle.
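
Roughly what I am picturing, with completely made-up names here
(PageHinting() stands in for whichever borrowed idle-tracking bit we
end up using):

        static inline void arch_alloc_page(struct page *page, int order)
        {
                /*
                 * Three states for a page sitting in the buddy:
                 *   young   - freed or touched since the last hint pass
                 *   hinting - isolated, hint currently in flight to the host
                 *   idle    - hint completed, nothing more to do
                 * Allocation only ever has to wait in the middle state.
                 */
                while (PageHinting(page))
                        cpu_relax();
        }
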

> Spinning is not the solution. What you would want is for the buddy to
> actually skip over these pages and only try to use them (-> spin) when
> OOM. That means core mm changes (see my other reply).

The spinning is more of a workaround. Ideally we should almost never
hit it anyway, since what we really want is to be performing hints on
cold pages, so hopefully we will be on the other end of the LRU list
from any active allocations.

> This all sounds like future work which can be built on top of this work.

Actually, I was kind of thinking about this the other way around. The
simple spin approach is a good first step. If we have a bit or two in
the page that tell us whether the page is available, we could then
follow up with optimizations that only allocate either a young or an
idle page and don't bother with pages being "hinted", at least in the
first pass.
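
Concretely, I could see the free list scan eventually doing something
along these lines (again untested, and first_usable_page() and
PageHinting() are made-up names): prefer pages that are not mid-hint,
and only fall back to a page we would have to spin on when everything
on the list is being hinted.

        static struct page *first_usable_page(struct free_area *area,
                                              int migratetype)
        {
                struct page *page;

                /* First pass: skip anything with a hint in flight. */
                list_for_each_entry(page, &area->free_list[migratetype], lru)
                        if (!PageHinting(page))
                                return page;

                /*
                 * Everything is mid-hint; fall back to the head of the
                 * list and let arch_alloc_page() spin on it.
                 */
                return list_first_entry_or_null(&area->free_list[migratetype],
                                                struct page, lru);
        }
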

As it currently stands we are only really performing hints on
higher-order pages anyway, so if we happen to encounter a slight delay
under memory pressure it probably wouldn't be that noticeable compared
to the memory subsystem having to go through and try to compact things
out of lower-order pages. In my mind, introducing a delay in memory
allocation in the case of a collision would be preferable to triggering
allocation failures.
