Message-ID: <20120824003353.GG10777@t510.redhat.com>
Date:	Thu, 23 Aug 2012 21:33:53 -0300
From:	Rafael Aquini <aquini@...hat.com>
To:	"Michael S. Tsirkin" <mst@...hat.com>
Cc:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Peter Zijlstra <peterz@...radead.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	virtualization@...ts.linux-foundation.org,
	Rusty Russell <rusty@...tcorp.com.au>,
	Rik van Riel <riel@...hat.com>, Mel Gorman <mel@....ul.ie>,
	Andi Kleen <andi@...stfloor.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
	Minchan Kim <minchan@...nel.org>
Subject: Re: [PATCH v8 1/5] mm: introduce a common interface for balloon
 pages mobility

On Fri, Aug 24, 2012 at 02:36:16AM +0300, Michael S. Tsirkin wrote:
> I would wake it each time after adding a page, then it
> can stop waiting when it leaks enough.
> But again, it's cleaner to just keep tracking all
> pages, let mm hang on to them by keeping a reference.
> 
Here is a rough idea of how it is shaping up:

Basically, I have introduced an atomic counter to track isolated pages, and I
have also changed vb->num_pages into an atomic counter. All inc/dec operations
take place under the pages_lock spinlock, and we only touch a page while holding
its page lock.
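
To illustrate the idea (this is not part of the patch below; the helper names
and the num_isolated_pages field are just placeholders for the example), the
bookkeeping on the isolation/putback side could look roughly like this:

/*
 * Illustrative sketch only -- not the posted patch.  Assumes an
 * atomic_t num_isolated_pages in struct virtio_balloon and callers
 * (the compaction callbacks) that already hold the page lock.
 */
static void balloon_isolation_sketch(struct virtio_balloon *vb,
                                     struct page *page)
{
        spin_lock(&vb->pages_lock);
        list_del(&page->lru);                   /* off vb->pages      */
        atomic_inc(&vb->num_isolated_pages);    /* page is in flight  */
        spin_unlock(&vb->pages_lock);
}

static void balloon_putback_sketch(struct virtio_balloon *vb,
                                   struct page *page)
{
        spin_lock(&vb->pages_lock);
        list_add(&page->lru, &vb->pages);       /* back on vb->pages  */
        atomic_dec(&vb->num_isolated_pages);    /* no longer isolated */
        spin_unlock(&vb->pages_lock);
}

The real callbacks may end up looking different; the point is only that both
counters move under pages_lock while the page lock is held.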

It's still missing the wait part (I'll write it over the weekend), but IMHO it
will address your concerns (and mine). A purely illustrative waitqueue sketch of
that missing piece follows the patch fragment below.

---8<---
+/*
+ * Wait for isolated pages to get migrated back to vb->pages before
+ * proceeding with the leak, when the leak target cannot be satisfied
+ * by the pages currently left on the list.
+ * Called with vb->pages_lock held; may drop and re-acquire it.
+ */
+static inline void __wait_on_isolated_pages(struct virtio_balloon *vb,
+                                           size_t num)
+{
+       /* There are no isolated pages for this balloon device */
+       if (!atomic_read(&vb->num_isolated_pages))
+               return;
+
+       /* the leak target is smaller than # of pages on vb->pages list */
+       if (num < (atomic_read(&vb->num_pages) -
+           atomic_read(&vb->num_isolated_pages)))
+               return;
+
+       spin_unlock(&vb->pages_lock);
+       /* wait stuff goes here */
+       spin_lock(&vb->pages_lock);
+}
+
 static void leak_balloon(struct virtio_balloon *vb, size_t num)
 {
-       struct page *page;
+       /* The array of pfns we tell the Host about. */
+       unsigned int num_pfns;
+       u32 pfns[VIRTIO_BALLOON_ARRAY_PFNS_MAX];

        /* We can only do one array worth at a time. */
-       num = min(num, ARRAY_SIZE(vb->pfns));
+       num = min(num, ARRAY_SIZE(pfns));

-       for (vb->num_pfns = 0; vb->num_pfns < num;
-            vb->num_pfns += VIRTIO_BALLOON_PAGES_PER_PAGE) {
-               page = list_first_entry(&vb->pages, struct page, lru);
-               list_del(&page->lru);
-               set_page_pfns(vb->pfns + vb->num_pfns, page);
-               vb->num_pages -= VIRTIO_BALLOON_PAGES_PER_PAGE;
+       for (num_pfns = 0; num_pfns < num;
+            num_pfns += VIRTIO_BALLOON_PAGES_PER_PAGE) {
+               struct page *page = NULL;
+               spin_lock(&vb->pages_lock);
+               __wait_on_isolated_pages(vb, num);
+
+               if (!list_empty(&vb->pages))
+                       page = list_first_entry(&vb->pages, struct page, lru);
+               /*
+                * Grab the page lock to avoid racing against threads isolating
+                * pages from, or migrating pages back to vb->pages list.
+                * (both tasks are done under page lock protection)
+                *
+                * Failing to grab the page lock here means this page is being
+                * isolated already, or its migration has not finished yet.
+                */
+               if (page && trylock_page(page)) {
+                       clear_balloon_mapping(page);
+                       list_del(&page->lru);
+                       set_page_pfns(pfns + num_pfns, page);
+                       atomic_sub(VIRTIO_BALLOON_PAGES_PER_PAGE,
+                                  &vb->num_pages);
+                       unlock_page(page);
+               }
+               spin_unlock(&vb->pages_lock);
        }

        /*
@@ -182,8 +251,10 @@ static void leak_balloon(struct virtio_balloon *vb, size_t num)
         * virtio_has_feature(vdev, VIRTIO_BALLOON_F_MUST_TELL_HOST);
         * is true, we *have* to do it in this order
         */
+       mutex_lock(&vb->balloon_lock);
        tell_host(vb, vb->deflate_vq);
-       release_pages_by_pfn(vb->pfns, vb->num_pfns);
+       mutex_unlock(&vb->balloon_lock);
+       release_pages_by_pfn(pfns, num_pfns);
 }
---8<---
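
About the missing wait part mentioned above: purely as a rough sketch (the
isolated_wq waitqueue and its wiring are placeholders, not something the series
has yet), the "/* wait stuff goes here */" placeholder could eventually turn
into something along these lines:

/*
 * Rough sketch only.  Assumes a wait_queue_head_t isolated_wq in
 * struct virtio_balloon, woken by the putback/migration path right
 * after it decrements vb->num_isolated_pages.
 */
static inline void __wait_on_isolated_pages(struct virtio_balloon *vb,
                                            size_t num)
{
        /* nothing to wait for if no page is isolated */
        if (!atomic_read(&vb->num_isolated_pages))
                return;

        /* enough pages are left on vb->pages to satisfy the leak target */
        if (num < (atomic_read(&vb->num_pages) -
            atomic_read(&vb->num_isolated_pages)))
                return;

        /* drop the spinlock so migration can put pages back, then sleep */
        spin_unlock(&vb->pages_lock);
        wait_event(vb->isolated_wq,
                   !atomic_read(&vb->num_isolated_pages));
        spin_lock(&vb->pages_lock);
}

The putback/migration side would then call wake_up(&vb->isolated_wq) right
after decrementing vb->num_isolated_pages, so leak_balloon() only sleeps while
pages are actually in flight.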