Message-ID: <20100430154427.GA32340@amt.cnet>
Date: Fri, 30 Apr 2010 12:44:27 -0300
From: Marcelo Tosatti <mtosatti@...hat.com>
To: Avi Kivity <avi@...hat.com>
Cc: Lai Jiangshan <laijs@...fujitsu.com>,
LKML <linux-kernel@...r.kernel.org>, kvm@...r.kernel.org
Subject: Re: [PATCH] kvm mmu: reduce 50% memory usage
On Thu, Apr 29, 2010 at 09:43:40PM +0300, Avi Kivity wrote:
> On 04/29/2010 09:09 PM, Marcelo Tosatti wrote:
> >
> >You missed quadrant on 4mb large page emulation with shadow (see updated
> >patch below).
>
> Good catch.
>
> >Also, for some reason I can't understand, the assumption
> >does not hold for large sptes with TDP, so reverted for now.
>
> It's unrelated to TDP, same issue with shadow. I think the
> calculation is correct. For example the 4th spte for a level=2 page
> will yield gfn=4*512.
Under testing I see an sp at level 2 with sp->gfn == 4096, and mmu_set_spte
setting index 8 to gfn 4096 (whereas kvm_mmu_page_get_gfn returns 4096 +
8*512).
Lai, can you please take a look at it? You should be able to trigger the
kvm_mmu_page_set_gfn BUG_ON by using -mem-path on hugetlbfs.
> >@@ -393,6 +393,27 @@ static void mmu_free_rmap_desc(struct kvm_rmap_desc *rd)
> > kfree(rd);
> > }
> >
> >+static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
> >+{
> >+ gfn_t gfn;
> >+
> >+ if (!sp->role.direct)
> >+ return sp->gfns[index];
> >+
> >+ gfn = sp->gfn + index * (1 << (sp->role.level - 1) * PT64_LEVEL_BITS);
> >+ gfn += sp->role.quadrant << PT64_LEVEL_BITS;
>
> PT64_LEVEL_BITS * level
>
> >+
> >+ return gfn;
> >+}
> >+
> >
>
>
> --
> I have a truly marvellous patch that fixes the bug which this
> signature is too narrow to contain.