Message-Id: <1248199231.2368.17.camel@localhost>
Date: Tue, 21 Jul 2009 20:00:31 +0200
From: Jerome Glisse <glisse@...edesktop.org>
To: Thomas Hellström <thomas@...pmail.org>
Cc: linux-kernel@...r.kernel.org, dri-devel@...ts.sf.net
Subject: Re: TTM page pool allocator
On Tue, 2009-07-21 at 19:34 +0200, Jerome Glisse wrote:
> On Thu, 2009-06-25 at 17:53 +0200, Thomas Hellström wrote:
> >
> > 4) We could now skip the ttm_tt_populate() in ttm_tt_set_caching, since
> > it will always allocate cached pages and then transition them.
> >
>
> Okay, 4) is bad. Here is what happens (my brain is a bit melted
> down, so I might be wrong):
> 1 - bo gets allocated, tt->state = unpopulated
> 2 - bo is mapped, a few pages are faulted, tt->state = unpopulated
> 3 - bo is cache transitioned, but tt->state == unpopulated even
>     though there are pages which have been touched by the CPU, so
>     we need to clflush and transition them; this never happens if
>     we don't call ttm_tt_populate and proceed with the rest of
>     the cache transitioning function
>
> As a workaround I will try to go through the page tables and
> transition the existing pages. Do you have any idea for a better
> plan?
>
> Cheers,
> Jerome
My workaround ruins the whole idea of pool allocation: what happens
is that most bos get cache transitioned page by page. My thinking
is that we should do the following:
- if there is at least one page allocated, fully populate the
  object and do the cache transition on all the pages.
- otherwise, update caching_state and leave the object unpopulated.
This requires that we somehow reflect the fact that there is at
least one page allocated; I am thinking of adding a new state for
that: ttm_partialy_populated
Thomas, what do you think about that?
Cheers,
Jerome
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/