Message-ID: <50E3FE8C.8000309@nvidia.com>
Date: Wed, 2 Jan 2013 11:31:56 +0200
From: Terje Bergström <tbergstrom@...dia.com>
To: Mark Zhang <nvmarkzhang@...il.com>
CC: "thierry.reding@...onic-design.de" <thierry.reding@...onic-design.de>,
"airlied@...ux.ie" <airlied@...ux.ie>,
"dev@...xeye.de" <dev@...xeye.de>,
"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
"linux-tegra@...r.kernel.org" <linux-tegra@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCHv4 3/8] gpu: host1x: Add channel support
On 02.01.2013 09:40, Mark Zhang wrote:
> On 12/21/2012 07:39 PM, Terje Bergstrom wrote:
>> Add support for host1x client modules, and host1x channels to submit
>> work to the clients. The work is submitted in GEM CMA buffers, so
>> this patch adds support for them.
>>
>> Signed-off-by: Terje Bergstrom <tbergstrom@...dia.com>
>> ---
> [...]
>> +/*
>> + * Begin a cdma submit
>> + */
>> +int host1x_cdma_begin(struct host1x_cdma *cdma, struct host1x_job *job)
>> +{
>> + struct host1x *host1x = cdma_to_host1x(cdma);
>> +
>> + mutex_lock(&cdma->lock);
>> +
>> + if (job->timeout) {
>> + /* init state on first submit with timeout value */
>> + if (!cdma->timeout.initialized) {
>> + int err;
>> + err = host1x->cdma_op.timeout_init(cdma,
>> + job->syncpt_id);
>> + if (err) {
>> + mutex_unlock(&cdma->lock);
>> + return err;
>> + }
>> + }
>> + }
>> + if (!cdma->running)
>> + host1x->cdma_op.start(cdma);
>> +
>> + cdma->slots_free = 0;
>> + cdma->slots_used = 0;
>> + cdma->first_get = host1x->cdma_pb_op.putptr(&cdma->push_buffer);
>> +
>> + trace_host1x_cdma_begin(job->ch->dev->name);
>
> Seems missing "mutex_unlock(&cdma->lock);" here.
That's intentional. Writing a job to a channel must be atomic, so the lock is
taken in host1x_cdma_begin() and held until host1x_cdma_end().
Terje