Message-ID: <3c852dec2279fa95af268357a438d442ddb70d44.camel@ndufresne.ca>
Date: Thu, 07 Jun 2018 13:53:33 -0400
From: Nicolas Dufresne <nicolas@...fresne.ca>
To: Tomasz Figa <tfiga@...omium.org>, Hans Verkuil <hverkuil@...all.nl>
Cc: dave.stevenson@...pberrypi.org,
Linux Media Mailing List <linux-media@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Mauro Carvalho Chehab <mchehab@...nel.org>,
Pawel Osciak <posciak@...omium.org>,
Alexandre Courbot <acourbot@...omium.org>, kamil@...as.org,
a.hajda@...sung.com, Kyungmin Park <kyungmin.park@...sung.com>,
jtp.park@...sung.com, Philipp Zabel <p.zabel@...gutronix.de>,
Tiffany Lin (林慧珊)
<tiffany.lin@...iatek.com>,
Andrew-CT Chen (陳智迪)
<andrew-ct.chen@...iatek.com>,
Stanimir Varbanov <stanimir.varbanov@...aro.org>,
todor.tomov@...aro.org,
Paul Kocialkowski <paul.kocialkowski@...tlin.com>,
Laurent Pinchart <laurent.pinchart@...asonboard.com>
Subject: Re: [RFC PATCH 1/2] media: docs-rst: Add decoder UAPI specification
to Codec Interfaces
On Thursday, 7 June 2018 at 16:30 +0900, Tomasz Figa wrote:
> > > v4l2-compliance (so probably one for Hans).
> > > testUnlimitedOpens tries opening the device 100 times. On a normal
> > > device this isn't a significant overhead, but when you're allocating
> > > resources on a per instance basis it quickly adds up.
> > > Internally I have state that has a limit of 64 codec instances (either
> > > encode or decode), so either I allocate at start_streaming and fail on
> > > the 65th one, or I fail on open. I generally take the view that
> > > failing early is a good thing.
> > > Opinions? Is 100 instances of an M2M device really sensible?
> >
> > Resources should not be allocated by the driver until needed (i.e. the
> > queue_setup op is a good place for that).
> >
> > It is perfectly legal to open a video node just to call QUERYCAP to
> > see what it is, and I don't expect that to allocate any hardware resources.
> > And if I want to open it 100 times, then that should just work.
> >
> > It is *always* wrong to limit the number of opens arbitrarily.
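
To illustrate the queue_setup approach (this is only a sketch, not from
any real driver; my_dev, my_ctx and MAX_HW_INSTANCES are made-up names),
the 64-instance limit would be enforced where buffers are first
requested, rather than at open():

/* Sketch only: my_dev/my_ctx/MAX_HW_INSTANCES are made-up names. */
#include <linux/atomic.h>
#include <media/videobuf2-v4l2.h>

#define MAX_HW_INSTANCES 64

struct my_dev {
	atomic_t hw_users;		/* hardware codec slots in use */
};

struct my_ctx {
	struct my_dev *dev;
	bool hw_slot;			/* this instance holds a slot */
	unsigned int out_sizeimage;
};

/* open() only allocates the per-file context; nothing scarce yet. */

static int my_queue_setup(struct vb2_queue *vq, unsigned int *nbuffers,
			  unsigned int *nplanes, unsigned int sizes[],
			  struct device *alloc_devs[])
{
	struct my_ctx *ctx = vb2_get_drv_priv(vq);
	struct my_dev *dev = ctx->dev;

	/* Claim one of the limited codec slots here, not at open(),
	 * so QUERYCAP-only opens stay free and unlimited. */
	if (!ctx->hw_slot) {
		if (atomic_inc_return(&dev->hw_users) > MAX_HW_INSTANCES) {
			atomic_dec(&dev->hw_users);
			return -EBUSY;	/* surfaces at VIDIOC_REQBUFS */
		}
		ctx->hw_slot = true;
	}

	*nplanes = 1;
	sizes[0] = ctx->out_sizeimage;
	return 0;
}

Since queue_setup is called from VIDIOC_REQBUFS, the application still
gets the error synchronously, just later than at open().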
>
> That's a valid point indeed. Besides the querying use case, userspace
> might just want to pre-open a bigger number of instances, but that
> doesn't mean they would all be streaming at the same time.
In GStreamer we have used the open() failure to fall back to software
when the hardware instances are exhausted. The advantage is that it
fails really early, so falling back is easy. If you remove this, it
might not fail before STREAMON, and at least in GStreamer that is too
late to fall back to software. So I don't have a better idea than
limiting on open() calls.
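
For reference, a rough userspace sketch of what that fallback looks
like at the ioctl level (this is not the actual GStreamer code; the
device path and software_decode() are placeholders):

/* Sketch only: /dev/video10 and software_decode() are placeholders. */
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

static int software_decode(void)
{
	return 1;	/* stand-in for the software fallback path */
}

int start_decoder(void)
{
	struct v4l2_requestbuffers req;
	int fd = open("/dev/video10", O_RDWR);

	if (fd < 0)
		return software_decode();	/* today: easy, early fallback */

	memset(&req, 0, sizeof(req));
	req.count = 4;
	req.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
	req.memory = V4L2_MEMORY_MMAP;

	/* If the driver defers its hardware slot to queue_setup(), an
	 * exhausted codec only shows up here (or at STREAMON). */
	if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0 && errno == EBUSY) {
		close(fd);
		return software_decode();	/* much harder at this point */
	}

	/* ... mmap and queue buffers, VIDIOC_STREAMON, etc. ... */
	return 0;
}

With the allocation deferred into the driver, the EBUSY only appears at
REQBUFS or STREAMON, well after the hardware element has been selected,
hence the difficulty described above.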
Nicolas