Message-Id: <1180448712.21547.314.camel@localhost>
Date: Tue, 29 May 2007 11:25:12 -0300
From: Mauro Carvalho Chehab <mchehab@...radead.org>
To: Thierry Merle <thierry.merle@...e.fr>
Cc: video4linux-list@...hat.com, linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Jiri Slaby <jirislaby@...il.com>
Subject: Re: [PATCH 1/1] V4L: stk11xx, add a new webcam driver
> > Hi Mauro and Markus,
> > Just to sum up what I understood we need:
> >
> > What we need in userspace, for v4l only (dvb is not concerned):
> > - colorspace translations
> > - filters that can be done in hardware when the selected hardware
> > supports them, otherwise as a software plugin
> > - decompression algorithms like those of stk11xx or usbvision (the
> > decompression algorithm is in kernelspace since it is of linear
> > complexity, but shall be moved to userspace)
Yes. The first focus, IMO, should be the last one.
> > Using pwlib will not mean that application developers will use pwlib
> > to decode v4l driver outputs.
> > C bindings are much more popular than C++ bindings and do not prevent
> > object-oriented design.
IMO, we should implement very simple and efficient C subroutines.
> > Application developers implement their own codecs.
They can do it if they want. However, if we provide a consistent and
easy-to-use set of subroutines for that weird decompression stuff, it is
likely that they will use it.
> > As an example, every application either does deinterlacing internally or not...
> > Application developers will probably not use pwlib v4l extensions
> > because they will prefer to write adapted codecs for their framework.
I think we shouldn't deal with deinterlacing. The API should be as
simple as possible, focusing on implementing functionality tightly tied
to the hardware (like device-specific decompression, proprietary
colorspace conversions, etc.).
> >
> > Much more important for me is to see the actual specification of the
> > needed v4l extensions points, with advice/participation of
> > application/codec developers.
> As an example, we could packetize frames with a header containing the
> audio/video format, as is done for MPEG streams.
> Is it possible without breaking the current ABI?
Yes, it is possible. In fact, some devices currently work by generating
a combined audio and video stream. The driver may just send the packets
as-is, leaving it to the userspace API to de-merge and synchronize
audio and video.
> Would application developers cope with that?
Maybe.
Cheers,
Mauro
-