Date:	Mon, 12 Dec 2011 17:49:34 +0800
From:	Ming Lei <ming.lei@...onical.com>
To:	Sylwester Nawrocki <snjw23@...il.com>
Cc:	linux-omap@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
	linux-kernel@...r.kernel.org, linux-media@...r.kernel.org
Subject: Re: [RFC PATCH v1 6/7] media: video: introduce face detection driver module

Hi,

On Mon, Dec 12, 2011 at 1:43 AM, Sylwester Nawrocki <snjw23@...il.com> wrote:

>> For OMAP4 FD, it is not needed to include FD in the MC framework, since an
>> intermediate buffer is always required. If your HW doesn't belong to this
>> case, what is the output of your HW FD in the link? Also, it sounds like FD
>> results may not be needed at all by user space applications in that case.
>
> The result data is similar to OMAP4 one, plus a few other attributes.
> User buffers may be filled by other than FD device driver.

OK.


>> Could you provide some practical use cases about these?
>
> As above, and any device with a camera that controls something and makes
> decisions according to the presence of a human face in its view.

Sounds like a reasonable case, :-)


>> If the FD result is associated with a frame, how can user space get the frame
>> sequence if no v4l2 buffer is involved? Without a frame sequence, it is a bit
>> difficult to retrieve FD results from user space.
>
> If you pass image data in memory buffers from user space, yes, it could be
> impossible.

It is easy to get the frame sequence from v4l2_buffer in that case too, :-)

>
> Not really; v4l2_buffer may still be used by another (sub)driver within the
> same video processing pipeline.

OK.

A related question: how can we make one application support both kinds of
devices (input from user space memory, as on OMAP4; input from the SoC bus, as
on the Samsung hardware) at the same time? Maybe some capability info should be
exported to user space? Or do you have other suggestions?

And will your Samsung FD HW support detecting faces from memory, or only from
the SoC bus?


> It will be included in the FD result... or in a dedicated v4l2 event data structure.
> More important, at the end of the day, we'll be getting buffers with image data
> at some stage of a video pipeline, which would contain same frame identifier
> (I think we can ignore v4l2_buffer.field for FD purpose).

OK, I will associate the FD result with the frame identifier, and will not
invent a dedicated v4l2 event for querying the frame sequence until a specific
requirement for it is proposed.

I will convert/integrate the recent discussions into the v2 patches for further
review, and sub-device support will be provided. But before starting on it, I
am still not clear on how to integrate FD into the MC framework. I understand
the FD sub-device is only a media entity, so how can the FD sub-device find the
media device (struct media_device)? Or need we not care about that for now?


thanks,
--
Ming Lei
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
