Message-ID: <CAEZL83reDb0dY4gTd8LwUXUsYNoYtYePZbDO4d_wDBv4FdCsdw@mail.gmail.com>
Date:   Thu, 9 Nov 2023 19:41:43 +0800
From:   Esker Wong <esker@...gle.com>
To:     Laurent Pinchart <laurent.pinchart@...asonboard.com>
Cc:     Ricardo Ribalda <ribalda@...omium.org>, nicolas@...fresne.ca,
        Kieran Bingham <kieran.bingham@...asonboard.com>,
        Sakari Ailus <sakari.ailus@....fi>,
        Esker Wong <esker@...omium.org>,
        Mauro Carvalho Chehab <mchehab@...nel.org>,
        linux-media@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] media: uvcvideo: Implement V4L2_EVENT_FRAME_SYNC

Hi Laurent,

We not only need t(dqbuf) - t(exposure); we are also interested in
t(dqbuf) - t(frame_received), to learn more about what happens inside
the camera for analysis.
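
For context, a rough sketch (illustrative only, not part of the patch) of
how the t(dqbuf) - t(frame_received) side could be sampled from userspace,
assuming the driver emits V4L2_EVENT_FRAME_SYNC; error handling and buffer
management are omitted:

```
#include <sys/ioctl.h>
#include <time.h>
#include <linux/videodev2.h>

/* Delta between the FRAME_SYNC event (the driver's approximation of
 * "reception of the frame has begun") and the moment DQBUF returns. */
static double frame_sync_to_dqbuf(int fd)
{
	struct v4l2_event_subscription sub = { .type = V4L2_EVENT_FRAME_SYNC };
	struct v4l2_event ev;
	struct v4l2_buffer buf = {
		.type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
		.memory = V4L2_MEMORY_MMAP,
	};
	struct timespec now;

	ioctl(fd, VIDIOC_SUBSCRIBE_EVENT, &sub);
	ioctl(fd, VIDIOC_DQEVENT, &ev);        /* ~ t(frame_received) */
	ioctl(fd, VIDIOC_DQBUF, &buf);
	clock_gettime(CLOCK_MONOTONIC, &now);  /* ~ t(dqbuf) */

	return (now.tv_sec - ev.timestamp.tv_sec) +
	       (now.tv_nsec - ev.timestamp.tv_nsec) / 1e9;
}
```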

Thanks,
Esker

On Thu, Nov 9, 2023 at 7:34 PM Laurent Pinchart
<laurent.pinchart@...asonboard.com> wrote:
>
> Hi Esker,
>
> On Thu, Nov 09, 2023 at 08:59:13AM +0800, Esker Wong wrote:
> > Hi Laurent,
> >
> > The use case we want here is actually latency. We want to measure, as
> > accurately as possible, the delay the user will feel from the moment the
> > camera captures the frame until the frame gets displayed. So anything
> > earlier than dqbuf should be an improvement here. It would help us better
> > assess camera quality.
>
> If it's the latency you're after, wouldn't it be best to use the
> timestamp produced by the camera ? That's the most accurate information.
> It is expressed relative to the camera's internal clock, but UVC transmits
> information that enables conversion of the value to a system timestamp.
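>
> As a rough sketch of that approach from userspace (illustrative only; it
> assumes the driver reports monotonic buffer timestamps, and error handling
> is omitted):
>
> ```
> #include <sys/ioctl.h>
> #include <time.h>
> #include <linux/videodev2.h>
>
> /* Capture-to-dequeue latency: buf.timestamp is derived from the camera
>  * clock and converted by the driver to the system monotonic clock. */
> static double capture_to_dqbuf_latency(int fd)
> {
> 	struct v4l2_buffer buf = {
> 		.type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
> 		.memory = V4L2_MEMORY_MMAP,
> 	};
> 	struct timespec now;
>
> 	ioctl(fd, VIDIOC_DQBUF, &buf);
> 	clock_gettime(CLOCK_MONOTONIC, &now);
>
> 	return (now.tv_sec - buf.timestamp.tv_sec) +
> 	       now.tv_nsec / 1e9 - buf.timestamp.tv_usec / 1e6;
> }
> ```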
>
> > On Thu, Nov 9, 2023 at 8:03 AM Laurent Pinchart wrote:
> > > On Wed, Nov 08, 2023 at 11:46:40PM +0100, Ricardo Ribalda wrote:
> > > > On Wed, 8 Nov 2023 at 21:32, <nicolas@...fresne.ca> wrote:
> > > > >
> > > > > The fact that you interpret the time from FRAME_SYNC to DQBUF (well, the
> > > > > READ I/O notification) as the actual latency is your choice, of course. It
> > > > > assumes that the camera on the other end does not introduce other
> > > >
> > > > We want to use this signal to measure how much power is used since we
> > > > start receiving the frame until we can use it.
> > > > I agree with you that the latency between capture and dqbuf should be
> > > > measured using the timestamp. That is not our use case here.
> > > >
> > > > > sources of latency (or that these are negligible). You are also going to
> > > > > introduce a lot of jitter, since the measurement relies on when the OS
> > > > > decides to wake up your process.
> > > >
> > > > We have measured a jitter of around 2.5 msec, which is acceptable for our needs.
> > > >
> > > > > I think my opinion comes down to this: if you can implement what the spec
> > > > > says for FRAME_SYNC accurately *enough*, then do it; otherwise just don't lie.
> > > >
> > > > What the spec says is:
> > > > ```
> > > > Triggered immediately when the reception of a frame has begun
> > > > ```
> > > > In my opinion, that holds for USB devices: we are triggering it as
> > > > soon as the transfer has started, from the driver's point of view. We
> > > > cannot trigger any earlier than that.
> > > >
> > > >
> > > > > I think for ISO, "after the first chunk" is a small lie, but acceptable.
> > > > > But for BULK, the way it was explained is that it will always be very
> > > > > close to DQBUF time, and it should not emit FRAME_SYNC for this type of
> > > > > UVC device. If it fits other events, fine of course; I'm just making a
> > > > > judgment on whether it fits V4L2_EVENT_FRAME_SYNC or not.
> > > >
> > > > nit: I believe that you have swapped iso and bulk in this description
> > >
> > > I've confused the USB packet size and the UVC payload size. The latter
> > > is typically much bigger for bulk devices than for isoc devices, but the
> > > former will be of a similar order of magnitude in a large number of
> > > cases, though not all.
> > >
> > > The URB size is the result of the USB packet size and number of packets
> > > per URB. The uvcvideo driver currently sets the number of packets per
> > > URB to 32 at most (and lowers it if the frame size is small, or if not
> > > enough memory can be allocated). This could be increased or made dynamic
> > > in the future, as higher speeds typically benefit from larger URB sizes.
> > > The packet size differs between bulk and isoc endpoints.
> > >
> > > For bulk, the packet size can be up to 512 bytes for USB 2.0 and 1024
> > > bytes for USB 3.0, and the device can select a smaller size. The largest
> > > URB size (again based on the current implementation of the uvcvideo
> > > driver) is thus 32 KiB.
> > >
> > > For isochronous the situation is more complicated. The term "packet" as
> > > used in the uvcvideo driver actually means all the data transferred in
> > > one service interval, thus made of multiple isoc packets. It is heavily
> > > dependent on the USB speed, and the device can advertise different
> > > supported sizes (which translate directly to the reserved bandwidth for
> > > the transfer), with the driver picking the smallest bandwidth large
> > > enough for the data rate required by the resolution and frame rate. The
> > > theoretical worst case is 1024 bytes per isoc packet * 16 isoc packets
> > > per burst * 6 bursts per interval * 32 "packets" per URB, equal to 3 MiB.
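> > >
> > > Spelling out that arithmetic (constant names below are purely
> > > illustrative, not from the driver):
> > >
> > > ```
> > > /* Worst-case URB sizes with the current 32-packets-per-URB cap. */
> > > enum {
> > > 	UVC_MAX_PACKETS_PER_URB = 32,
> > > 	BULK_MAX_PACKET_BYTES   = 1024,          /* USB 3.0 bulk */
> > > 	ISOC_MAX_INTERVAL_BYTES = 1024 * 16 * 6, /* bytes per service interval */
> > >
> > > 	BULK_MAX_URB_BYTES = BULK_MAX_PACKET_BYTES * UVC_MAX_PACKETS_PER_URB,
> > > 						 /* 32768   = 32 KiB */
> > > 	ISOC_MAX_URB_BYTES = ISOC_MAX_INTERVAL_BYTES * UVC_MAX_PACKETS_PER_URB,
> > > 						 /* 3145728 =  3 MiB */
> > > };
> > > ```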
> > >
> > > Even with the largest URB size you have witnessed, ~1 MiB, we will end
> > > up lying quite a bit if we consider the URB completion callback for the
> > > first URB of the frame as indicating the start of reception.
> > >
> > > > > In terms of accuracy, if a timestamp were passed with the FRAME_SYNC
> > > > > event, it would no longer matter how fast you process the event, which
> > > > > would greatly improve accuracy.
> > > >
> > > > +1 to that. If we could easily change the uAPI for FRAME_SYNC, that
> > > > should definitely be implemented.
> > > >
> > > > > > Not to mention that the UVC timestamping requires a bit of love.
> > > > > >
> > > > > > @Laurent Pinchart, @Kieran Bingham  any progress reviewing :P :
> > > > > > https://patchwork.linuxtv.org/project/linux-media/list/?series=10083
> > > > >
> > > > > Thanks for working on this by the way; I hope someone will find the time
> > > > > to review it. The timestamps should in theory provide a jitter-free
> > > >
> > > > It already has a couple of Reviewed-by tags stamped on it.... ;)
> > > >
> > > > > measurement of the delay Esker is trying to measure, and if it weren't
> > > > > for bugs (and crazy complexity) it would in the worst case match the
> > > > > transfer time.
> > > >
> > > > Sorry to repeat myself, but just to avoid confusion: Esker needs to
> > > > know how much power is used from when we start receiving a frame until
> > > > it is available for dqbuf, not the frame latency.
> > >
> > > As I think everybody is aware, the earliest notification you get on the
> > > CPU side is the *end* of reception of the first URB, which can possibly
> > > be significantly later than the start of reception of the frame.
> > >
> > > Based on what I understand, the goal is to measure the CPU power
> > > consumption related to CPU processing of the frame. If that's the case,
> > > there's good and bad news. The good news is that the CPU doesn't process
> > > the frame at all until the URB has been received (if you were to measure
> > > the power consumption of the USB host controller too, it would be a
> > > different story), so the delay shouldn't be a problem. The bad news is
> > > that I don't see how the information you're trying to get will help you,
> > > as there are plenty of other things unrelated to the uvcvideo driver that
> > > can take CPU time while a frame is being received. That may not be any
> > > of my business, but from the point of view of the uvcvideo driver, I'm
> > > less inclined to accept a possibly significant V4L2_EVENT_FRAME_SYNC lie
> > > if the use case ends up making little sense :-)
>
> --
> Regards,
>
> Laurent Pinchart
