Currently it's exported as AVFrame.pkt_pts, which is also the only use
for that field. The reason for this arrangement is that lavc used to
export various codec-specific "timing" information in AVFrame.pts, which
is not done anymore.
Since it is confusing to the callers to have a separate field which is
used only for decoder timestamps and nothing else, deprecate pkt_pts and
use just AVFrame.pts everywhere.
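For API users the change boils down to reading AVFrame.pts instead of the
deprecated field; a minimal caller-side sketch (using the
avcodec_decode_video2() API, error handling omitted):

    #include <libavcodec/avcodec.h>

    /* Sketch: obtain the decoder timestamp of a decoded frame. */
    static int64_t decode_and_get_pts(AVCodecContext *avctx, AVFrame *frame,
                                      const AVPacket *pkt)
    {
        int got_frame = 0;
        if (avcodec_decode_video2(avctx, frame, &got_frame, pkt) < 0 || !got_frame)
            return AV_NOPTS_VALUE;
        /* previously: return frame->pkt_pts;   (now deprecated) */
        return frame->pts;  /* carries the decoder timestamp directly */
    }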
This uses a new MMAL feature, which limits the number of extra frames
that can be buffered within the decoder. VIDEO_MAX_NUM_CALLBACKS can
be defined as a positive or negative number. Positive numbers are
absolute, and can lead to deadlocks if the user underestimates the
number of required buffers. Negative numbers specify the number of extra
buffers, e.g. -1 means no extra buffer, (-1-N) means N extra buffers.
Set a gratuitous default of -11 (N=10). This is much lower than the
firmware default, which appears to be 96.
This is backwards compatible, but needs a symbol only present in newer
firmware headers. (It's an enum item, so it requires a check in
configure.)
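A sketch of how the limit might be applied on the decoder's input port; the
parameter id is assumed to be the VIDEO_MAX_NUM_CALLBACKS enum item with the
usual MMAL_PARAMETER_ prefix:

    #include <interface/mmal/mmal.h>
    #include <interface/mmal/util/mmal_util_params.h>

    /* Sketch: request only N extra decoder buffers via the negative form
     * (-1 - N) described above; -11 corresponds to N=10. */
    static void limit_decoder_buffering(MMAL_COMPONENT_T *decoder, int extra_buffers)
    {
        if (mmal_port_parameter_set_uint32(decoder->input[0],
                                           MMAL_PARAMETER_VIDEO_MAX_NUM_CALLBACKS,
                                           (uint32_t)(-1 - extra_buffers)) != MMAL_SUCCESS) {
            /* Not supported by older firmware; failure is non-fatal. */
        }
    }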
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Slight simplification. The result is the same. Also, change the
wording of the message as requested in patch review.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Fixes apparent mmal_port_disable() freezes in ffmmal_stop_decoder() when
calling ffmmal_decode() with flush semantics a large number of times in
a row.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
The assert in ffmmal_stop_decoder() could sometimes trigger. The
packets_buffered counter was not correctly maintained: packets were not
subtracted from it if they were still in the waiting queue.
For some reason, this happened especially with VC-1.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Register mmaldec as an mpeg2 decoder. Supporting mpeg2 in mmaldec is just a
matter of setting the correct MMAL_ENCODING on the input port. To ease the
addition of further supported mmal codecs, a macro is introduced that
generates the decoder and decoder class structs.
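The generator macro looks roughly like this (a sketch, not the exact code;
the context type and the init/flush callback names are illustrative):

    #define FFMMAL_DEC_CLASS(NAME) \
        static const AVClass ffmmal_##NAME##_dec_class = { \
            .class_name = "mmal_" #NAME "_dec", \
            .item_name  = av_default_item_name, \
            .version    = LIBAVUTIL_VERSION_INT, \
        };

    #define FFMMAL_DEC(NAME, ID) \
        FFMMAL_DEC_CLASS(NAME) \
        AVCodec ff_##NAME##_mmal_decoder = { \
            .name           = #NAME "_mmal", \
            .long_name      = #NAME " (mmal)", \
            .type           = AVMEDIA_TYPE_VIDEO, \
            .id             = ID, \
            .priv_data_size = sizeof(MMALDecodeContext), \
            .init           = ffmmal_init_decoder, \
            .close          = ffmmal_close_decoder, \
            .decode         = ffmmal_decode, \
            .flush          = ffmmal_flush, \
            .priv_class     = &ffmmal_##NAME##_dec_class, \
            .capabilities   = AV_CODEC_CAP_DELAY, \
        };

    FFMMAL_DEC(h264,  AV_CODEC_ID_H264)
    FFMMAL_DEC(mpeg2, AV_CODEC_ID_MPEG2VIDEO)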
Signed-off-by: Julian Scheel <julian@jusst.de>
Signed-off-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
There is no avpriv_atomic_get; for integers, avpriv_atomic_int_get is to be
used instead. This fixes building mmaldec.
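For reference, a minimal sketch of the correct call:

    #include "libavutil/atomic.h"

    /* avpriv_atomic_get() does not exist; the integer accessor is
     * avpriv_atomic_int_get(). */
    static int read_shared_counter(volatile int *counter)
    {
        return avpriv_atomic_int_get(counter);
    }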
Signed-off-by: Julian Scheel <julian@jusst.de>
Reviewed-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
In some situations, MMAL won't return a decoded frame for certain input
frames. This can happen if a frame fails to decode, or if a packet does
not actually contain a complete frame. In these situations, we would
deadlock (or actually timeout) waiting for an expected output frame,
which is not ideal. On the other hand, there are situations where we
definitely have to block to avoid deadlocks. (This mess is a
consequence of trying to map MMAL's asynchronous and flexible
dataflow to libavcodec, which is more static and rigid.)
Solve this by doing a blocking wait only if the amount of buffered data
is too big. The whole purpose of the blocking wait is to avoid excessive
buffering of input data, so we can skip it if it appears to be low. The
consequence is that libavcodec can gracefully return no frame to the
API user.
We want to track the number of full packets to make our heuristic work.
But MMAL buffers are fixed-size, requiring splitting large packets. This
is why the previous commit is needed. We use the ..._FRAME_END flag to
remember packet boundaries, but MMAL does not preserve these buffer
flags when returning buffers to the user.
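Schematically the wait decision becomes something like the following (names
and the threshold are illustrative, not the exact ones in mmaldec.c):

    /* Sketch: only block for decoder output when enough complete packets
     * are already queued inside MMAL; otherwise return without a frame so
     * the caller can feed more input. */
    static int should_block_for_output(int packets_buffered)
    {
        enum { MAX_BUFFERED_PACKETS = 16 };   /* placeholder threshold */
        return packets_buffered > MAX_BUFFERED_PACKETS;
    }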
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
The next commit needs 1 bit of additional information per MMAL buffer
sent to the MMAL input port. This information will be needed when the
buffer is recycled (i.e. returned by the input port's callback).
Normally, we could use MMAL_BUFFER_HEADER_FLAG_USER0, but that is
unexpectedly not preserved.
Do this by storing a pointer to FFBufferEntry in the MMAL buffer's
user data, instead of an AVBufferRef. This also changes the lifetime
of FFBufferEntry.
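Roughly (a sketch; FFBufferEntry is the bookkeeping struct introduced by this
series, the callback names are illustrative):

    #include <interface/mmal/mmal.h>

    struct FFBufferEntry;   /* bookkeeping struct, declared elsewhere */

    /* Sketch: stash the entry in the MMAL buffer so it can be recovered
     * when the input port hands the buffer back. */
    static MMAL_STATUS_T send_entry(MMAL_PORT_T *input_port,
                                    MMAL_BUFFER_HEADER_T *buffer,
                                    struct FFBufferEntry *entry)
    {
        buffer->user_data = entry;            /* survives the round trip */
        return mmal_port_send_buffer(input_port, buffer);
    }

    static void input_callback(MMAL_PORT_T *port, MMAL_BUFFER_HEADER_T *buffer)
    {
        struct FFBufferEntry *entry = buffer->user_data;
        /* ... unref/free entry here, now that MMAL is done with the data ... */
        mmal_buffer_header_release(buffer);
        (void)port;
        (void)entry;
    }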
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
AVBufferRef.data and AVPacket.data don't need to have the same value.
AVPacket.data can point anywhere into the buffer. Likewise, the sizes
don't need to be the same.
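In other words, only AVPacket.data/size describe the payload that should be
forwarded; a small sketch:

    #include <libavcodec/avcodec.h>

    /* Sketch: pkt->data may point into the middle of pkt->buf->data, and
     * pkt->size may be smaller than pkt->buf->size, so never substitute the
     * AVBufferRef fields for the packet fields. */
    static void packet_payload(const AVPacket *pkt, const uint8_t **data, int *size)
    {
        *data = pkt->data;   /* not pkt->buf->data */
        *size = pkt->size;   /* not pkt->buf->size */
    }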
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
This works only for extradata sizes up to 128 bytes. Additionally, I
could never actually see it doing anything. The new code using
MMAL_BUFFER_HEADER_FLAG_CONFIG now takes care of this.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
We can send mp4-style data directly. But for some reason, this requires
sending the extradata as a buffer with MMAL_BUFFER_HEADER_FLAG_CONFIG
set. Reuse the infrastructure for sending AVPackets to do this.
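Conceptually (a sketch; assumes a buffer header from the input port's pool
that is large enough for the extradata):

    #include <string.h>
    #include <interface/mmal/mmal.h>

    /* Sketch: push codec extradata through the input port as a config buffer. */
    static MMAL_STATUS_T send_extradata(MMAL_PORT_T *input_port,
                                        MMAL_BUFFER_HEADER_T *buffer,
                                        const uint8_t *extradata, unsigned size)
    {
        memcpy(buffer->data, extradata, size);
        buffer->length = size;
        buffer->flags  = MMAL_BUFFER_HEADER_FLAG_CONFIG;
        return mmal_port_send_buffer(input_port, buffer);
    }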
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
This also drops setting the frame->pts field. That field is usually not set
by decoders, so setting it here would be an inconsistency that is at worst a
danger to the API user.
It appears the buffer->dts field is normally not set by the MMAL
decoder, so don't use it. If it's ever going to be set by MMAL, we
don't know whether the value will be what we want.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
The generic code in utils.c sets the AVFrame.pkt_dts field from the
packet the frame was supposedly decoded from. This assumption need not hold for a
fully asynchronous decoder like mmaldec. It could be overwritten with an
incorrect value. Even if the decoder doesn't determine the DTS (but sets
it to AV_NOPTS_VALUE), it's impossible to determine a correct value in
utils.c.
Decoders can now be marked with FF_CODEC_CAP_SETS_PKT_DTS, in which case
utils.c won't overwrite the field. The decoders are expected to set this
field (even if they only set it to AV_NOPTS_VALUE).
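For a decoder opting in, this amounts to something like the following (a
hypothetical decoder, not mmaldec itself):

    #include "libavcodec/avcodec.h"
    #include "libavcodec/internal.h"   /* FF_CODEC_CAP_SETS_PKT_DTS */

    static int example_decode(AVCodecContext *avctx, void *data,
                              int *got_frame, AVPacket *avpkt)
    {
        AVFrame *frame = data;
        (void)avctx;
        /* ... decode into frame ... */
        frame->pkt_dts = AV_NOPTS_VALUE;   /* always set, even if unknown */
        *got_frame = 1;
        return avpkt->size;
    }

    AVCodec ff_example_decoder = {
        .name          = "example",
        .type          = AVMEDIA_TYPE_VIDEO,
        .id            = AV_CODEC_ID_NONE,
        .decode        = example_decode,
        .caps_internal = FF_CODEC_CAP_SETS_PKT_DTS,  /* utils.c leaves pkt_dts alone */
    };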
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
This MMAL feature fills in missing timestamps from the framerate set on
the input port. This is generally unwanted, since libavcodec decoders
merely pass through timestamps without ever "fixing" them. The framerate
is also unknown, and even the timebase doesn't have to be set.
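The patch presumably disables it along these lines (a sketch; the parameter
id is assumed from the MMAL video parameter headers):

    #include <interface/mmal/mmal.h>
    #include <interface/mmal/util/mmal_util_params.h>

    /* Sketch: turn off firmware-side timestamp interpolation on the input
     * port, so missing timestamps stay missing instead of being guessed
     * from a framerate we do not actually know. */
    static void disable_ts_interpolation(MMAL_COMPONENT_T *decoder)
    {
        mmal_port_parameter_set_boolean(decoder->input[0],
                                        MMAL_PARAMETER_VIDEO_INTERPOLATE_TIMESTAMPS,
                                        MMAL_FALSE);
    }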
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Don't try to do a blocking wait for MMAL output if we haven't even sent
a single real packet, but only flush packets. Obviously we can't expect
to get anything back.
Additionally, don't send a flush packet to MMAL in the same case. It
appears the MMAL decoder will sometimes hang in mmal_vc_port_disable()
(called from ffmmal_close_decoder()), waiting for a reply from the GPU
which never arrives. Either MMAL disallows sending flush packets without
preceding real data, or it's a MMAL bug.
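In pseudo-C the guard is roughly (field names mirror the description above,
not the exact mmaldec.c struct):

    /* Sketch of the guard: if nothing real was ever sent, neither wait for
     * output nor send a flush buffer to MMAL. */
    typedef struct DecoderState {
        int packets_sent;    /* real (non-flush) packets fed to the decoder */
        int eos_sent;
        int eos_received;
    } DecoderState;

    static int handle_flush_request(DecoderState *s)
    {
        if (!s->packets_sent) {
            /* Nothing can come back, and poking MMAL with a flush here can
             * hang mmal_vc_port_disable() later. */
            s->eos_sent = s->eos_received = 1;
            return 0;
        }
        /* ... otherwise send the flush/EOS buffer and wait for output ... */
        return 0;
    }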
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
I can't come up with a nice way to handle this. It's hard to keep the
lock-stepped input/output in this case. You can't predict whether the
MMAL decoder will output a picture (because it's asynchronous), so
you have to assume in general that any packet could produce 0 or 1
frames. You can't continue to write input packets to the decoder,
because then you might get too many output frames, which you can't
get rid of because the lavc decoding API does not allow the decoder
to return an output frame without consuming an input frame (except
when flushing).
The ideal fix is an M:N decoding API (preferably asynchronous), which
would make this code potentially much cleaner. For now, this hack
will do.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
This is optional, but ensures that linking with -Wl,--as-needed does not
drop the library containing the MMAL VC driver. The driver normally
"registers" itself in the library constructor, but since no symbols are
explicitly referenced, the linker could remove it with --as-needed
enabled.
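The underlying problem is generic: a library that only registers itself from
a constructor exports no symbol the program references, so --as-needed drops
it. An illustrative (non-MMAL-specific) sketch of the pattern and the
workaround:

    /* libdriver.c: registers itself when the DSO is loaded */
    __attribute__((constructor))
    static void driver_register(void)
    {
        /* hook the driver into the core library */
    }

    void driver_anchor(void) { }   /* symbol that can be referenced */

    /* application.c: an explicit reference gives the linker a reason to
     * keep the DSO even with -Wl,--as-needed. */
    extern void driver_anchor(void);

    void ensure_driver_linked(void)
    {
        driver_anchor();
    }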
Signed-off-by: Diego Biurrun <diego@biurrun.de>