For example, if a JPEG contains EXIF information that includes the
rotation direction, the decoder sets a display matrix in the frame's
side data when decoding. However, when ffplay plays such an image, it
only looks at the side data on the stream; it does not check whether
the frame also carries rotation information, so the image is displayed
with the wrong orientation.
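A minimal sketch of the intended lookup order, preferring the frame's
side data and falling back to the stream's (helper and parameter names
are illustrative, not the actual ffplay code):

    #include <libavutil/display.h>
    #include <libavutil/frame.h>

    /* Illustrative helper: prefer per-frame rotation metadata, fall back
     * to the display matrix taken from the stream's side data (passed in
     * by the caller, may be NULL). */
    static double get_frame_rotation(const AVFrame *frame,
                                     const int32_t *stream_matrix)
    {
        const AVFrameSideData *sd =
            av_frame_get_side_data(frame, AV_FRAME_DATA_DISPLAYMATRIX);

        if (sd)                 /* e.g. set by the JPEG decoder from EXIF */
            return av_display_rotation_get((const int32_t *)sd->data);
        if (stream_matrix)      /* rotation stored on the stream */
            return av_display_rotation_get(stream_matrix);
        return 0.0;
    }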
Reviewed-by: Zhao Zhili <zhilizhao@tencent.com>
Signed-off-by: Wang Yaqiang <wangyaqiang03@kuaishou.com>
The hardware device type name may be NULL, as is the case for D3D11VA_VLD.
Running "ffmpeg -h decoder=h264" on a Windows build
Before:
Decoder h264 [H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10]:
Supported hardware devices: dxva2 (null) d3d11va cuda
After:
Decoder h264 [H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10]:
Supported hardware devices: dxva2 d3d11va cuda
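The kind of check involved, sketched with the public API (illustrative,
not the literal patch):

    #include <stdio.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/hwcontext.h>

    /* Illustrative: list the hw device types a decoder supports,
     * skipping entries whose device type has no name. */
    static void print_hw_devices(const AVCodec *codec)
    {
        const AVCodecHWConfig *config;
        for (int i = 0; (config = avcodec_get_hw_config(codec, i)); i++) {
            /* device_type can be AV_HWDEVICE_TYPE_NONE (e.g. for the
             * D3D11VA_VLD config), in which case the name is NULL */
            const char *name = av_hwdevice_get_type_name(config->device_type);
            if (name)
                printf(" %s", name);
        }
    }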
Signed-off-by: James Almer <jamrial@gmail.com>
This is designed to improve and unify error handling for allocation
failures for the many (often small) allocations that we have in the
fftools. These typically either produce no error message at all, or one
that is not really helpful to the user and can be replaced by a generic
error message without loss of information.
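A minimal sketch of what such a generic helper can look like (the name
and exact message are illustrative):

    #include <stdlib.h>
    #include <libavutil/error.h>
    #include <libavutil/log.h>

    /* Print a generic message for an unrecoverable error (typically
     * ENOMEM from a small allocation) and terminate the program. */
    static void report_and_exit(int ret)
    {
        av_log(NULL, AV_LOG_FATAL, "%s\n", av_err2str(ret));
        exit(1);
    }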
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
update_video_stats() currently uses OutputStream.data_size to print the
total size of the encoded stream so far and the average bitrate.
However, that field is updated in the muxer thread, right before the
packet is sent to the muxer. Not only is this racy, but due to
bitstream filters, filesize limiting, etc., the numbers may not match
even if muxing happened in the main thread.
Introduce a new counter, data_size_enc, for total size of the packets
received from the encoder and use that in update_video_stats(). Rename
data_size to data_size_mux to indicate its semantics more clearly.
No synchronization is needed for data_size_mux, because it is only read
in the main thread in print_final_stats(), which runs after the muxer
threads are terminated.
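The resulting split of the two counters, with only the relevant fields
shown (comments paraphrase the semantics described above):

    #include <stdint.h>

    typedef struct OutputStream {
        /* ... */
        uint64_t data_size_mux; /* bytes sent to the muxer; updated in the
                                 * muxer thread, read by print_final_stats()
                                 * only after the muxer threads have exited */
        uint64_t data_size_enc; /* bytes received from the encoder; updated
                                 * and read only in the main thread, used by
                                 * update_video_stats() */
        /* ... */
    } OutputStream;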
It is either equal to OutputStream.enc_ctx->codec, or NULL when enc_ctx
is NULL. Replace the use of enc with enc_ctx->codec, or the equivalent
enc_ctx->codec_* fields where more convenient.
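An illustrative example of the substitution (not the literal patch):

    const char *name;
    enum AVMediaType type;

    /* before */
    name = ost->enc->name;
    type = ost->enc->type;

    /* after */
    name = ost->enc_ctx->codec->name;
    type = ost->enc_ctx->codec_type;   /* equivalent enc_ctx field */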
ost->enc is always non-NULL here, since
- this code is never called for streamcopy
- opening the output file will fail if an encoder cannot be found, in
  which case filters are never initialized
This code cannot be triggered, since after 90944ee3ab opening the
output file will abort if an encoder cannot be found and streamcopy was
not explicitly requested.
It races with the demuxing thread. Instead, send the information along
with the demuxed packets.
Ideally, the code should stop using the stream-internal parsing
completely, but that requires considerably more effort.
Fixes races, e.g. in:
- fate-h264-brokensps-2580
- fate-h264-extradata-reload
- fate-iv8-demux
- fate-m4v-cfr
- fate-m4v
Don't silently replace it with the default layout for the number of
channels in the requested layout.
Should fix ticket #9869
Signed-off-by: James Almer <jamrial@gmail.com>
c11fb46731 led to a regression whereby the return code for a missing
input or a failed input probe is overridden by the writer-close return
code and hence not conveyed in the exit code.
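The shape of the fix is to keep an already-set failure code instead of
letting the writer-close result overwrite it; a sketch with illustrative
names, not the actual ffprobe code:

    /* 'ret' may already hold the failure code for a missing input or a
     * failed probe; only let the writer-close result through otherwise */
    int close_ret = close_writer(&wctx);   /* hypothetical close helper */
    if (ret >= 0)
        ret = close_ret;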
Use it instead of AVStream.codecpar in the main thread. While
AVStream.codecpar is documented to only be updated when the stream is
added or by avformat_find_stream_info(), it is actually updated during
demuxing. Accessing it from a different thread therefore constitutes a
race.
Ideally, some mechanism should eventually be provided for signalling
parameter updates to the user. Then the demuxing thread could pick up
the changes and propagate them to the decoder.
Discontinuity detection/correction is left in the main thread, as it is
entangled with InputStream.next_dts and related variables, which may be
set by decoding code.
Fixes races e.g. in fate-ffmpeg-streamloop after
aae9de0cb2.
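A sketch of how such a main-thread copy can be taken while the input is
opened, before the demuxing thread starts (the struct and its par field
are simplified stand-ins for the fftools types, used for illustration):

    #include <libavformat/avformat.h>
    #include <libavcodec/codec_par.h>

    typedef struct InputStream {
        AVCodecParameters *par;   /* main-thread copy of the parameters */
        /* ... */
    } InputStream;

    static int copy_input_params(InputStream *ist, const AVStream *st)
    {
        ist->par = avcodec_parameters_alloc();
        if (!ist->par)
            return AVERROR(ENOMEM);

        /* st->codecpar may later be modified during demuxing; this copy
         * stays stable for the main thread */
        return avcodec_parameters_copy(ist->par, st->codecpar);
    }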
This will allow moving normal offset handling to the demuxer thread,
since discontinuity handling currently has to stay in the main thread,
as that code uses some decoder-produced values.
InputFile.ts_offset can change during transcoding, due to discontinuity
correction. This should not affect the streamcopy starting timestamp.
Cf. bf2590aed3
-stream_loop is currently handled by destroying the demuxer thread,
seeking, then recreating it anew. This is very messy and conflicts with
the future goal of moving each major ffmpeg component into its own
thread.
Handle -stream_loop directly in the demuxer thread. Looping requires the
demuxer to know the duration of the file, which takes into account the
duration of the last decoded audio frame (if any). Use a thread message
queue to communicate this information from the main thread to the
demuxer thread.
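A sketch of that hand-off using the libavutil thread message queue (the
message struct, helper names and surrounding variables are assumptions,
not the actual implementation; the queue itself would be allocated with
av_thread_message_queue_alloc() while the input is opened):

    #include <stdint.h>
    #include <libavutil/threadmessage.h>

    /* Illustrative message type: duration of the last decoded frame of
     * one input stream, in that stream's timebase. */
    typedef struct LastFrameDuration {
        int     stream_index;
        int64_t duration;
    } LastFrameDuration;

    /* main thread: report the duration after decoding an audio frame */
    static void report_duration(AVThreadMessageQueue *q, int ist_index,
                                int64_t duration)
    {
        LastFrameDuration upd = { ist_index, duration };
        av_thread_message_queue_send(q, &upd, AV_THREAD_MESSAGE_NONBLOCK);
    }

    /* demuxer thread: before seeking back to loop, pick up any pending
     * updates and use them when computing the timestamp offset for the
     * next iteration */
    static void drain_durations(AVThreadMessageQueue *q,
                                int64_t *last_duration)
    {
        LastFrameDuration upd;
        while (av_thread_message_queue_recv(q, &upd,
                                            AV_THREAD_MESSAGE_NONBLOCK) >= 0)
            last_duration[upd.stream_index] = upd.duration;
    }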
This avoids a potential race with the demuxer adding new streams. It is
also more efficient, since we no longer do inter-thread transfers of
packets that will just be discarded.
This undocumented feature enables dumping input packets at runtime. I can
think of no reasonable real-world use case that cannot also be
accomplished in a different way. Keeping this functionality would
interfere with the following commit moving it to the input thread (then
setting the variable would require locking or atomics, which would be
unnecessarily complicated for a feature that probably nobody uses).
There are currently three possible modes for an output stream:
1) The stream is produced by encoding output from some filtergraph. This
is true when ost->enc_ctx != NULL, or equivalently when
ost->encoding_needed != 0.
2) The stream is produced by copying some input stream's packets. This
is true when ost->enc_ctx == NULL && ost->source_index >= 0.
3) The stream is produced by attaching some file directly. This is true
when ost->enc_ctx == NULL && ost->source_index < 0.
OutputStream.stream_copy is currently used to identify case 2), and
sometimes, confusingly (or even incorrectly), to identify case 1).
Remove it, replacing its usage with checks on the enc_ctx/source_index
values.
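The three cases can then be told apart with checks like these (helper
names are illustrative, not part of the change):

    /* 1) encoded from a filtergraph */
    static int ost_is_encoded(const OutputStream *ost)
    {
        return ost->enc_ctx != NULL;
    }

    /* 2) streamcopy from an input stream */
    static int ost_is_streamcopy(const OutputStream *ost)
    {
        return !ost->enc_ctx && ost->source_index >= 0;
    }

    /* 3) attachment written directly from a file */
    static int ost_is_attachment(const OutputStream *ost)
    {
        return !ost->enc_ctx && ost->source_index < 0;
    }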