Should fix valgrind warnings about "Conditional jump or move depends
on uninitialised value(s)".
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: James Almer <jamrial@gmail.com>
Change the main loop and every component (demuxers, decoders, filters,
encoders, muxers) to use the previously added transcode scheduler. Every
instance of every such component was already running in a separate
thread, but now they can actually run in parallel.
Changes the results of the ffmpeg-fix_sub_duration_heartbeat test;
JEEB verified the new output to be more correct and deterministic.
See the comment block at the top of fftools/ffmpeg_sched.h for more
details on what this scheduler is for.
This commit adds the scheduling code itself, along with minimal
integration with the rest of the program:
* allocating and freeing the scheduler
* passing it throughout the call stack in order to register the
individual components (demuxers/decoders/filtergraphs/encoders/muxers)
with the scheduler - see the toy sketch below
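The registration pattern is roughly as in the following toy sketch (all
names here are hypothetical placeholders, not the actual
fftools/ffmpeg_sched.h API):

    #include <stdio.h>

    /* Toy illustration with made-up names: each component registers
     * itself with the scheduler and receives a node index, which is
     * later used to wire up data flow between the component threads. */
    enum NodeType { NODE_DEMUX, NODE_DEC, NODE_FILTER, NODE_ENC, NODE_MUX };

    typedef struct Scheduler {
        enum NodeType types[32];
        int           nb_nodes;
        int           edges[32][2];
        int           nb_edges;
    } Scheduler;

    static int sched_add(Scheduler *sch, enum NodeType type)
    {
        sch->types[sch->nb_nodes] = type;
        return sch->nb_nodes++;  /* node index handed back to the component */
    }

    static void sched_connect(Scheduler *sch, int src, int dst)
    {
        sch->edges[sch->nb_edges][0] = src;
        sch->edges[sch->nb_edges][1] = dst;
        sch->nb_edges++;
    }

    int main(void)
    {
        Scheduler sch = { 0 };

        /* registration, as done while options are parsed and files opened */
        int demux = sched_add(&sch, NODE_DEMUX);
        int dec   = sched_add(&sch, NODE_DEC);
        int fg    = sched_add(&sch, NODE_FILTER);
        int enc   = sched_add(&sch, NODE_ENC);
        int mux   = sched_add(&sch, NODE_MUX);

        /* demuxer -> decoder -> filtergraph -> encoder -> muxer */
        sched_connect(&sch, demux, dec);
        sched_connect(&sch, dec,   fg);
        sched_connect(&sch, fg,    enc);
        sched_connect(&sch, enc,   mux);

        printf("%d nodes, %d edges\n", sch.nb_nodes, sch.nb_edges);
        return 0;
    }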
The scheduler is not actually used as of this commit, so it should not
result in any change in behavior. That will change in future commits.
As with the analogous decoding change, this is only a preparatory step
toward a fully threaded architecture and does not yet make encoding truly
parallel. The main thread will currently submit a frame and wait until
it has been fully processed by the encoder before moving on. That will
change in future commits after filters are moved to threads and a
thread-aware scheduler is added.
This code suffers from a known issue - if an encoder with a sync queue
receives EOF it will terminate after processing everything it currently
has, even though the sync queue might still be triggered by other
threads. That will be fixed in following commits.
* makes the code shorter and simpler
* avoids constantly allocating and freeing AVPackets, thanks to
ThreadQueue integration with ObjPool (see the pool sketch below)
* is consistent with decoding/filtering/muxing
* reduces the diff in the future switch to thread-aware scheduling
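For illustration, a minimal pool sketch in the spirit of the above
(simplified, fixed-capacity toy - not the actual fftools/objpool.h
interface):

    #include <libavcodec/packet.h>
    #include <libavutil/macros.h>

    /* Toy packet pool: pool_get() reuses a previously returned AVPacket
     * when one is available instead of allocating a fresh one. */
    typedef struct PacketPool {
        AVPacket *pkts[64];
        unsigned  nb;
    } PacketPool;

    static AVPacket *pool_get(PacketPool *p)
    {
        return p->nb ? p->pkts[--p->nb] : av_packet_alloc();
    }

    /* Only the packet's contents are released; the AVPacket itself is
     * kept around for reuse (freed outright once the pool is full). */
    static void pool_release(PacketPool *p, AVPacket **pkt)
    {
        av_packet_unref(*pkt);
        if (p->nb < FF_ARRAY_ELEMS(p->pkts)) {
            p->pkts[p->nb++] = *pkt;
            *pkt = NULL;
        } else
            av_packet_free(pkt);
    }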
This makes ifile_get_packet() always block. Any potential issues caused
by this will be resolved by the switch to thread-aware scheduling in
future commits.
Otherwise they'd be silently ignored if received by the filtering thread
before the filtergraph can be initialized, which would make the output
dependent on the order in which frames from different inputs arrive.
As previously done for decoding, this is merely "scaffolding" for moving to a
fully threaded architecture and does not yet make filtering truly
parallel - the main thread will currently wait for the filtering thread
to finish its work before continuing. That will change in future commits
after encoders are also moved to threads and a thread-aware scheduler is
added.
Avoid making decisions based on the current graph input state, which would
make the output dependent on the order in which frames from different
inputs are interleaved.
Makes the output of fate-filter-overlay-dvdsub-2397 more correct - the
subtitle appears two frames later, which is closer to its PTS as stored
in the file.
Declaring the function argument as const fixes a warning further down
the call chain about the const qualifier being discarded. We do not
modify this argument.
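A hypothetical illustration of the kind of warning involved (made-up
names):

    #include <stddef.h>

    /* If 'name' were declared as plain 'char *', a caller holding a
     * 'const char *' would get a discarded-qualifiers warning when
     * passing it in. Declaring it const is accurate: read-only access. */
    static size_t name_length(const char *name)
    {
        size_t len = 0;
        while (name[len])
            len++;
        return len;
    }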
Signed-off-by: Leo Izen <leo.izen@gmail.com>
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Current code tracks min/max pts for each stream separately; then when
the file ends it combines them with the last frame's duration to compute the
total duration of each stream; finally it selects the longest of those
durations as the file duration.
This is incorrect - the total file duration is the largest timestamp
difference between any two frames, regardless of their streams.
Also change the way the last-frame information is reported from decoders
to the muxer - previously it was just the last frame's duration; now the
end timestamp is sent, which is simpler.
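A minimal sketch of the new logic (hypothetical helper; assumes all
timestamps have been rescaled to a common timebase):

    #include <stdint.h>

    /* Track a single global window over all frames, regardless of
     * which stream they belong to. */
    typedef struct DurationTracker {
        int64_t min_pts; /* earliest frame start seen so far */
        int64_t max_end; /* latest frame end (pts + duration) seen */
    } DurationTracker;

    static void tracker_init(DurationTracker *t)
    {
        t->min_pts = INT64_MAX;
        t->max_end = INT64_MIN;
    }

    /* Called for every frame of every stream; pts + duration is the
     * end timestamp that decoders now report to the muxer. */
    static void tracker_add(DurationTracker *t, int64_t pts, int64_t duration)
    {
        int64_t end = pts + duration;
        if (pts < t->min_pts) t->min_pts = pts;
        if (end > t->max_end) t->max_end = end;
    }

    /* The largest timestamp difference between any two frames. */
    static int64_t tracker_duration(const DurationTracker *t)
    {
        return t->max_end - t->min_pts;
    }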
Changes the result of the fate-ffmpeg-streamloop-transcode-av test,
where the timestamps are shifted slightly forward. Note that the
matroska demuxer does not return the first audio packet after seeking
(due to a buggy interaction between the generic code and the demuxer), so
there is a gap in audio.
This ensures that tq_receive() will always return EOF after all streams
were receive-finished, even though the sending side might not have
closed them yet. This may allow the receiver to avoid manually tracking
which streams it has already closed.
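A sketch of a receive loop relying on this guarantee (the per-stream
EOF semantics below are paraphrased from this description; treat the
details as assumptions rather than the exact thread_queue.h contract):

    #include <libavcodec/packet.h>
    #include <libavutil/error.h>

    #include "thread_queue.h" /* fftools-internal header */

    static void drain(ThreadQueue *tq, AVPacket *pkt)
    {
        for (;;) {
            int stream_idx;
            int ret = tq_receive(tq, &stream_idx, pkt);

            if (ret == AVERROR_EOF) {
                if (stream_idx < 0)
                    break;              /* every stream is finished */
                /* one stream ended - finish our side of it; once all
                 * streams are receive-finished, tq_receive() keeps
                 * returning EOF even if senders have not closed yet */
                tq_receive_finish(tq, stream_idx);
                continue;
            }

            av_packet_unref(pkt);       /* stand-in for actual work */
        }
    }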
Fixes ticket #10638 (and should also fix ticket #10482)
by restoring the behaviour from before
3c7dd5ed37.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
GetStdHandle() is unavailable outside of the desktop API subset.
This was not a problem with earlier WinSDKs, as kbhit() likewise used
to be available only for desktop apps, and this whole section is
wrapped in #if HAVE_KBHIT. With newer WinSDKs, kbhit() is also available
for non-desktop apps, while GetStdHandle() still is not.
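A hedged sketch of guarding the call (assuming the standard
winapifamily.h macros; not necessarily the guard used in this commit):

    #include <windows.h>
    #include <winapifamily.h>
    #include <conio.h>

    static int check_keyboard_input(void)
    {
    #if WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_DESKTOP)
        /* GetStdHandle() only exists in the desktop subset */
        HANDLE h = GetStdHandle(STD_INPUT_HANDLE);
        (void)h; /* e.g. PeekConsoleInput(h, ...) would go here */
    #endif
        /* kbhit()/_kbhit() is available more widely in newer WinSDKs */
        return _kbhit();
    }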
Signed-off-by: Martin Storsjö <martin@martin.st>
Fix rendering of int values within a side data element, which had been
broken since commit d2d3a83ad9, where the side data element was
correctly marked as a variable-fields element. Logic to render a
string variable was already implemented, but the equivalent was missing
from the int fields path, which that commit enabled.
Also, the code and schema are changed in order to account for multiple
variable-fields elements - such as side data - contained within the
same parent. Previously it was assumed that a single variable-fields
element was contained within the parent, which was the case for tags,
but is not the case for side data.
Previously data was rendered as:
<side_data_list>
    <side_data side_data_type="CPB properties" max_bitrate="0" min_bitrate="0" avg_bitrate="0" buffer_size="327680" vbv_delay="-1"/>
</side_data_list>
Now as:
<side_data_list>
    <side_data type="CPB properties">
        <side_datum key="side_data_type" value="CPB properties"/>
        <side_datum key="max_bitrate" value="0"/>
        <side_datum key="min_bitrate" value="0"/>
        <side_datum key="avg_bitrate" value="0"/>
        <side_datum key="buffer_size" value="49152"/>
        <side_datum key="vbv_delay" value="-1"/>
    </side_data>
</side_data_list>
Variable-fields elements are rendered as a containing element wrapping
generic key/value elements, enabling the use of a strict XML schema.
Fix trac issue:
https://trac.ffmpeg.org/ticket/10613
An AVFormatContext leaks on errors that happen before it is attached
to its permanent place (an InputFile). Fix this by attaching
it earlier.
Given that it is not documented that avformat_close_input() is usable
with an AVFormatContext that has only been allocated with
avformat_alloc_context() and not opened with avformat_open_input(),
one error path before avformat_open_input() had to be treated
specially: it uses avformat_free_context().
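A simplified sketch of the two cleanup paths (error handling
shortened; some_setup_fails() is a hypothetical stand-in):

    #include <libavformat/avformat.h>

    static int some_setup_fails(AVFormatContext *ic) { (void)ic; return 0; }

    static int open_file(AVFormatContext **dst, const char *url)
    {
        AVFormatContext *ic = avformat_alloc_context();
        if (!ic)
            return AVERROR(ENOMEM);

        /* failure before the context is opened: only
         * avformat_free_context() is documented for this state */
        if (some_setup_fails(ic)) {
            avformat_free_context(ic);
            return AVERROR(EINVAL);
        }

        int ret = avformat_open_input(&ic, url, NULL, NULL);
        if (ret < 0)
            return ret; /* avformat_open_input() freed ic on failure */

        /* permanent place: from here on, regular cleanup goes through
         * avformat_close_input(dst) */
        *dst = ic;
        return 0;
    }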
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The av_opt_eval family of functions emits error messages on failure
and can therefore not be used with fake objects when the AVClass
has a custom item_name callback. The AVClass for AVCodecContext
has such a custom callback (it checks whether an AVCodec is set
and, if so, uses its name). In practice this means that whatever
happens to lie directly after the "cc" pointer to the AVClass for
AVCodec in the stack frame of ist_add() will be treated as a pointer
to an AVCodec, with unpredictable consequences.
Fix this by using an actual AVCodecContext instead of a fake object.
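For instance, a hedged sketch of evaluating an option against a real
context (illustrative only, not the exact code in ist_add()):

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    static int eval_codec_flags(const AVCodec *codec, const char *str,
                                int *out)
    {
        /* a real AVCodecContext, so the class' item_name callback can
         * safely dereference it when logging errors */
        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        const AVOption *o;
        int ret;

        if (!ctx)
            return AVERROR(ENOMEM);

        o   = av_opt_find(ctx, "flags", NULL, 0, 0);
        ret = o ? av_opt_eval_flags(ctx, o, str, out)
                : AVERROR_OPTION_NOT_FOUND;

        avcodec_free_context(&ctx);
        return ret;
    }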
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Its function is analogous to that of the fps filter, so filtering is a
more appropriate place for this.
The main practical reason for this move is that it places the encoding
sync queue right at the boundary between filters and encoders. This will
be important when switching to threaded scheduling, as the sync queue
involves multiple streams and will thus need to do nontrivial
inter-thread synchronization.
In addition to framerate conversion, the closely related steps of
* selecting the encoder timebase
* applying the start_time offset
are also moved to filtering.
That field is used by the framerate code to track whether any output has
been generated for the last input frame(*). Its use in the last
invocation of print_report() is meant to include the very last
filtered frame, if it was dropped, in the count of dropped frames
printed in the log. However, that is a highly inappropriate place to do
so, as it makes assumptions about vsync logic in completely unrelated
code. Move the increment to encoder flush instead.
(*) the name is misleading, as the input frame has not yet been dropped
and may still be output in the future
Always use the functionality of the latter, which makes more sense as it
avoids losing keyframes due to the vsync code dropping frames.
Deprecate the 'source_no_drop' value, as it is now redundant.
Unlike the 'source' mode, which preserves source keyframe-marking as-is,
the 'source_no_drop' mode attempts to keep track of keyframes dropped by
framerate conversion and mark the next output frame as key in such
cases. However,
* c75be06148 broke this functionality entirely, and made it equivalent
to 'source'
* even before that, it would only work when the frame immediately
following the dropped keyframe was itself preserved and not dropped as well
Also drop a check for 'frame' when setting dropped_keyframe, as it is
redundant with the check on the line above.
ifilter_send_eof() will fail if the input has no real or fallback
parameters, so there is no need to handle the case of some inputs being
in EOF state yet having no parameters.
This is no longer needed, as the side data is available to decoders in
the AVCodecContext.
The affected tests reflect the removal of useless CPB and Stereo 3D side
data from packets.
Signed-off-by: James Almer <jamrial@gmail.com>