Enjoy some cache locality and use fewer threads.
About 5x speedup (from 60ms to 12ms to decode a 4k frame).
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
The rationale is that coded_frame was only used to communicate key_frame,
pict_type and quality to the caller, as well as a few other random fields,
in an unpredictable, let alone consistent, way.
There was agreement that there was no use case for coded_frame, as it is
a full-sized AVFrame container used for just 2-3 int-sized properties,
which shouldn't even belong in the AVCodecContext in the first place.
The appropriate AVPacket flag can be used instead of key_frame, while
quality is exported with the new AVPacketSideData quality factor.
There is no replacement for the other fields as they were unreliable,
mishandled or just not used at all.
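For illustration, a minimal caller-side sketch of the replacement, assuming the
side data type is named AV_PKT_DATA_QUALITY_FACTOR and carries a single int
(both are assumptions based on this series, not the final API):

    #include <libavcodec/avcodec.h>
    #include <libavutil/log.h>

    /* Sketch only: read the two surviving properties from the packet
     * instead of avctx->coded_frame. */
    static void report_packet_info(AVPacket *pkt)
    {
        int size = 0;
        /* key_frame is signalled through the packet flag */
        int key_frame = !!(pkt->flags & AV_PKT_FLAG_KEY);
        /* quality is exported as packet side data */
        uint8_t *sd = av_packet_get_side_data(pkt, AV_PKT_DATA_QUALITY_FACTOR,
                                              &size);

        if (sd && size >= (int)sizeof(int))
            av_log(NULL, AV_LOG_INFO, "key=%d quality=%d\n",
                   key_frame, *(int *)sd);
        else
            av_log(NULL, AV_LOG_INFO, "key=%d (no quality side data)\n",
                   key_frame);
    }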
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This is necessary to preserve the quality information currently exported
with coded_frame. Add the new side data to every encoder that needs it,
and use it in avconv.
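As an illustration, an encoder-side sketch of attaching the side data to an
outgoing packet; export_quality() is a hypothetical helper and the plain-int
payload is an assumption:

    #include <errno.h>
    #include <string.h>
    #include <libavcodec/avcodec.h>

    /* Hypothetical helper: attach the quality value to the packet the
     * encoder is about to return. */
    static int export_quality(AVPacket *pkt, int quality)
    {
        uint8_t *sd = av_packet_new_side_data(pkt, AV_PKT_DATA_QUALITY_FACTOR,
                                              sizeof(int));
        if (!sd)
            return AVERROR(ENOMEM);
        memcpy(sd, &quality, sizeof(int));
        return 0;
    }

avconv can then read the value back with av_packet_get_side_data(), as in the
sketch above.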
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Allocating coded_frame is what most encoders do anyway, so it makes
sense to always allocate and free it in a single place. Moreover, a lot
of encoders freed the frame with av_freep() instead of the correct API
av_frame_free().
This brings uniformity to encoder behaviour and prevents applications
from erroneously accessing this field when it is not allocated.
Additionally, this helps isolate encoders that export information
through coded_frame, and greatly simplifies its deprecation.
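Roughly (a sketch only; the real change lives in avcodec_open2() and
avcodec_close(), the function names below are illustrative):

    #include <errno.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/frame.h>

    /* Single allocation point, shared by all encoders. */
    static int codec_open_common(AVCodecContext *avctx)
    {
        if (av_codec_is_encoder(avctx->codec)) {
            avctx->coded_frame = av_frame_alloc();
            if (!avctx->coded_frame)
                return AVERROR(ENOMEM);
        }
        return 0;
    }

    /* Single free point: av_frame_free(), not av_freep(), so buffers and
     * side data attached to the frame are released as well. */
    static void codec_close_common(AVCodecContext *avctx)
    {
        av_frame_free(&avctx->coded_frame);
    }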
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This change (and the following ones of the same kind) is mainly to
simplify wrapping this section with an #if FF_API block later on.
No functional changes are applied: the context's coded_frame fields are
initialized directly, instead of going through a local reference to the
coded_frame itself.
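For example (illustrative snippet, not the actual diff; FF_API_CODED_FRAME is
the guard name the upcoming deprecation is expected to use):

    #include <libavcodec/avcodec.h>

    static void mark_keyframe(AVCodecContext *avctx)
    {
    #if FF_API_CODED_FRAME
        /* direct initialization: the whole block can be guarded as-is */
        avctx->coded_frame->pict_type = AV_PICTURE_TYPE_I;
        avctx->coded_frame->key_frame = 1;
    #endif
    }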
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This filter currently assumes that the input audio is continuous and
does some timestamp manipulation based on this assumption.
This is unnecessary if we are only converting the channel layout or the
sample format, without resampling. In such a case, just leave the
timestamps as they are.
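A minimal sketch of the intent, with a hypothetical helper rather than the
filter's actual code:

    #include <libavutil/frame.h>

    /* When the sample rate does not change, the output frame keeps the
     * input timestamp; only real resampling needs the sample-count-based
     * timestamps that assume continuous input. */
    static void set_out_pts(AVFrame *out, const AVFrame *in,
                            int in_rate, int out_rate, int64_t resampled_pts)
    {
        if (in_rate == out_rate)
            out->pts = in->pts;        /* layout/format conversion only */
        else
            out->pts = resampled_pts;  /* derived from the sample count */
    }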
The current code is less than straightforward because output streams can
be created based on filtergraph definitions. This change should make the
code simpler and more readable. It will also be useful in future
commits.
Since global options are now processed before all the other options, we
no longer have to try creating the complex filtergraphs several times;
it is enough to do it once, after the input files are opened.
There is only one decoder left that supports this (libopus, which is not
used by default since we have a native one), and this code goes against
the avconv design, since it propagates information back from the encoder
to the decoder.
The upper halves of the registers are not guaranteed to be zero on x86-64.
Also use `test` instead of `and` when the result is only used as a branch
condition; this allows some register moves to be eliminated.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>