In particular, add the missing dependency on the scale and
aresample filters (and therefore on libswscale and libswresample, respectively).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
AVID streams - currently handled by the AVRN decoder - can be (depending
on extradata contents) either MJPEG or raw video. To decode the MJPEG
variant, the AVRN decoder currently instantiates an MJPEG decoder
internally and forwards decoded frames to the caller (possibly after
cropping them).
This is suboptimal, because the AVRN decoder does not forward all the
features of the internal MJPEG decoder, such as direct rendering.
Handling such forwarding in a full and generic manner would be quite
hard, so it is simpler to just handle those streams in the MJPEG decoder
directly.
The AVRN decoder, now left with only the raw streams, can be
marked as supporting direct rendering.
This also removes the last remaining internal use of the obsolete
decoding API.
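For callers, direct rendering means the decoder can write straight into buffers
supplied through AVCodecContext.get_buffer2 instead of requiring an extra copy.
A minimal sketch of the caller side (the callback here is hypothetical and
simply defers to the default allocator; a real one would hand out its own
memory):

    #include <libavcodec/avcodec.h>

    /* hypothetical callback: a real caller would provide its own aligned
     * buffers (e.g. display or GPU memory) instead of the default allocator */
    static int my_get_buffer2(AVCodecContext *avctx, AVFrame *frame, int flags)
    {
        return avcodec_default_get_buffer2(avctx, frame, flags);
    }

    static void enable_direct_rendering(AVCodecContext *avctx)
    {
        /* only meaningful when the decoder advertises AV_CODEC_CAP_DR1;
         * must be set before avcodec_open2() */
        if (avctx->codec->capabilities & AV_CODEC_CAP_DR1)
            avctx->get_buffer2 = my_get_buffer2;
    }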
It depends on the muxer generating the timestamps, which is deprecated
and scheduled for removal on next bump.
A bunch of tests change timestamps because ffmpeg.c does not
generate them correctly. This should be fixed later.
Filters mostly work in native endianness, but they must output
a specified endianness, usually little endian: that requires a final
conversion on big-endian systems.
I do not know what the deal with gif-deal is: explicitly inserting
the filters that would otherwise be inserted implicitly results in
fewer frames in the output. Probably a strange problem with duration.
Explicitly insert the scale or aresample filter where it would
have been inserted by the negotiation.
Re-enable automatic conversions where explicit insertion cannot be done easily.
If a conversion is needed in a test, we want to know about it.
If the negotiation changes and makes new conversion necessary,
we want to know about it even more.
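As a rough illustration of the idea (the chain below is made up for this
example, not taken from any specific test), the conversion point becomes
visible when the scale and format filters are named explicitly in the graph
rather than being inserted during negotiation:

    #include <libavfilter/avfilter.h>

    /* sketch: build a complete graph with an explicit scale/format step,
     * so the conversion is visible instead of auto-inserted */
    static int build_graph(void)
    {
        AVFilterGraph *graph = avfilter_graph_alloc();
        AVFilterInOut *in = NULL, *out = NULL;
        int ret;

        if (!graph)
            return AVERROR(ENOMEM);

        ret = avfilter_graph_parse_ptr(graph,
                "testsrc=size=64x64:duration=1,scale,format=rgb24,nullsink",
                &in, &out, NULL);
        if (ret >= 0)
            ret = avfilter_graph_config(graph, NULL);

        avfilter_inout_free(&in);
        avfilter_inout_free(&out);
        avfilter_graph_free(&graph);
        return ret;
    }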
The tests previously rounded the timestamps. It is better in a fate test
to preserve the data from the demuxer and decoder.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The constants used in the decoder relied on floating point precision,
and this caused different values to be generated on different
architectures. Additionally on big endian machines, the fate test
would output bytes in native order, which is different from the one
hardcoded in the test.
So, eradicate floating point numbers and use fixed point (32.32)
arithmetic everywhere, replacing constants with precomputed integer
values, and force the pixel format output to be the same in the fate
test.
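For illustration (the constant and helper below are made up for this note, not
the decoder's actual values), a 32.32 fixed-point constant is simply
round(x * 2^32) stored as an integer, so every architecture computes with the
same bit pattern:

    #include <stdint.h>

    /* round(sqrt(3)/2 * 2^32); precomputed, so no float math at runtime */
    static const int64_t SQRT3_2_FIX = 0xDDB3D743LL;

    /* scale an integer sample by a 32.32 constant with rounding;
     * assumes the intermediate product fits in 64 bits */
    static inline int32_t fix_scale(int32_t sample, int64_t c)
    {
        return (int32_t)(((int64_t)sample * c + (1LL << 31)) >> 32);
    }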
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
It was merged with the iff_ilbm decoder in commit
929a24efff.
Define AV_CODEC_ID_IFF_BYTERUN1 as AV_CODEC_ID_IFF_ILBM for API
compatibility.
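In the header this presumably amounts to nothing more than an alias along
these lines:

    /* keep the old name usable for API compatibility */
    #define AV_CODEC_ID_IFF_BYTERUN1 AV_CODEC_ID_IFF_ILBM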
Reviewed-by: Ronald S. Bultje <rsbultje@gmail.com>
Signed-off-by: Andreas Cadhalpun <Andreas.Cadhalpun@googlemail.com>
Most of the fate-dds-* and fate-txd-* tests already
output into the same pixel format regardless of
platform endianness, so there's no need to force
conversion to another format.
This fixes the tests fate-txd-16bpp, fate-txd-odd,
fate-dds-rgb16, fate-dds-rgb24 and fate-dds-xrgb on
big endian, where the tests seem to fail due to issues
with certain conversion codepaths in swscale.
Those conversion codepaths should of course be fixed, but
the individual decoder tests should use as few extra
conversion steps as possible.
Signed-off-by: Martin Storsjö <martin@martin.st>
Using the internal DXTC routines brings support for non-multiple-of-4
textures. A new test is added to cover this feature. Hashes differ
since the decoding algorithm is different, though no visual changes
have been spotted.
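Support for such sizes essentially comes down to clamping how much of each
decoded 4x4 block is written at the right and bottom edges; a simplified
sketch (hypothetical helper, not the actual texture code):

    #include <stdint.h>
    #include <string.h>

    /* copy one decoded 4x4 RGBA block into the frame, clamping the copy
     * extents for textures whose sizes are not multiples of 4 */
    static void copy_block(uint8_t *dst, int linesize,
                           const uint8_t block[4][4][4],
                           int x, int y, int width, int height)
    {
        int bw = width  - x < 4 ? width  - x : 4;
        int bh = height - y < 4 ? height - y : 4;

        for (int j = 0; j < bh; j++)
            memcpy(dst + (y + j) * linesize + x * 4, block[j], bw * 4);
    }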
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Use it instead of checking CODEC_FLAG_BITEXACT in the first stream's
codec context.
Using codec options inside lavf is fragile and can easily break when the
muxing codec context is not the encoding context.
Also set the RGBA pixel format correctly as the native endian format,
which is what it returns.
This fixes the tests on big endian.
Signed-off-by: Martin Storsjö <martin@martin.st>