Said table is 64 bytes long and exported so that it can be used in both
libavcodec and libavformat. This commit stops doing so and instead
duplicates it for shared builds, because the overhead of exporting the
symbol is bigger than 64 bytes: the symbol's name, stored in both the
exporting and the importing library (2x24 bytes), two entries in .dynsym
(2x24 bytes), two symbol-version entries (2x2 bytes), one hash value in
the exporting library (4 bytes), plus one entry each in the importing
library's .got and .rela.dyn (8 + 24 bytes).
(The above numbers are for a Linux/GNU/ELF system; the numbers for other
platforms may be different.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This prevents making the DCAParseError enum part of the ABI.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Reviewed-by: foo86 <foobaz86@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
There are three different places where the DCA core frame header is
parsed: the decoder, the parser and the demuxer. Each one uses ad-hoc
code. Add a common core frame header parsing function that is used in
all three places.
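A rough sketch of the shape such a shared helper could take; the struct
fields and the function name here are illustrative only, not the actual
FFmpeg API:

    #include <stdint.h>

    /* Illustrative only: a common core frame header parser that the
     * decoder, parser and demuxer could all call. */
    typedef struct DCACoreHeader {
        int      crc_present;   /* CRC present flag            */
        int      npcmblocks;    /* number of PCM sample blocks */
        unsigned frame_size;    /* primary frame byte size     */
        int      audio_mode;    /* channel arrangement         */
        int      sr_code;       /* sample rate index           */
        int      br_code;       /* bit rate index              */
    } DCACoreHeader;

    /* Returns 0 on success, a negative error code on a malformed header. */
    int parse_core_frame_header(DCACoreHeader *h, const uint8_t *buf, int size);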
Signed-off-by: James Almer <jamrial@gmail.com>
This is now required by dcadec_decode_frame(). All remaining users of
avpriv_dca_convert_bitstream() have been updated to expect the EXSS
marker.
Signed-off-by: James Almer <jamrial@gmail.com>
The function is used on unaligned buffers (such as those provided
by AVPacket); accessing them as uint16_t causes SIGBUS crashes on
architectures like SPARC.
This fixes the ubsan runtime error: load of misaligned address for type
'const uint16_t', which requires 2 byte alignment
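Not the actual hunk, just an illustration of the pattern: a direct
uint16_t load from an odd address traps on strict-alignment targets,
whereas a memcpy-based read (or libavutil's AV_RB16/AV_RL16 macros from
intreadwrite.h) does not:

    #include <stdint.h>
    #include <string.h>

    /* Portable 16-bit read from a possibly unaligned pointer; compilers
     * turn the memcpy into a single load where the target allows it. */
    static inline uint16_t read_u16_unaligned(const uint8_t *p)
    {
        uint16_t v;
        memcpy(&v, p, sizeof(v));
        return v;
    }

    /* Problematic form: *(const uint16_t *)p -- undefined behaviour, and
     * a SIGBUS on SPARC when p is not 2-byte aligned. */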
Signed-off-by: Andreas Cadhalpun <Andreas.Cadhalpun@googlemail.com>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
src and dst are only byte-aligned, so accessing them as uint16_t causes
SIGBUS crashes on architectures like SPARC.
This fixes the ubsan runtime error: load of misaligned address for type
'const uint16_t', which requires 2 byte alignment
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Cadhalpun <Andreas.Cadhalpun@googlemail.com>
These decoders use a special non-MPEG2 IDCT. Call it directly
instead of going through dsputil. There is never any reason
to use a regular IDCT with these decoders or to use the EA IDCT
with other codecs.
This also fixes the bizarre situation of eamad and eatqi decoding
incorrectly if eatgq is disabled.
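A minimal sketch of the before/after this describes, with made-up names
standing in for the dsputil-era structures (not the actual hunks):

    #include <stdint.h>

    /* Illustrative only: function-pointer dispatch vs. a direct call. */
    typedef struct IDCTOps {
        void (*idct_put)(uint8_t *dest, int linesize, int16_t *block);
    } IDCTOps;

    void ea_idct_put(uint8_t *dest, int linesize, int16_t *block);

    /* Before: whichever IDCT dsputil selected for this build/CPU. */
    static void decode_block_old(IDCTOps *dsp, uint8_t *dest, int linesize,
                                 int16_t *block)
    {
        dsp->idct_put(dest, linesize, block);
    }

    /* After: always the EA-specific IDCT, with no dependency on
     * dsputil's IDCT selection. */
    static void decode_block_new(uint8_t *dest, int linesize, int16_t *block)
    {
        ea_idct_put(dest, linesize, block);
    }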
Signed-off-by: Mans Rullgard <mans@mansr.com>
They had been broken since August 2010 without anyone noticing until
three weeks ago. Nobody cares about them anymore, and hopefully Marvell
will support NEON, as in the PXA978, from now on.
It contains optimizations that are not specific to i386, and libavutil
already uses this naming scheme.
Originally committed as revision 16270 to svn://svn.ffmpeg.org/ffmpeg/trunk
The C version is 1.9x faster than the previous C version (on various x86
CPUs); the SSE version is 1.6x faster than the previous SSE version.
Originally committed as revision 14698 to svn://svn.ffmpeg.org/ffmpeg/trunk
include paths in the source files.
mostly from a patch by Ronald S. Bultje, rbultje ronald.bitfreak net
Originally committed as revision 9034 to svn://svn.ffmpeg.org/ffmpeg/trunk
Patch by Zuxy Meng, zuxy <<dot>> meng >>at<< gmail <<dot>> com
Minor non-functional diff-related fixes by me.
Originally committed as revision 5125 to svn://svn.ffmpeg.org/ffmpeg/trunk