This sets __OUTPUT_FORMAT__ to win64 instead of win32, even though both
(through -m amd64) produce 64-bit binary code.
Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
Functions using INIT_MMX may still access XMM registers directly
(xmm0-15). Therefore, they still need to be marked as clobbering XMM
registers so that those registers can be properly saved/restored.
Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
The spec says the following speaker mapping is default:
center front speaker,
left, right center front speakers,
left, right outside front speakers,
left surround, right surround rear speakers,
front low frequency effects speaker
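As a hypothetical illustration only, the list above could be expressed with
libavutil channel flags roughly as follows; the constants chosen for "center
front", "outside front", "surround rear" etc. are an interpretation of the
wording, not necessarily what the decoder actually sets:

    /* Hypothetical mapping of the default speaker list onto libavutil
     * channel flags; illustrative only. */
    #include <stdio.h>
    #include <stdint.h>
    #include <libavutil/audioconvert.h>

    int main(void)
    {
        uint64_t layout =
            AV_CH_FRONT_CENTER          |  /* center front speaker          */
            AV_CH_FRONT_LEFT_OF_CENTER  |  /* left, right center front      */
            AV_CH_FRONT_RIGHT_OF_CENTER |
            AV_CH_FRONT_LEFT            |  /* left, right outside front     */
            AV_CH_FRONT_RIGHT           |
            AV_CH_BACK_LEFT             |  /* left, right surround rear     */
            AV_CH_BACK_RIGHT            |
            AV_CH_LOW_FREQUENCY;           /* front LFE speaker             */

        printf("default layout mask: 0x%llx\n", (unsigned long long)layout);
        return 0;
    }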
It currently has different meanings at different times (dts of the last
read packet/pts of the last decoded frame). Reduce obfuscation by
storing pts of the decoded frame in the frame itself.
Conflicts:
ffmpeg.c
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
* qatar/master:
swscale: make yuv2yuv1 use named registers.
h264: mark h264_idct_add8_10 with number of XMM registers.
swscale: fix V plane memory location in bilinear/unscaled RGB/YUYV case.
vp8: always update next_framep[] before returning from decode_frame().
avconv: estimate next_dts from framerate if it is set.
avconv: better next_dts usage.
avconv: rename InputStream.pts to last_dts.
avconv: reduce overloading for InputStream.pts.
avconv: rename InputStream.next_pts to next_dts.
avconv: rework -t handling for encoding.
avconv: set encoder timebase for subtitles.
pva-demux test: add -vn
swscale: K&R formatting cosmetics for SPARC code
apedec: allow the user to set the maximum number of output samples per call
apedec: do not unnecessarily zero output samples for mono frames
apedec: allocate a single flat buffer for decoded samples
apedec: use sizeof(field) instead of sizeof(type)
swscale: split C output functions into separate file.
swscale: Split C input functions into separate file.
bytestream: Add bytestream2 writing API.
The avconv changes are not merged yet due to massive regressions and bugs.
Conflicts:
ffmpeg.c
libavcodec/vp8.c
libswscale/swscale.c
libswscale/x86/swscale_template.c
tests/fate/demux.mak
tests/ref/lavf/asf
tests/ref/lavf/avi
tests/ref/lavf/mkv
tests/ref/lavf/mpg
tests/ref/lavf/nut
tests/ref/lavf/ogg
tests/ref/lavf/rm
tests/ref/lavf/ts
tests/ref/seek/lavf_avi
tests/ref/seek/lavf_mkv
tests/ref/seek/lavf_rm
Merged-by: Michael Niedermayer <michaelni@gmx.at>
This fixes crashes in e.g. PNG decoding with SSE2 enabled. In fact, many
x86 optimizations for codecs assume that our buffer strides are 16-byte
aligned.
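As a minimal sketch (using libavutil's FFALIGN() and av_malloc(), not the
decoder's actual allocation path), an image buffer with 16-byte-aligned
strides looks roughly like this:

    /* Aligned SSE2 loads/stores such as movdqa require each line to start
     * on a 16-byte boundary, so both the buffer base and the stride must be
     * multiples of 16. */
    #include <stdio.h>
    #include <stdint.h>
    #include <libavutil/common.h>   /* FFALIGN() */
    #include <libavutil/mem.h>      /* av_malloc(), av_free() */

    int main(void)
    {
        int width  = 1918;               /* deliberately not a multiple of 16 */
        int height = 1080;
        int stride = FFALIGN(width, 16); /* pad every line up to 16 bytes     */

        /* av_malloc() returns suitably aligned memory, so with an aligned
         * stride every line of the buffer starts 16-byte aligned. */
        uint8_t *buf = av_malloc((size_t)stride * height);
        if (!buf)
            return 1;

        printf("width %d -> stride %d\n", width, stride);
        av_free(buf);
        return 0;
    }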
Also slightly move code around so that we do not allocate a new frame if
we won't decode it. This prevents us from putting undecoded frames in
frame pointers, which (in mt decoding) other threads will use and wait on
as references, causing a deadlock (if we skipped decoding) or a crash
(if we didn't initialize next_framep[] at all).
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
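A rough, self-contained sketch of the ordering this describes; the types and
names below are made up for illustration, not the real vp8.c code:

    /* Publish a fully updated next_framep[] before any early return, so
     * another thread waiting on those references never sees a stale or
     * uninitialized pointer. */
    #include <stdio.h>

    typedef struct FrameSketch { int id; } FrameSketch;

    typedef struct DecCtxSketch {
        FrameSketch *next_framep[4];   /* what the next decode call will use */
    } DecCtxSketch;

    static int decode_frame_sketch(DecCtxSketch *s, FrameSketch *cur, int skip)
    {
        /* update next_framep[] first ... */
        for (int i = 0; i < 3; i++)
            s->next_framep[i] = s->next_framep[i + 1];
        s->next_framep[3] = cur;

        if (skip)           /* ... so that this early return is now safe */
            return 0;

        /* actual decoding would happen here */
        return 1;
    }

    int main(void)
    {
        DecCtxSketch s = { { NULL, NULL, NULL, NULL } };
        FrameSketch  f = { 1 };
        printf("decoded: %d\n", decode_frame_sketch(&s, &f, 0));
        return 0;
    }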
next_dts is used for estimating the dts of the next packet if it's
missing. Therefore, it makes no sense to set it from the pts of the last
decoded frame. Also, it should always be estimated from the current
packet duration/ticks_per_frame, not only when a frame was successfully
decoded.
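A hypothetical sketch of that estimation; the names are illustrative, not
the actual avconv fields:

    /* Advance next_dts by the packet duration when it is known, otherwise
     * fall back to one nominal frame duration; this happens for every
     * packet, whether or not a frame was successfully decoded. */
    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    static int64_t advance_next_dts(int64_t next_dts,
                                    int64_t pkt_duration_us, /* 0 if unknown */
                                    double  frame_rate)      /* 0 if unset   */
    {
        if (pkt_duration_us > 0)
            return next_dts + pkt_duration_us;
        if (frame_rate > 0)
            /* the real code also accounts for the codec time base and
             * ticks_per_frame; this is only the simplest form of the fallback */
            return next_dts + (int64_t)(1000000.0 / frame_rate + 0.5);
        return next_dts;  /* nothing to estimate from */
    }

    int main(void)
    {
        int64_t dts = advance_next_dts(0, 0, 25.0);  /* no duration -> 1/25 s */
        printf("estimated next_dts: %" PRId64 " us\n", dts);
        return 0;
    }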
It currently has different meanings at different times (dts of the last
read packet/pts of the last decoded frame). Reduce obfuscation by
storing pts of the decoded frame in the frame itself.
Current code compares the desired recording time with InputStream.pts,
which has a very unclear meaning. Change the code to use actual
timestamps of the frames passed to the encoder.
In several tests, one less frame is encoded, which is more correct.
In the idroq test one more frame is encoded, which is again more
correct.
Behavior with stream copy should be unchanged.
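Roughly, the comparison described above can be expressed with libavutil's
av_compare_ts(); the variable names here are illustrative, not the actual
avconv code:

    /* Stop feeding a stream's encoder once the timestamp of the frame about
     * to be encoded reaches the requested recording time. */
    #include <stdio.h>
    #include <stdint.h>
    #include <libavutil/avutil.h>        /* AV_TIME_BASE_Q */
    #include <libavutil/mathematics.h>   /* av_compare_ts() */

    int main(void)
    {
        AVRational enc_tb       = { 1, 25 };      /* encoder time base       */
        int64_t    frame_pts    = 251;            /* pts of the next frame   */
        int64_t    recording_us = 10 * 1000000LL; /* -t 10, in microseconds  */

        if (av_compare_ts(frame_pts, enc_tb, recording_us, AV_TIME_BASE_Q) >= 0)
            printf("past the -t limit, stop encoding this stream\n");
        else
            printf("still within -t, keep encoding\n");
        return 0;
    }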
The actual number (1/1000) will probably require some
discussion/tweaking in the future, but should be good enough for now,
since the timestamps in AVSubtitle are in this timebase by definition.
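A minimal sketch, assuming the 1/1000 value mentioned above; the surrounding
avconv context (which codec context gets it, and where) is omitted:

    #include <stdio.h>
    #include <libavutil/rational.h>

    int main(void)
    {
        /* AVSubtitle timestamps are in milliseconds by definition, so the
         * subtitle encoder gets a matching 1/1000 time base. */
        AVRational sub_tb = { 1, 1000 };
        printf("subtitle encoder time base: %d/%d\n", sub_tb.num, sub_tb.den);
        return 0;
    }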
The output is obviously not supposed to contain video (since only
-acodec copy is specified), but that currently only happens as a side
effect of the way -t handling is implemented.
It makes sense in some cases to split up the output packet to save on memory
usage (ape frames can be very large), but the current/default size is
arbitrary. Allowing the user to configure this gives more flexibility and
requires minimal additional code.
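A hedged sketch of how such a user-settable limit is typically exposed in
libavcodec, via an AVOption on the decoder's private context; the option
name, default and range below are illustrative, not the actual apedec
values:

    #include <limits.h>
    #include <stddef.h>
    #include <libavutil/opt.h>

    typedef struct APEDecSketchContext {
        const AVClass *class;     /* required first member for AVOptions   */
        int max_samples;          /* max samples produced per decode call  */
    } APEDecSketchContext;

    #define OFFSET(x) offsetof(APEDecSketchContext, x)
    #define PAR (AV_OPT_FLAG_AUDIO_PARAM | AV_OPT_FLAG_DECODING_PARAM)

    static const AVOption ape_dec_sketch_options[] = {
        { "max_samples", "maximum number of output samples per decode call",
          OFFSET(max_samples), AV_OPT_TYPE_INT, { .i64 = 4608 }, 1, INT_MAX, PAR },
        { NULL },
    };

Such an option would then be reachable through the decoder's priv_class,
e.g. via av_opt_set_int() on the codec's private data.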
* qatar/master:
Revert "v210enc: use FFALIGN()"
doxygen: Do not include license boilerplates in Doxygen comment blocks.
avplay: reset decoder flush state when seeking
ape: skip packets with invalid size
ape: calculate final packet size instead of guessing
ape: stop reading after the last frame has been read
ape: return AVERROR_EOF instead of AVERROR(EIO) when demuxing is finished
ape: return error if seeking to the current packet fails in ape_read_packet()
avcodec: Clarify AVFrame member documentation.
v210dec: check for coded_frame allocation failure
v210enc: use stride as it is already calculated
v210enc: use FFALIGN()
v210enc: return proper AVERROR codes instead of -1
v210enc: do not set coded_frame->key_frame
v210enc: check for coded_frame allocation failure
drawtext: add 'fix_bounds' option on coords fixing
drawtext: fix text_{w, h} expression vars
drawtext: add missing braces around an if() block.
Conflicts:
libavcodec/arm/vp8.h
libavcodec/arm/vp8dsp_init_arm.c
libavcodec/v210dec.c
libavfilter/vf_drawtext.c
libavformat/ape.c
Merged-by: Michael Niedermayer <michaelni@gmx.at>