In high bit depth, the pixels are not stored in uint8_t as in the
normal case, but in uint16_t. The pixel size is thus 1 byte in normal
bit depth and 2 bytes in high bit depth.
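To illustrate what this means in practice, a minimal sketch (the helper
names here are hypothetical and not part of the patch):

    #include <stdint.h>

    /* Hypothetical helpers illustrating the pixel size difference. */
    static inline int pixel_size(int bit_depth)
    {
        return bit_depth > 8 ? sizeof(uint16_t) : sizeof(uint8_t); /* 2 : 1 */
    }

    /* Reading pixel x from a row must account for that size. */
    static inline int get_pixel(const uint8_t *row, int x, int bit_depth)
    {
        return bit_depth > 8 ? ((const uint16_t *)row)[x] : row[x];
    }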
Preparatory patch for high bit depth h264 decoding support.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
The functions moved are used when decoding h264.
Preparatory patch for high bit depth h264 decoding support.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Unfortunately, the output buffer size check assumes that the input
buffer is never over-consumed, so this actually also allowed writing
outside the output buffer if "lucky".
AS libavcodec/arm/ac3dsp_armv6.o
ffmpeg-src/libavcodec/arm/ac3dsp_armv6.S: Assembler messages:
ffmpeg-src/libavcodec/arm/ac3dsp_armv6.S:40: Error: selected processor
does not support `movw r8,#0x1fe0'
make[1]: *** [libavcodec/arm/ac3dsp_armv6.o] Error 1
MOVW is the ARMv7 way to load a constant:
* movw, or move wide, will move a 16-bit constant into a register,
implicitly zeroing the top 16 bits of the target register.
* movt, or move top, will move a 16-bit constant into the top half
of a given register without altering the bottom 16 bits.
To load a 32-bit constant, "movw lower16; movt upper16" is better than
ldr, if available, because:
While this approach takes two instructions, it does not require any
extra space to store the constant so both the movw/movt method and the
ldr method will end up using the same amount of memory. Memory
bandwidth is precious, and the movw/movt approach avoids an extra
read on the data side, not to mention the read could have missed the
cache.
But here it is an ARMv6 optimization, so we have to use ldr.
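For illustration, the two ways of materializing a 32-bit constant look
roughly like this (a sketch assuming a GCC-style toolchain; the constant
is arbitrary and the code is not part of the patch):

    #include <stdint.h>

    uint32_t load_constant(void)
    {
        uint32_t r;
    #if defined(__ARM_ARCH) && __ARM_ARCH >= 7
        /* ARMv7+: movw fills the low 16 bits (zeroing the top 16),
         * movt fills the top 16 bits; no memory access is needed. */
        __asm__("movw %0, #0x5678\n\t"
                "movt %0, #0x1234" : "=r"(r));
    #else
        /* ARMv6 and older: let the assembler place the constant in a
         * literal pool and load it with ldr (an extra data-side read). */
        __asm__("ldr %0, =0x12345678" : "=r"(r));
    #endif
        return r;
    }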
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
The thread_type API allows you to request only FF_THREAD_FRAME (instead of
FRAME | SLICE), but it was being ignored.
We don't implement both of them at the same time, so this has no effect
on current codecs, except that you can now request no threading at all
(a bit useless).
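For reference, this is how a library user requests frame-only threading
(a minimal sketch, not part of the patch):

    #include <libavcodec/avcodec.h>

    /* Set the threading fields before opening the codec. */
    static void request_frame_threading(AVCodecContext *avctx)
    {
        avctx->thread_count = 4;
        avctx->thread_type  = FF_THREAD_FRAME; /* no FF_THREAD_SLICE */
        /* avctx->thread_type = 0 would now disable threading entirely */
    }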
Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
As previously discussed, the CrystalHD hardware returns exceptionally
useless information about interlaced h.264 content - to the extent
that it's not possible to distinguish most MBAFF and PAFF content until
it's too late.
In an attempt to compensate for this, I'm introducing two mechanisms:
1) Peeking at the picture number of the next picture
The hardware provides a capability to peek the next picture number. If
it is the same as the current picture number, then we are clearly dealing
with two fields and not a frame or fieldpair.
If this always worked, it would be all we need, but it's not guaranteed
to work. Sometimes, the next picture may not be decoded sufficiently
for the number to be known; alternatively, a corruption in the stream may
cause the hardware to refuse to return the number even if the next
intact frame is decoded. In either case, the query will return 0.
If we are unable to peek the next picture number, we assume that the
picture is a frame/fieldpair and return it accordingly. If that turns
out to be incorrect, we discard the second field, and the user has
to live with the glitch. In testing, false detection can occur for
the first couple of seconds, and then the pipeline stabilizes and
we get correct detection.
2) Use the h264_parser to detect when individual input fields have
been combined into an output fieldpair.
I have multiple PAFF samples where this behaviour is detected. The
peeking mechanism described above will correctly detect that the
output is a fieldpair, but we need to know what the input type was
to ensure pipeline stability (only return one output frame per input
frame).
If we find ourselves with an output fieldpair, yet the input picture
type was a field, as reported by the parser, then we are dealing with
this case, and can make sure not to return anything on the next
decode() call.
Taken together, these allow us to remove the hard-coded hacks for
different h.264 types, and we can clearly describe the conditions
under which we can trust the hardware's claim that content is
interlaced.
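For clarity, the resulting decision logic is roughly the following (a
sketch with hypothetical names and types; the actual crystalhd.c code
differs):

    typedef struct DetectState {
        int output_is_field;  /* the current output is one field of two   */
        int skip_next_output; /* return nothing on the next decode() call */
    } DetectState;

    static void classify_output(DetectState *st,
                                unsigned cur_pic_num,
                                unsigned next_pic_num_peek, /* 0 if the peek failed  */
                                int input_was_field)        /* from the h.264 parser */
    {
        /* 1) Peek: the same picture number coming up again means the hardware
         *    is handing us two fields rather than a frame/fieldpair. */
        st->output_is_field = next_pic_num_peek && next_pic_num_peek == cur_pic_num;

        /* 2) Parser: a fieldpair output built from field input means the
         *    hardware combined two inputs, so swallow the next decode()
         *    return to keep one output frame per input frame. */
        st->skip_next_output = !st->output_is_field && input_was_field;
    }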
Signed-off-by: Philip Langdale <philipl@overt.org>
Now that we know the type of the input picture, we have to bring
that information to the output picture to help identify its type.
We do this by adding a field to the opaque_list node.
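The node then looks roughly like this (an approximation; the actual
field names in crystalhd.c may differ):

    #include <stdint.h>

    typedef struct OpaqueList {
        struct OpaqueList *next;
        uint64_t fake_timestamp;   /* key used to find the node again         */
        uint64_t reordered_opaque; /* pts carried through the hardware        */
        uint8_t  pic_type;         /* new: input picture type from the parser */
    } OpaqueList;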
Signed-off-by: Philip Langdale <philipl@overt.org>
As the hardware is unreliable, we will have to use the h.264 parser
to identify whether an input picture is a field or a frame. This
change loads the parser and extracts the picture type.
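In terms of the public API, that amounts to roughly the following at
init and decode time (a sketch; error handling is omitted, the exact way
the picture type is read back from the parser state is not shown, and
CODEC_ID_H264 is the identifier used by the API of that time):

    #include <libavcodec/avcodec.h>

    /* init: load the h.264 parser */
    static AVCodecParserContext *load_parser(void)
    {
        return av_parser_init(CODEC_ID_H264);
    }

    /* decode: run the parser over each input packet purely for analysis */
    static void parse_input(AVCodecParserContext *parser, AVCodecContext *avctx,
                            const AVPacket *avpkt)
    {
        uint8_t *pout;
        int pout_size;
        av_parser_parse2(parser, avctx, &pout, &pout_size,
                         avpkt->data, avpkt->size,
                         avpkt->pts, avpkt->dts, 0);
        /* the input picture type (field vs. frame) is then taken from the
         * parser state before the packet is sent to the hardware */
    }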
Signed-off-by: Philip Langdale <philipl@overt.org>
In preparation for adding additional fields to the node, return
the node instead of the pts value. This requires the caller to
free the node.
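In other words, the call site changes roughly like this (a hypothetical
fragment; the actual names in crystalhd.c may differ):

    /* before: the helper returned the stored pts directly */
    /* pts = opaque_list_pop(priv, fake_timestamp); */

    /* after: the helper returns the node, and the caller frees it */
    OpaqueList *node = opaque_list_pop(priv, fake_timestamp);
    if (node) {
        pts = node->reordered_opaque;
        /* additional per-picture fields can be carried here later */
        av_free(node);
    }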
Signed-off-by: Philip Langdale <philipl@overt.org>
I found another MBAFF sample where the input:output pattern is
the same as mpeg2 and vc1 (fieldpair input, individual field output).
While I'm not sure how you can output individual fields from MBAFF,
if I apply the mpeg2/vc1 handling to this file, it plays correctly.
So, this changes the detection algorithm to handle the known cases.
Whitespace will be fixed in a separate change.
Signed-off-by: Philip Langdale <philipl@overt.org>
* ffmpeg-mt/master:
DUPLICATE mingw32 compilation after 'unbreak avcodec_thread_init'
pthread: validate_thread_parameters() ignored slice-threading being intentionally off
DUPLICATE Remove unnecessary parameter from ff_thread_init() and fix behavior
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
CONFIG_VDPAU is the condition under which ff_vdpau_mpeg_picture_complete
is compiled, so it is more appropriate, particularly since the
separate VDPAU decoder should be removed in the longer term.
The thread_count passed to ff_thread_init() is only used to set
AVCodecContext.thread_count and can be removed. Instead, move it to the
legacy implementation of avcodec_thread_init().
This also fixes the problem that calling avcodec_thread_init() with
pthreads enabled has not set thread_count since ff1efc524c.
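The resulting split looks roughly like this (a sketch; the actual code
differs in its details):

    #include <libavcodec/avcodec.h>

    /* ff_thread_init() no longer takes or sets the thread count ... */
    int ff_thread_init(AVCodecContext *avctx);

    /* ... while the legacy wrapper keeps doing it for old callers */
    int avcodec_thread_init(AVCodecContext *avctx, int thread_count)
    {
        avctx->thread_count = thread_count;
        return ff_thread_init(avctx);
    }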
Signed-off-by: Janne Grunau <janne-libav@jannau.net>
Currently, the parser is buggy and only processes the stream extradata
when the flag is set. This fixes it to actually inspect the frames.
Whitespace will be fixed in a separate change.
Signed-off-by: Philip Langdale <philipl@overt.org>
The pulldown flags should be communicated to the client of the libavcodec library. Not doing so causes jerky playback with pulldown content. Note that this change requires the patch previously provided here: http://ffmpeg.org/pipermail/ffmpeg-devel/2011-April/110314.html
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
The attached patch fixes the jerky playback of VC-1 content with pulldown. The pulldown flags were incorrectly set. They must be correct in order to display the frames with the correct timing, as described in the specification "SMPTE 421M: VC-1 Compressed Video Bitstream Format and Decoding Process", more precisely in the following tables (a rough sketch of the resulting flag mapping follows the list):
Table 20: Progressive P picture layer bitstream for Advanced Profile
Table 22: Progressive B picture layer bitstream for Advanced Profile
Table 23: Progressive Skipped picture layer bitstream for Advanced Profile
Table 82: Interlaced Frame I and BI picture layer bitstream for Advanced Profile
Table 83: Interlaced Frame P picture layer bitstream for Advanced Profile
Table 84: Interlaced Frame B picture layer bitstream for Advanced Profile
Table 85: Picture Layer bitstream for Field 1 of Interlace Field Picture for Advanced Profile
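A rough sketch of the mapping this implies (syntax element names per
SMPTE 421M; the conditions in the actual vc1dec.c code additionally
depend on the profile, PULLDOWN and FCM):

    #include <libavcodec/avcodec.h>

    static void export_pulldown(AVFrame *frame, int progressive,
                                int rptfrm, int tff, int rff)
    {
        if (progressive) {
            frame->repeat_pict = rptfrm * 2; /* RPTFRM: whole-frame repetitions  */
        } else {
            frame->interlaced_frame = 1;
            frame->top_field_first  = tff;   /* TFF: top field first             */
            frame->repeat_pict      = rff;   /* RFF: repeat the first field once */
        }
    }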
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>