ff_h264_set_parameter_from_sps() currently does two things -- reinitializing
the DSP contexts and setting low_delay based on the SPS values.
The former more appropriately belongs in h264_slice_header_init(), while
the latter only really makes sense in decode_slice_header().
The third call to ff_h264_set_parameter_from_sps(), done immediately
after parsing a new SPS, appears to serve no useful purpose, so it is
just dropped.
Also, drop the now-unneeded H264Context.cur_chroma_format_idc field.
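Roughly, the two pieces end up as follows (a sketch only -- the field and
function names follow the h264 decoder, but the bodies are simplified and
not the exact code):

    /* in h264_slice_header_init(), next to the other SPS-derived setup */
    ff_h264dsp_init(&h->h264dsp, sps->bit_depth_luma,
                    sps->chroma_format_idc);
    ff_videodsp_init(&h->vdsp, sps->bit_depth_luma);

    /* in decode_slice_header(), where the SPS actually in use is known;
     * the condition is simplified here */
    if (sps->bitstream_restriction_flag && !sps->num_reorder_frames)
        h->low_delay = 1;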
Currently, the DPB is initialized in alloc_tables() and uninitialized in
free_tables(), but those functions manage frame-size-dependent variables, so
DPB management does not logically belong there.
Since we want the init/uninit to happen exactly once per context lifetime,
init_context()/free_context() are the proper place for this code.
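The shape of the change, roughly (the per-entry init/uninit helpers below
are placeholders for whatever the DPB entries actually need; the point is
only that they run exactly once, tied to the context lifetime):

    static int init_context(AVCodecContext *avctx, H264Context *h)
    {
        int i, ret;
        /* ... */
        for (i = 0; i < H264_MAX_PICTURE_COUNT; i++) {
            ret = init_dpb_entry(h, &h->DPB[i]);    /* placeholder */
            if (ret < 0)
                return ret;
        }
        /* ... */
        return 0;
    }

    void ff_h264_free_context(H264Context *h)
    {
        int i;
        /* ... */
        for (i = 0; i < H264_MAX_PICTURE_COUNT; i++)
            uninit_dpb_entry(h, &h->DPB[i]);        /* placeholder */
    }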
The generic code copies the main context's private data to all the others.
However, that is quite dangerous, as it might end up copying pointers that
are or will become invalid.
Since everything we actually need will be copied later in
update_thread_context(), it's safest to zero the private data in
init_thread_copy(), so it works the same way as init for the main
context.
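In practice the thread-copy init then reduces to something like this
(a sketch; whatever state the copies actually need is filled in later by
update_thread_context()):

    static int decode_init_thread_copy(AVCodecContext *avctx)
    {
        H264Context *h = avctx->priv_data;

        if (!avctx->internal->is_copy)
            return 0;

        /* discard whatever the generic code copied from the main
         * context; any pointers in there are owned by the main
         * context, so zeroing them here does not leak */
        memset(h, 0, sizeof(*h));
        h->avctx = avctx;

        return 0;
    }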
Also change the method for allocating them. Instead of two possible
alloc calls from different places, just ensure they are allocated at the
start of each slice. This should be simpler and less bug-prone than the
previous method.
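Since the buffers are not named here, the pattern is shown with a
placeholder table; the idea is simply allocate-if-missing at the start of
every slice:

    /* at the start of each slice */
    if (!h->some_table) {
        h->some_table = av_mallocz(some_table_size);
        if (!h->some_table)
            return AVERROR(ENOMEM);
    }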
It does not work correctly and apparently never did. There is no indication
that this (mis)feature is ever used in the wild, or even that any software
other than the reference implementation supports it.
Since the code that attempts to support it adds some nontrivial
complexity and has resulted in several bugs in the past, it is better to
just drop it.
Currently, it needs to be initialized by the ER caller (which is either an
mpegvideo decoder or h264dec). However, since none of those decoders uses
MECmpContext for anything except ER, it makes more sense to handle it purely
inside ER.
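A sketch of what handling it purely inside ER could look like, assuming
ERContext keeps its own MECmpContext (here called mecc) together with an
initialized flag:

    /* somewhere in the ER code, before the compare functions are used */
    if (!s->mecc_inited) {
        ff_me_cmp_init(&s->mecc, s->avctx);
        s->mecc_inited = 1;
    }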
When decoding, this field holds the inverse of the framerate that can be
written in the headers for some codecs. Using a field called 'time_base'
for this is very misleading, as there are no timestamps associated with
it. Furthermore, this field is used for a very different purpose during
encoding.
Add a new field, called 'framerate', to replace the use of time_base for
decoding.
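For illustration, a decoder that knows the coded frame rate exports it
through the new field, and callers can invert it when they need a nominal
per-frame duration (fps_num/fps_den are placeholders):

    /* decoder side: export the frame rate parsed from the headers */
    avctx->framerate = (AVRational){ fps_num, fps_den };

    /* caller side: derive a nominal frame duration from it */
    if (avctx->framerate.num && avctx->framerate.den) {
        AVRational frame_duration = av_inv_q(avctx->framerate);
        /* ... */
    }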