This commit does for AVOutputFormat what commit
20f9727018 did for AVCodec:
It adds a new type FFOutputFormat, moves all the internals
of AVOutputFormat to it and adds the now reduced AVOutputFormat
as its first member.
This does not affect/improve the extensibility of either public
or private fields for muxers (it is still a mess due to lavd).
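A rough sketch of the described layout (illustrative only; the field set,
helper name and exact definition in lavf differ):

    #include "libavformat/avformat.h"

    typedef struct FFOutputFormat {
        /* The public part; kept as the first member so that an
         * AVOutputFormat pointer can be upcast inside the library. */
        AVOutputFormat p;
        /* Formerly public internals now live here, e.g.: */
        int (*write_header)(struct AVFormatContext *s);
        int (*write_packet)(struct AVFormatContext *s, struct AVPacket *pkt);
        int (*write_trailer)(struct AVFormatContext *s);
    } FFOutputFormat;

    /* Internal accessor: cast back from the public struct. */
    static inline const FFOutputFormat *ffofmt(const AVOutputFormat *fmt)
    {
        return (const FFOutputFormat *)fmt;
    }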
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Removed the unnecessary calls to ff_format_io_close that this
patch introduced in dashenc_delete_file.
The dashenc_delete_file functions open a new HTTP connection
regardless of the http_persistent value, so change their behaviour
to keep HTTP connections open if http_persistent is set.
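A hedged sketch of the intended flow (dashenc_io_open/dashenc_io_close
wrappers that honour http_persistent are assumed here for illustration;
this is not quoted from dashenc.c):

    static void delete_segment_file(AVFormatContext *s, char *filename)
    {
        AVDictionary *opts = NULL;
        AVIOContext *out = NULL;

        av_dict_set(&opts, "method", "DELETE", 0);
        if (dashenc_io_open(s, &out, filename, &opts) < 0)
            av_log(s, AV_LOG_ERROR, "failed to delete %s\n", filename);
        av_dict_free(&opts);
        /* With http_persistent set, the close helper keeps the underlying
         * HTTP connection open for reuse instead of tearing it down. */
        dashenc_io_close(s, &out, filename);
    }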
Signed-off-by: Basel Sayeh <basel.sayeh@hotmail.com>
Profile can be derived from the codecpar pixel format only with software
formats. For hardware formats, we're forced to parse a frame header to get
the required information.
Signed-off-by: James Almer <jamrial@gmail.com>
The general demuxing API uses parsers and decoders. Therefore
FFStream contains pointers to AVCodecContexts and
AVCodecParserContext and lavf/internal.h includes lavc/avcodec.h.
Yet actually only a few files really use these; and it is best
when this number stays small. Therefore this commit uses opaque
structs in lavf/internal.h for these contexts and stops including
avcodec.h.
This also avoids including lavc/codec_desc.h implicitly. All other
headers are still implicitly included (mostly through codec.h).
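Sketch of the technique (only the relevant members are shown; the
surrounding fields of FFStream are omitted):

    /* In lavf/internal.h: the names alone suffice, so lavc/avcodec.h is
     * no longer needed here. */
    struct AVCodecContext;
    struct AVCodecParserContext;

    typedef struct FFStream {
        /* ... */
        struct AVCodecContext       *avctx;   /* used only by the few files */
        struct AVCodecParserContext *parser;  /* that include avcodec.h     */
        /* ... */
    } FFStream;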
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This avoids unnecessary rebuilds of most source files if only the
list of enabled components has changed, but not the other properties
of the build, set in config.h.
Signed-off-by: Martin Storsjö <martin@martin.st>
Otherwise there is no way to detect an error returned by avio_close() because
ff_format_io_close cannot get the return value.
Checking the return value of the close function is important in order to check
if all data was successfully written and the underlying close() operation was
successful.
It can also be useful even for read mode because it can return any pending
AVIOContext error, so the user doesn't have to manually check AVIOContext->error.
In order to still support the case where the user overrides io_close, the
generic code only uses io_close2 if io_close is either NULL or the default
io_close callback.
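A minimal sketch of that dispatch, assuming the default callback can be
recognised (io_close_default below stands in for the actual lavf symbol):

    #include "libavformat/avformat.h"

    static void io_close_default(AVFormatContext *s, AVIOContext *pb)
    {
        avio_close(pb);
    }

    /* Close *pb and report any error; only io_close2 can return one. */
    static int format_io_close(AVFormatContext *s, AVIOContext **pb)
    {
        AVIOContext *io = *pb;
        int ret = 0;

        *pb = NULL;
        if (!io)
            return 0;
        if (!s->io_close || s->io_close == io_close_default) {
            /* io_close was not overridden: io_close2 may be used and its
             * return value (e.g. from avio_close()) is preserved. */
            ret = s->io_close2 ? s->io_close2(s, io) : avio_close(io);
        } else {
            /* User-supplied legacy callback: returns void, error is lost. */
            s->io_close(s, io);
        }
        return ret;
    }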
Signed-off-by: Marton Balint <cus@passwd.hu>
For most check_bitstream() functions this just avoids having
to dereference s->streams[pkt->stream_index] themselves; but for
meta-muxers it will allow forwarding the packet to a stream with
a different stream_index (belonging to a different AVFormatContext)
without using a spare packet.
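For illustration, a callback in the new shape might look like the following
(a hedged sketch of a hypothetical muxer, not code from lavf; the exact
prototype lives in the internal muxing headers):

    #include "libavutil/intreadwrite.h"
    #include "libavformat/avformat.h"
    #include "libavformat/internal.h"  /* ff_stream_add_bitstream_filter() */

    static int mymux_check_bitstream(AVFormatContext *s, AVStream *st,
                                     const AVPacket *pkt)
    {
        /* Previously each callback had to do
         *     AVStream *st = s->streams[pkt->stream_index];
         * itself; with st passed in, a meta-muxer may pass a stream of a
         * child AVFormatContext whose index differs from pkt->stream_index. */
        if (st->codecpar->codec_id == AV_CODEC_ID_AAC && pkt->size >= 2 &&
            (AV_RB16(pkt->data) & 0xfff0) == 0xfff0) /* looks like raw ADTS */
            return ff_stream_add_bitstream_filter(st, "aac_adtstoasc", NULL);
        return 1;
    }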
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Try to make the feature easier to use, especially since the user
has enabled -strict experimental manually. The user shouldn't
be surprised that hls_playlist is enabled for lhls automatically,
so change the log level from warning to info for that.
There is a small chance that the user specified contradictory options
like -streaming 0 -ldash 1; however, it's more likely that the user
didn't know about or forgot to enable streaming for ldash. So enable
streaming automatically to make the feature easier to use, similar to
how FF_MOV_FLAG_FRAGMENT/EMPTY_MOOV/DEFAULT_BASE_MOOF and so on are
enabled for FF_MOV_FLAG_CMAF.
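Roughly, as a sketch (assuming plain int options lhls, hls_playlist, ldash
and streaming in DASHContext; the second log level is illustrative):

    static void dash_fixup_options(AVFormatContext *s, DASHContext *c)
    {
        if (c->lhls && !c->hls_playlist) {
            av_log(s, AV_LOG_INFO, "Enabling hls_playlist for LHLS\n");
            c->hls_playlist = 1;
        }
        if (c->ldash && !c->streaming) {
            av_log(s, AV_LOG_WARNING, "Enabling streaming for ldash\n");
            c->streaming = 1;
        }
    }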
Do this by allocating AVStream together with the data that is
currently in AVStreamInternal; or rather: Put AVStream at the
beginning of a new structure called FFStream (which encompasses
more than just the internal fields and is a proper context in its own
right, hence the name) and remove AVStreamInternal altogether.
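A reduced sketch of the arrangement (illustrative; the real FFStream holds
many more fields):

    #include "libavformat/avformat.h"
    #include "libavutil/mem.h"

    typedef struct FFStream {
        AVStream pub;  /* public part; must stay the first member */
        /* ...everything that used to live in AVStreamInternal... */
    } FFStream;

    static inline FFStream *ffstream(AVStream *st)
    {
        return (FFStream *)st;
    }

    /* One allocation now provides both the public and the internal view. */
    static AVStream *stream_alloc(void)
    {
        FFStream *const sti = av_mallocz(sizeof(*sti));
        return sti ? &sti->pub : NULL;
    }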
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It only uses an AVIOContext and an AVBPrint.
When doing so, it turned out that several non-users of
ff_read_line_to_bprint_overwrite() and ff_bprint_to_codecpar_extradata()
relied on libavformat/internal.h to include bprint.h or avstring.h
for them. In order to avoid a repeat of this and in order to reduce
unnecessary dependencies, a forward declaration of struct AVBPrint is
used instead of including bprint.h.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
When streaming mode is enabled, the DASH manifest is written on the
first packet for the segment so that the segment can be advertised
immediately to clients. It was also still writing the manifest at the
end of the segment, leading to duplicate writes.
When streaming mode is enabled with fMP4/CMAF for DASH output, the
segment files are available for players to read as soon as the first byte
is written, instead of only after the file is fully written. The DASH
manifest currently only gets written when the final write to the segment
file occurs. This means that players cannot stream the first segment
while it is being written.
When -lhls is enabled with MP4 segments the HLS manifest is written
immediately to advertise the in-flight segments. This change adds the
same behavior for the DASH manifest so players can stream it
immediately.
This reverts commit d6d407d2d7.
Hack not needed after a2b1dd0ce3.
Will fix #7480 and #8904.
This will include e.g. CODECS="hvc1.2.4.L123.B0" into m3u8.
Signed-off-by: Valerii Zapodovnikov <val.zapod.vz@gmail.com>
With audio/video HLS playlists, audio chunklists are treated as
alternative renditions for video chunklists. This is wrong for
audio-only HLS playlists.
Fixes: 9252
This is possible now that the next-API is gone.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Signed-off-by: James Almer <jamrial@gmail.com>
In ticket #8754 there is discussion surrounding the error message
which is printed upon a mismatched aspect ratio in derived encodings.
This should make it clearer to the user what issue they are
experiencing.
Reviewed-by: "Jeyapal, Karthick" <kjeyapal@akamai.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
If the stream's bitrate is not specified, the bandwidth setting in
adaptation sets is derived as follows:
- for a static manifest: an average bitrate is calculated and used,
- for a dynamic manifest: the first segment's bitrate is calculated and used, as before.
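Schematically (a sketch only; the stats structure below is a placeholder,
not the actual dashenc OutputStream):

    #include <stdint.h>
    #include "libavutil/avutil.h"  /* AV_TIME_BASE */

    typedef struct SegmentStats {
        int64_t total_bytes;           /* whole stream (static manifest)   */
        int64_t total_duration_us;
        int64_t first_seg_bytes;       /* first segment (dynamic manifest) */
        int64_t first_seg_duration_us;
    } SegmentStats;

    /* Fallback when codecpar->bit_rate is unset: bits per second derived
     * from the whole stream or from the first segment only. */
    static int64_t estimated_bandwidth(const SegmentStats *st, int is_static)
    {
        int64_t bytes = is_static ? st->total_bytes       : st->first_seg_bytes;
        int64_t us    = is_static ? st->total_duration_us : st->first_seg_duration_us;

        return us > 0 ? bytes * 8 * AV_TIME_BASE / us : 0;
    }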
Current muxers only use a single bitstream filter, so there is no need to
maintain code which operates on a list of bitstream filters. When multiple
bitstream filters are needed, muxers can simply use a list bitstream filter.
If there is a use case in the future where different bitstream filters should
be added for subsequent packets, then a new API, possibly involving
reconfiguring the list bitstream filter, can be added knowing the exact
requirements.
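For reference, chaining several filters through the list bitstream filter
looks roughly like this (a generic sketch of the public lavc API, not code
taken from any muxer):

    #include "libavcodec/bsf.h"
    #include "libavutil/error.h"

    /* Combine several BSFs into a single AVBSFContext via the list bsf. */
    static int make_bsf_chain(AVBSFContext **out, const AVCodecParameters *par)
    {
        AVBSFList *list = av_bsf_list_alloc();
        int ret;

        if (!list)
            return AVERROR(ENOMEM);
        if ((ret = av_bsf_list_append2(list, "aac_adtstoasc", NULL)) < 0 ||
            (ret = av_bsf_list_append2(list, "dump_extra",     NULL)) < 0 ||
            (ret = av_bsf_list_finalize(&list, out)) < 0) {
            av_bsf_list_free(&list);
            return ret;
        }
        if ((ret = avcodec_parameters_copy((*out)->par_in, par)) < 0 ||
            (ret = av_bsf_init(*out)) < 0) {
            av_bsf_free(out);
            return ret;
        }
        return 0;
    }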
Signed-off-by: Marton Balint <cus@passwd.hu>
This ensures it's written at the beginning of a segment in non-streaming
mode when the segment duration differs from the fragment duration.
Signed-off-by: James Almer <jamrial@gmail.com>
It's not exclusive to Low Latency streaming. The muxer will serve partial
segments regardless of streaming mode.
Signed-off-by: James Almer <jamrial@gmail.com>