Pass the correct pointer to av_log() and update some error/warning messages;
this will help with debugging.
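A minimal sketch of the logging pattern, with a hypothetical helper name: pass
the demuxer's AVFormatContext to av_log() so the message is attributed to the
correct context instead of NULL or an unrelated pointer.

    #include <libavformat/avformat.h>
    #include <libavutil/error.h>
    #include <libavutil/log.h>

    /* Hypothetical helper: log against the demuxer context 's', not NULL. */
    static int open_segment(AVFormatContext *s, const char *url)
    {
        int ret = AVERROR(EIO); /* placeholder for the real open call */
        if (ret < 0)
            av_log(s, AV_LOG_WARNING, "Failed to open segment '%s'\n", url);
        return ret;
    }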
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
After commit 9f61abc811, we can use AVFormatContext.strict_std_compliance
instead of HLSContext.strict_std_compliance to avoid code redundancy.
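A minimal sketch of the resulting pattern, with a hypothetical helper name:
the check reads AVFormatContext.strict_std_compliance directly instead of a
copy cached in HLSContext.

    #include <libavcodec/avcodec.h>   /* FF_COMPLIANCE_* */
    #include <libavformat/avformat.h>
    #include <libavutil/error.h>
    #include <libavutil/log.h>

    static int require_experimental(AVFormatContext *s, const char *feature)
    {
        if (s->strict_std_compliance > FF_COMPLIANCE_EXPERIMENTAL) {
            av_log(s, AV_LOG_ERROR,
                   "%s is experimental; add '-strict experimental' to use it.\n",
                   feature);
            return AVERROR_EXPERIMENTAL;
        }
        return 0;
    }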
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
Clean up the use of applehttp as a demuxer name. When running the command
ffmpeg -formats, you get confusing information like:
"
E hls Apple HTTP Live Streaming
D hls,applehttp Apple HTTP Live Streaming
"
We don't usually use applehttp as the demuxer/muxer name, so drop applehttp
and update the documentation.
After the change, "ffmpeg -formats" reports:
"
DE hls Apple HTTP Live Streaming
"
Reviewed-by: Steven Liu <lq@onvideo.cn>
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
This code would print a warning if any user agent was set, even if the
API user used the proper non-deprecated "user_agent" option.
This change should not break anything: even if the user
sets the deprecated "user-agent" option, http.c copies it to the
"user_agent" option anyway.
The HLSContext struct contains fields which duplicate the data stored in the
avio_opts field. This change removes those fields in favor of avio_opts, and
updates the code accordingly.
The original patch caused the buffer pointed to by new_cookies in open_url to be
leaked. The only thing that buffer is used for is to store the value until it
can be passed to av_dict_set. To fix the leak, v2 of the patch simply calls
av_dict_set with the AV_DICT_DONT_STRDUP_VAL flag, so that the dictionary takes
ownership of the memory instead of copying it again.
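A minimal sketch of the ownership transfer, with hypothetical function and
variable names:

    #include <libavutil/dict.h>

    /* new_cookies must come from av_malloc()/av_strdup(); with
     * AV_DICT_DONT_STRDUP_VAL the dictionary takes ownership and will free
     * it, so the caller must not free it afterwards. */
    static int store_cookies(AVDictionary **opts, char *new_cookies)
    {
        return av_dict_set(opts, "cookies", new_cookies, AV_DICT_DONT_STRDUP_VAL);
    }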
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Richard Shaffer <rshaffer@tunein.com>
The rw_timeout option is currently not applied when opening media playlist,
segment, or encryption key URLs. This can cause the HLS demuxer to block
indefinitely, even when the rw_timeout option has been specified. This change
simply enables carrying over the rw_timeout option when the demuxer opens these
URLs.
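A rough sketch of the carry-over, with a hypothetical function name: the
caller's rw_timeout (in microseconds) is copied into the options used to open
playlist, segment and encryption key URLs.

    #include <stdint.h>
    #include <libavutil/dict.h>

    static int carry_over_rw_timeout(AVDictionary **url_opts, int64_t rw_timeout_us)
    {
        if (rw_timeout_us <= 0)
            return 0;
        return av_dict_set_int(url_opts, "rw_timeout", rw_timeout_us, 0);
    }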
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Richard Shaffer <rshaffer@tunein.com>
If a subdemuxer has the updated metadata event flag set, the metadata is copied
to the corresponding stream. The flag is cleared on the subdemuxer and the
appropriate event flag is set on the stream.
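A rough sketch of the propagation, with a hypothetical function name; the real
code operates on the subdemuxer owned by the HLS demuxer:

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    static void propagate_metadata(AVFormatContext *subdemuxer, AVStream *st)
    {
        if (subdemuxer->event_flags & AVFMT_EVENT_FLAG_METADATA_UPDATED) {
            av_dict_copy(&st->metadata, subdemuxer->metadata, 0);
            subdemuxer->event_flags &= ~AVFMT_EVENT_FLAG_METADATA_UPDATED;
            st->event_flags |= AVSTREAM_EVENT_FLAG_METADATA_UPDATED;
        }
    }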
Signed-off-by: wm4 <nfxjfg@googlemail.com>
AVERROR_EXIT happens when the user's interrupt callback signals that
playback should be aborted. In this case, the demuxer shouldn't print a
warning, as it's expected that all network accesses are stopped.
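A minimal sketch of the mechanism from the API side, with hypothetical names:
when the callback returns non-zero, blocking network operations abort and
return AVERROR_EXIT, which the demuxer now treats as expected instead of
warning about it.

    #include <libavformat/avformat.h>

    static volatile int stop_requested;

    static int interrupt_cb(void *opaque)
    {
        return stop_requested; /* non-zero aborts blocking I/O */
    }

    void install_interrupt_callback(AVFormatContext *s)
    {
        s->interrupt_callback.callback = interrupt_cb;
        s->interrupt_callback.opaque   = NULL;
    }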
The seek function can just return an error if seeking is unavailable,
but often this is too late. Add a flag that signals that the stream is
unseekable, and use it in HLS.
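A minimal sketch of how an API user might consult such a flag, assuming it is
exposed as AVFMTCTX_UNSEEKABLE in AVFormatContext.ctx_flags:

    #include <libavformat/avformat.h>

    static int stream_is_seekable(const AVFormatContext *s)
    {
        return !(s->ctx_flags & AVFMTCTX_UNSEEKABLE);
    }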
This makes little sense due to how HLS works, and only causes some
additional annoyances if the HLS read_seek function fails (for example
if it's a live stream). It was most likely unintended.
fix CID: 1426991
Signed-off-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Aman Gupta <aman@tmm1.net>
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Encrypted HLS segments have regular http:// urls, but open_input()
actually prefixes them with crypto+ before calling open_url(), so
they end up using the crypto protocol and not the http protocol.
This means invoking ff_http_do_new_request will fail, so we avoid
calling it in the first place. After the earlier http.c commit,
the failure results in a warning printed to the user. In earlier
versions, the failure would cause a segfault.
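A rough sketch of the guard, with a hypothetical helper name: only attempt to
reuse the persistent HTTP connection when the segment really is opened via
plain http(s); crypto-prefixed segments take the regular open path.

    #include <libavutil/avstring.h>

    static int can_reuse_http_connection(const char *url)
    {
        return av_strstart(url, "http://", NULL) ||
               av_strstart(url, "https://", NULL);
    }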
Signed-off-by: Aman Gupta <aman@tmm1.net>
Some http/1.0 implementations, like python's SimpleHTTPServer, can only support one client connection at a time. Making a second request while the first is still connected leads to a deadlock.
This change enables multiple connections for http/1.1 servers only, which need to support keepalive by default and should have no problem with concurrent requests.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Fixes compile error when building with network or protocols disabled.
This code would never be reached (because the demuxer fails much earlier on http playlists or segments), so it doesn't matter much what we do here as long as it compiles.
Signed-off-by: Aman Gupta <aman@tmm1.net>
AVERROR_EOF is an internal error which means the http socket is no longer
valid for new requests. It informs the caller that a new connection must
be established, and as such does not need to be surfaced to the user as
a warning.
Signed-off-by: Aman Gupta <aman@tmm1.net>
This improves network throughput of the hls demuxer by avoiding
the latency introduced by downloading segments one at a time.
The problem is particularly noticeable over high-latency network
connections: for instance, if RTT is 250ms, there will be a 250ms idle
period between when one segment response is read and the next one
starts.
The obvious solution to this is to use HTTP pipelining, where a
second request can be sent (on the persistent http/1.1 connection)
before the first response is fully read. Unfortunately the way the
http protocol is implemented in avformat makes implementing pipelining
very complex.
Instead, this commit simulates pipelining using two separate persistent
http connections. This has the advantage of working independently of
the http_persistent option, and can be used with http/1.0 servers as
well. The pair of connections is swapped every time a new segment starts
downloading, and a request for the next segment is sent on the secondary
connection right away. This means the second response will be ready and
waiting by the time the current response is fully read.
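A rough, simplified sketch of the double-buffering idea, with hypothetical
names; the real hls.c reuses a pair of persistent HTTP connections rather than
opening fresh avio handles:

    #include <libavformat/avio.h>

    typedef struct SegmentPair {
        AVIOContext *cur;   /* response currently being read */
        AVIOContext *next;  /* prefetched response for the following segment */
    } SegmentPair;

    static int advance_segment(SegmentPair *p, const char *url_after_next)
    {
        avio_closep(&p->cur);   /* the previous segment has been fully read */
        p->cur  = p->next;      /* swap: the prefetched response becomes active */
        p->next = NULL;
        if (url_after_next)     /* immediately request one segment ahead */
            return avio_open2(&p->next, url_after_next, AVIO_FLAG_READ, NULL, NULL);
        return 0;
    }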
Signed-off-by: Aman Gupta <aman@tmm1.net>
Signed-off-by: Anssi Hannula <anssi.hannula@iki.fi>
This teaches the HLS demuxer to use the HTTP protocol's
multiple_requests=1 option, to take advantage of "Connection:
Keep-Alive" when downloading playlists and segments from the HLS server.
With the new option, you can avoid TCP connection and TLS negotiation
overhead, which is particularly beneficial when streaming via a
high-latency internet connection.
Similar to the http_persistent option recently implemented in hlsenc.c
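A minimal sketch from the API side, assuming the new demuxer option is named
http_persistent like its hlsenc.c counterpart (function name hypothetical):

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    int open_hls_keepalive(const char *url, AVFormatContext **fmt_ctx)
    {
        AVDictionary *opts = NULL;
        int ret;

        av_dict_set(&opts, "http_persistent", "1", 0);
        ret = avformat_open_input(fmt_ctx, url, NULL, &opts);
        av_dict_free(&opts);
        return ret;
    }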
Signed-off-by: Aman Gupta <aman@tmm1.net>
Signed-off-by: Anssi Hannula <anssi.hannula@iki.fi>
Fixes: loop.m3u
The default max iteration count of 1000 is arbitrary and ideas for a better solution are welcome
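A rough sketch of the guard, with hypothetical names and the arbitrary limit
mentioned above:

    #include <libavutil/error.h>

    #define MAX_PLAYLIST_PARSES 1000

    /* Bound how many playlists may be parsed so that playlists referencing
     * each other cannot loop forever. */
    static int check_playlist_count(int *nb_parsed)
    {
        if (++(*nb_parsed) > MAX_PLAYLIST_PARSES)
            return AVERROR_INVALIDDATA;
        return 0;
    }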
Found-by: Xiaohei and Wangchu from Alibaba Security Team
Previous version reviewed-by: Steven Liu <lingjiujianke@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This reduces the attack surface of local file-system
information leaking.
It prevents the existing exploit leading to an information leak. As
well as similar hypothetical attacks.
Leaks of information from files and symlinks ending in common multimedia extensions
are still possible. But files with sensitive information like private keys and passwords
generally do not use common multimedia filename extensions.
It does not stop leaks via remote addresses in the LAN.
The existing exploit depends on a specific decoder as well.
It does appear though that the exploit should be possible with any decoder.
The problem is that as long as sensitive information gets into the decoder,
the output of the decoder becomes sensitive as well.
The only obvious solution is to prevent access to sensitive information, or to
disable hls or possibly some of its features. More complex solutions like
checking the path to limit access to only subdirectories of the hls path may
work as an alternative. But such solutions are fragile and tricky to implement
portably and would not stop every possible attack nor would they work with all
valid hls files.
Developers have expressed their dislike of / objected to disabling hls by default, as well
as disabling hls with local files. There were also objections against restricting
remote URL file extensions. This is a less robust but also less
inconvenient solution.
It can be applied stand alone or together with other solutions.
Limiting the check to local files was suggested by nevcairiel.
This recommits the security fix without the author name joke which was
originally requested by Nicolas.
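A rough sketch of the check, with a hypothetical helper name; the extension
whitelist here is illustrative, not the exact list used by hls.c:

    #include <libavformat/avformat.h>
    #include <libavutil/avstring.h>
    #include <libavutil/error.h>
    #include <libavutil/log.h>

    static int check_local_segment(AVFormatContext *s, const char *proto,
                                   const char *url)
    {
        if (av_strstart(proto, "file", NULL) &&
            !av_match_ext(url, "m3u8,ts,aac,m4a,m4s,mp4,mp3,vtt")) {
            av_log(s, AV_LOG_ERROR,
                   "Local segment '%s' has an unexpected extension, rejecting.\n",
                   url);
            return AVERROR_INVALIDDATA;
        }
        return 0;
    }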
Found-by: Emil Lerner and Pavel Cheremushkin
Reported-by: Thierry Foucu <tfoucu@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This reduces the attack surface of local file-system
information leaking.
It prevents the existing exploit leading to an information leak. As
well as similar hypothetical attacks.
Leaks of information from files and symlinks ending in common multimedia extensions
are still possible. But files with sensitive information like private keys and passwords
generally do not use common multimedia filename extensions.
It does not stop leaks via remote addresses in the LAN.
The existing exploit depends on a specific decoder as well.
It does appear though that the exploit should be possible with any decoder.
The problem is that as long as sensitive information gets into the decoder,
the output of the decoder becomes sensitive as well.
The only obvious solution is to prevent access to sensitive information, or to
disable hls or possibly some of its features. More complex solutions like
checking the path to limit access to only subdirectories of the hls path may
work as an alternative. But such solutions are fragile and tricky to implement
portably and would not stop every possible attack nor would they work with all
valid hls files.
Developers have expressed their dislike of / objected to disabling hls by default, as well
as disabling hls with local files. There were also objections against restricting
remote URL file extensions. This is a less robust but also less
inconvenient solution.
It can be applied stand alone or together with other solutions.
Limiting the check to local files was suggested by nevcairiel.
Found-by: Emil Lerner and Pavel Cheremushkin
Reported-by: Thierry Foucu <tfoucu@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>