from __future__ import unicode_literals

import base64
import datetime
import hashlib
import json
import netrc
import os
import re
import socket
import sys
import time
import math

from ..compat import (
    compat_cookiejar,
    compat_cookies,
    compat_etree_fromstring,
    compat_getpass,
    compat_http_client,
    compat_os_name,
    compat_str,
    compat_urllib_error,
    compat_urllib_parse_urlencode,
    compat_urllib_request,
    compat_urlparse,
)
from ..downloader.f4m import remove_encrypted_media
from ..utils import (
    NO_DEFAULT,
    age_restricted,
    bug_reports_message,
    clean_html,
    compiled_regex_type,
    determine_ext,
    error_to_compat_str,
    ExtractorError,
    fix_xml_ampersands,
    float_or_none,
    int_or_none,
    parse_iso8601,
    RegexNotFoundError,
    sanitize_filename,
    sanitized_Request,
    unescapeHTML,
    unified_strdate,
    unified_timestamp,
    url_basename,
    xpath_element,
    xpath_text,
    xpath_with_ns,
    determine_protocol,
    parse_duration,
    mimetype2ext,
    update_Request,
    update_url_query,
    parse_m3u8_attributes,
    extract_attributes,
    parse_codecs,
)


class InfoExtractor(object):
    """Information Extractor class.

    Information extractors are the classes that, given a URL, extract
    information about the video (or videos) the URL refers to. This
    information includes the real video URL, the video title, author and
    others. The information is stored in a dictionary which is then
    passed to the YoutubeDL. The YoutubeDL processes this
    information, possibly downloading the video to the file system, among
    other possible outcomes.

    The type field determines the type of the result.
    By far the most common value (and the default if _type is missing) is
    "video", which indicates a single video.

    For a video, the dictionaries must include the following fields:

    id:             Video identifier.
    title:          Video title, unescaped.

    Additionally, it must contain either a formats entry or a url one:

    formats:        A list of dictionaries for each format available, ordered
                    from worst to best quality.

                    Potential fields:
                    * url        Mandatory. The URL of the video file
                    * manifest_url
                                 The URL of the manifest file in case of
                                 fragmented media (DASH, hls, hds)
                    * ext        Will be calculated from URL if missing
                    * format     A human-readable description of the format
                                 ("mp4 container with h264/opus").
                                 Calculated from the format_id, width, height,
                                 and format_note fields if missing.
                    * format_id  A short description of the format
                                 ("mp4_h264_opus" or "19").
                                 Technically optional, but strongly recommended.
                    * format_note  Additional info about the format
                                 ("3D" or "DASH video")
                    * width      Width of the video, if known
                    * height     Height of the video, if known
                    * resolution Textual description of width and height
                    * tbr        Average bitrate of audio and video in KBit/s
                    * abr        Average audio bitrate in KBit/s
                    * acodec     Name of the audio codec in use
                    * asr        Audio sampling rate in Hertz
                    * vbr        Average video bitrate in KBit/s
                    * fps        Frame rate
                    * vcodec     Name of the video codec in use
                    * container  Name of the container format
                    * filesize   The number of bytes, if known in advance
                    * filesize_approx  An estimate for the number of bytes
                    * player_url SWF Player URL (used for rtmpdump).
                    * protocol   The protocol that will be used for the actual
                                 download, lower-case.
                                 "http", "https", "rtsp", "rtmp", "rtmpe",
                                 "m3u8", "m3u8_native" or "http_dash_segments".
                    * fragments  A list of fragments of the fragmented media,
                                 with the following entries:
                                 * "url" (mandatory) - fragment's URL
                                 * "duration" (optional, int or float)
                                 * "filesize" (optional, int)
                    * preference Order number of this format. If this field is
                                 present and not None, the formats get sorted
                                 by this field, regardless of all other values.
                                 -1 for default (order by other properties),
                                 -2 or smaller for less than default.
                                 < -1000 to hide the format (if there is
                                 another one which is strictly better)
                    * language   Language code, e.g. "de" or "en-US".
                    * language_preference  Is this in the language mentioned in
                                 the URL?
                                 10 if it's what the URL is about,
                                 -1 for default (don't know),
                                 -10 otherwise, other values reserved for now.
                    * quality    Order number of the video quality of this
                                 format, irrespective of the file format.
                                 -1 for default (order by other properties),
                                 -2 or smaller for less than default.
                    * source_preference  Order number for this video source
                                 (quality takes higher priority)
                                 -1 for default (order by other properties),
                                 -2 or smaller for less than default.
                    * http_headers  A dictionary of additional HTTP headers
                                 to add to the request.
                    * stretched_ratio  If given and not 1, indicates that the
                                 video's pixels are not square.
                                 width : height ratio as float.
                    * no_resume  The server does not support resuming the
                                 (HTTP or RTMP) download. Boolean.

    url:            Final video URL.
    ext:            Video filename extension.
    format:         The video format, defaults to ext (used for --get-format)
    player_url:     SWF Player URL (used for rtmpdump).

    The following fields are optional:

    alt_title:      A secondary title of the video.
    display_id:     An alternative identifier for the video, not necessarily
                    unique, but available before title. Typically, id is
                    something like "4234987", title "Dancing naked mole rats",
                    and display_id "dancing-naked-mole-rats"
    thumbnails:     A list of dictionaries, with the following entries:
                        * "id" (optional, string) - Thumbnail format ID
                        * "url"
                        * "preference" (optional, int) - quality of the image
                        * "width" (optional, int)
                        * "height" (optional, int)
                        * "resolution" (optional, string "{width}x{height}",
                                        deprecated)
                        * "filesize" (optional, int)
    thumbnail:      Full URL to a video thumbnail image.
    description:    Full video description.
    uploader:       Full name of the video uploader.
    license:        License name the video is licensed under.
    creator:        The creator of the video.
    release_date:   The date (YYYYMMDD) when the video was released.
    timestamp:      UNIX timestamp of the moment the video became available.
    upload_date:    Video upload date (YYYYMMDD).
                    If not explicitly set, calculated from timestamp.
    uploader_id:    Nickname or id of the video uploader.
    uploader_url:   Full URL to a personal webpage of the video uploader.
    location:       Physical location where the video was filmed.
    subtitles:      The available subtitles as a dictionary in the format
                    {language: subformats}. "subformats" is a list sorted from
                    lower to higher preference, each element is a dictionary
                    with the "ext" entry and one of:
                        * "data": The subtitles file contents
                        * "url": A URL pointing to the subtitles file
                    "ext" will be calculated from URL if missing
    automatic_captions: Like 'subtitles', used by the YoutubeIE for
                    automatically generated captions
    duration:       Length of the video in seconds, as an integer or float.
    view_count:     How many users have watched the video on the platform.
    like_count:     Number of positive ratings of the video
    dislike_count:  Number of negative ratings of the video
    repost_count:   Number of reposts of the video
    average_rating: Average rating given by users, the scale used depends on
                    the webpage
    comment_count:  Number of comments on the video
    comments:       A list of comments, each with one or more of the following
                    properties (all but one of text or html optional):
                        * "author" - human-readable name of the comment author
                        * "author_id" - user ID of the comment author
                        * "id" - Comment ID
                        * "html" - Comment as HTML
                        * "text" - Plain text of the comment
                        * "timestamp" - UNIX timestamp of comment
                        * "parent" - ID of the comment this one is replying to.
                                     Set to "root" to indicate that this is a
                                     comment to the original video.
    age_limit:      Age restriction for the video, as an integer (years)
    webpage_url:    The URL to the video webpage; if given to youtube-dl it
                    should allow getting the same result again. (It will be set
                    by YoutubeDL if it's missing)
    categories:     A list of categories that the video falls in, for example
                    ["Sports", "Berlin"]
    tags:           A list of tags assigned to the video, e.g. ["sweden", "pop music"]
    is_live:        True, False, or None (=unknown). Whether this video is a
                    live stream that goes on instead of a fixed-length video.
    start_time:     Time in seconds where the reproduction should start, as
                    specified in the URL.
    end_time:       Time in seconds where the reproduction should end, as
                    specified in the URL.

    The following fields should only be used when the video belongs to some logical
    chapter or section:

    chapter:        Name or title of the chapter the video belongs to.
    chapter_number: Number of the chapter the video belongs to, as an integer.
    chapter_id:     Id of the chapter the video belongs to, as a unicode string.

    The following fields should only be used when the video is an episode of some
    series or programme:

    series:         Title of the series or programme the video episode belongs to.
    season:         Title of the season the video episode belongs to.
    season_number:  Number of the season the video episode belongs to, as an integer.
    season_id:      Id of the season the video episode belongs to, as a unicode string.
    episode:        Title of the video episode. Unlike the mandatory video title field,
                    this field should denote the exact title of the video episode
                    without any kind of decoration.
    episode_number: Number of the video episode within a season, as an integer.
    episode_id:     Id of the video episode, as a unicode string.

    The following fields should only be used when the media is a track or a part of
    a music album:

    track:          Title of the track.
    track_number:   Number of the track within an album or a disc, as an integer.
    track_id:       Id of the track (useful in case of custom indexing, e.g. 6.iii),
                    as a unicode string.
    artist:         Artist(s) of the track.
    genre:          Genre(s) of the track.
    album:          Title of the album the track belongs to.
    album_type:     Type of the album (e.g. "Demo", "Full-length", "Split", "Compilation").
    album_artist:   List of all artists that appear on the album (e.g.
                    "Ash Borer / Fell Voices" or "Various Artists", useful for splits
                    and compilations).
    disc_number:    Number of the disc or other physical medium the track belongs to,
                    as an integer.
    release_year:   Year (YYYY) when the album was released.

    Unless mentioned otherwise, the fields should be Unicode strings.

    Unless mentioned otherwise, None is equivalent to absence of information.
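
    For illustration only (these values are invented, not taken from a real
    site), a minimal single-video result could look like:

        {
            'id': '4234987',
            'title': 'Dancing naked mole rats',
            'url': 'https://media.example.com/mole-rats.mp4',
        }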

    _type "playlist" indicates multiple videos.
    There must be a key "entries", which is a list, an iterable, or a PagedList
    object, each element of which is a valid dictionary by this specification.

    Additionally, playlists can have "title", "description" and "id" attributes
    with the same semantics as videos (see above).
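
    Again for illustration only (invented values), a minimal playlist result
    could look like:

        {
            '_type': 'playlist',
            'id': 'mole-rats-season-1',
            'title': 'Dancing naked mole rats - Season 1',
            'entries': [...],  # list of video dictionaries as described above
        }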

    _type "multi_video" indicates that there are multiple videos that
    form a single show, for example multiple acts of an opera or TV episode.
    It must have an entries key like a playlist and contain all the keys
    required for a video at the same time.

    _type "url" indicates that the video must be extracted from another
    location, possibly by a different extractor. Its only required key is:
    "url" - the next URL to extract.
    The key "ie_key" can be set to the class name (minus the trailing "IE",
    e.g. "Youtube") if the extractor class is known in advance.
    Additionally, the dictionary may have any properties of the resolved entity
    known in advance, for example "title" if the title of the referred video is
    known ahead of time.

    _type "url_transparent" entities have the same specification as "url", but
    indicate that the given additional information is more precise than the one
    associated with the resolved URL.
    This is useful when a site employs a video service that hosts the video and
    its technical metadata, but that video service does not embed a useful
    title, description etc.

    Subclasses of this one should re-define the _real_initialize() and
    _real_extract() methods and define a _VALID_URL regexp.
    Probably, they should also be added to the list of extractors.

    Finally, the _WORKING attribute should be set to False for broken IEs
    in order to warn the users and skip the tests.
    """
    _ready = False
    _downloader = None
    _WORKING = True

    def __init__(self, downloader=None):
        """Constructor. Receives an optional downloader."""
        self._ready = False
        self.set_downloader(downloader)

    @classmethod
    def suitable(cls, url):
        """Receives a URL and returns True if suitable for this IE."""

        # This does not use has/getattr intentionally - we want to know whether
        # we have cached the regexp for *this* class, whereas getattr would also
        # match the superclass
        if '_VALID_URL_RE' not in cls.__dict__:
            cls._VALID_URL_RE = re.compile(cls._VALID_URL)
        return cls._VALID_URL_RE.match(url) is not None

    @classmethod
    def _match_id(cls, url):
        if '_VALID_URL_RE' not in cls.__dict__:
            cls._VALID_URL_RE = re.compile(cls._VALID_URL)
        m = cls._VALID_URL_RE.match(url)
        assert m
        return m.group('id')

    @classmethod
    def working(cls):
        """Getter method for _WORKING."""
        return cls._WORKING

    def initialize(self):
        """Initializes an instance (authentication, etc)."""
        if not self._ready:
            self._real_initialize()
            self._ready = True

    def extract(self, url):
        """Extracts URL information and returns it in list of dicts."""
        try:
            self.initialize()
            return self._real_extract(url)
        except ExtractorError:
            raise
        except compat_http_client.IncompleteRead as e:
            raise ExtractorError('A network error has occurred.', cause=e, expected=True)
        except (KeyError, StopIteration) as e:
            raise ExtractorError('An extractor error has occurred.', cause=e)

    def set_downloader(self, downloader):
        """Sets the downloader for this IE."""
        self._downloader = downloader

    def _real_initialize(self):
        """Real initialization process. Redefine in subclasses."""
        pass

    def _real_extract(self, url):
        """Real extraction process. Redefine in subclasses."""
        pass

    @classmethod
    def ie_key(cls):
        """A string for getting the InfoExtractor with get_info_extractor"""
        return compat_str(cls.__name__[:-2])

    @property
    def IE_NAME(self):
        return compat_str(type(self).__name__[:-2])

    def _request_webpage(self, url_or_request, video_id, note=None, errnote=None, fatal=True, data=None, headers={}, query={}):
        """ Returns the response handle """
        if note is None:
            self.report_download_webpage(video_id)
        elif note is not False:
            if video_id is None:
                self.to_screen('%s' % (note,))
            else:
                self.to_screen('%s: %s' % (video_id, note))

        if isinstance(url_or_request, compat_urllib_request.Request):
            url_or_request = update_Request(
                url_or_request, data=data, headers=headers, query=query)
        else:
            if query:
                url_or_request = update_url_query(url_or_request, query)
            if data is not None or headers:
                url_or_request = sanitized_Request(url_or_request, data, headers)
        try:
            return self._downloader.urlopen(url_or_request)
        except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
            if errnote is False:
                return False
            if errnote is None:
                errnote = 'Unable to download webpage'

            errmsg = '%s: %s' % (errnote, error_to_compat_str(err))
            if fatal:
                raise ExtractorError(errmsg, sys.exc_info()[2], cause=err)
            else:
                self._downloader.report_warning(errmsg)
                return False

    def _download_webpage_handle(self, url_or_request, video_id, note=None, errnote=None, fatal=True, encoding=None, data=None, headers={}, query={}):
        """ Returns a tuple (page content as string, URL handle) """
        # Strip hashes from the URL (#1038)
        if isinstance(url_or_request, (compat_str, str)):
            url_or_request = url_or_request.partition('#')[0]

        urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data, headers=headers, query=query)
        if urlh is False:
            assert not fatal
            return False
        content = self._webpage_read_content(urlh, url_or_request, video_id, note, errnote, fatal, encoding=encoding)
        return (content, urlh)

    @staticmethod
    def _guess_encoding_from_content(content_type, webpage_bytes):
        m = re.match(r'[a-zA-Z0-9_.-]+/[a-zA-Z0-9_.-]+\s*;\s*charset=(.+)', content_type)
        if m:
            encoding = m.group(1)
        else:
            m = re.search(br'<meta[^>]+charset=[\'"]?([^\'")]+)[ /\'">]',
                          webpage_bytes[:1024])
            if m:
                encoding = m.group(1).decode('ascii')
            elif webpage_bytes.startswith(b'\xff\xfe'):
                encoding = 'utf-16'
            else:
                encoding = 'utf-8'

        return encoding

    def _webpage_read_content(self, urlh, url_or_request, video_id, note=None, errnote=None, fatal=True, prefix=None, encoding=None):
        content_type = urlh.headers.get('Content-Type', '')
        webpage_bytes = urlh.read()
        if prefix is not None:
            webpage_bytes = prefix + webpage_bytes
        if not encoding:
            encoding = self._guess_encoding_from_content(content_type, webpage_bytes)
        if self._downloader.params.get('dump_intermediate_pages', False):
            try:
                url = url_or_request.get_full_url()
            except AttributeError:
                url = url_or_request
            self.to_screen('Dumping request to ' + url)
            dump = base64.b64encode(webpage_bytes).decode('ascii')
            self._downloader.to_screen(dump)
        if self._downloader.params.get('write_pages', False):
            try:
                url = url_or_request.get_full_url()
            except AttributeError:
                url = url_or_request
            basen = '%s_%s' % (video_id, url)
            if len(basen) > 240:
                h = '___' + hashlib.md5(basen.encode('utf-8')).hexdigest()
                basen = basen[:240 - len(h)] + h
            raw_filename = basen + '.dump'
            filename = sanitize_filename(raw_filename, restricted=True)
            self.to_screen('Saving request to ' + filename)
            # Working around MAX_PATH limitation on Windows (see
            # http://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx)
            if compat_os_name == 'nt':
                absfilepath = os.path.abspath(filename)
                if len(absfilepath) > 259:
                    filename = '\\\\?\\' + absfilepath
            with open(filename, 'wb') as outf:
                outf.write(webpage_bytes)

        try:
            content = webpage_bytes.decode(encoding, 'replace')
        except LookupError:
            content = webpage_bytes.decode('utf-8', 'replace')

        if ('<title>Access to this site is blocked</title>' in content and
                'Websense' in content[:512]):
            msg = 'Access to this webpage has been blocked by Websense filtering software in your network.'
            blocked_iframe = self._html_search_regex(
                r'<iframe src="([^"]+)"', content,
                'Websense information URL', default=None)
            if blocked_iframe:
                msg += ' Visit %s for more details' % blocked_iframe
            raise ExtractorError(msg, expected=True)
        if '<title>The URL you requested has been blocked</title>' in content[:512]:
            msg = (
                'Access to this webpage has been blocked by Indian censorship. '
                'Use a VPN or proxy server (with --proxy) to route around it.')
            block_msg = self._html_search_regex(
                r'</h1><p>(.*?)</p>',
                content, 'block message', default=None)
            if block_msg:
                msg += ' (Message: "%s")' % block_msg.replace('\n', ' ')
            raise ExtractorError(msg, expected=True)

        return content

    def _download_webpage(self, url_or_request, video_id, note=None, errnote=None, fatal=True, tries=1, timeout=5, encoding=None, data=None, headers={}, query={}):
        """ Returns the data of the page as a string """
        success = False
        try_count = 0
        while success is False:
            try:
                res = self._download_webpage_handle(url_or_request, video_id, note, errnote, fatal, encoding=encoding, data=data, headers=headers, query=query)
                success = True
            except compat_http_client.IncompleteRead as e:
                try_count += 1
                if try_count >= tries:
                    raise e
                self._sleep(timeout, video_id)
        if res is False:
            return res
        else:
            content, _ = res
            return content

    def _download_xml(self, url_or_request, video_id,
                      note='Downloading XML', errnote='Unable to download XML',
                      transform_source=None, fatal=True, encoding=None, data=None, headers={}, query={}):
        """Return the xml as an xml.etree.ElementTree.Element"""
        xml_string = self._download_webpage(
            url_or_request, video_id, note, errnote, fatal=fatal, encoding=encoding, data=data, headers=headers, query=query)
        if xml_string is False:
            return xml_string
        if transform_source:
            xml_string = transform_source(xml_string)
        return compat_etree_fromstring(xml_string.encode('utf-8'))

    def _download_json(self, url_or_request, video_id,
                       note='Downloading JSON metadata',
                       errnote='Unable to download JSON metadata',
                       transform_source=None,
                       fatal=True, encoding=None, data=None, headers={}, query={}):
        json_string = self._download_webpage(
            url_or_request, video_id, note, errnote, fatal=fatal,
            encoding=encoding, data=data, headers=headers, query=query)
        if (not fatal) and json_string is False:
            return None
        return self._parse_json(
            json_string, video_id, transform_source=transform_source, fatal=fatal)

    def _parse_json(self, json_string, video_id, transform_source=None, fatal=True):
        if transform_source:
            json_string = transform_source(json_string)
        try:
            return json.loads(json_string)
        except ValueError as ve:
            errmsg = '%s: Failed to parse JSON ' % video_id
            if fatal:
                raise ExtractorError(errmsg, cause=ve)
            else:
                self.report_warning(errmsg + str(ve))

    def report_warning(self, msg, video_id=None):
        idstr = '' if video_id is None else '%s: ' % video_id
        self._downloader.report_warning(
            '[%s] %s%s' % (self.IE_NAME, idstr, msg))

    def to_screen(self, msg):
        """Print msg to screen, prefixing it with '[ie_name]'"""
        self._downloader.to_screen('[%s] %s' % (self.IE_NAME, msg))

    def report_extraction(self, id_or_name):
        """Report information extraction."""
        self.to_screen('%s: Extracting information' % id_or_name)

    def report_download_webpage(self, video_id):
        """Report webpage download."""
        self.to_screen('%s: Downloading webpage' % video_id)

    def report_age_confirmation(self):
        """Report attempt to confirm age."""
        self.to_screen('Confirming age')

    def report_login(self):
        """Report attempt to log in."""
        self.to_screen('Logging in')

    @staticmethod
    def raise_login_required(msg='This video is only available for registered users'):
        raise ExtractorError(
            '%s. Use --username and --password or --netrc to provide account credentials.' % msg,
            expected=True)

    @staticmethod
    def raise_geo_restricted(msg='This video is not available from your location due to geo restriction'):
        raise ExtractorError(
            '%s. You might want to use --proxy to workaround.' % msg,
            expected=True)

    # Methods for following #608
    @staticmethod
    def url_result(url, ie=None, video_id=None, video_title=None):
        """Returns a URL that points to a page that should be processed"""
        # TODO: ie should be the class used for getting the info
        video_info = {'_type': 'url',
                      'url': url,
                      'ie_key': ie}
        if video_id is not None:
            video_info['id'] = video_id
        if video_title is not None:
            video_info['title'] = video_title
        return video_info

    @staticmethod
    def playlist_result(entries, playlist_id=None, playlist_title=None, playlist_description=None):
        """Returns a playlist"""
        video_info = {'_type': 'playlist',
                      'entries': entries}
        if playlist_id:
            video_info['id'] = playlist_id
        if playlist_title:
            video_info['title'] = playlist_title
        if playlist_description:
            video_info['description'] = playlist_description
        return video_info
    def _search_regex(self, pattern, string, name, default=NO_DEFAULT, fatal=True, flags=0, group=None):
        """
        Perform a regex search on the given string, using a single or a list of
        patterns, returning the first matching group.
        In case of failure return a default value, report a warning, or raise a
        RegexNotFoundError, depending on fatal, specifying the field name.
        """
        if isinstance(pattern, (str, compat_str, compiled_regex_type)):
            mobj = re.search(pattern, string, flags)
        else:
            for p in pattern:
                mobj = re.search(p, string, flags)
                if mobj:
                    break

        if not self._downloader.params.get('no_color') and compat_os_name != 'nt' and sys.stderr.isatty():
            _name = '\033[0;34m%s\033[0m' % name
        else:
            _name = name

        if mobj:
            if group is None:
                # return the first matching group
                return next(g for g in mobj.groups() if g is not None)
            else:
                return mobj.group(group)
        elif default is not NO_DEFAULT:
            return default
        elif fatal:
            raise RegexNotFoundError('Unable to extract %s' % _name)
        else:
            self._downloader.report_warning('unable to extract %s' % _name + bug_reports_message())
            return None
    def _html_search_regex(self, pattern, string, name, default=NO_DEFAULT, fatal=True, flags=0, group=None):
        """
        Like _search_regex, but strips HTML tags and unescapes entities.
        """
        res = self._search_regex(pattern, string, name, default, fatal, flags, group)
        if res:
            return clean_html(res).strip()
        else:
            return res

    def _get_netrc_login_info(self, netrc_machine=None):
        username = None
        password = None
        netrc_machine = netrc_machine or self._NETRC_MACHINE

        if self._downloader.params.get('usenetrc', False):
            try:
                info = netrc.netrc().authenticators(netrc_machine)
                if info is not None:
                    username = info[0]
                    password = info[2]
                else:
                    raise netrc.NetrcParseError(
                        'No authenticators for %s' % netrc_machine)
            except (IOError, netrc.NetrcParseError) as err:
                self._downloader.report_warning(
                    'parsing .netrc: %s' % error_to_compat_str(err))

        return username, password

    def _get_login_info(self, username_option='username', password_option='password', netrc_machine=None):
        """
        Get the login info as (username, password).
        First look for the manually specified credentials using username_option
        and password_option as keys in the params dictionary. If no such
        credentials are available, look in the netrc file using the
        netrc_machine or _NETRC_MACHINE value.
        If there's no info available, return (None, None)
        """
        if self._downloader is None:
            return (None, None)

        downloader_params = self._downloader.params

        # Attempt to use provided username and password or .netrc data
        if downloader_params.get(username_option) is not None:
            username = downloader_params[username_option]
            password = downloader_params[password_option]
        else:
            username, password = self._get_netrc_login_info(netrc_machine)

        return username, password

    def _get_tfa_info(self, note='two-factor verification code'):
        """
        Get the two-factor authentication info.
        TODO - asking the user will be required for sms/phone verify;
        currently this just uses the command line option.
        If there's no info available, return None
        """
        if self._downloader is None:
            return None
        downloader_params = self._downloader.params

        if downloader_params.get('twofactor') is not None:
            return downloader_params['twofactor']

        return compat_getpass('Type %s and press [Return]: ' % note)

    # Helper functions for extracting OpenGraph info
    @staticmethod
    def _og_regexes(prop):
        content_re = r'content=(?:"([^"]+?)"|\'([^\']+?)\'|\s*([^\s"\'=<>`]+?))'
        property_re = (r'(?:name|property)=(?:\'og:%(prop)s\'|"og:%(prop)s"|\s*og:%(prop)s\b)'
                       % {'prop': re.escape(prop)})
        template = r'<meta[^>]+?%s[^>]+?%s'
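        # The attribute order within the tag is not fixed, so try both
        # property-before-content and content-before-property.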
        return [
            template % (property_re, content_re),
            template % (content_re, property_re),
        ]

    @staticmethod
    def _meta_regex(prop):
        return r'''(?isx)<meta
                    (?=[^>]+(?:itemprop|name|property|id|http-equiv)=(["\']?)%s\1)
                    [^>]+?content=(["\'])(?P<content>.*?)\2''' % re.escape(prop)

    def _og_search_property(self, prop, html, name=None, **kargs):
        if not isinstance(prop, (list, tuple)):
            prop = [prop]
        if name is None:
            name = 'OpenGraph %s' % prop[0]
        og_regexes = []
        for p in prop:
            og_regexes.extend(self._og_regexes(p))
        escaped = self._search_regex(og_regexes, html, name, flags=re.DOTALL, **kargs)
        if escaped is None:
            return None
        return unescapeHTML(escaped)

    def _og_search_thumbnail(self, html, **kargs):
        return self._og_search_property('image', html, 'thumbnail URL', fatal=False, **kargs)

    def _og_search_description(self, html, **kargs):
        return self._og_search_property('description', html, fatal=False, **kargs)

    def _og_search_title(self, html, **kargs):
        return self._og_search_property('title', html, **kargs)

    def _og_search_video_url(self, html, name='video url', secure=True, **kargs):
        regexes = self._og_regexes('video') + self._og_regexes('video:url')
        if secure:
            regexes = self._og_regexes('video:secure_url') + regexes
        return self._html_search_regex(regexes, html, name, **kargs)

    def _og_search_url(self, html, **kargs):
        return self._og_search_property('url', html, **kargs)

    def _html_search_meta(self, name, html, display_name=None, fatal=False, **kwargs):
        if not isinstance(name, (list, tuple)):
            name = [name]
        if display_name is None:
            display_name = name[0]
        return self._html_search_regex(
            [self._meta_regex(n) for n in name],
            html, display_name, fatal=fatal, group='content', **kwargs)

    def _dc_search_uploader(self, html):
        return self._html_search_meta('dc.creator', html, 'uploader')

    def _rta_search(self, html):
        # See http://www.rtalabel.org/index.php?content=howtofaq#single
        if re.search(r'(?ix)<meta\s+name="rating"\s+'
                     r'     content="RTA-5042-1996-1400-1577-RTA"',
                     html):
            return 18
        return 0

    def _media_rating_search(self, html):
        # See http://www.tjg-designs.com/WP/metadata-code-examples-adding-metadata-to-your-web-pages/
        rating = self._html_search_meta('rating', html)

        if not rating:
            return None

        RATING_TABLE = {
            'safe for kids': 0,
            'general': 8,
            '14 years': 14,
            'mature': 17,
            'restricted': 19,
        }
        return RATING_TABLE.get(rating.lower())

    def _family_friendly_search(self, html):
        # See http://schema.org/VideoObject
        family_friendly = self._html_search_meta('isFamilyFriendly', html)

        if not family_friendly:
            return None

        RATING_TABLE = {
            '1': 0,
            'true': 0,
            '0': 18,
            'false': 18,
        }
        return RATING_TABLE.get(family_friendly.lower())

    def _twitter_search_player(self, html):
        return self._html_search_meta('twitter:player', html,
                                      'twitter card player')

    def _search_json_ld(self, html, video_id, expected_type=None, **kwargs):
        json_ld = self._search_regex(
            r'(?s)<script[^>]+type=(["\'])application/ld\+json\1[^>]*>(?P<json_ld>.+?)</script>',
            html, 'JSON-LD', group='json_ld', **kwargs)
        default = kwargs.get('default', NO_DEFAULT)
        if not json_ld:
            return default if default is not NO_DEFAULT else {}
        # JSON-LD may be malformed and thus `fatal` should be respected.
        # At the same time `default` may be passed that assumes `fatal=False`
        # for _search_regex. Let's simulate the same behavior here as well.
        fatal = kwargs.get('fatal', True) if default == NO_DEFAULT else False
        return self._json_ld(json_ld, video_id, fatal=fatal, expected_type=expected_type)
    def _json_ld(self, json_ld, video_id, fatal=True, expected_type=None):
        if isinstance(json_ld, compat_str):
            json_ld = self._parse_json(json_ld, video_id, fatal=fatal)
        if not json_ld:
            return {}
        info = {}
        if not isinstance(json_ld, (list, tuple, dict)):
            return info
        if isinstance(json_ld, dict):
            json_ld = [json_ld]
        for e in json_ld:
            if e.get('@context') == 'http://schema.org':
                item_type = e.get('@type')
                if expected_type is not None and expected_type != item_type:
                    return info
                if item_type == 'TVEpisode':
                    info.update({
                        'episode': unescapeHTML(e.get('name')),
                        'episode_number': int_or_none(e.get('episodeNumber')),
                        'description': unescapeHTML(e.get('description')),
                    })
                    part_of_season = e.get('partOfSeason')
                    if isinstance(part_of_season, dict) and part_of_season.get('@type') == 'TVSeason':
                        info['season_number'] = int_or_none(part_of_season.get('seasonNumber'))
                    part_of_series = e.get('partOfSeries') or e.get('partOfTVSeries')
                    if isinstance(part_of_series, dict) and part_of_series.get('@type') == 'TVSeries':
                        info['series'] = unescapeHTML(part_of_series.get('name'))
                elif item_type == 'Article':
                    info.update({
                        'timestamp': parse_iso8601(e.get('datePublished')),
                        'title': unescapeHTML(e.get('headline')),
                        'description': unescapeHTML(e.get('articleBody')),
                    })
                elif item_type == 'VideoObject':
                    info.update({
                        'url': e.get('contentUrl'),
                        'title': unescapeHTML(e.get('name')),
                        'description': unescapeHTML(e.get('description')),
                        'thumbnail': e.get('thumbnailUrl'),
                        'duration': parse_duration(e.get('duration')),
                        'timestamp': unified_timestamp(e.get('uploadDate')),
                        'filesize': float_or_none(e.get('contentSize')),
                        'tbr': int_or_none(e.get('bitrate')),
                        'width': int_or_none(e.get('width')),
                        'height': int_or_none(e.get('height')),
                    })
                break
        return dict((k, v) for k, v in info.items() if v is not None)
    @staticmethod
    def _hidden_inputs(html):
        html = re.sub(r'<!--(?:(?!<!--).)*-->', '', html)
        hidden_inputs = {}
        for input in re.findall(r'(?i)(<input[^>]+>)', html):
            attrs = extract_attributes(input)
            if not attrs:
                continue
            if attrs.get('type') not in ('hidden', 'submit'):
                continue
            name = attrs.get('name') or attrs.get('id')
            value = attrs.get('value')
            if name and value is not None:
                hidden_inputs[name] = value
        return hidden_inputs
    def _form_hidden_inputs(self, form_id, html):
        form = self._search_regex(
            r'(?is)<form[^>]+?id=(["\'])%s\1[^>]*>(?P<form>.+?)</form>' % form_id,
            html, '%s form' % form_id, group='form')
        return self._hidden_inputs(form)

    def _sort_formats(self, formats, field_preference=None):
        if not formats:
            raise ExtractorError('No video formats found')

        for f in formats:
            # Automatically determine tbr when missing based on abr and vbr (improves
            # formats sorting in some cases)
            if 'tbr' not in f and f.get('abr') is not None and f.get('vbr') is not None:
                f['tbr'] = f['abr'] + f['vbr']

        def _formats_key(f):
            # TODO remove the following workaround
            from ..utils import determine_ext
            if not f.get('ext') and 'url' in f:
                f['ext'] = determine_ext(f['url'])

            if isinstance(field_preference, (list, tuple)):
                return tuple(
                    f.get(field)
                    if f.get(field) is not None
                    else ('' if field == 'format_id' else -1)
                    for field in field_preference)

            preference = f.get('preference')
            if preference is None:
                preference = 0
                if f.get('ext') in ['f4f', 'f4m']:  # Not yet supported
                    preference -= 0.5

            protocol = f.get('protocol') or determine_protocol(f)
            proto_preference = 0 if protocol in ['http', 'https'] else (-0.5 if protocol == 'rtsp' else -0.1)

            if f.get('vcodec') == 'none':  # audio only
                preference -= 50
                if self._downloader.params.get('prefer_free_formats'):
                    ORDER = ['aac', 'mp3', 'm4a', 'webm', 'ogg', 'opus']
                else:
                    ORDER = ['webm', 'opus', 'ogg', 'mp3', 'aac', 'm4a']
                ext_preference = 0
                try:
                    audio_ext_preference = ORDER.index(f['ext'])
                except ValueError:
                    audio_ext_preference = -1
            else:
                if f.get('acodec') == 'none':  # video only
                    preference -= 40
                if self._downloader.params.get('prefer_free_formats'):
                    ORDER = ['flv', 'mp4', 'webm']
                else:
                    ORDER = ['webm', 'flv', 'mp4']
                try:
                    ext_preference = ORDER.index(f['ext'])
                except ValueError:
                    ext_preference = -1
                audio_ext_preference = 0
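
            # Sort key: a tuple ordered from most to least significant
            # criterion; missing numeric values sort as -1.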
            return (
                preference,
                f.get('language_preference') if f.get('language_preference') is not None else -1,
                f.get('quality') if f.get('quality') is not None else -1,
                f.get('tbr') if f.get('tbr') is not None else -1,
                f.get('filesize') if f.get('filesize') is not None else -1,
                f.get('vbr') if f.get('vbr') is not None else -1,
                f.get('height') if f.get('height') is not None else -1,
                f.get('width') if f.get('width') is not None else -1,
                proto_preference,
                ext_preference,
                f.get('abr') if f.get('abr') is not None else -1,
                audio_ext_preference,
                f.get('fps') if f.get('fps') is not None else -1,
                f.get('filesize_approx') if f.get('filesize_approx') is not None else -1,
                f.get('source_preference') if f.get('source_preference') is not None else -1,
                f.get('format_id') if f.get('format_id') is not None else '',
            )
        formats.sort(key=_formats_key)
    def _check_formats(self, formats, video_id):
        if formats:
            formats[:] = filter(
                lambda f: self._is_valid_url(
                    f['url'], video_id,
                    item='%s video format' % f.get('format_id') if f.get('format_id') else 'video'),
                formats)

    @staticmethod
    def _remove_duplicate_formats(formats):
        format_urls = set()
        unique_formats = []
        for f in formats:
            if f['url'] not in format_urls:
                format_urls.add(f['url'])
                unique_formats.append(f)
        formats[:] = unique_formats

    def _is_valid_url(self, url, video_id, item='video'):
        url = self._proto_relative_url(url, scheme='http:')
        # For now assume non-HTTP(S) URLs are always valid
        if not (url.startswith('http://') or url.startswith('https://')):
            return True
        try:
            self._request_webpage(url, video_id, 'Checking %s URL' % item)
            return True
        except ExtractorError as e:
            if isinstance(e.cause, compat_urllib_error.URLError):
                self.to_screen(
                    '%s: %s URL is invalid, skipping' % (video_id, item))
                return False
            raise

    def http_scheme(self):
        """ Either "http:" or "https:", depending on the user's preferences """
        return (
            'http:'
            if self._downloader.params.get('prefer_insecure', False)
            else 'https:')
    def _proto_relative_url(self, url, scheme=None):
        if url is None:
            return url
        if url.startswith('//'):
            if scheme is None:
                scheme = self.http_scheme()
            return scheme + url
        else:
            return url

    def _sleep(self, timeout, video_id, msg_template=None):
        if msg_template is None:
            msg_template = '%(video_id)s: Waiting for %(timeout)s seconds'
        msg = msg_template % {'video_id': video_id, 'timeout': timeout}
        self.to_screen(msg)
        time.sleep(timeout)

    def _extract_f4m_formats(self, manifest_url, video_id, preference=None, f4m_id=None,
                             transform_source=lambda s: fix_xml_ampersands(s).strip(),
                             fatal=True, m3u8_id=None):
        manifest = self._download_xml(
            manifest_url, video_id, 'Downloading f4m manifest',
            'Unable to download f4m manifest',
            # Some manifests may be malformed, e.g. prosiebensat1 generated manifests
            # (see https://github.com/rg3/youtube-dl/issues/6215#issuecomment-121704244)
            transform_source=transform_source,
            fatal=fatal)

        if manifest is False:
            return []

        return self._parse_f4m_formats(
            manifest, manifest_url, video_id, preference=preference, f4m_id=f4m_id,
            transform_source=transform_source, fatal=fatal, m3u8_id=m3u8_id)

    def _parse_f4m_formats(self, manifest, manifest_url, video_id, preference=None, f4m_id=None,
                           transform_source=lambda s: fix_xml_ampersands(s).strip(),
                           fatal=True, m3u8_id=None):
        # currently youtube-dl cannot decode the playerVerificationChallenge as Akamai uses Adobe Alchemy
        akamai_pv = manifest.find('{http://ns.adobe.com/f4m/1.0}pv-2.0')
        if akamai_pv is not None and ';' in akamai_pv.text:
            playerVerificationChallenge = akamai_pv.text.split(';')[0]
            if playerVerificationChallenge.strip() != '':
                return []

        formats = []
        manifest_version = '1.0'
        media_nodes = manifest.findall('{http://ns.adobe.com/f4m/1.0}media')
        if not media_nodes:
            manifest_version = '2.0'
            media_nodes = manifest.findall('{http://ns.adobe.com/f4m/2.0}media')
        # Remove unsupported DRM protected media from final formats
        # rendition (see https://github.com/rg3/youtube-dl/issues/8573).
        media_nodes = remove_encrypted_media(media_nodes)
        if not media_nodes:
            return formats
        base_url = xpath_text(
            manifest, ['{http://ns.adobe.com/f4m/1.0}baseURL', '{http://ns.adobe.com/f4m/2.0}baseURL'],
            'base URL', default=None)
        if base_url:
            base_url = base_url.strip()

        bootstrap_info = xpath_element(
            manifest, ['{http://ns.adobe.com/f4m/1.0}bootstrapInfo', '{http://ns.adobe.com/f4m/2.0}bootstrapInfo'],
            'bootstrap info', default=None)

        for i, media_el in enumerate(media_nodes):
            tbr = int_or_none(media_el.attrib.get('bitrate'))
            width = int_or_none(media_el.attrib.get('width'))
            height = int_or_none(media_el.attrib.get('height'))
            format_id = '-'.join(filter(None, [f4m_id, compat_str(i if tbr is None else tbr)]))
            # If <bootstrapInfo> is present, the specified f4m is a
            # stream-level manifest, and only set-level manifests may refer to
            # external resources. See section 11.4 and section 4 of F4M spec
            if bootstrap_info is None:
                media_url = None
                # @href is introduced in 2.0, see section 11.6 of F4M spec
                if manifest_version == '2.0':
                    media_url = media_el.attrib.get('href')
                if media_url is None:
                    media_url = media_el.attrib.get('url')
                if not media_url:
                    continue
                manifest_url = (
                    media_url if media_url.startswith('http://') or media_url.startswith('https://')
                    else ((base_url or '/'.join(manifest_url.split('/')[:-1])) + '/' + media_url))
                # If media_url is itself a f4m manifest do the recursive extraction
                # since bitrates in parent manifest (this one) and media_url manifest
                # may differ leading to inability to resolve the format by requested
                # bitrate in f4m downloader
                ext = determine_ext(manifest_url)
                if ext == 'f4m':
                    f4m_formats = self._extract_f4m_formats(
                        manifest_url, video_id, preference=preference, f4m_id=f4m_id,
                        transform_source=transform_source, fatal=fatal)
                    # Sometimes stream-level manifest contains single media entry that
                    # does not contain any quality metadata (e.g. http://matchtv.ru/#live-player).
                    # At the same time parent's media entry in set-level manifest may
                    # contain it. We will copy it from parent in such cases.
                    if len(f4m_formats) == 1:
                        f = f4m_formats[0]
                        f.update({
                            'tbr': f.get('tbr') or tbr,
                            'width': f.get('width') or width,
                            'height': f.get('height') or height,
                            'format_id': f.get('format_id') if not tbr else format_id,
                        })
                    formats.extend(f4m_formats)
                    continue
                elif ext == 'm3u8':
                    formats.extend(self._extract_m3u8_formats(
                        manifest_url, video_id, 'mp4', preference=preference,
                        m3u8_id=m3u8_id, fatal=fatal))
                    continue
            formats.append({
                'format_id': format_id,
                'url': manifest_url,
                'manifest_url': manifest_url,
                'ext': 'flv' if bootstrap_info is not None else None,
                'tbr': tbr,
                'width': width,
                'height': height,
                'preference': preference,
            })
        return formats
  1033. def _m3u8_meta_format(self, m3u8_url, ext=None, preference=None, m3u8_id=None):
  1034. return {
  1035. 'format_id': '-'.join(filter(None, [m3u8_id, 'meta'])),
  1036. 'url': m3u8_url,
  1037. 'ext': ext,
  1038. 'protocol': 'm3u8',
  1039. 'preference': preference - 100 if preference else -100,
  1040. 'resolution': 'multiple',
  1041. 'format_note': 'Quality selection URL',
  1042. }
  1043. def _extract_m3u8_formats(self, m3u8_url, video_id, ext=None,
  1044. entry_protocol='m3u8', preference=None,
  1045. m3u8_id=None, note=None, errnote=None,
  1046. fatal=True, live=False):
  1047. res = self._download_webpage_handle(
  1048. m3u8_url, video_id,
  1049. note=note or 'Downloading m3u8 information',
  1050. errnote=errnote or 'Failed to download m3u8 information',
  1051. fatal=fatal)
  1052. if res is False:
  1053. return []
  1054. m3u8_doc, urlh = res
  1055. m3u8_url = urlh.geturl()
  1056. formats = [self._m3u8_meta_format(m3u8_url, ext, preference, m3u8_id)]
  1057. format_url = lambda u: (
  1058. u
  1059. if re.match(r'^https?://', u)
  1060. else compat_urlparse.urljoin(m3u8_url, u))
  1061. # We should try extracting formats only from master playlists [1], i.e.
  1062. # playlists that describe available qualities. On the other hand media
  1063. # playlists [2] should be returned as is since they contain just the media
  1064. # without qualities renditions.
  1065. # Fortunately, master playlist can be easily distinguished from media
  1066. # playlist based on particular tags availability. As of [1, 2] master
  1067. # playlist tags MUST NOT appear in a media playist and vice versa.
  1068. # As of [3] #EXT-X-TARGETDURATION tag is REQUIRED for every media playlist
  1069. # and MUST NOT appear in master playlist thus we can clearly detect media
  1070. # playlist with this criterion.
  1071. # 1. https://tools.ietf.org/html/draft-pantos-http-live-streaming-17#section-4.3.4
  1072. # 2. https://tools.ietf.org/html/draft-pantos-http-live-streaming-17#section-4.3.3
  1073. # 3. https://tools.ietf.org/html/draft-pantos-http-live-streaming-17#section-4.3.3.1
  1074. if '#EXT-X-TARGETDURATION' in m3u8_doc: # media playlist, return as is
  1075. return [{
  1076. 'url': m3u8_url,
  1077. 'format_id': m3u8_id,
  1078. 'ext': ext,
  1079. 'protocol': entry_protocol,
  1080. 'preference': preference,
  1081. }]
        last_info = {}
        last_media = {}
        for line in m3u8_doc.splitlines():
            if line.startswith('#EXT-X-STREAM-INF:'):
                last_info = parse_m3u8_attributes(line)
            elif line.startswith('#EXT-X-MEDIA:'):
                media = parse_m3u8_attributes(line)
                media_type = media.get('TYPE')
                if media_type in ('VIDEO', 'AUDIO'):
                    media_url = media.get('URI')
                    if media_url:
                        format_id = []
                        for v in (media.get('GROUP-ID'), media.get('NAME')):
                            if v:
                                format_id.append(v)
                        formats.append({
                            'format_id': '-'.join(format_id),
                            'url': format_url(media_url),
                            'language': media.get('LANGUAGE'),
                            'vcodec': 'none' if media_type == 'AUDIO' else None,
                            'ext': ext,
                            'protocol': entry_protocol,
                            'preference': preference,
                        })
                    else:
                        # When there is no URI in EXT-X-MEDIA let this tag's
                        # data be used by regular URI lines below
                        last_media = media
            elif line.startswith('#') or not line.strip():
                continue
            else:
                tbr = int_or_none(last_info.get('AVERAGE-BANDWIDTH') or last_info.get('BANDWIDTH'), scale=1000)
                format_id = []
                if m3u8_id:
                    format_id.append(m3u8_id)
                # Although the specification does not mention the NAME attribute
                # for EXT-X-STREAM-INF, it may still sometimes be present.
                stream_name = last_info.get('NAME') or last_media.get('NAME')
                # The bandwidth of live streams may differ over time, making
                # format_id unpredictable, so it's better to keep the provided
                # format_id intact.
                if not live:
                    format_id.append(stream_name if stream_name else '%d' % (tbr if tbr else len(formats)))
                manifest_url = format_url(line.strip())
                f = {
                    'format_id': '-'.join(format_id),
                    'url': manifest_url,
                    'manifest_url': manifest_url,
                    'tbr': tbr,
                    'ext': ext,
                    'fps': float_or_none(last_info.get('FRAME-RATE')),
                    'protocol': entry_protocol,
                    'preference': preference,
                }
                resolution = last_info.get('RESOLUTION')
                if resolution:
                    width_str, height_str = resolution.split('x')
                    f['width'] = int(width_str)
                    f['height'] = int(height_str)
                # Unified Streaming Platform
                mobj = re.search(
                    r'audio.*?(?:%3D|=)(\d+)(?:-video.*?(?:%3D|=)(\d+))?', f['url'])
                if mobj:
                    abr, vbr = mobj.groups()
                    abr, vbr = float_or_none(abr, 1000), float_or_none(vbr, 1000)
                    f.update({
                        'vbr': vbr,
                        'abr': abr,
                    })
                f.update(parse_codecs(last_info.get('CODECS')))
                formats.append(f)
                last_info = {}
                last_media = {}
        return formats
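    # Typical call site (an illustrative sketch with a hypothetical URL; the
    # Wowza and Akamai helpers below call it the same way):
    #     formats = self._extract_m3u8_formats(
    #         'http://example.com/hls/master.m3u8', video_id, 'mp4',
    #         entry_protocol='m3u8_native', m3u8_id='hls', fatal=False)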

    @staticmethod
    def _xpath_ns(path, namespace=None):
        if not namespace:
            return path
        out = []
        for c in path.split('/'):
            if not c or c == '.':
                out.append(c)
            else:
                out.append('{%s}%s' % (namespace, c))
        return '/'.join(out)
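    # For example (illustrative): _xpath_ns('./head/meta', 'ns0') yields
    # './{ns0}head/{ns0}meta', i.e. each non-trivial path component gets the
    # ElementTree-style namespace prefix.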

    def _extract_smil_formats(self, smil_url, video_id, fatal=True, f4m_params=None, transform_source=None):
        smil = self._download_smil(smil_url, video_id, fatal=fatal, transform_source=transform_source)

        if smil is False:
            assert not fatal
            return []

        namespace = self._parse_smil_namespace(smil)

        return self._parse_smil_formats(
            smil, smil_url, video_id, namespace=namespace, f4m_params=f4m_params)

    def _extract_smil_info(self, smil_url, video_id, fatal=True, f4m_params=None):
        smil = self._download_smil(smil_url, video_id, fatal=fatal)
        if smil is False:
            return {}
        return self._parse_smil(smil, smil_url, video_id, f4m_params=f4m_params)

    def _download_smil(self, smil_url, video_id, fatal=True, transform_source=None):
        return self._download_xml(
            smil_url, video_id, 'Downloading SMIL file',
            'Unable to download SMIL file', fatal=fatal, transform_source=transform_source)

    def _parse_smil(self, smil, smil_url, video_id, f4m_params=None):
        namespace = self._parse_smil_namespace(smil)

        formats = self._parse_smil_formats(
            smil, smil_url, video_id, namespace=namespace, f4m_params=f4m_params)
        subtitles = self._parse_smil_subtitles(smil, namespace=namespace)

        video_id = os.path.splitext(url_basename(smil_url))[0]
        title = None
        description = None
        upload_date = None
        for meta in smil.findall(self._xpath_ns('./head/meta', namespace)):
            name = meta.attrib.get('name')
            content = meta.attrib.get('content')
            if not name or not content:
                continue
            if not title and name == 'title':
                title = content
            elif not description and name in ('description', 'abstract'):
                description = content
            elif not upload_date and name == 'date':
                upload_date = unified_strdate(content)

        thumbnails = [{
            'id': image.get('type'),
            'url': image.get('src'),
            'width': int_or_none(image.get('width')),
            'height': int_or_none(image.get('height')),
        } for image in smil.findall(self._xpath_ns('.//image', namespace)) if image.get('src')]

        return {
            'id': video_id,
            'title': title or video_id,
            'description': description,
            'upload_date': upload_date,
            'thumbnails': thumbnails,
            'formats': formats,
            'subtitles': subtitles,
        }

    def _parse_smil_namespace(self, smil):
        return self._search_regex(
            r'(?i)^{([^}]+)?}smil$', smil.tag, 'namespace', default=None)

    def _parse_smil_formats(self, smil, smil_url, video_id, namespace=None, f4m_params=None, transform_rtmp_url=None):
        base = smil_url
        for meta in smil.findall(self._xpath_ns('./head/meta', namespace)):
            b = meta.get('base') or meta.get('httpBase')
            if b:
                base = b
                break

        formats = []
        rtmp_count = 0
        http_count = 0
        m3u8_count = 0

        srcs = []
        media = smil.findall(self._xpath_ns('.//video', namespace)) + smil.findall(self._xpath_ns('.//audio', namespace))
        for medium in media:
            src = medium.get('src')
            if not src or src in srcs:
                continue
            srcs.append(src)

            bitrate = float_or_none(medium.get('system-bitrate') or medium.get('systemBitrate'), 1000)
            filesize = int_or_none(medium.get('size') or medium.get('fileSize'))
            width = int_or_none(medium.get('width'))
            height = int_or_none(medium.get('height'))
            proto = medium.get('proto')
            ext = medium.get('ext')
            src_ext = determine_ext(src)
            streamer = medium.get('streamer') or base

            if proto == 'rtmp' or streamer.startswith('rtmp'):
                rtmp_count += 1
                formats.append({
                    'url': streamer,
                    'play_path': src,
                    'ext': 'flv',
                    'format_id': 'rtmp-%d' % (rtmp_count if bitrate is None else bitrate),
                    'tbr': bitrate,
                    'filesize': filesize,
                    'width': width,
                    'height': height,
                })
                if transform_rtmp_url:
                    streamer, src = transform_rtmp_url(streamer, src)
                    formats[-1].update({
                        'url': streamer,
                        'play_path': src,
                    })
                continue

            src_url = src if src.startswith('http') else compat_urlparse.urljoin(base, src)
            src_url = src_url.strip()

            if proto == 'm3u8' or src_ext == 'm3u8':
                m3u8_formats = self._extract_m3u8_formats(
                    src_url, video_id, ext or 'mp4', m3u8_id='hls', fatal=False)
                if len(m3u8_formats) == 1:
                    m3u8_count += 1
                    m3u8_formats[0].update({
                        'format_id': 'hls-%d' % (m3u8_count if bitrate is None else bitrate),
                        'tbr': bitrate,
                        'width': width,
                        'height': height,
                    })
                formats.extend(m3u8_formats)
                continue

            if src_ext == 'f4m':
                f4m_url = src_url
                if not f4m_params:
                    f4m_params = {
                        'hdcore': '3.2.0',
                        'plugin': 'flowplayer-3.2.0.1',
                    }
                f4m_url += '&' if '?' in f4m_url else '?'
                f4m_url += compat_urllib_parse_urlencode(f4m_params)
                formats.extend(self._extract_f4m_formats(f4m_url, video_id, f4m_id='hds', fatal=False))
                continue
            # Validate the resolved absolute URL, not the possibly relative src
            if src_url.startswith('http') and self._is_valid_url(src_url, video_id):
                http_count += 1
                formats.append({
                    'url': src_url,
                    'ext': ext or src_ext or 'flv',
                    'format_id': 'http-%d' % (bitrate or http_count),
                    'tbr': bitrate,
                    'filesize': filesize,
                    'width': width,
                    'height': height,
                })
                continue

        return formats

    def _parse_smil_subtitles(self, smil, namespace=None, subtitles_lang='en'):
        urls = []
        subtitles = {}
        for num, textstream in enumerate(smil.findall(self._xpath_ns('.//textstream', namespace))):
            src = textstream.get('src')
            if not src or src in urls:
                continue
            urls.append(src)
            ext = textstream.get('ext') or mimetype2ext(textstream.get('type')) or determine_ext(src)
            lang = textstream.get('systemLanguage') or textstream.get('systemLanguageName') or textstream.get('lang') or subtitles_lang
            subtitles.setdefault(lang, []).append({
                'url': src,
                'ext': ext,
            })
        return subtitles
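    # The resulting structure (illustrative values): a dict mapping language
    # codes to lists of subtitle variants, e.g.
    #     {'en': [{'url': 'http://example.com/subs.srt', 'ext': 'srt'}]}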

    def _extract_xspf_playlist(self, playlist_url, playlist_id, fatal=True):
        xspf = self._download_xml(
            playlist_url, playlist_id, 'Downloading xspf playlist',
            'Unable to download xspf manifest', fatal=fatal)
        if xspf is False:
            return []
        return self._parse_xspf(xspf, playlist_id)

    def _parse_xspf(self, playlist, playlist_id):
        NS_MAP = {
            'xspf': 'http://xspf.org/ns/0/',
            's1': 'http://static.streamone.nl/player/ns/0',
        }

        entries = []
        for track in playlist.findall(xpath_with_ns('./xspf:trackList/xspf:track', NS_MAP)):
            title = xpath_text(
                track, xpath_with_ns('./xspf:title', NS_MAP), 'title', default=playlist_id)
            description = xpath_text(
                track, xpath_with_ns('./xspf:annotation', NS_MAP), 'description')
            thumbnail = xpath_text(
                track, xpath_with_ns('./xspf:image', NS_MAP), 'thumbnail')
            duration = float_or_none(
                xpath_text(track, xpath_with_ns('./xspf:duration', NS_MAP), 'duration'), 1000)

            formats = [{
                'url': location.text,
                'format_id': location.get(xpath_with_ns('s1:label', NS_MAP)),
                'width': int_or_none(location.get(xpath_with_ns('s1:width', NS_MAP))),
                'height': int_or_none(location.get(xpath_with_ns('s1:height', NS_MAP))),
            } for location in track.findall(xpath_with_ns('./xspf:location', NS_MAP))]
            self._sort_formats(formats)

            entries.append({
                'id': playlist_id,
                'title': title,
                'description': description,
                'thumbnail': thumbnail,
                'duration': duration,
                'formats': formats,
            })
        return entries

    def _extract_mpd_formats(self, mpd_url, video_id, mpd_id=None, note=None, errnote=None, fatal=True, formats_dict={}):
        res = self._download_webpage_handle(
            mpd_url, video_id,
            note=note or 'Downloading MPD manifest',
            errnote=errnote or 'Failed to download MPD manifest',
            fatal=fatal)
        if res is False:
            return []
        mpd, urlh = res
        mpd_base_url = re.match(r'https?://.+/', urlh.geturl()).group()

        return self._parse_mpd_formats(
            compat_etree_fromstring(mpd.encode('utf-8')), mpd_id, mpd_base_url,
            formats_dict=formats_dict, mpd_url=mpd_url)

    def _parse_mpd_formats(self, mpd_doc, mpd_id=None, mpd_base_url='', formats_dict={}, mpd_url=None):
        """
        Parse formats from MPD manifest.
        References:
         1. MPEG-DASH Standard, ISO/IEC 23009-1:2014(E),
            http://standards.iso.org/ittf/PubliclyAvailableStandards/c065274_ISO_IEC_23009-1_2014.zip
         2. https://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP
        """
        if mpd_doc.get('type') == 'dynamic':
            return []

        namespace = self._search_regex(r'(?i)^{([^}]+)?}MPD$', mpd_doc.tag, 'namespace', default=None)

        def _add_ns(path):
            return self._xpath_ns(path, namespace)

        def is_drm_protected(element):
            return element.find(_add_ns('ContentProtection')) is not None

        def extract_multisegment_info(element, ms_parent_info):
            ms_info = ms_parent_info.copy()

            # As per [1, 5.3.9.2.2] SegmentList and SegmentTemplate share some
            # common attributes and elements. We only extract the ones that are
            # relevant for us.
            def extract_common(source):
                segment_timeline = source.find(_add_ns('SegmentTimeline'))
                if segment_timeline is not None:
                    s_e = segment_timeline.findall(_add_ns('S'))
                    if s_e:
                        ms_info['total_number'] = 0
                        ms_info['s'] = []
                        for s in s_e:
                            r = int(s.get('r', 0))
                            ms_info['total_number'] += 1 + r
                            ms_info['s'].append({
                                't': int(s.get('t', 0)),
                                # @d is mandatory (see [1, 5.3.9.6.2, Table 17, page 60])
                                'd': int(s.attrib['d']),
                                'r': r,
                            })
                start_number = source.get('startNumber')
                if start_number:
                    ms_info['start_number'] = int(start_number)
                timescale = source.get('timescale')
                if timescale:
                    ms_info['timescale'] = int(timescale)
                segment_duration = source.get('duration')
                if segment_duration:
                    ms_info['segment_duration'] = int(segment_duration)

            def extract_Initialization(source):
                initialization = source.find(_add_ns('Initialization'))
                if initialization is not None:
                    ms_info['initialization_url'] = initialization.attrib['sourceURL']

            segment_list = element.find(_add_ns('SegmentList'))
            if segment_list is not None:
                extract_common(segment_list)
                extract_Initialization(segment_list)
                segment_urls_e = segment_list.findall(_add_ns('SegmentURL'))
                if segment_urls_e:
                    ms_info['segment_urls'] = [segment.attrib['media'] for segment in segment_urls_e]
            else:
                segment_template = element.find(_add_ns('SegmentTemplate'))
                if segment_template is not None:
                    extract_common(segment_template)
                    media_template = segment_template.get('media')
                    if media_template:
                        ms_info['media_template'] = media_template
                    initialization = segment_template.get('initialization')
                    if initialization:
                        ms_info['initialization_url'] = initialization
                    else:
                        extract_Initialization(segment_template)
            return ms_info

        def combine_url(base_url, target_url):
            if re.match(r'^https?://', target_url):
                return target_url
            return '%s%s%s' % (base_url, '' if base_url.endswith('/') else '/', target_url)
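        # For example (illustrative): combine_url('http://example.com/dash', 'seg-1.m4s')
        # yields 'http://example.com/dash/seg-1.m4s', while an already absolute
        # target URL is returned unchanged.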
        mpd_duration = parse_duration(mpd_doc.get('mediaPresentationDuration'))
        formats = []
        for period in mpd_doc.findall(_add_ns('Period')):
            period_duration = parse_duration(period.get('duration')) or mpd_duration
            period_ms_info = extract_multisegment_info(period, {
                'start_number': 1,
                'timescale': 1,
            })
            for adaptation_set in period.findall(_add_ns('AdaptationSet')):
                if is_drm_protected(adaptation_set):
                    continue
                adaption_set_ms_info = extract_multisegment_info(adaptation_set, period_ms_info)
                for representation in adaptation_set.findall(_add_ns('Representation')):
                    if is_drm_protected(representation):
                        continue
                    representation_attrib = adaptation_set.attrib.copy()
                    representation_attrib.update(representation.attrib)
                    # According to [1, 5.3.7.2, Table 9, page 41], @mimeType is mandatory
                    mime_type = representation_attrib['mimeType']
                    content_type = mime_type.split('/')[0]
                    if content_type == 'text':
                        # TODO implement WebVTT downloading
                        pass
                    elif content_type == 'video' or content_type == 'audio':
                        base_url = ''
                        for element in (representation, adaptation_set, period, mpd_doc):
                            base_url_e = element.find(_add_ns('BaseURL'))
                            if base_url_e is not None:
                                base_url = base_url_e.text + base_url
                                if re.match(r'^https?://', base_url):
                                    break
                        if mpd_base_url and not re.match(r'^https?://', base_url):
                            if not mpd_base_url.endswith('/') and not base_url.startswith('/'):
                                mpd_base_url += '/'
                            base_url = mpd_base_url + base_url
                        representation_id = representation_attrib.get('id')
                        lang = representation_attrib.get('lang')
                        url_el = representation.find(_add_ns('BaseURL'))
                        filesize = int_or_none(url_el.attrib.get('{http://youtube.com/yt/2012/10/10}contentLength') if url_el is not None else None)
                        f = {
                            'format_id': '%s-%s' % (mpd_id, representation_id) if mpd_id else representation_id,
                            'url': base_url,
                            'manifest_url': mpd_url,
                            'ext': mimetype2ext(mime_type),
                            'width': int_or_none(representation_attrib.get('width')),
                            'height': int_or_none(representation_attrib.get('height')),
                            'tbr': int_or_none(representation_attrib.get('bandwidth'), 1000),
                            'asr': int_or_none(representation_attrib.get('audioSamplingRate')),
                            'fps': int_or_none(representation_attrib.get('frameRate')),
                            'vcodec': 'none' if content_type == 'audio' else representation_attrib.get('codecs'),
                            'acodec': 'none' if content_type == 'video' else representation_attrib.get('codecs'),
                            'language': lang if lang not in ('mul', 'und', 'zxx', 'mis') else None,
                            'format_note': 'DASH %s' % content_type,
                            'filesize': filesize,
                        }
                        representation_ms_info = extract_multisegment_info(representation, adaption_set_ms_info)
                        if 'segment_urls' not in representation_ms_info and 'media_template' in representation_ms_info:
                            media_template = representation_ms_info['media_template']
                            media_template = media_template.replace('$RepresentationID$', representation_id)
                            media_template = re.sub(r'\$(Number|Bandwidth|Time)\$', r'%(\1)d', media_template)
                            media_template = re.sub(r'\$(Number|Bandwidth|Time)%([^$]+)\$', r'%(\1)\2', media_template)
                            media_template = media_template.replace('$$', '$')
                            # As per [1, 5.3.9.4.4, Table 16, page 55] $Number$ and $Time$
                            # can't be used at the same time
                            if '%(Number' in media_template and 's' not in representation_ms_info:
                                segment_duration = None
                                if 'total_number' not in representation_ms_info and 'segment_duration' in representation_ms_info:
                                    segment_duration = float_or_none(representation_ms_info['segment_duration'], representation_ms_info['timescale'])
                                    representation_ms_info['total_number'] = int(math.ceil(float(period_duration) / segment_duration))
                                representation_ms_info['fragments'] = [{
                                    'url': media_template % {
                                        'Number': segment_number,
                                        'Bandwidth': representation_attrib.get('bandwidth'),
                                    },
                                    'duration': segment_duration,
                                } for segment_number in range(
                                    representation_ms_info['start_number'],
                                    representation_ms_info['total_number'] + representation_ms_info['start_number'])]
                            else:
                                # $Number*$ or $Time$ in media template with S list available
                                # Example $Number*$: http://www.svtplay.se/klipp/9023742/stopptid-om-bjorn-borg
                                # Example $Time$: https://play.arkena.com/embed/avp/v2/player/media/b41dda37-d8e7-4d3f-b1b5-9a9db578bdfe/1/129411
                                representation_ms_info['fragments'] = []
                                segment_time = 0
                                segment_d = None
                                segment_number = representation_ms_info['start_number']

                                def add_segment_url():
                                    segment_url = media_template % {
                                        'Time': segment_time,
                                        'Bandwidth': representation_attrib.get('bandwidth'),
                                        'Number': segment_number,
                                    }
                                    representation_ms_info['fragments'].append({
                                        'url': segment_url,
                                        'duration': float_or_none(segment_d, representation_ms_info['timescale']),
                                    })

                                for num, s in enumerate(representation_ms_info['s']):
                                    segment_time = s.get('t') or segment_time
                                    segment_d = s['d']
                                    add_segment_url()
                                    segment_number += 1
                                    for r in range(s.get('r', 0)):
                                        segment_time += segment_d
                                        add_segment_url()
                                        segment_number += 1
                                    segment_time += segment_d
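                                # A worked example (illustrative): with timescale 90000
                                # and a single S element t=0, d=360000, r=2, the loop
                                # above emits three 4-second fragments at times 0,
                                # 360000 and 720000.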
                        elif 'segment_urls' in representation_ms_info and 's' in representation_ms_info:
                            # No media template
                            # Example: https://www.youtube.com/watch?v=iXZV5uAYMJI
                            # or any YouTube dashsegments video
                            fragments = []
                            segment_index = 0
                            timescale = representation_ms_info['timescale']
                            for s in representation_ms_info['s']:
                                duration = float_or_none(s['d'], timescale)
                                # An S element with repeat count r stands for r + 1
                                # consecutive segments of the same duration
                                for r in range(s.get('r', 0) + 1):
                                    fragments.append({
                                        'url': representation_ms_info['segment_urls'][segment_index],
                                        'duration': duration,
                                    })
                                    segment_index += 1
                            representation_ms_info['fragments'] = fragments
                        # NB: MPD manifest may contain direct URLs to unfragmented media.
                        # No fragments key is present in this case.
                        if 'fragments' in representation_ms_info:
                            f.update({
                                'fragments': [],
                                'protocol': 'http_dash_segments',
                            })
                            if 'initialization_url' in representation_ms_info:
                                initialization_url = representation_ms_info['initialization_url'].replace('$RepresentationID$', representation_id)
                                if not f.get('url'):
                                    f['url'] = initialization_url
                                f['fragments'].append({'url': initialization_url})
                            f['fragments'].extend(representation_ms_info['fragments'])
                            for fragment in f['fragments']:
                                fragment['url'] = combine_url(base_url, fragment['url'])
                        try:
                            existing_format = next(
                                fo for fo in formats
                                if fo['format_id'] == representation_id)
                        except StopIteration:
                            full_info = formats_dict.get(representation_id, {}).copy()
                            full_info.update(f)
                            formats.append(full_info)
                        else:
                            existing_format.update(f)
                    else:
                        self.report_warning('Unknown MIME type %s in DASH manifest' % mime_type)
        return formats

    def _parse_html5_media_entries(self, base_url, webpage, video_id, m3u8_id=None, m3u8_entry_protocol='m3u8'):
        def absolute_url(video_url):
            return compat_urlparse.urljoin(base_url, video_url)

        def parse_content_type(content_type):
            if not content_type:
                return {}
            ctr = re.search(r'(?P<mimetype>[^/]+/[^;]+)(?:;\s*codecs="?(?P<codecs>[^"]+))?', content_type)
            if ctr:
                mimetype, codecs = ctr.groups()
                f = parse_codecs(codecs)
                f['ext'] = mimetype2ext(mimetype)
                return f
            return {}

        def _media_formats(src, cur_media_type):
            full_url = absolute_url(src)
            if determine_ext(full_url) == 'm3u8':
                is_plain_url = False
                formats = self._extract_m3u8_formats(
                    full_url, video_id, ext='mp4',
                    entry_protocol=m3u8_entry_protocol, m3u8_id=m3u8_id)
            else:
                is_plain_url = True
                formats = [{
                    'url': full_url,
                    'vcodec': 'none' if cur_media_type == 'audio' else None,
                }]
            return is_plain_url, formats

        entries = []
        for media_tag, media_type, media_content in re.findall(r'(?s)(<(?P<tag>video|audio)[^>]*>)(.*?)</(?P=tag)>', webpage):
            media_info = {
                'formats': [],
                'subtitles': {},
            }
            media_attributes = extract_attributes(media_tag)
            src = media_attributes.get('src')
            if src:
                _, formats = _media_formats(src, media_type)
                media_info['formats'].extend(formats)
            media_info['thumbnail'] = media_attributes.get('poster')
            if media_content:
                for source_tag in re.findall(r'<source[^>]+>', media_content):
                    source_attributes = extract_attributes(source_tag)
                    src = source_attributes.get('src')
                    if not src:
                        continue
                    is_plain_url, formats = _media_formats(src, media_type)
                    if is_plain_url:
                        f = parse_content_type(source_attributes.get('type'))
                        f.update(formats[0])
                        media_info['formats'].append(f)
                    else:
                        media_info['formats'].extend(formats)
                for track_tag in re.findall(r'<track[^>]+>', media_content):
                    track_attributes = extract_attributes(track_tag)
                    kind = track_attributes.get('kind')
                    if not kind or kind == 'subtitles':
                        src = track_attributes.get('src')
                        if not src:
                            continue
                        lang = track_attributes.get('srclang') or track_attributes.get('lang') or track_attributes.get('label')
                        media_info['subtitles'].setdefault(lang, []).append({
                            'url': absolute_url(src),
                        })
            if media_info['formats']:
                entries.append(media_info)
        return entries
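    # Illustrative input this method is built for: HTML5 markup along the lines of
    #     <video poster="poster.jpg">
    #       <source src="video.mp4" type='video/mp4; codecs="avc1.42E01E"'>
    #       <track kind="subtitles" src="subs.vtt" srclang="en">
    #     </video>
    # which would yield one entry with one mp4 format and one English subtitle
    # track, all URLs resolved against base_url.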

    def _extract_akamai_formats(self, manifest_url, video_id):
        formats = []
        f4m_url = re.sub(r'(https?://.+?)/i/', r'\1/z/', manifest_url).replace('/master.m3u8', '/manifest.f4m')
        formats.extend(self._extract_f4m_formats(
            update_url_query(f4m_url, {'hdcore': '3.7.0'}),
            video_id, f4m_id='hds', fatal=False))
        m3u8_url = re.sub(r'(https?://.+?)/z/', r'\1/i/', manifest_url).replace('/manifest.f4m', '/master.m3u8')
        formats.extend(self._extract_m3u8_formats(
            m3u8_url, video_id, 'mp4', 'm3u8_native',
            m3u8_id='hls', fatal=False))
        return formats
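    # For example (illustrative hostname): given an HDS manifest URL like
    #     http://example.akamaihd.net/z/path/manifest.f4m
    # the substitutions above derive the HLS counterpart
    #     http://example.akamaihd.net/i/path/master.m3u8
    # and vice versa, so both protocols are probed from a single input URL.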

    def _extract_wowza_formats(self, url, video_id, m3u8_entry_protocol='m3u8_native', skip_protocols=[]):
        url = re.sub(r'/(?:manifest|playlist|jwplayer)\.(?:m3u8|f4m|mpd|smil)', '', url)
        url_base = self._search_regex(r'(?:https?|rtmp|rtsp)(://[^?]+)', url, 'format url')
        http_base_url = 'http' + url_base
        formats = []
        if 'm3u8' not in skip_protocols:
            formats.extend(self._extract_m3u8_formats(
                http_base_url + '/playlist.m3u8', video_id, 'mp4',
                m3u8_entry_protocol, m3u8_id='hls', fatal=False))
        if 'f4m' not in skip_protocols:
            formats.extend(self._extract_f4m_formats(
                http_base_url + '/manifest.f4m',
                video_id, f4m_id='hds', fatal=False))
        if re.search(r'(?:/smil:|\.smil)', url_base):
            if 'dash' not in skip_protocols:
                formats.extend(self._extract_mpd_formats(
                    http_base_url + '/manifest.mpd',
                    video_id, mpd_id='dash', fatal=False))
            if 'smil' not in skip_protocols:
                rtmp_formats = self._extract_smil_formats(
                    http_base_url + '/jwplayer.smil',
                    video_id, fatal=False)
                for rtmp_format in rtmp_formats:
                    rtsp_format = rtmp_format.copy()
                    rtsp_format['url'] = '%s/%s' % (rtmp_format['url'], rtmp_format['play_path'])
                    del rtsp_format['play_path']
                    del rtsp_format['ext']
                    rtsp_format.update({
                        'url': rtsp_format['url'].replace('rtmp://', 'rtsp://'),
                        'format_id': rtmp_format['format_id'].replace('rtmp', 'rtsp'),
                        'protocol': 'rtsp',
                    })
                    formats.extend([rtmp_format, rtsp_format])
        else:
            for protocol in ('rtmp', 'rtsp'):
                if protocol not in skip_protocols:
                    formats.append({
                        'url': protocol + url_base,
                        'format_id': protocol,
                        'protocol': protocol,
                    })
        return formats

    def _live_title(self, name):
        """ Generate the title for a live video """
        now = datetime.datetime.now()
        now_str = now.strftime('%Y-%m-%d %H:%M')
        return name + ' ' + now_str
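    # For example (illustrative): _live_title('Some Channel') might return
    # 'Some Channel 2016-10-02 14:30', i.e. the stream name suffixed with the
    # current local timestamp.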

    def _int(self, v, name, fatal=False, **kwargs):
        res = int_or_none(v, **kwargs)
        if 'get_attr' in kwargs:
            print(getattr(v, kwargs['get_attr']))
        if res is None:
            msg = 'Failed to extract %s: Could not parse value %r' % (name, v)
            if fatal:
                raise ExtractorError(msg)
            else:
                self._downloader.report_warning(msg)
        return res

    def _float(self, v, name, fatal=False, **kwargs):
        res = float_or_none(v, **kwargs)
        if res is None:
            msg = 'Failed to extract %s: Could not parse value %r' % (name, v)
            if fatal:
                raise ExtractorError(msg)
            else:
                self._downloader.report_warning(msg)
        return res

    def _set_cookie(self, domain, name, value, expire_time=None):
        cookie = compat_cookiejar.Cookie(
            0, name, value, None, None, domain, None,
            None, '/', True, False, expire_time, '', None, None, None)
        self._downloader.cookiejar.set_cookie(cookie)

    def _get_cookies(self, url):
        """ Return a compat_cookies.SimpleCookie with the cookies for the url """
        req = sanitized_Request(url)
        self._downloader.cookiejar.add_cookie_header(req)
        return compat_cookies.SimpleCookie(req.get_header('Cookie'))

    def get_testcases(self, include_onlymatching=False):
        t = getattr(self, '_TEST', None)
        if t:
            assert not hasattr(self, '_TESTS'), \
                '%s has _TEST and _TESTS' % type(self).__name__
            tests = [t]
        else:
            tests = getattr(self, '_TESTS', [])
        for t in tests:
            if not include_onlymatching and t.get('only_matching', False):
                continue
            t['name'] = type(self).__name__[:-len('IE')]
            yield t

    def is_suitable(self, age_limit):
        """ Test whether the extractor is generally suitable for the given
        age limit (i.e. pornographic sites are not, all others usually are) """

        any_restricted = False
        for tc in self.get_testcases(include_onlymatching=False):
            if tc.get('playlist', []):
                tc = tc['playlist'][0]
            is_restricted = age_restricted(
                tc.get('info_dict', {}).get('age_limit'), age_limit)
            if not is_restricted:
                return True
            any_restricted = any_restricted or is_restricted
        return not any_restricted

    def extract_subtitles(self, *args, **kwargs):
        if (self._downloader.params.get('writesubtitles', False) or
                self._downloader.params.get('listsubtitles')):
            return self._get_subtitles(*args, **kwargs)
        return {}

    def _get_subtitles(self, *args, **kwargs):
        raise NotImplementedError('This method must be implemented by subclasses')

    @staticmethod
    def _merge_subtitle_items(subtitle_list1, subtitle_list2):
        """ Merge subtitle items for one language. Items with duplicated URLs
        will be dropped. """
        list1_urls = set([item['url'] for item in subtitle_list1])
        ret = list(subtitle_list1)
        ret.extend([item for item in subtitle_list2 if item['url'] not in list1_urls])
        return ret
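    # A quick sketch of the merge behaviour (illustrative values):
    #     _merge_subtitle_items(
    #         [{'url': 'http://example.com/a.vtt'}],
    #         [{'url': 'http://example.com/a.vtt'}, {'url': 'http://example.com/b.vtt'}])
    # returns both items with the duplicate from the second list dropped, i.e.
    #     [{'url': 'http://example.com/a.vtt'}, {'url': 'http://example.com/b.vtt'}]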

    @classmethod
    def _merge_subtitles(cls, subtitle_dict1, subtitle_dict2):
        """ Merge two subtitle dictionaries, language by language. """
        ret = dict(subtitle_dict1)
        for lang in subtitle_dict2:
            ret[lang] = cls._merge_subtitle_items(subtitle_dict1.get(lang, []), subtitle_dict2[lang])
        return ret

    def extract_automatic_captions(self, *args, **kwargs):
        if (self._downloader.params.get('writeautomaticsub', False) or
                self._downloader.params.get('listsubtitles')):
            return self._get_automatic_captions(*args, **kwargs)
        return {}

    def _get_automatic_captions(self, *args, **kwargs):
        raise NotImplementedError('This method must be implemented by subclasses')

    def mark_watched(self, *args, **kwargs):
        if (self._downloader.params.get('mark_watched', False) and
                (self._get_login_info()[0] is not None or
                 self._downloader.params.get('cookiefile') is not None)):
            self._mark_watched(*args, **kwargs)

    def _mark_watched(self, *args, **kwargs):
        raise NotImplementedError('This method must be implemented by subclasses')

    def geo_verification_headers(self):
        headers = {}
        geo_verification_proxy = self._downloader.params.get('geo_verification_proxy')
        if geo_verification_proxy:
            headers['Ytdl-request-proxy'] = geo_verification_proxy
        return headers


class SearchInfoExtractor(InfoExtractor):
    """
    Base class for paged search queries extractors.
    They accept URLs in the format _SEARCH_KEY(|all|[0-9]):{query}
    Instances should define _SEARCH_KEY and _MAX_RESULTS.
    """

    @classmethod
    def _make_valid_url(cls):
        return r'%s(?P<prefix>|[1-9][0-9]*|all):(?P<query>[\s\S]+)' % cls._SEARCH_KEY
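    # For example (illustrative): with _SEARCH_KEY = 'ytsearch' this pattern
    # matches 'ytsearch:cats' (first result), 'ytsearch5:cats' (five results)
    # and 'ytsearchall:cats' (up to _MAX_RESULTS results).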

    @classmethod
    def suitable(cls, url):
        return re.match(cls._make_valid_url(), url) is not None

    def _real_extract(self, query):
        mobj = re.match(self._make_valid_url(), query)
        if mobj is None:
            raise ExtractorError('Invalid search query "%s"' % query)

        prefix = mobj.group('prefix')
        query = mobj.group('query')
        if prefix == '':
            return self._get_n_results(query, 1)
        elif prefix == 'all':
            return self._get_n_results(query, self._MAX_RESULTS)
        else:
            n = int(prefix)
            if n <= 0:
                raise ExtractorError('invalid download number %s for query "%s"' % (n, query))
            elif n > self._MAX_RESULTS:
                self._downloader.report_warning('%s returns max %i results (you requested %i)' % (self._SEARCH_KEY, self._MAX_RESULTS, n))
                n = self._MAX_RESULTS
            return self._get_n_results(query, n)

    def _get_n_results(self, query, n):
        """Get a specified number of results for a query"""
        raise NotImplementedError('This method must be implemented by subclasses')

    @property
    def SEARCH_KEY(self):
        return self._SEARCH_KEY