  1. from __future__ import unicode_literals
  2. import base64
  3. import datetime
  4. import hashlib
  5. import json
  6. import netrc
  7. import os
  8. import re
  9. import socket
  10. import sys
  11. import time
  12. import math
  13. from ..compat import (
  14. compat_cookiejar,
  15. compat_cookies,
  16. compat_getpass,
  17. compat_http_client,
  18. compat_urllib_error,
  19. compat_urllib_parse,
  20. compat_urlparse,
  21. compat_str,
  22. compat_etree_fromstring,
  23. )
  24. from ..utils import (
  25. NO_DEFAULT,
  26. age_restricted,
  27. bug_reports_message,
  28. clean_html,
  29. compiled_regex_type,
  30. determine_ext,
  31. error_to_compat_str,
  32. ExtractorError,
  33. fix_xml_ampersands,
  34. float_or_none,
  35. int_or_none,
  36. parse_iso8601,
  37. RegexNotFoundError,
  38. sanitize_filename,
  39. sanitized_Request,
  40. unescapeHTML,
  41. unified_strdate,
  42. url_basename,
  43. xpath_text,
  44. xpath_with_ns,
  45. determine_protocol,
  46. parse_duration,
  47. )
  48. class InfoExtractor(object):
  49. """Information Extractor class.
  50. Information extractors are the classes that, given a URL, extract
  51. information about the video (or videos) the URL refers to. This
  52. information includes the real video URL, the video title, author and
  53. others. The information is stored in a dictionary which is then
  54. passed to the YoutubeDL. The YoutubeDL processes this
  55. information possibly downloading the video to the file system, among
  56. other possible outcomes.
  57. The type field determines the type of the result.
  58. By far the most common value (and the default if _type is missing) is
  59. "video", which indicates a single video.
  60. For a video, the dictionaries must include the following fields:
  61. id: Video identifier.
  62. title: Video title, unescaped.
  63. Additionally, it must contain either a formats entry or a url one:
  64. formats: A list of dictionaries for each format available, ordered
  65. from worst to best quality.
  66. Potential fields:
  67. * url Mandatory. The URL of the video file
  68. * ext Will be calculated from URL if missing
  69. * format A human-readable description of the format
  70. ("mp4 container with h264/opus").
  71. Calculated from the format_id, width, height,
  72. and format_note fields if missing.
  73. * format_id A short description of the format
  74. ("mp4_h264_opus" or "19").
  75. Technically optional, but strongly recommended.
  76. * format_note Additional info about the format
  77. ("3D" or "DASH video")
  78. * width Width of the video, if known
  79. * height Height of the video, if known
  80. * resolution Textual description of width and height
  81. * tbr Average bitrate of audio and video in KBit/s
  82. * abr Average audio bitrate in KBit/s
  83. * acodec Name of the audio codec in use
  84. * asr Audio sampling rate in Hertz
  85. * vbr Average video bitrate in KBit/s
  86. * fps Frame rate
  87. * vcodec Name of the video codec in use
  88. * container Name of the container format
  89. * filesize The number of bytes, if known in advance
  90. * filesize_approx An estimate for the number of bytes
  91. * player_url SWF Player URL (used for rtmpdump).
  92. * protocol The protocol that will be used for the actual
  93. download, lower-case.
  94. "http", "https", "rtsp", "rtmp", "rtmpe",
  95. "m3u8", or "m3u8_native".
  96. * preference Order number of this format. If this field is
  97. present and not None, the formats get sorted
  98. by this field, regardless of all other values.
  99. -1 for default (order by other properties),
  100. -2 or smaller for less than default.
  101. < -1000 to hide the format (if there is
  102. another one which is strictly better)
  103. * language Language code, e.g. "de" or "en-US".
  104. * language_preference Is this in the language mentioned in
  105. the URL?
  106. 10 if it's what the URL is about,
  107. -1 for default (don't know),
  108. -10 otherwise, other values reserved for now.
  109. * quality Order number of the video quality of this
  110. format, irrespective of the file format.
  111. -1 for default (order by other properties),
  112. -2 or smaller for less than default.
  113. * source_preference Order number for this video source
  114. (quality takes higher priority)
  115. -1 for default (order by other properties),
  116. -2 or smaller for less than default.
  117. * http_headers A dictionary of additional HTTP headers
  118. to add to the request.
  119. * stretched_ratio If given and not 1, indicates that the
  120. video's pixels are not square.
  121. width : height ratio as float.
  122. * no_resume The server does not support resuming the
  123. (HTTP or RTMP) download. Boolean.
  124. url: Final video URL.
  125. ext: Video filename extension.
  126. format: The video format, defaults to ext (used for --get-format)
  127. player_url: SWF Player URL (used for rtmpdump).
  128. The following fields are optional:
  129. alt_title: A secondary title of the video.
  130. display_id: An alternative identifier for the video, not necessarily
  131. unique, but available before title. Typically, id is
  132. something like "4234987", title "Dancing naked mole rats",
  133. and display_id "dancing-naked-mole-rats"
  134. thumbnails: A list of dictionaries, with the following entries:
  135. * "id" (optional, string) - Thumbnail format ID
  136. * "url"
  137. * "preference" (optional, int) - quality of the image
  138. * "width" (optional, int)
  139. * "height" (optional, int)
  140. * "resolution" (optional, string "{width}x{height}",
  141. deprecated)
  142. thumbnail: Full URL to a video thumbnail image.
  143. description: Full video description.
  144. uploader: Full name of the video uploader.
  145. creator: The main artist who created the video.
  146. release_date: The date (YYYYMMDD) when the video was released.
  147. timestamp: UNIX timestamp of the moment the video became available.
  148. upload_date: Video upload date (YYYYMMDD).
  149. If not explicitly set, calculated from timestamp.
  150. uploader_id: Nickname or id of the video uploader.
  151. location: Physical location where the video was filmed.
  152. subtitles: The available subtitles as a dictionary in the format
  153. {language: subformats}. "subformats" is a list sorted from
  154. lower to higher preference, each element is a dictionary
  155. with the "ext" entry and one of:
  156. * "data": The subtitles file contents
  157. * "url": A URL pointing to the subtitles file
  158. "ext" will be calculated from URL if missing
  159. automatic_captions: Like 'subtitles', used by the YoutubeIE for
  160. automatically generated captions
  161. duration: Length of the video in seconds, as an integer or float.
  162. view_count: How many users have watched the video on the platform.
  163. like_count: Number of positive ratings of the video
  164. dislike_count: Number of negative ratings of the video
  165. repost_count: Number of reposts of the video
  166. average_rating: Average rating given by users, the scale used depends on the webpage
  167. comment_count: Number of comments on the video
  168. comments: A list of comments, each with one or more of the following
  169. properties (all optional, except at least one of text or html):
  170. * "author" - human-readable name of the comment author
  171. * "author_id" - user ID of the comment author
  172. * "id" - Comment ID
  173. * "html" - Comment as HTML
  174. * "text" - Plain text of the comment
  175. * "timestamp" - UNIX timestamp of comment
  176. * "parent" - ID of the comment this one is replying to.
  177. Set to "root" to indicate that this is a
  178. comment to the original video.
  179. age_limit: Age restriction for the video, as an integer (years)
  180. webpage_url: The URL to the video webpage; if given to youtube-dl it
  181. should allow getting the same result again. (It will be set
  182. by YoutubeDL if it's missing)
  183. categories: A list of categories that the video falls in, for example
  184. ["Sports", "Berlin"]
  185. tags: A list of tags assigned to the video, e.g. ["sweden", "pop music"]
  186. is_live: True, False, or None (=unknown). Whether this video is a
  187. live stream that goes on instead of a fixed-length video.
  188. start_time: Time in seconds where the reproduction should start, as
  189. specified in the URL.
  190. end_time: Time in seconds where the reproduction should end, as
  191. specified in the URL.
  192. The following fields should only be used when the video belongs to some logical
  193. chapter or section:
  194. chapter: Name or title of the chapter the video belongs to.
  195. chapter_number: Number of the chapter the video belongs to, as an integer.
  196. chapter_id: Id of the chapter the video belongs to, as a unicode string.
  197. The following fields should only be used when the video is an episode of some
  198. series or programme:
  199. series: Title of the series or programme the video episode belongs to.
  200. season: Title of the season the video episode belongs to.
  201. season_number: Number of the season the video episode belongs to, as an integer.
  202. season_id: Id of the season the video episode belongs to, as a unicode string.
  203. episode: Title of the video episode. Unlike mandatory video title field,
  204. this field should denote the exact title of the video episode
  205. without any kind of decoration.
  206. episode_number: Number of the video episode within a season, as an integer.
  207. episode_id: Id of the video episode, as a unicode string.
  208. Unless mentioned otherwise, the fields should be Unicode strings.
  209. Unless mentioned otherwise, None is equivalent to absence of information.
  210. _type "playlist" indicates multiple videos.
  211. There must be a key "entries", which is a list, an iterable, or a PagedList
  212. object, each element of which is a valid dictionary by this specification.
  213. Additionally, playlists can have "title", "description" and "id" attributes
  214. with the same semantics as videos (see above).
  215. _type "multi_video" indicates that there are multiple videos that
  216. form a single show, for example multiple acts of an opera or TV episode.
  217. It must have an entries key like a playlist and contain all the keys
  218. required for a video at the same time.
  219. _type "url" indicates that the video must be extracted from another
  220. location, possibly by a different extractor. Its only required key is:
  221. "url" - the next URL to extract.
  222. The key "ie_key" can be set to the class name (minus the trailing "IE",
  223. e.g. "Youtube") if the extractor class is known in advance.
  224. Additionally, the dictionary may have any properties of the resolved entity
  225. known in advance, for example "title" if the title of the referred video is
  226. known ahead of time.
  227. _type "url_transparent" entities have the same specification as "url", but
  228. indicate that the given additional information is more precise than the one
  229. associated with the resolved URL.
  230. This is useful when a site employs a video service that hosts the video and
  231. its technical metadata, but that video service does not embed a useful
  232. title, description etc.
  233. Subclasses of this one should re-define the _real_initialize() and
  234. _real_extract() methods and define a _VALID_URL regexp.
  235. Probably, they should also be added to the list of extractors.
  236. Finally, the _WORKING attribute should be set to False for broken IEs
  237. in order to warn the users and skip the tests.
  238. """
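As a rough illustration of the contract described in the docstring above, a minimal single-video result could look like the sketch below; every value is invented and only documented keys are used.

info_dict = {
    'id': '4234987',                      # mandatory
    'title': 'Dancing naked mole rats',   # mandatory, unescaped
    'formats': [{                         # ordered from worst to best quality
        'url': 'https://cdn.example.com/video_360p.mp4',  # mandatory per format
        'format_id': 'mp4-360p',
        'ext': 'mp4',
        'height': 360,
        'tbr': 700,
    }, {
        'url': 'https://cdn.example.com/video_1080p.mp4',
        'format_id': 'mp4-1080p',
        'ext': 'mp4',
        'height': 1080,
        'tbr': 4500,
    }],
    'display_id': 'dancing-naked-mole-rats',
    'thumbnail': 'https://cdn.example.com/thumb.jpg',
    'duration': 123.4,
    'upload_date': '20151231',
}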
  239. _ready = False
  240. _downloader = None
  241. _WORKING = True
  242. def __init__(self, downloader=None):
  243. """Constructor. Receives an optional downloader."""
  244. self._ready = False
  245. self.set_downloader(downloader)
  246. @classmethod
  247. def suitable(cls, url):
  248. """Receives a URL and returns True if suitable for this IE."""
  249. # This does not use has/getattr intentionally - we want to know whether
  250. # we have cached the regexp for *this* class, whereas getattr would also
  251. # match the superclass
  252. if '_VALID_URL_RE' not in cls.__dict__:
  253. cls._VALID_URL_RE = re.compile(cls._VALID_URL)
  254. return cls._VALID_URL_RE.match(url) is not None
  255. @classmethod
  256. def _match_id(cls, url):
  257. if '_VALID_URL_RE' not in cls.__dict__:
  258. cls._VALID_URL_RE = re.compile(cls._VALID_URL)
  259. m = cls._VALID_URL_RE.match(url)
  260. assert m
  261. return m.group('id')
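For illustration, a subclass only needs a _VALID_URL containing an 'id' group for suitable() and _match_id() to work; the pattern and URLs below are invented:

class ExampleIE(InfoExtractor):
    # Hypothetical pattern; real extractors define their own _VALID_URL
    _VALID_URL = r'https?://(?:www\.)?example\.com/watch/(?P<id>[0-9]+)'

ExampleIE.suitable('https://example.com/watch/42')    # True
ExampleIE.suitable('https://example.com/about')       # False
ExampleIE._match_id('https://example.com/watch/42')   # '42'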
  262. @classmethod
  263. def working(cls):
  264. """Getter method for _WORKING."""
  265. return cls._WORKING
  266. def initialize(self):
  267. """Initializes an instance (authentication, etc)."""
  268. if not self._ready:
  269. self._real_initialize()
  270. self._ready = True
  271. def extract(self, url):
  272. """Extracts URL information and returns it in a list of dicts."""
  273. try:
  274. self.initialize()
  275. return self._real_extract(url)
  276. except ExtractorError:
  277. raise
  278. except compat_http_client.IncompleteRead as e:
  279. raise ExtractorError('A network error has occurred.', cause=e, expected=True)
  280. except (KeyError, StopIteration) as e:
  281. raise ExtractorError('An extractor error has occurred.', cause=e)
  282. def set_downloader(self, downloader):
  283. """Sets the downloader for this IE."""
  284. self._downloader = downloader
  285. def _real_initialize(self):
  286. """Real initialization process. Redefine in subclasses."""
  287. pass
  288. def _real_extract(self, url):
  289. """Real extraction process. Redefine in subclasses."""
  290. pass
  291. @classmethod
  292. def ie_key(cls):
  293. """A string for getting the InfoExtractor with get_info_extractor"""
  294. return compat_str(cls.__name__[:-2])
  295. @property
  296. def IE_NAME(self):
  297. return compat_str(type(self).__name__[:-2])
  298. def _request_webpage(self, url_or_request, video_id, note=None, errnote=None, fatal=True):
  299. """ Returns the response handle """
  300. if note is None:
  301. self.report_download_webpage(video_id)
  302. elif note is not False:
  303. if video_id is None:
  304. self.to_screen('%s' % (note,))
  305. else:
  306. self.to_screen('%s: %s' % (video_id, note))
  307. try:
  308. return self._downloader.urlopen(url_or_request)
  309. except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
  310. if errnote is False:
  311. return False
  312. if errnote is None:
  313. errnote = 'Unable to download webpage'
  314. errmsg = '%s: %s' % (errnote, error_to_compat_str(err))
  315. if fatal:
  316. raise ExtractorError(errmsg, sys.exc_info()[2], cause=err)
  317. else:
  318. self._downloader.report_warning(errmsg)
  319. return False
  320. def _download_webpage_handle(self, url_or_request, video_id, note=None, errnote=None, fatal=True, encoding=None):
  321. """ Returns a tuple (page content as string, URL handle) """
  322. # Strip hashes from the URL (#1038)
  323. if isinstance(url_or_request, (compat_str, str)):
  324. url_or_request = url_or_request.partition('#')[0]
  325. urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal)
  326. if urlh is False:
  327. assert not fatal
  328. return False
  329. content = self._webpage_read_content(urlh, url_or_request, video_id, note, errnote, fatal, encoding=encoding)
  330. return (content, urlh)
  331. @staticmethod
  332. def _guess_encoding_from_content(content_type, webpage_bytes):
  333. m = re.match(r'[a-zA-Z0-9_.-]+/[a-zA-Z0-9_.-]+\s*;\s*charset=(.+)', content_type)
  334. if m:
  335. encoding = m.group(1)
  336. else:
  337. m = re.search(br'<meta[^>]+charset=[\'"]?([^\'")]+)[ /\'">]',
  338. webpage_bytes[:1024])
  339. if m:
  340. encoding = m.group(1).decode('ascii')
  341. elif webpage_bytes.startswith(b'\xff\xfe'):
  342. encoding = 'utf-16'
  343. else:
  344. encoding = 'utf-8'
  345. return encoding
  346. def _webpage_read_content(self, urlh, url_or_request, video_id, note=None, errnote=None, fatal=True, prefix=None, encoding=None):
  347. content_type = urlh.headers.get('Content-Type', '')
  348. webpage_bytes = urlh.read()
  349. if prefix is not None:
  350. webpage_bytes = prefix + webpage_bytes
  351. if not encoding:
  352. encoding = self._guess_encoding_from_content(content_type, webpage_bytes)
  353. if self._downloader.params.get('dump_intermediate_pages', False):
  354. try:
  355. url = url_or_request.get_full_url()
  356. except AttributeError:
  357. url = url_or_request
  358. self.to_screen('Dumping request to ' + url)
  359. dump = base64.b64encode(webpage_bytes).decode('ascii')
  360. self._downloader.to_screen(dump)
  361. if self._downloader.params.get('write_pages', False):
  362. try:
  363. url = url_or_request.get_full_url()
  364. except AttributeError:
  365. url = url_or_request
  366. basen = '%s_%s' % (video_id, url)
  367. if len(basen) > 240:
  368. h = '___' + hashlib.md5(basen.encode('utf-8')).hexdigest()
  369. basen = basen[:240 - len(h)] + h
  370. raw_filename = basen + '.dump'
  371. filename = sanitize_filename(raw_filename, restricted=True)
  372. self.to_screen('Saving request to ' + filename)
  373. # Working around MAX_PATH limitation on Windows (see
  374. # http://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx)
  375. if os.name == 'nt':
  376. absfilepath = os.path.abspath(filename)
  377. if len(absfilepath) > 259:
  378. filename = '\\\\?\\' + absfilepath
  379. with open(filename, 'wb') as outf:
  380. outf.write(webpage_bytes)
  381. try:
  382. content = webpage_bytes.decode(encoding, 'replace')
  383. except LookupError:
  384. content = webpage_bytes.decode('utf-8', 'replace')
  385. if ('<title>Access to this site is blocked</title>' in content and
  386. 'Websense' in content[:512]):
  387. msg = 'Access to this webpage has been blocked by Websense filtering software in your network.'
  388. blocked_iframe = self._html_search_regex(
  389. r'<iframe src="([^"]+)"', content,
  390. 'Websense information URL', default=None)
  391. if blocked_iframe:
  392. msg += ' Visit %s for more details' % blocked_iframe
  393. raise ExtractorError(msg, expected=True)
  394. if '<title>The URL you requested has been blocked</title>' in content[:512]:
  395. msg = (
  396. 'Access to this webpage has been blocked by Indian censorship. '
  397. 'Use a VPN or proxy server (with --proxy) to route around it.')
  398. block_msg = self._html_search_regex(
  399. r'</h1><p>(.*?)</p>',
  400. content, 'block message', default=None)
  401. if block_msg:
  402. msg += ' (Message: "%s")' % block_msg.replace('\n', ' ')
  403. raise ExtractorError(msg, expected=True)
  404. return content
  405. def _download_webpage(self, url_or_request, video_id, note=None, errnote=None, fatal=True, tries=1, timeout=5, encoding=None):
  406. """ Returns the data of the page as a string """
  407. success = False
  408. try_count = 0
  409. while success is False:
  410. try:
  411. res = self._download_webpage_handle(url_or_request, video_id, note, errnote, fatal, encoding=encoding)
  412. success = True
  413. except compat_http_client.IncompleteRead as e:
  414. try_count += 1
  415. if try_count >= tries:
  416. raise e
  417. self._sleep(timeout, video_id)
  418. if res is False:
  419. return res
  420. else:
  421. content, _ = res
  422. return content
  423. def _download_xml(self, url_or_request, video_id,
  424. note='Downloading XML', errnote='Unable to download XML',
  425. transform_source=None, fatal=True, encoding=None):
  426. """Return the xml as an xml.etree.ElementTree.Element"""
  427. xml_string = self._download_webpage(
  428. url_or_request, video_id, note, errnote, fatal=fatal, encoding=encoding)
  429. if xml_string is False:
  430. return xml_string
  431. if transform_source:
  432. xml_string = transform_source(xml_string)
  433. return compat_etree_fromstring(xml_string.encode('utf-8'))
  434. def _download_json(self, url_or_request, video_id,
  435. note='Downloading JSON metadata',
  436. errnote='Unable to download JSON metadata',
  437. transform_source=None,
  438. fatal=True, encoding=None):
  439. json_string = self._download_webpage(
  440. url_or_request, video_id, note, errnote, fatal=fatal,
  441. encoding=encoding)
  442. if (not fatal) and json_string is False:
  443. return None
  444. return self._parse_json(
  445. json_string, video_id, transform_source=transform_source, fatal=fatal)
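A typical call, sketched with an invented endpoint; with fatal=False a failed download yields None rather than an exception:

data = self._download_json(
    'https://api.example.com/videos/%s' % video_id, video_id,
    note='Downloading video metadata', fatal=False)
title = data.get('title') if data else None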
  446. def _parse_json(self, json_string, video_id, transform_source=None, fatal=True):
  447. if transform_source:
  448. json_string = transform_source(json_string)
  449. try:
  450. return json.loads(json_string)
  451. except ValueError as ve:
  452. errmsg = '%s: Failed to parse JSON ' % video_id
  453. if fatal:
  454. raise ExtractorError(errmsg, cause=ve)
  455. else:
  456. self.report_warning(errmsg + str(ve))
  457. def report_warning(self, msg, video_id=None):
  458. idstr = '' if video_id is None else '%s: ' % video_id
  459. self._downloader.report_warning(
  460. '[%s] %s%s' % (self.IE_NAME, idstr, msg))
  461. def to_screen(self, msg):
  462. """Print msg to screen, prefixing it with '[ie_name]'"""
  463. self._downloader.to_screen('[%s] %s' % (self.IE_NAME, msg))
  464. def report_extraction(self, id_or_name):
  465. """Report information extraction."""
  466. self.to_screen('%s: Extracting information' % id_or_name)
  467. def report_download_webpage(self, video_id):
  468. """Report webpage download."""
  469. self.to_screen('%s: Downloading webpage' % video_id)
  470. def report_age_confirmation(self):
  471. """Report attempt to confirm age."""
  472. self.to_screen('Confirming age')
  473. def report_login(self):
  474. """Report attempt to log in."""
  475. self.to_screen('Logging in')
  476. @staticmethod
  477. def raise_login_required(msg='This video is only available for registered users'):
  478. raise ExtractorError(
  479. '%s. Use --username and --password or --netrc to provide account credentials.' % msg,
  480. expected=True)
  481. @staticmethod
  482. def raise_geo_restricted(msg='This video is not available from your location due to geo restriction'):
  483. raise ExtractorError(
  484. '%s. You might want to use --proxy to workaround.' % msg,
  485. expected=True)
  486. # Methods for following #608
  487. @staticmethod
  488. def url_result(url, ie=None, video_id=None, video_title=None):
  489. """Returns a URL that points to a page that should be processed"""
  490. # TODO: ie should be the class used for getting the info
  491. video_info = {'_type': 'url',
  492. 'url': url,
  493. 'ie_key': ie}
  494. if video_id is not None:
  495. video_info['id'] = video_id
  496. if video_title is not None:
  497. video_info['title'] = video_title
  498. return video_info
  499. @staticmethod
  500. def playlist_result(entries, playlist_id=None, playlist_title=None, playlist_description=None):
  501. """Returns a playlist"""
  502. video_info = {'_type': 'playlist',
  503. 'entries': entries}
  504. if playlist_id:
  505. video_info['id'] = playlist_id
  506. if playlist_title:
  507. video_info['title'] = playlist_title
  508. if playlist_description:
  509. video_info['description'] = playlist_description
  510. return video_info
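A hedged sketch of how _real_extract might combine these helpers; the data-video-url attribute is an assumption about the page markup, not taken from any real site:

def _real_extract(self, url):
    playlist_id = self._match_id(url)
    webpage = self._download_webpage(url, playlist_id)
    entries = [
        self.url_result(video_url, ie='Youtube')
        for video_url in re.findall(r'data-video-url="([^"]+)"', webpage)
    ]
    return self.playlist_result(entries, playlist_id, 'Example playlist')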
  511. def _search_regex(self, pattern, string, name, default=NO_DEFAULT, fatal=True, flags=0, group=None):
  512. """
  513. Perform a regex search on the given string, using a single pattern or a
  514. list of patterns, returning the first matching group.
  515. In case of failure return a default value, emit a warning or raise a
  516. RegexNotFoundError, depending on fatal, specifying the field name.
  517. """
  518. if isinstance(pattern, (str, compat_str, compiled_regex_type)):
  519. mobj = re.search(pattern, string, flags)
  520. else:
  521. for p in pattern:
  522. mobj = re.search(p, string, flags)
  523. if mobj:
  524. break
  525. if not self._downloader.params.get('no_color') and os.name != 'nt' and sys.stderr.isatty():
  526. _name = '\033[0;34m%s\033[0m' % name
  527. else:
  528. _name = name
  529. if mobj:
  530. if group is None:
  531. # return the first matching group
  532. return next(g for g in mobj.groups() if g is not None)
  533. else:
  534. return mobj.group(group)
  535. elif default is not NO_DEFAULT:
  536. return default
  537. elif fatal:
  538. raise RegexNotFoundError('Unable to extract %s' % _name)
  539. else:
  540. self._downloader.report_warning('unable to extract %s' % _name + bug_reports_message())
  541. return None
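Illustrative calls (the patterns and the webpage variable are assumptions, not taken from a real extractor):

# Mandatory field: raises RegexNotFoundError when not found and fatal is True
title = self._search_regex(
    r'<h1 class="title">([^<]+)</h1>', webpage, 'title')
# Optional field: returns the default instead of failing
uploader = self._search_regex(
    r'by <a[^>]+>([^<]+)</a>', webpage, 'uploader', default=None)
# Named group selection via the group argument
video_id = self._search_regex(
    r'data-id=(["\'])(?P<id>\d+)\1', webpage, 'video id', group='id')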
  542. def _html_search_regex(self, pattern, string, name, default=NO_DEFAULT, fatal=True, flags=0, group=None):
  543. """
  544. Like _search_regex, but strips HTML tags and unescapes entities.
  545. """
  546. res = self._search_regex(pattern, string, name, default, fatal, flags, group)
  547. if res:
  548. return clean_html(res).strip()
  549. else:
  550. return res
  551. def _get_login_info(self):
  552. """
  553. Get the login info as (username, password)
  554. It will look in the netrc file using the _NETRC_MACHINE value
  555. If there's no info available, return (None, None)
  556. """
  557. if self._downloader is None:
  558. return (None, None)
  559. username = None
  560. password = None
  561. downloader_params = self._downloader.params
  562. # Attempt to use provided username and password or .netrc data
  563. if downloader_params.get('username', None) is not None:
  564. username = downloader_params['username']
  565. password = downloader_params['password']
  566. elif downloader_params.get('usenetrc', False):
  567. try:
  568. info = netrc.netrc().authenticators(self._NETRC_MACHINE)
  569. if info is not None:
  570. username = info[0]
  571. password = info[2]
  572. else:
  573. raise netrc.NetrcParseError('No authenticators for %s' % self._NETRC_MACHINE)
  574. except (IOError, netrc.NetrcParseError) as err:
  575. self._downloader.report_warning('parsing .netrc: %s' % error_to_compat_str(err))
  576. return (username, password)
  577. def _get_tfa_info(self, note='two-factor verification code'):
  578. """
  579. Get the two-factor authentication info
  580. TODO - asking the user will be required for sms/phone verify
  581. currently just uses the command line option
  582. If there's no info available, return None
  583. """
  584. if self._downloader is None:
  585. return None
  586. downloader_params = self._downloader.params
  587. if downloader_params.get('twofactor', None) is not None:
  588. return downloader_params['twofactor']
  589. return compat_getpass('Type %s and press [Return]: ' % note)
  590. # Helper functions for extracting OpenGraph info
  591. @staticmethod
  592. def _og_regexes(prop):
  593. content_re = r'content=(?:"([^"]+?)"|\'([^\']+?)\'|\s*([^\s"\'=<>`]+?))'
  594. property_re = (r'(?:name|property)=(?:\'og:%(prop)s\'|"og:%(prop)s"|\s*og:%(prop)s\b)'
  595. % {'prop': re.escape(prop)})
  596. template = r'<meta[^>]+?%s[^>]+?%s'
  597. return [
  598. template % (property_re, content_re),
  599. template % (content_re, property_re),
  600. ]
  601. @staticmethod
  602. def _meta_regex(prop):
  603. return r'''(?isx)<meta
  604. (?=[^>]+(?:itemprop|name|property|id|http-equiv)=(["\']?)%s\1)
  605. [^>]+?content=(["\'])(?P<content>.*?)\2''' % re.escape(prop)
  606. def _og_search_property(self, prop, html, name=None, **kargs):
  607. if name is None:
  608. name = 'OpenGraph %s' % prop
  609. escaped = self._search_regex(self._og_regexes(prop), html, name, flags=re.DOTALL, **kargs)
  610. if escaped is None:
  611. return None
  612. return unescapeHTML(escaped)
  613. def _og_search_thumbnail(self, html, **kargs):
  614. return self._og_search_property('image', html, 'thumbnail URL', fatal=False, **kargs)
  615. def _og_search_description(self, html, **kargs):
  616. return self._og_search_property('description', html, fatal=False, **kargs)
  617. def _og_search_title(self, html, **kargs):
  618. return self._og_search_property('title', html, **kargs)
  619. def _og_search_video_url(self, html, name='video url', secure=True, **kargs):
  620. regexes = self._og_regexes('video') + self._og_regexes('video:url')
  621. if secure:
  622. regexes = self._og_regexes('video:secure_url') + regexes
  623. return self._html_search_regex(regexes, html, name, **kargs)
  624. def _og_search_url(self, html, **kargs):
  625. return self._og_search_property('url', html, **kargs)
  626. def _html_search_meta(self, name, html, display_name=None, fatal=False, **kwargs):
  627. if display_name is None:
  628. display_name = name
  629. return self._html_search_regex(
  630. self._meta_regex(name),
  631. html, display_name, fatal=fatal, group='content', **kwargs)
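Typical use of the OpenGraph and meta helpers on a downloaded page; a sketch, assuming the page exposes the usual og:* and author tags:

title = self._og_search_title(webpage)
description = self._og_search_description(webpage)
thumbnail = self._og_search_thumbnail(webpage)
uploader = self._html_search_meta('author', webpage, 'uploader', default=None)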
  632. def _dc_search_uploader(self, html):
  633. return self._html_search_meta('dc.creator', html, 'uploader')
  634. def _rta_search(self, html):
  635. # See http://www.rtalabel.org/index.php?content=howtofaq#single
  636. if re.search(r'(?ix)<meta\s+name="rating"\s+'
  637. r' content="RTA-5042-1996-1400-1577-RTA"',
  638. html):
  639. return 18
  640. return 0
  641. def _media_rating_search(self, html):
  642. # See http://www.tjg-designs.com/WP/metadata-code-examples-adding-metadata-to-your-web-pages/
  643. rating = self._html_search_meta('rating', html)
  644. if not rating:
  645. return None
  646. RATING_TABLE = {
  647. 'safe for kids': 0,
  648. 'general': 8,
  649. '14 years': 14,
  650. 'mature': 17,
  651. 'restricted': 19,
  652. }
  653. return RATING_TABLE.get(rating.lower(), None)
  654. def _family_friendly_search(self, html):
  655. # See http://schema.org/VideoObject
  656. family_friendly = self._html_search_meta('isFamilyFriendly', html)
  657. if not family_friendly:
  658. return None
  659. RATING_TABLE = {
  660. '1': 0,
  661. 'true': 0,
  662. '0': 18,
  663. 'false': 18,
  664. }
  665. return RATING_TABLE.get(family_friendly.lower(), None)
  666. def _twitter_search_player(self, html):
  667. return self._html_search_meta('twitter:player', html,
  668. 'twitter card player')
  669. def _search_json_ld(self, html, video_id, **kwargs):
  670. json_ld = self._search_regex(
  671. r'(?s)<script[^>]+type=(["\'])application/ld\+json\1[^>]*>(?P<json_ld>.+?)</script>',
  672. html, 'JSON-LD', group='json_ld', **kwargs)
  673. if not json_ld:
  674. return {}
  675. return self._json_ld(json_ld, video_id, fatal=kwargs.get('fatal', True))
  676. def _json_ld(self, json_ld, video_id, fatal=True):
  677. if isinstance(json_ld, compat_str):
  678. json_ld = self._parse_json(json_ld, video_id, fatal=fatal)
  679. if not json_ld:
  680. return {}
  681. info = {}
  682. if json_ld.get('@context') == 'http://schema.org':
  683. item_type = json_ld.get('@type')
  684. if item_type == 'TVEpisode':
  685. info.update({
  686. 'episode': unescapeHTML(json_ld.get('name')),
  687. 'episode_number': int_or_none(json_ld.get('episodeNumber')),
  688. 'description': unescapeHTML(json_ld.get('description')),
  689. })
  690. part_of_season = json_ld.get('partOfSeason')
  691. if isinstance(part_of_season, dict) and part_of_season.get('@type') == 'TVSeason':
  692. info['season_number'] = int_or_none(part_of_season.get('seasonNumber'))
  693. part_of_series = json_ld.get('partOfSeries')
  694. if isinstance(part_of_series, dict) and part_of_series.get('@type') == 'TVSeries':
  695. info['series'] = unescapeHTML(part_of_series.get('name'))
  696. elif item_type == 'Article':
  697. info.update({
  698. 'timestamp': parse_iso8601(json_ld.get('datePublished')),
  699. 'title': unescapeHTML(json_ld.get('headline')),
  700. 'description': unescapeHTML(json_ld.get('articleBody')),
  701. })
  702. return dict((k, v) for k, v in info.items() if v is not None)
  703. @staticmethod
  704. def _hidden_inputs(html):
  705. html = re.sub(r'<!--(?:(?!<!--).)*-->', '', html)
  706. hidden_inputs = {}
  707. for input in re.findall(r'(?i)<input([^>]+)>', html):
  708. if not re.search(r'type=(["\'])(?:hidden|submit)\1', input):
  709. continue
  710. name = re.search(r'name=(["\'])(?P<value>.+?)\1', input)
  711. if not name:
  712. continue
  713. value = re.search(r'value=(["\'])(?P<value>.*?)\1', input)
  714. if not value:
  715. continue
  716. hidden_inputs[name.group('value')] = value.group('value')
  717. return hidden_inputs
  718. def _form_hidden_inputs(self, form_id, html):
  719. form = self._search_regex(
  720. r'(?is)<form[^>]+?id=(["\'])%s\1[^>]*>(?P<form>.+?)</form>' % form_id,
  721. html, '%s form' % form_id, group='form')
  722. return self._hidden_inputs(form)
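A hedged sketch of the login flow these helpers support; _LOGIN_URL and the form field names are assumptions and vary per site:

def _login(self):
    username, password = self._get_login_info()
    if username is None:
        return
    login_page = self._download_webpage(
        self._LOGIN_URL, None, 'Downloading login page')
    login_form = self._hidden_inputs(login_page)
    login_form.update({
        'username': username,   # field names depend on the site
        'password': password,
    })
    request = sanitized_Request(
        self._LOGIN_URL,
        compat_urllib_parse.urlencode(login_form).encode('utf-8'))
    self._download_webpage(request, None, 'Logging in')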
  723. def _sort_formats(self, formats, field_preference=None):
  724. if not formats:
  725. raise ExtractorError('No video formats found')
  726. for f in formats:
  727. # Automatically determine tbr when missing based on abr and vbr (improves
  728. # formats sorting in some cases)
  729. if 'tbr' not in f and f.get('abr') is not None and f.get('vbr') is not None:
  730. f['tbr'] = f['abr'] + f['vbr']
  731. def _formats_key(f):
  732. # TODO remove the following workaround
  733. from ..utils import determine_ext
  734. if not f.get('ext') and 'url' in f:
  735. f['ext'] = determine_ext(f['url'])
  736. if isinstance(field_preference, (list, tuple)):
  737. return tuple(f.get(field) if f.get(field) is not None else -1 for field in field_preference)
  738. preference = f.get('preference')
  739. if preference is None:
  740. preference = 0
  741. if f.get('ext') in ['f4f', 'f4m']: # Not yet supported
  742. preference -= 0.5
  743. proto_preference = 0 if determine_protocol(f) in ['http', 'https'] else -0.1
  744. if f.get('vcodec') == 'none': # audio only
  745. if self._downloader.params.get('prefer_free_formats'):
  746. ORDER = ['aac', 'mp3', 'm4a', 'webm', 'ogg', 'opus']
  747. else:
  748. ORDER = ['webm', 'opus', 'ogg', 'mp3', 'aac', 'm4a']
  749. ext_preference = 0
  750. try:
  751. audio_ext_preference = ORDER.index(f['ext'])
  752. except ValueError:
  753. audio_ext_preference = -1
  754. else:
  755. if self._downloader.params.get('prefer_free_formats'):
  756. ORDER = ['flv', 'mp4', 'webm']
  757. else:
  758. ORDER = ['webm', 'flv', 'mp4']
  759. try:
  760. ext_preference = ORDER.index(f['ext'])
  761. except ValueError:
  762. ext_preference = -1
  763. audio_ext_preference = 0
  764. return (
  765. preference,
  766. f.get('language_preference') if f.get('language_preference') is not None else -1,
  767. f.get('quality') if f.get('quality') is not None else -1,
  768. f.get('tbr') if f.get('tbr') is not None else -1,
  769. f.get('filesize') if f.get('filesize') is not None else -1,
  770. f.get('vbr') if f.get('vbr') is not None else -1,
  771. f.get('height') if f.get('height') is not None else -1,
  772. f.get('width') if f.get('width') is not None else -1,
  773. proto_preference,
  774. ext_preference,
  775. f.get('abr') if f.get('abr') is not None else -1,
  776. audio_ext_preference,
  777. f.get('fps') if f.get('fps') is not None else -1,
  778. f.get('filesize_approx') if f.get('filesize_approx') is not None else -1,
  779. f.get('source_preference') if f.get('source_preference') is not None else -1,
  780. f.get('format_id') if f.get('format_id') is not None else '',
  781. )
  782. formats.sort(key=_formats_key)
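The usual pattern at the end of _real_extract, sketched with an assumed site-specific 'sources' list:

formats = []
for source in sources:
    formats.append({
        'url': source['file'],
        'format_id': source.get('label'),
        'height': int_or_none(source.get('height')),
        'tbr': int_or_none(source.get('bitrate')),
    })
self._sort_formats(formats)   # orders worst to best using the key above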
  783. def _check_formats(self, formats, video_id):
  784. if formats:
  785. formats[:] = filter(
  786. lambda f: self._is_valid_url(
  787. f['url'], video_id,
  788. item='%s video format' % f.get('format_id') if f.get('format_id') else 'video'),
  789. formats)
  790. def _is_valid_url(self, url, video_id, item='video'):
  791. url = self._proto_relative_url(url, scheme='http:')
  792. # For now assume non HTTP(S) URLs always valid
  793. if not (url.startswith('http://') or url.startswith('https://')):
  794. return True
  795. try:
  796. self._request_webpage(url, video_id, 'Checking %s URL' % item)
  797. return True
  798. except ExtractorError as e:
  799. if isinstance(e.cause, compat_urllib_error.URLError):
  800. self.to_screen(
  801. '%s: %s URL is invalid, skipping' % (video_id, item))
  802. return False
  803. raise
  804. def http_scheme(self):
  805. """ Either "http:" or "https:", depending on the user's preferences """
  806. return (
  807. 'http:'
  808. if self._downloader.params.get('prefer_insecure', False)
  809. else 'https:')
  810. def _proto_relative_url(self, url, scheme=None):
  811. if url is None:
  812. return url
  813. if url.startswith('//'):
  814. if scheme is None:
  815. scheme = self.http_scheme()
  816. return scheme + url
  817. else:
  818. return url
  819. def _sleep(self, timeout, video_id, msg_template=None):
  820. if msg_template is None:
  821. msg_template = '%(video_id)s: Waiting for %(timeout)s seconds'
  822. msg = msg_template % {'video_id': video_id, 'timeout': timeout}
  823. self.to_screen(msg)
  824. time.sleep(timeout)
  825. def _extract_f4m_formats(self, manifest_url, video_id, preference=None, f4m_id=None,
  826. transform_source=lambda s: fix_xml_ampersands(s).strip(),
  827. fatal=True):
  828. manifest = self._download_xml(
  829. manifest_url, video_id, 'Downloading f4m manifest',
  830. 'Unable to download f4m manifest',
  831. # Some manifests may be malformed, e.g. prosiebensat1 generated manifests
  832. # (see https://github.com/rg3/youtube-dl/issues/6215#issuecomment-121704244)
  833. transform_source=transform_source,
  834. fatal=fatal)
  835. if manifest is False:
  836. return []
  837. formats = []
  838. manifest_version = '1.0'
  839. media_nodes = manifest.findall('{http://ns.adobe.com/f4m/1.0}media')
  840. if not media_nodes:
  841. manifest_version = '2.0'
  842. media_nodes = manifest.findall('{http://ns.adobe.com/f4m/2.0}media')
  843. base_url = xpath_text(
  844. manifest, ['{http://ns.adobe.com/f4m/1.0}baseURL', '{http://ns.adobe.com/f4m/2.0}baseURL'],
  845. 'base URL', default=None)
  846. if base_url:
  847. base_url = base_url.strip()
  848. for i, media_el in enumerate(media_nodes):
  849. if manifest_version == '2.0':
  850. media_url = media_el.attrib.get('href') or media_el.attrib.get('url')
  851. if not media_url:
  852. continue
  853. manifest_url = (
  854. media_url if media_url.startswith('http://') or media_url.startswith('https://')
  855. else ((base_url or '/'.join(manifest_url.split('/')[:-1])) + '/' + media_url))
  856. # If media_url is itself an f4m manifest, do the recursive extraction,
  857. # since bitrates in the parent manifest (this one) and in the media_url
  858. # manifest may differ, making it impossible to resolve the format by the
  859. # requested bitrate in the f4m downloader
  860. if determine_ext(manifest_url) == 'f4m':
  861. formats.extend(self._extract_f4m_formats(
  862. manifest_url, video_id, preference, f4m_id, fatal=fatal))
  863. continue
  864. tbr = int_or_none(media_el.attrib.get('bitrate'))
  865. formats.append({
  866. 'format_id': '-'.join(filter(None, [f4m_id, compat_str(i if tbr is None else tbr)])),
  867. 'url': manifest_url,
  868. 'ext': 'flv',
  869. 'tbr': tbr,
  870. 'width': int_or_none(media_el.attrib.get('width')),
  871. 'height': int_or_none(media_el.attrib.get('height')),
  872. 'preference': preference,
  873. })
  874. self._sort_formats(formats)
  875. return formats
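Callers normally pass the manifest URL straight through; a minimal sketch with an assumed f4m_url variable:

formats = self._extract_f4m_formats(
    f4m_url, video_id, f4m_id='hds', fatal=False)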
  876. def _extract_m3u8_formats(self, m3u8_url, video_id, ext=None,
  877. entry_protocol='m3u8', preference=None,
  878. m3u8_id=None, note=None, errnote=None,
  879. fatal=True):
  880. formats = [{
  881. 'format_id': '-'.join(filter(None, [m3u8_id, 'meta'])),
  882. 'url': m3u8_url,
  883. 'ext': ext,
  884. 'protocol': 'm3u8',
  885. 'preference': preference - 1 if preference else -1,
  886. 'resolution': 'multiple',
  887. 'format_note': 'Quality selection URL',
  888. }]
  889. format_url = lambda u: (
  890. u
  891. if re.match(r'^https?://', u)
  892. else compat_urlparse.urljoin(m3u8_url, u))
  893. res = self._download_webpage_handle(
  894. m3u8_url, video_id,
  895. note=note or 'Downloading m3u8 information',
  896. errnote=errnote or 'Failed to download m3u8 information',
  897. fatal=fatal)
  898. if res is False:
  899. return []
  900. m3u8_doc, urlh = res
  901. m3u8_url = urlh.geturl()
  902. # A Media Playlist Tag MUST NOT appear in a Master Playlist
  903. # https://tools.ietf.org/html/draft-pantos-http-live-streaming-17#section-4.3.3
  904. # The EXT-X-TARGETDURATION tag is REQUIRED for every M3U8 Media Playlist
  905. # https://tools.ietf.org/html/draft-pantos-http-live-streaming-17#section-4.3.3.1
  906. if '#EXT-X-TARGETDURATION' in m3u8_doc:
  907. return [{
  908. 'url': m3u8_url,
  909. 'format_id': m3u8_id,
  910. 'ext': ext,
  911. 'protocol': entry_protocol,
  912. 'preference': preference,
  913. }]
  914. last_info = None
  915. last_media = None
  916. kv_rex = re.compile(
  917. r'(?P<key>[a-zA-Z_-]+)=(?P<val>"[^"]+"|[^",]+)(?:,|$)')
  918. for line in m3u8_doc.splitlines():
  919. if line.startswith('#EXT-X-STREAM-INF:'):
  920. last_info = {}
  921. for m in kv_rex.finditer(line):
  922. v = m.group('val')
  923. if v.startswith('"'):
  924. v = v[1:-1]
  925. last_info[m.group('key')] = v
  926. elif line.startswith('#EXT-X-MEDIA:'):
  927. last_media = {}
  928. for m in kv_rex.finditer(line):
  929. v = m.group('val')
  930. if v.startswith('"'):
  931. v = v[1:-1]
  932. last_media[m.group('key')] = v
  933. elif line.startswith('#') or not line.strip():
  934. continue
  935. else:
  936. if last_info is None:
  937. formats.append({'url': format_url(line)})
  938. continue
  939. tbr = int_or_none(last_info.get('BANDWIDTH'), scale=1000)
  940. format_id = []
  941. if m3u8_id:
  942. format_id.append(m3u8_id)
  943. last_media_name = last_media.get('NAME') if last_media and last_media.get('TYPE') != 'SUBTITLES' else None
  944. format_id.append(last_media_name if last_media_name else '%d' % (tbr if tbr else len(formats)))
  945. f = {
  946. 'format_id': '-'.join(format_id),
  947. 'url': format_url(line.strip()),
  948. 'tbr': tbr,
  949. 'ext': ext,
  950. 'protocol': entry_protocol,
  951. 'preference': preference,
  952. }
  953. codecs = last_info.get('CODECS')
  954. if codecs:
  955. # TODO: the video codec does not always necessarily go first
  956. va_codecs = codecs.split(',')
  957. if va_codecs[0]:
  958. f['vcodec'] = va_codecs[0]
  959. if len(va_codecs) > 1 and va_codecs[1]:
  960. f['acodec'] = va_codecs[1]
  961. resolution = last_info.get('RESOLUTION')
  962. if resolution:
  963. width_str, height_str = resolution.split('x')
  964. f['width'] = int(width_str)
  965. f['height'] = int(height_str)
  966. if last_media is not None:
  967. f['m3u8_media'] = last_media
  968. last_media = None
  969. formats.append(f)
  970. last_info = {}
  971. self._sort_formats(formats)
  972. return formats
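A minimal sketch of extracting HLS formats from a known master playlist URL (the m3u8_url variable is assumed):

formats = self._extract_m3u8_formats(
    m3u8_url, video_id, ext='mp4', entry_protocol='m3u8_native',
    m3u8_id='hls', fatal=False)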
  973. @staticmethod
  974. def _xpath_ns(path, namespace=None):
  975. if not namespace:
  976. return path
  977. out = []
  978. for c in path.split('/'):
  979. if not c or c == '.':
  980. out.append(c)
  981. else:
  982. out.append('{%s}%s' % (namespace, c))
  983. return '/'.join(out)
  984. def _extract_smil_formats(self, smil_url, video_id, fatal=True, f4m_params=None):
  985. smil = self._download_smil(smil_url, video_id, fatal=fatal)
  986. if smil is False:
  987. assert not fatal
  988. return []
  989. namespace = self._parse_smil_namespace(smil)
  990. return self._parse_smil_formats(
  991. smil, smil_url, video_id, namespace=namespace, f4m_params=f4m_params)
  992. def _extract_smil_info(self, smil_url, video_id, fatal=True, f4m_params=None):
  993. smil = self._download_smil(smil_url, video_id, fatal=fatal)
  994. if smil is False:
  995. return {}
  996. return self._parse_smil(smil, smil_url, video_id, f4m_params=f4m_params)
  997. def _download_smil(self, smil_url, video_id, fatal=True):
  998. return self._download_xml(
  999. smil_url, video_id, 'Downloading SMIL file',
  1000. 'Unable to download SMIL file', fatal=fatal)
  1001. def _parse_smil(self, smil, smil_url, video_id, f4m_params=None):
  1002. namespace = self._parse_smil_namespace(smil)
  1003. formats = self._parse_smil_formats(
  1004. smil, smil_url, video_id, namespace=namespace, f4m_params=f4m_params)
  1005. subtitles = self._parse_smil_subtitles(smil, namespace=namespace)
  1006. video_id = os.path.splitext(url_basename(smil_url))[0]
  1007. title = None
  1008. description = None
  1009. upload_date = None
  1010. for meta in smil.findall(self._xpath_ns('./head/meta', namespace)):
  1011. name = meta.attrib.get('name')
  1012. content = meta.attrib.get('content')
  1013. if not name or not content:
  1014. continue
  1015. if not title and name == 'title':
  1016. title = content
  1017. elif not description and name in ('description', 'abstract'):
  1018. description = content
  1019. elif not upload_date and name == 'date':
  1020. upload_date = unified_strdate(content)
  1021. thumbnails = [{
  1022. 'id': image.get('type'),
  1023. 'url': image.get('src'),
  1024. 'width': int_or_none(image.get('width')),
  1025. 'height': int_or_none(image.get('height')),
  1026. } for image in smil.findall(self._xpath_ns('.//image', namespace)) if image.get('src')]
  1027. return {
  1028. 'id': video_id,
  1029. 'title': title or video_id,
  1030. 'description': description,
  1031. 'upload_date': upload_date,
  1032. 'thumbnails': thumbnails,
  1033. 'formats': formats,
  1034. 'subtitles': subtitles,
  1035. }
  1036. def _parse_smil_namespace(self, smil):
  1037. return self._search_regex(
  1038. r'(?i)^{([^}]+)?}smil$', smil.tag, 'namespace', default=None)
  1039. def _parse_smil_formats(self, smil, smil_url, video_id, namespace=None, f4m_params=None, transform_rtmp_url=None):
  1040. base = smil_url
  1041. for meta in smil.findall(self._xpath_ns('./head/meta', namespace)):
  1042. b = meta.get('base') or meta.get('httpBase')
  1043. if b:
  1044. base = b
  1045. break
  1046. formats = []
  1047. rtmp_count = 0
  1048. http_count = 0
  1049. m3u8_count = 0
  1050. videos = smil.findall(self._xpath_ns('.//video', namespace))
  1051. for video in videos:
  1052. src = video.get('src')
  1053. if not src:
  1054. continue
  1055. bitrate = float_or_none(video.get('system-bitrate') or video.get('systemBitrate'), 1000)
  1056. filesize = int_or_none(video.get('size') or video.get('fileSize'))
  1057. width = int_or_none(video.get('width'))
  1058. height = int_or_none(video.get('height'))
  1059. proto = video.get('proto')
  1060. ext = video.get('ext')
  1061. src_ext = determine_ext(src)
  1062. streamer = video.get('streamer') or base
  1063. if proto == 'rtmp' or streamer.startswith('rtmp'):
  1064. rtmp_count += 1
  1065. formats.append({
  1066. 'url': streamer,
  1067. 'play_path': src,
  1068. 'ext': 'flv',
  1069. 'format_id': 'rtmp-%d' % (rtmp_count if bitrate is None else bitrate),
  1070. 'tbr': bitrate,
  1071. 'filesize': filesize,
  1072. 'width': width,
  1073. 'height': height,
  1074. })
  1075. if transform_rtmp_url:
  1076. streamer, src = transform_rtmp_url(streamer, src)
  1077. formats[-1].update({
  1078. 'url': streamer,
  1079. 'play_path': src,
  1080. })
  1081. continue
  1082. src_url = src if src.startswith('http') else compat_urlparse.urljoin(base, src)
  1083. if proto == 'm3u8' or src_ext == 'm3u8':
  1084. m3u8_formats = self._extract_m3u8_formats(
  1085. src_url, video_id, ext or 'mp4', m3u8_id='hls', fatal=False)
  1086. if len(m3u8_formats) == 1:
  1087. m3u8_count += 1
  1088. m3u8_formats[0].update({
  1089. 'format_id': 'hls-%d' % (m3u8_count if bitrate is None else bitrate),
  1090. 'tbr': bitrate,
  1091. 'width': width,
  1092. 'height': height,
  1093. })
  1094. formats.extend(m3u8_formats)
  1095. continue
  1096. if src_ext == 'f4m':
  1097. f4m_url = src_url
  1098. if not f4m_params:
  1099. f4m_params = {
  1100. 'hdcore': '3.2.0',
  1101. 'plugin': 'flowplayer-3.2.0.1',
  1102. }
  1103. f4m_url += '&' if '?' in f4m_url else '?'
  1104. f4m_url += compat_urllib_parse.urlencode(f4m_params)
  1105. formats.extend(self._extract_f4m_formats(f4m_url, video_id, f4m_id='hds', fatal=False))
  1106. continue
  1107. if src_url.startswith('http') and self._is_valid_url(src, video_id):
  1108. http_count += 1
  1109. formats.append({
  1110. 'url': src_url,
  1111. 'ext': ext or src_ext or 'flv',
  1112. 'format_id': 'http-%d' % (bitrate or http_count),
  1113. 'tbr': bitrate,
  1114. 'filesize': filesize,
  1115. 'width': width,
  1116. 'height': height,
  1117. })
  1118. continue
  1119. self._sort_formats(formats)
  1120. return formats
  1121. def _parse_smil_subtitles(self, smil, namespace=None, subtitles_lang='en'):
  1122. subtitles = {}
  1123. for num, textstream in enumerate(smil.findall(self._xpath_ns('.//textstream', namespace))):
  1124. src = textstream.get('src')
  1125. if not src:
  1126. continue
  1127. ext = textstream.get('ext') or determine_ext(src)
  1128. if not ext:
  1129. type_ = textstream.get('type')
  1130. SUBTITLES_TYPES = {
  1131. 'text/vtt': 'vtt',
  1132. 'text/srt': 'srt',
  1133. 'application/smptett+xml': 'tt',
  1134. }
  1135. if type_ in SUBTITLES_TYPES:
  1136. ext = SUBTITLES_TYPES[type_]
  1137. lang = textstream.get('systemLanguage') or textstream.get('systemLanguageName') or textstream.get('lang') or subtitles_lang
  1138. subtitles.setdefault(lang, []).append({
  1139. 'url': src,
  1140. 'ext': ext,
  1141. })
  1142. return subtitles
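The returned mapping follows the subtitles layout described in the class docstring; an illustrative value:

subtitles = {
    'en': [{'url': 'https://cdn.example.com/subs.en.vtt', 'ext': 'vtt'}],
    'de': [{'url': 'https://cdn.example.com/subs.de.srt', 'ext': 'srt'}],
}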
  1143. def _extract_xspf_playlist(self, playlist_url, playlist_id, fatal=True):
  1144. xspf = self._download_xml(
  1145. playlist_url, playlist_id, 'Downloading xspf playlist',
  1146. 'Unable to download xspf manifest', fatal=fatal)
  1147. if xspf is False:
  1148. return []
  1149. return self._parse_xspf(xspf, playlist_id)

    def _parse_xspf(self, playlist, playlist_id):
        NS_MAP = {
            'xspf': 'http://xspf.org/ns/0/',
            's1': 'http://static.streamone.nl/player/ns/0',
        }

        entries = []
        for track in playlist.findall(xpath_with_ns('./xspf:trackList/xspf:track', NS_MAP)):
            title = xpath_text(
                track, xpath_with_ns('./xspf:title', NS_MAP), 'title', default=playlist_id)
            description = xpath_text(
                track, xpath_with_ns('./xspf:annotation', NS_MAP), 'description')
            thumbnail = xpath_text(
                track, xpath_with_ns('./xspf:image', NS_MAP), 'thumbnail')
            duration = float_or_none(
                xpath_text(track, xpath_with_ns('./xspf:duration', NS_MAP), 'duration'), 1000)

            formats = [{
                'url': location.text,
                'format_id': location.get(xpath_with_ns('s1:label', NS_MAP)),
                'width': int_or_none(location.get(xpath_with_ns('s1:width', NS_MAP))),
                'height': int_or_none(location.get(xpath_with_ns('s1:height', NS_MAP))),
            } for location in track.findall(xpath_with_ns('./xspf:location', NS_MAP))]
            self._sort_formats(formats)

            entries.append({
                'id': playlist_id,
                'title': title,
                'description': description,
                'thumbnail': thumbnail,
                'duration': duration,
                'formats': formats,
            })
        return entries
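    # Illustration (hypothetical XSPF snippet): a track like
    #     <track xmlns="http://xspf.org/ns/0/">
    #       <title>Sample</title>
    #       <duration>60000</duration>
    #       <location>http://example.com/sample.mp4</location>
    #     </track>
    # yields one entry titled 'Sample' with duration 60.0 (milliseconds scaled
    # by 1000) and a single format whose URL is the <location> text.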

    def _extract_mpd_formats(self, mpd_url, video_id, mpd_id=None, note=None, errnote=None, fatal=True, formats_dict={}):
        res = self._download_webpage_handle(
            mpd_url, video_id,
            note=note or 'Downloading MPD manifest',
            errnote=errnote or 'Failed to download MPD manifest',
            fatal=fatal)
        if res is False:
            return []
        mpd, urlh = res
        mpd_base_url = re.match(r'https?://.+/', urlh.geturl()).group()

        return self._parse_mpd_formats(
            compat_etree_fromstring(mpd.encode('utf-8')), mpd_id, mpd_base_url, formats_dict=formats_dict)
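    # Usage sketch (hedged, placeholder URL): extractors pass the manifest URL
    # here; the base URL for relative segment paths is derived from the final,
    # post-redirect manifest URL (urlh.geturl()).
    #
    #     formats = self._extract_mpd_formats(
    #         'http://example.com/stream.mpd', video_id, mpd_id='dash', fatal=False)
    #     self._sort_formats(formats)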

    def _parse_mpd_formats(self, mpd_doc, mpd_id=None, mpd_base_url='', formats_dict={}):
        if mpd_doc.get('type') == 'dynamic':
            return []

        namespace = self._search_regex(r'(?i)^{([^}]+)?}MPD$', mpd_doc.tag, 'namespace', default=None)

        def _add_ns(path):
            return self._xpath_ns(path, namespace)

        def is_drm_protected(element):
            return element.find(_add_ns('ContentProtection')) is not None

        def extract_multisegment_info(element, ms_parent_info):
            ms_info = ms_parent_info.copy()
            segment_list = element.find(_add_ns('SegmentList'))
            if segment_list is not None:
                segment_urls_e = segment_list.findall(_add_ns('SegmentURL'))
                if segment_urls_e:
                    ms_info['segment_urls'] = [segment.attrib['media'] for segment in segment_urls_e]
                initialization = segment_list.find(_add_ns('Initialization'))
                if initialization is not None:
                    ms_info['initialization_url'] = initialization.attrib['sourceURL']
            else:
                segment_template = element.find(_add_ns('SegmentTemplate'))
                if segment_template is not None:
                    start_number = segment_template.get('startNumber')
                    if start_number:
                        ms_info['start_number'] = int(start_number)
                    segment_timeline = segment_template.find(_add_ns('SegmentTimeline'))
                    if segment_timeline is not None:
                        s_e = segment_timeline.findall(_add_ns('S'))
                        if s_e:
                            ms_info['total_number'] = 0
                            for s in s_e:
                                ms_info['total_number'] += 1 + int(s.get('r', '0'))
                    else:
                        timescale = segment_template.get('timescale')
                        if timescale:
                            ms_info['timescale'] = int(timescale)
                        segment_duration = segment_template.get('duration')
                        if segment_duration:
                            ms_info['segment_duration'] = int(segment_duration)
                    media_template = segment_template.get('media')
                    if media_template:
                        ms_info['media_template'] = media_template
                    initialization = segment_template.get('initialization')
                    if initialization:
                        ms_info['initialization_url'] = initialization
                    else:
                        initialization = segment_template.find(_add_ns('Initialization'))
                        if initialization is not None:
                            ms_info['initialization_url'] = initialization.attrib['sourceURL']
            return ms_info
        mpd_duration = parse_duration(mpd_doc.get('mediaPresentationDuration'))
        formats = []
        for period in mpd_doc.findall(_add_ns('Period')):
            period_duration = parse_duration(period.get('duration')) or mpd_duration
            period_ms_info = extract_multisegment_info(period, {
                'start_number': 1,
                'timescale': 1,
            })
            for adaptation_set in period.findall(_add_ns('AdaptationSet')):
                if is_drm_protected(adaptation_set):
                    continue
                adaption_set_ms_info = extract_multisegment_info(adaptation_set, period_ms_info)
                for representation in adaptation_set.findall(_add_ns('Representation')):
                    if is_drm_protected(representation):
                        continue
                    representation_attrib = adaptation_set.attrib.copy()
                    representation_attrib.update(representation.attrib)
                    mime_type = representation_attrib.get('mimeType')
                    content_type = mime_type.split('/')[0] if mime_type else representation_attrib.get('contentType')
                    if content_type == 'text':
                        # TODO implement WebVTT downloading
                        pass
                    elif content_type == 'video' or content_type == 'audio':
                        base_url = ''
                        for element in (representation, adaptation_set, period, mpd_doc):
                            base_url_e = element.find(_add_ns('BaseURL'))
                            if base_url_e is not None:
                                base_url = base_url_e.text + base_url
                                if re.match(r'^https?://', base_url):
                                    break
                        if not re.match(r'^https?://', base_url):
                            base_url = mpd_base_url + base_url
                        representation_id = representation_attrib.get('id')
                        lang = representation_attrib.get('lang')
                        f = {
                            'format_id': mpd_id or representation_id,
                            'url': base_url,
                            'width': int_or_none(representation_attrib.get('width')),
                            'height': int_or_none(representation_attrib.get('height')),
                            'tbr': int_or_none(representation_attrib.get('bandwidth'), 1000),
                            'asr': int_or_none(representation_attrib.get('audioSamplingRate')),
                            'fps': int_or_none(representation_attrib.get('frameRate')),
                            'vcodec': 'none' if content_type == 'audio' else representation_attrib.get('codecs'),
                            'acodec': 'none' if content_type == 'video' else representation_attrib.get('codecs'),
                            'language': lang if lang not in ('mul', 'und', 'zxx', 'mis') else None,
                            'format_note': 'DASH %s' % content_type,
                        }
                        representation_ms_info = extract_multisegment_info(representation, adaption_set_ms_info)
                        if 'segment_urls' not in representation_ms_info and 'media_template' in representation_ms_info:
                            if 'total_number' not in representation_ms_info and 'segment_duration' in representation_ms_info:
                                segment_duration = float(representation_ms_info['segment_duration']) / float(representation_ms_info['timescale'])
                                representation_ms_info['total_number'] = int(math.ceil(float(period_duration) / segment_duration))
                            media_template = representation_ms_info['media_template']
                            media_template = media_template.replace('$RepresentationID$', representation_id)
                            media_template = re.sub(r'\$(Number|Bandwidth)(?:%(0\d+)d)?\$', r'%(\1)\2d', media_template)
                            media_template = media_template.replace('$$', '$')
                            representation_ms_info['segment_urls'] = [
                                media_template % {
                                    'Number': segment_number,
                                    'Bandwidth': representation_attrib.get('bandwidth'),
                                } for segment_number in range(
                                    representation_ms_info['start_number'],
                                    representation_ms_info['total_number'] + representation_ms_info['start_number'])]
                        if 'segment_urls' in representation_ms_info:
                            f.update({
                                'segment_urls': representation_ms_info['segment_urls'],
                                'protocol': 'http_dash_segments',
                            })
                            if 'initialization_url' in representation_ms_info:
                                initialization_url = representation_ms_info['initialization_url'].replace('$RepresentationID$', representation_id)
                                f.update({
                                    'initialization_url': initialization_url,
                                })
                                if not f.get('url'):
                                    f['url'] = initialization_url
                        try:
                            existing_format = next(
                                fo for fo in formats
                                if fo['format_id'] == representation_id)
                        except StopIteration:
                            full_info = formats_dict.get(representation_id, {}).copy()
                            full_info.update(f)
                            formats.append(full_info)
                        else:
                            existing_format.update(f)
                    else:
                        self.report_warning('Unknown MIME type %s in DASH manifest' % mime_type)
        self._sort_formats(formats)
        return formats
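    # Worked example (hypothetical manifest values): with
    #     media_template = 'seg_$RepresentationID$_$Number%05d$.m4s'
    #     representation_id = 'video1', start_number = 1, total_number = 3
    # the template becomes 'seg_video1_%(Number)05d.m4s' and expands to the
    # segment URLs seg_video1_00001.m4s, seg_video1_00002.m4s and
    # seg_video1_00003.m4s (these are typically relative and are resolved
    # against the format URL at download time).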

    def _live_title(self, name):
        """ Generate the title for a live video """
        now = datetime.datetime.now()
        now_str = now.strftime("%Y-%m-%d %H:%M")
        return name + ' ' + now_str
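    # Example: _live_title('Some stream') returns something like
    # 'Some stream 2016-02-14 12:30' (the current local date and time appended).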

    def _int(self, v, name, fatal=False, **kwargs):
        res = int_or_none(v, **kwargs)
        if 'get_attr' in kwargs:
            print(getattr(v, kwargs['get_attr']))
        if res is None:
            msg = 'Failed to extract %s: Could not parse value %r' % (name, v)
            if fatal:
                raise ExtractorError(msg)
            else:
                self._downloader.report_warning(msg)
        return res

    def _float(self, v, name, fatal=False, **kwargs):
        res = float_or_none(v, **kwargs)
        if res is None:
            msg = 'Failed to extract %s: Could not parse value %r' % (name, v)
            if fatal:
                raise ExtractorError(msg)
            else:
                self._downloader.report_warning(msg)
        return res
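    # Example: self._int('1080', 'height') -> 1080, while self._int(None, 'height')
    # returns None and reports 'Failed to extract height: Could not parse value None'
    # as a warning (or raises ExtractorError when fatal=True). _float() behaves the
    # same way for floating point values.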

    def _set_cookie(self, domain, name, value, expire_time=None):
        cookie = compat_cookiejar.Cookie(
            0, name, value, None, None, domain, None,
            None, '/', True, False, expire_time, '', None, None, None)
        self._downloader.cookiejar.set_cookie(cookie)

    def _get_cookies(self, url):
        """ Return a compat_cookies.SimpleCookie with the cookies for the url """
        req = sanitized_Request(url)
        self._downloader.cookiejar.add_cookie_header(req)
        return compat_cookies.SimpleCookie(req.get_header('Cookie'))
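    # Usage sketch (hypothetical domain and values): an extractor can persist a
    # cookie and later read cookies back for a URL:
    #     self._set_cookie('example.com', 'session', 'abc123')
    #     cookies = self._get_cookies('http://example.com/video/1')
    #     session = cookies.get('session')  # a Morsel instance, or None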

    def get_testcases(self, include_onlymatching=False):
        t = getattr(self, '_TEST', None)
        if t:
            assert not hasattr(self, '_TESTS'), \
                '%s has _TEST and _TESTS' % type(self).__name__
            tests = [t]
        else:
            tests = getattr(self, '_TESTS', [])
        for t in tests:
            if not include_onlymatching and t.get('only_matching', False):
                continue
            t['name'] = type(self).__name__[:-len('IE')]
            yield t

    def is_suitable(self, age_limit):
        """ Test whether the extractor is generally suitable for the given
        age limit (i.e. pornographic sites are not, all others usually are) """

        any_restricted = False
        for tc in self.get_testcases(include_onlymatching=False):
            if 'playlist' in tc:
                tc = tc['playlist'][0]
            is_restricted = age_restricted(
                tc.get('info_dict', {}).get('age_limit'), age_limit)
            if not is_restricted:
                return True
            any_restricted = any_restricted or is_restricted
        return not any_restricted

    def extract_subtitles(self, *args, **kwargs):
        if (self._downloader.params.get('writesubtitles', False) or
                self._downloader.params.get('listsubtitles')):
            return self._get_subtitles(*args, **kwargs)
        return {}

    def _get_subtitles(self, *args, **kwargs):
        raise NotImplementedError("This method must be implemented by subclasses")

    @staticmethod
    def _merge_subtitle_items(subtitle_list1, subtitle_list2):
        """ Merge subtitle items for one language. Items with duplicated URLs
        will be dropped. """
        list1_urls = set([item['url'] for item in subtitle_list1])
        ret = list(subtitle_list1)
        ret.extend([item for item in subtitle_list2 if item['url'] not in list1_urls])
        return ret

    @classmethod
    def _merge_subtitles(cls, subtitle_dict1, subtitle_dict2):
        """ Merge two subtitle dictionaries, language by language. """
        ret = dict(subtitle_dict1)
        for lang in subtitle_dict2:
            ret[lang] = cls._merge_subtitle_items(subtitle_dict1.get(lang, []), subtitle_dict2[lang])
        return ret
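    # Illustration (made-up URLs): merging
    #     {'en': [{'url': 'http://a/en.vtt', 'ext': 'vtt'}]}
    # with
    #     {'en': [{'url': 'http://a/en.vtt', 'ext': 'vtt'},
    #             {'url': 'http://b/en.srt', 'ext': 'srt'}]}
    # drops the duplicated URL and yields
    #     {'en': [{'url': 'http://a/en.vtt', 'ext': 'vtt'},
    #             {'url': 'http://b/en.srt', 'ext': 'srt'}]}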

    def extract_automatic_captions(self, *args, **kwargs):
        if (self._downloader.params.get('writeautomaticsub', False) or
                self._downloader.params.get('listsubtitles')):
            return self._get_automatic_captions(*args, **kwargs)
        return {}

    def _get_automatic_captions(self, *args, **kwargs):
        raise NotImplementedError("This method must be implemented by subclasses")


class SearchInfoExtractor(InfoExtractor):
    """
    Base class for paged search queries extractors.
    They accept URLs in the format _SEARCH_KEY(|all|[0-9]):{query}
    Instances should define _SEARCH_KEY and _MAX_RESULTS.
    """

    @classmethod
    def _make_valid_url(cls):
        return r'%s(?P<prefix>|[1-9][0-9]*|all):(?P<query>[\s\S]+)' % cls._SEARCH_KEY

    @classmethod
    def suitable(cls, url):
        return re.match(cls._make_valid_url(), url) is not None

    def _real_extract(self, query):
        mobj = re.match(self._make_valid_url(), query)
        if mobj is None:
            raise ExtractorError('Invalid search query "%s"' % query)

        prefix = mobj.group('prefix')
        query = mobj.group('query')
        if prefix == '':
            return self._get_n_results(query, 1)
        elif prefix == 'all':
            return self._get_n_results(query, self._MAX_RESULTS)
        else:
            n = int(prefix)
            if n <= 0:
                raise ExtractorError('invalid download number %s for query "%s"' % (n, query))
            elif n > self._MAX_RESULTS:
                self._downloader.report_warning('%s returns max %i results (you requested %i)' % (self._SEARCH_KEY, self._MAX_RESULTS, n))
                n = self._MAX_RESULTS
            return self._get_n_results(query, n)
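    # Illustration (assuming a subclass with _SEARCH_KEY = 'examplesearch'):
    #     'examplesearch:foo'     -> the first result for "foo"
    #     'examplesearch5:foo'    -> the first 5 results
    #     'examplesearchall:foo'  -> up to _MAX_RESULTS results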

    def _get_n_results(self, query, n):
        """Get a specified number of results for a query"""
        raise NotImplementedError("This method must be implemented by subclasses")

    @property
    def SEARCH_KEY(self):
        return self._SEARCH_KEY
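# Minimal subclass sketch (hypothetical, for illustration only): a concrete
# search extractor only needs to define _SEARCH_KEY, _MAX_RESULTS and
# _get_n_results(), returning a playlist result built from individual entries.
#
#     class ExampleSearchIE(SearchInfoExtractor):
#         _SEARCH_KEY = 'examplesearch'
#         _MAX_RESULTS = 50
#
#         def _get_n_results(self, query, n):
#             entries = [self.url_result('http://example.com/video/%d' % i)
#                        for i in range(n)]
#             return self.playlist_result(entries, playlist_title=query)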