FS#48368 - [calibre] Metadata are not downloaded anymore
Attached to Project:
Community Packages
Opened by xdmx (xdmx) - Sunday, 28 February 2016, 15:17 GMT
Last edited by Doug Newgard (Scimmia) - Sunday, 28 February 2016, 15:19 GMT
Details
I've just upgraded to the latest version of calibre (2.52). I added an
empty ebook, set the title as usual (it was "year without pants"), and
clicked on "Download metadata". It returned an error saying that no
matches were found. I've tried many other titles, but it cannot find
any.
This is the error:

calibre, version 2.52.0
ERROR: No matches found: <p>Failed to find any books that match your search. Try making the search <b>less specific</b>. For example, use only the author's last name and a single distinctive word from the title.<p>To see the full log, click Show Details.

Running identify query with parameters:
{u'authors': None, u'identifiers': {}, u'timeout': 30, u'title': u'year without pants'}
Using plugins: Google, Amazon.com
The log from individual plugins is below

****************************** Google ******************************
Request extra headers: [('User-agent', 'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)')]
Found 0 results
Downloading from Google took 0.445051908493
********************************************************************************

****************************** Amazon.com ******************************
Request extra headers: [('User-agent', 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1) Gecko/20100101 Firefox/4.0.1')]
Found 0 results
Downloading from Amazon.com took 0.892254114151
Failed to parse amazon page for query: u'http://www.amazon.com/s/?sort=relevanceexprank&field-title=year+without+pants&search-alias=stripbooks&unfiltered=1'
Traceback (most recent call last):
  File "/usr/lib/calibre/calibre/ebooks/metadata/sources/amazon.py", line 1063, in identify
    namespaceHTMLElements=False)
  File "/usr/lib/python2.7/site-packages/html5lib/html5parser.py", line 29, in parse
    return p.parse(doc, encoding=encoding)
  File "/usr/lib/python2.7/site-packages/html5lib/html5parser.py", line 236, in parse
    parseMeta=parseMeta, useChardet=useChardet)
  File "/usr/lib/python2.7/site-packages/html5lib/html5parser.py", line 94, in _parse
    self.mainLoop()
  File "/usr/lib/python2.7/site-packages/html5lib/html5parser.py", line 203, in mainLoop
    new_token = phase.processComment(new_token)
  File "/usr/lib/python2.7/site-packages/html5lib/html5parser.py", line 468, in processComment
    self.tree.insertComment(token, self.tree.openElements[-1])
  File "/usr/lib/python2.7/site-packages/html5lib/treebuilders/etree_lxml.py", line 312, in insertCommentMain
    super(TreeBuilder, self).insertComment(data, parent)
  File "/usr/lib/python2.7/site-packages/html5lib/treebuilders/_base.py", line 262, in insertComment
    parent.appendChild(self.commentClass(token["data"]))
  File "/usr/lib/python2.7/site-packages/html5lib/treebuilders/etree.py", line 148, in __init__
    self._element = ElementTree.Comment(data)
  File "src/lxml/lxml.etree.pyx", line 3017, in lxml.etree.Comment (src/lxml/lxml.etree.c:80806)
ValueError: Comment may not contain '--' or end with '-'
********************************************************************************

The identify phase took 1.07 seconds
The longest time (0.892254) was taken by: Amazon.com
Merging results from different sources and finding earliest publication dates from the worldcat.org service
We have 0 merged results, merging took: 0.00 seconds

A few versions ago it worked perfectly, but it has been returning this
error for a few weeks (or even months) now, so it is not caused by
v2.52 itself.

I've tried the upstream binary (http://calibre-ebook.com/download_linux)
and it works, so it must be an Arch bug.
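For context: the ValueError at the bottom of the Amazon.com traceback
is raised by lxml's comment constructor, which rejects comment text
containing '--' or ending with '-' (such comments are invalid per the
XML specification). html5lib's lxml tree builder hits this when the
fetched Amazon page contains a comment of that shape. The following is
a minimal sketch that reproduces the same exception in isolation,
assuming only that the lxml package is installed; it is an
illustration, not calibre's code:

    from lxml import etree

    # lxml validates comment text at construction time; a '--' inside
    # the comment triggers the same ValueError seen in the plugin log.
    try:
        etree.Comment('a comment containing -- two dashes')
    except ValueError as err:
        print(err)  # Comment may not contain '--' or end with '-'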
This task depends upon
Closed by Doug Newgard (Scimmia)
Sunday, 28 February 2016, 15:19 GMT
Reason for closing: Duplicate
Additional comments about closing: FS#43382