GoogleImageCrawler: TypeError: 'NoneType' object is not iterable

Open LostInDarkMath opened this issue 4 years ago • 11 comments

Hi there, I just tried out your library, but unfortunately, I get an error:

2021-03-06 13:56:54,609 - INFO - icrawler.crawler - start crawling...
2021-03-06 13:56:54,609 - INFO - icrawler.crawler - starting 1 feeder threads...
2021-03-06 13:56:54,610 - INFO - feeder - thread feeder-001 exit
2021-03-06 13:56:54,611 - INFO - icrawler.crawler - starting 1 parser threads...
2021-03-06 13:56:54,612 - INFO - icrawler.crawler - starting 1 downloader threads...
2021-03-06 13:56:55,160 - INFO - parser - parsing result page https://www.google.com/search?q=cat&ijn=0&start=0&tbs=&tbm=isch
Exception in thread parser-001:
Traceback (most recent call last):
  File "C:\Users\WILLI\AppData\Local\Programs\Python\Python38\lib\threading.py", line 932, in _bootstrap_inner
    self.run()
  File "C:\Users\WILLI\AppData\Local\Programs\Python\Python38\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "D:\Projekte\Foo\source\backend_prototyp\venv\lib\site-packages\icrawler\parser.py", line 104, in worker_exec
    for task in self.parse(response, **kwargs):
TypeError: 'NoneType' object is not iterable
2021-03-06 13:56:59,615 - INFO - downloader - no more download task for thread downloader-001
2021-03-06 13:56:59,615 - INFO - downloader - thread downloader-001 exit
2021-03-06 13:56:59,616 - INFO - icrawler.crawler - Crawling task done!

Process finished with exit code 0

But I just use your example code:

from icrawler.builtin import GoogleImageCrawler

google_crawler = GoogleImageCrawler(storage={'root_dir': 'D:'})
google_crawler.crawl(keyword='cat', max_num=100)

How can I fix the problem?

LostInDarkMath avatar Mar 06 '21 12:03 LostInDarkMath

Sorry for the inconvenience. Could you please try cloning this project and building it manually? This has been fixed by #93, but I'm a bit too busy at the moment to build and release a new package.

ZhiyuanChen avatar Mar 09 '21 13:03 ZhiyuanChen
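
(For anyone else who needs the fix before a release lands: assuming the repository root carries a standard setup.py/pyproject, installing straight from source should work with something like pip install git+https://github.com/hellock/icrawler.git — a workaround sketch, not an officially documented step.)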

I don't have the time to build it either. I just wanted to quickly test your library to see if it was suitable for my use case. And I'm probably not the only one having this problem either. Should all users now build this manually?

LostInDarkMath avatar Mar 09 '21 13:03 LostInDarkMath

Sorry again for the inconvenience; I have updated the package on PyPI.

ZhiyuanChen avatar Mar 11 '21 08:03 ZhiyuanChen

It works now! Thank you very much :)

LostInDarkMath avatar Mar 12 '21 18:03 LostInDarkMath

I have the same problem. Works with Bing and Baidu, but does not work with Google. I keep getting the following errors:

2022-07-27 18:52:22,851 - INFO - icrawler.crawler - start crawling...
2022-07-27 18:52:22,852 - INFO - icrawler.crawler - starting 1 feeder threads...
2022-07-27 18:52:22,852 - INFO - icrawler.crawler - starting 1 parser threads...
2022-07-27 18:52:22,853 - INFO - icrawler.crawler - starting 4 downloader threads...
2022-07-27 18:52:23,323 - INFO - parser - parsing result page https://www.google.com/search?q=cat&ijn=0&start=0&tbs=isz%3Al%2Cic%3Aspecific%2Cisc%3Aorange%2Csur%3Afmc%2Ccdr%3A1%2Ccd_min%3A01%2F01%2F2017%2Ccd_max%3A11%2F30%2F2017&tbm=isch
Exception in thread parser-001:
Traceback (most recent call last):
  File "C:\Python310\lib\threading.py", line 1009, in _bootstrap_inner
    self.run()
  File "C:\Python310\lib\threading.py", line 946, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Python310\lib\site-packages\icrawler\parser.py", line 104, in worker_exec
    for task in self.parse(response, **kwargs):
TypeError: 'NoneType' object is not iterable
2022-07-27 18:52:27,857 - INFO - downloader - no more download task for thread downloader-001
2022-07-27 18:52:27,858 - INFO - downloader - thread downloader-001 exit
2022-07-27 18:52:27,858 - INFO - downloader - no more download task for thread downloader-003
2022-07-27 18:52:27,858 - INFO - downloader - thread downloader-003 exit
2022-07-27 18:52:27,858 - INFO - downloader - no more download task for thread downloader-004
2022-07-27 18:52:27,858 - INFO - downloader - thread downloader-004 exit
2022-07-27 18:52:27,859 - INFO - downloader - no more download task for thread downloader-002
2022-07-27 18:52:27,859 - INFO - downloader - thread downloader-002 exit
2022-07-27 18:52:27,894 - INFO - icrawler.crawler - Crawling task done!

Viachaslau85 avatar Aug 02 '22 15:08 Viachaslau85

This is not related to this issue; it looks like https://github.com/hellock/icrawler/issues/107

ZhiyuanChen avatar Aug 03 '22 05:08 ZhiyuanChen

Seems this problem is back.

gustavozantut avatar Apr 08 '23 02:04 gustavozantut

Any solution for this problem?

gijhi avatar Jan 17 '24 13:01 gijhi

Looks like some or many websites' hosts are identifying bots and asking for human validation, which causes the problem.

gustavozantut avatar Mar 06 '24 18:03 gustavozantut

Changing the code like this helped for me, in file ....\site-packages\icrawler\parser.py:

uris = re.findall(r"http[^[]*?.(?:jpg|png|bmp)", txt)
uris = [bytes(uri, 'utf-8').decode('unicode-escape') for uri in uris]
if uris:
    return [{"file_url": uri} for uri in uris]

OxFF00FF avatar Apr 03 '24 06:04 OxFF00FF
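
For readers wondering what the decode line in the snippet above does: Google's result pages appear to embed image URLs inside scripts using \uXXXX escape sequences, so the text matched by the regex can still contain literal backslash escapes. A minimal, standalone illustration (the URL is made up, not taken from a real result page):

# Hypothetical matched string containing a \u003d escape
raw_uri = r"https://example.com/cat.jpg?size\u003dlarge"
decoded = bytes(raw_uri, "utf-8").decode("unicode-escape")
print(decoded)  # https://example.com/cat.jpg?size=large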

Would you mind submitting a PR?

ZhiyuanChen avatar Apr 03 '24 10:04 ZhiyuanChen

@ZhiyuanChen sorry I was wrong. @OxFF00FF's change did fix it, but I didn't reload the module properly. Now google crawler works.

I still don't understand why @OxFF00FF's change works. Can anyone explain? Both before and after the change, parse returns a list of URIs, whether the URLs are decoded or not. Why does the error complain about the result of parse being None? uris should still be an iterable list before the change.

Thanks.

ed2050 avatar May 13 '24 13:05 ed2050
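
A possible answer to the question above, offered as a guess rather than a confirmed diagnosis: the traceback shows parser.py iterating directly over whatever parse() returns. If the regex in the old parse() stopped matching Google's markup and the function ended without an explicit return (as in the if uris: branch of the fix above), it returned None, and iterating over None is exactly what raises this TypeError. A minimal reproduction, independent of icrawler:

import re

def parse(txt):
    # Modeled loosely on the snippet above: no explicit return when nothing matches
    uris = re.findall(r"http\S+?\.jpg", txt)
    if uris:
        return [{"file_url": uri} for uri in uris]

# parse() returns the implicit None here, so the for statement raises
# TypeError: 'NoneType' object is not iterable
for task in parse("<html>no image urls here</html>"):
    print(task)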

Please let me know if 0.6.8 fixes this issue~

ZhiyuanChen avatar May 15 '24 04:05 ZhiyuanChen