Ivan Ermilov

Results 17 comments of Ivan Ermilov

True, it should be added to chapter04.rst, line 142: launch the shell for your Django project with `python manage.py shell`. Also, here is a link to an answer and explanation on the...

@atomobianco hi! What you see there as a hostname, `a585a25be0f7`, is a docker container id (the default internal docker hostname representation). You may try to play with the `spark.driver.bindAddress` configuration option to...
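A minimal sketch of what that could look like in `spark-defaults.conf` (`spark.driver.bindAddress` and `spark.driver.host` are documented Spark properties; the concrete values below are assumptions for a dockerized driver, not a verified fix):

```properties
# spark-defaults.conf — assumed values for a driver running inside a container
spark.driver.bindAddress  0.0.0.0        # bind to all interfaces instead of the container id hostname
spark.driver.host         spark-driver   # hypothetical hostname/service name reachable from the executors
```

The same pair can also be passed on the command line via `--conf` to `spark-submit`.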

@ktpktr0 that's correct, you will have to create a named volume, e.g. inside a docker-compose definition, and mount it into both docker containers to get a shared volume. This won't work well...
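A sketch of the named-volume setup in docker-compose (service names and image tags are hypothetical placeholders, not from the original thread):

```yaml
# docker-compose.yml sketch: one named volume mounted into two containers
version: "3"
services:
  namenode:
    image: bde2020/hadoop-namenode:tag   # tag is a placeholder
    volumes:
      - shared-data:/data
  datanode:
    image: bde2020/hadoop-datanode:tag   # tag is a placeholder
    volumes:
      - shared-data:/data

volumes:
  shared-data:   # named volume shared by both containers
```

Both services then see the same files under `/data`.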

Hi @Lehis, you need to mount your local folder into the docker container. For example:

```
docker run -v /local/folder/to/mount:/data --network mynetwork --env-file hadoop.env bde2020/hadoop-base:tag hdfs dfs -copyFromLocal /data/myfile /
```

...

Hi @haydenliu! PySpark is not supported by these images atm. The Spark images have to be extended with a Python installation; then it will be possible to use PySpark.
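A hedged sketch of such an extension (the base image name and tag are placeholders, and the `apk` commands assume an Alpine-based image — adjust to the actual base):

```dockerfile
# Hypothetical Dockerfile extending one of the bde2020 Spark images with Python
FROM bde2020/spark-master:tag            # tag is a placeholder

# Install Python 3 (assumes an Alpine base; use apt-get on Debian-based images)
RUN apk add --no-cache python3 \
 && ln -sf /usr/bin/python3 /usr/bin/python

# Point PySpark at the installed interpreter
ENV PYSPARK_PYTHON=python3
```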

@Atahualkpa Hi! Which docker-compose are you using? Or what is your setup? Do you persist the data to the local drive from your docker containers, e.g. by having a volumes key?...

@Atahualkpa How many nodes do you have in your swarm cluster? Are the containers always allocated on the same nodes?

@antonkulaga did you try running the secondary namenode, or does that also have its problems?

Hi! The application from the root Makefile is not for swarm. To test it, run one of the provided examples from the Spark distro. Also, what I can see from the...

@lokinell hi! Can you check the logs of the datanode/namenode and copy-paste them here? How to check logs: `docker ps` → copy the container id for your namenode/datanode, then `docker logs -f containerid`...