Cannot download object from CloudServer
Dear all,
yarn version: 1.19.1
node version: v11.8.0
I ran these commands to create two buckets and then list them; this works fine:
[root@localhost ~]# aws --endpoint-url=http://localhost:8000 s3 mb s3://mybucket
[root@localhost ~]# aws --endpoint-url=http://localhost:8000 s3 mb s3://dabucket
[root@localhost ~]# aws s3 ls --endpoint-url=http://localhost:8000
2019-10-21 10:10:45 dabucket
2019-10-21 10:10:36 mybucket
Then I uploaded a file named "README.md" to mybucket as test.txt:
[root@localhost ~]# aws s3 cp --acl bucket-owner-full-control ./README.md s3://mybucket/test.txt --endpoint-url=http://localhost:8000
upload: ./README.md to s3://mybucket/test.txt
[root@localhost ~]# s3cmd la
2019-10-22 05:38 4102 s3://mybucket/test.txt
Finally, I need to download this file, but I get the exception below:
aws s3 cp s3://mybucket/test.txt ./test.txt --endpoint-url=http://localhost:8000 --debug
2019-10-22 13:40:08,832 - MainThread - botocore.endpoint - DEBUG - Sending http request: <AWSPreparedRequest stream_output=False, method=HEAD, url=http://localhost:8000/mybucket/test.txt, headers={'X-Amz-Content-SHA256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': 'AWS4-HMAC-SHA256 Credential=accessKey1/20191022/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=b09e99b4088af4781a240767265b3e6ada6508703a9b3dc0c65e934ae2ce2e46', 'X-Amz-Date': '20191022T054008Z', 'User-Agent': 'aws-cli/1.16.263 Python/2.7.5 Linux/3.10.0-957.el7.x86_64 botocore/1.12.253'}> 2019-10-22 13:40:08,833 - MainThread - urllib3.util.retry - DEBUG - Converted retries value: False -> Retry(total=False, connect=None, read=None, redirect=0, status=None) 2019-10-22 13:40:08,833 - MainThread - urllib3.connectionpool - DEBUG - Starting new HTTP connection (1): localhost:8000 2019-10-22 13:40:08,841 - MainThread - urllib3.connectionpool - DEBUG - http://localhost:8000 "HEAD /mybucket/test.txt HTTP/1.1" 200 0 2019-10-22 13:40:08,842 - MainThread - botocore.parsers - DEBUG - Response headers: {'Content-Length': '4102', 'x-amz-id-2': 'e955aa158342943cd272', 'Accept-Ranges': 'bytes', 'server': 'S3 Server', 'Last-Modified': 'Tue, 22 Oct 2019 05:38:20 GMT', 'Connection': 'keep-alive', 'ETag': '"e8978fa5ced780cd5d4b1ae8a17a3428"', 'x-amz-request-id': 'e955aa158342943cd272', 'Date': 'Tue, 22 Oct 2019 05:40:08 GMT'} 2019-10-22 13:40:08,842 - MainThread - botocore.parsers - DEBUG - Response body:
2019-10-22 13:40:08,842 - MainThread - botocore.hooks - DEBUG - Event needs-retry.s3.HeadObject: calling handler <botocore.retryhandler.RetryHandler object at 0x7fc05f08fed0> 2019-10-22 13:40:08,842 - MainThread - botocore.retryhandler - DEBUG - No retry needed. 2019-10-22 13:40:08,842 - MainThread - botocore.hooks - DEBUG - Event needs-retry.s3.HeadObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7fc05f08ff10>> 2019-10-22 13:40:08,842 - MainThread - botocore.hooks - DEBUG - Event after-call.s3.HeadObject: calling handler <function enhance_error_msg at 0x7fc05fb2c488> 2019-10-22 13:40:08,844 - MainThread - s3transfer.utils - DEBUG - Acquiring 0 2019-10-22 13:40:08,844 - ThreadPoolExecutor-1_0 - s3transfer.tasks - DEBUG - DownloadSubmissionTask(transfer_id=0, {'transfer_future': <s3transfer.futures.TransferFuture object at 0x7fc05e816410>}) about to wait for the following futures [] 2019-10-22 13:40:08,844 - ThreadPoolExecutor-1_0 - s3transfer.tasks - DEBUG - DownloadSubmissionTask(transfer_id=0, {'transfer_future': <s3transfer.futures.TransferFuture object at 0x7fc05e816410>}) done waiting for dependent futures 2019-10-22 13:40:08,844 - ThreadPoolExecutor-1_0 - s3transfer.tasks - DEBUG - Executing task DownloadSubmissionTask(transfer_id=0, {'transfer_future': <s3transfer.futures.TransferFuture object at 0x7fc05e816410>}) with kwargs {'io_executor': <s3transfer.futures.BoundedExecutor object at 0x7fc05f51f9d0>, 'request_executor': <s3transfer.futures.BoundedExecutor object at 0x7fc05f51f450>, 'osutil': <s3transfer.utils.OSUtils object at 0x7fc05f51f290>, 'client': <botocore.client.S3 object at 0x7fc05f509e50>, 'transfer_future': <s3transfer.futures.TransferFuture object at 0x7fc05e816410>, 'config': <s3transfer.manager.TransferConfig object at 0x7fc05f51f210>} 2019-10-22 13:40:08,846 - ThreadPoolExecutor-1_0 - s3transfer.futures - DEBUG - Submitting task ImmediatelyWriteIOGetObjectTask(transfer_id=0, {'extra_args': {}, 'bucket': u'mybucket', 'key': u'test.txt'}) to executor <s3transfer.futures.BoundedExecutor object at 0x7fc05f51f450> for transfer request: 0. 
2019-10-22 13:40:08,846 - ThreadPoolExecutor-1_0 - s3transfer.utils - DEBUG - Acquiring 0 2019-10-22 13:40:08,846 - ThreadPoolExecutor-0_0 - s3transfer.tasks - DEBUG - ImmediatelyWriteIOGetObjectTask(transfer_id=0, {'extra_args': {}, 'bucket': u'mybucket', 'key': u'test.txt'}) about to wait for the following futures [] 2019-10-22 13:40:08,846 - ThreadPoolExecutor-0_0 - s3transfer.tasks - DEBUG - ImmediatelyWriteIOGetObjectTask(transfer_id=0, {'extra_args': {}, 'bucket': u'mybucket', 'key': u'test.txt'}) done waiting for dependent futures 2019-10-22 13:40:08,847 - ThreadPoolExecutor-0_0 - s3transfer.tasks - DEBUG - Executing task ImmediatelyWriteIOGetObjectTask(transfer_id=0, {'extra_args': {}, 'bucket': u'mybucket', 'key': u'test.txt'}) with kwargs {'fileobj': <s3transfer.utils.DeferredOpenFile object at 0x7fc05e816890>, 'bandwidth_limiter': None, 'bucket': u'mybucket', 'download_output_manager': <s3transfer.download.DownloadFilenameOutputManager object at 0x7fc05e816810>, 'extra_args': {}, 'callbacks': [<functools.partial object at 0x7fc05f0eeec0>, <functools.partial object at 0x7fc05f0eef18>, <functools.partial object at 0x7fc05f0eef70>, <functools.partial object at 0x7fc05f0eefc8>], 'client': <botocore.client.S3 object at 0x7fc05f509e50>, 'key': u'test.txt', 'io_chunksize': 262144, 'max_attempts': 5} 2019-10-22 13:40:08,847 - ThreadPoolExecutor-0_0 - botocore.hooks - DEBUG - Event before-parameter-build.s3.GetObject: calling handler <function sse_md5 at 0x7fc0610e0aa0> 2019-10-22 13:40:08,847 - ThreadPoolExecutor-0_0 - botocore.hooks - DEBUG - Event before-parameter-build.s3.GetObject: calling handler <function validate_bucket_name at 0x7fc0610e0a28> 2019-10-22 13:40:08,847 - ThreadPoolExecutor-0_0 - botocore.hooks - DEBUG - Event before-parameter-build.s3.GetObject: calling handler <bound method S3RegionRedirector.redirect_from_cache of <botocore.utils.S3RegionRedirector object at 0x7fc05f516f50>> 2019-10-22 13:40:08,847 - ThreadPoolExecutor-0_0 - botocore.hooks - DEBUG - Event before-parameter-build.s3.GetObject: calling handler <function generate_idempotent_uuid at 0x7fc0610e06e0> 2019-10-22 13:40:08,848 - ThreadPoolExecutor-0_0 - botocore.hooks - DEBUG - Event before-call.s3.GetObject: calling handler <function add_expect_header at 0x7fc0610e0cf8> 2019-10-22 13:40:08,848 - ThreadPoolExecutor-0_0 - botocore.hooks - DEBUG - Event before-call.s3.GetObject: calling handler <bound method S3RegionRedirector.set_request_url of <botocore.utils.S3RegionRedirector object at 0x7fc05f516f50>> 2019-10-22 13:40:08,848 - ThreadPoolExecutor-0_0 - botocore.hooks - DEBUG - Event before-call.s3.GetObject: calling handler <function inject_api_version_header_if_needed at 0x7fc0610e2d70> 2019-10-22 13:40:08,848 - ThreadPoolExecutor-0_0 - botocore.endpoint - DEBUG - Making request for OperationModel(name=GetObject) with params: {'body': '', 'url': u'http://localhost:8000/mybucket/test.txt', 'headers': {'User-Agent': 'aws-cli/1.16.263 Python/2.7.5 Linux/3.10.0-957.el7.x86_64 botocore/1.12.253'}, 'context': {'auth_type': None, 'client_region': 'us-east-1', 'signing': {'bucket': u'mybucket'}, 'has_streaming_input': False, 'client_config': <botocore.config.Config object at 0x7fc05f509f50>}, 'query_string': {}, 'url_path': u'/mybucket/test.txt', 'method': u'GET'} 2019-10-22 13:40:08,850 - ThreadPoolExecutor-0_0 - botocore.hooks - DEBUG - Event request-created.s3.GetObject: calling handler <function signal_not_transferring at 0x7fc05ffe4848> 2019-10-22 13:40:08,850 - ThreadPoolExecutor-0_0 - botocore.hooks - 
DEBUG - Event request-created.s3.GetObject: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7fc05f509f10>> 2019-10-22 13:40:08,850 - ThreadPoolExecutor-0_0 - botocore.hooks - DEBUG - Event choose-signer.s3.GetObject: calling handler <bound method ClientCreator._default_s3_presign_to_sigv2 of <botocore.client.ClientCreator object at 0x7fc05f53cf50>> 2019-10-22 13:40:08,850 - ThreadPoolExecutor-0_0 - botocore.hooks - DEBUG - Event choose-signer.s3.GetObject: calling handler <function set_operation_specific_signer at 0x7fc0610e05f0> 2019-10-22 13:40:08,850 - ThreadPoolExecutor-0_0 - botocore.auth - DEBUG - Calculating signature using v4 auth. 2019-10-22 13:40:08,850 - ThreadPoolExecutor-0_0 - botocore.auth - DEBUG - CanonicalRequest: GET /mybucket/test.txt
host:localhost:8000 x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 x-amz-date:20191022T054008Z
host;x-amz-content-sha256;x-amz-date e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2019-10-22 13:40:08,850 - ThreadPoolExecutor-0_0 - botocore.auth - DEBUG - StringToSign: AWS4-HMAC-SHA256 20191022T054008Z 20191022/us-east-1/s3/aws4_request 32b3a86c5c1782ee71f581facbda69951efe284ee609b838816dfdeca8e50140 2019-10-22 13:40:08,850 - ThreadPoolExecutor-0_0 - botocore.auth - DEBUG - Signature: a0ac0511e5ac4259798593aa20ed3f7d2ef0d02398d396ec9af13075ca3b5f2f 2019-10-22 13:40:08,850 - ThreadPoolExecutor-0_0 - botocore.hooks - DEBUG - Event request-created.s3.GetObject: calling handler <function signal_transferring at 0x7fc05ffe48c0> 2019-10-22 13:40:08,851 - ThreadPoolExecutor-0_0 - botocore.endpoint - DEBUG - Sending http request: <AWSPreparedRequest stream_output=True, method=GET, url=http://localhost:8000/mybucket/test.txt, headers={'X-Amz-Content-SHA256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': 'AWS4-HMAC-SHA256 Credential=accessKey1/20191022/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=a0ac0511e5ac4259798593aa20ed3f7d2ef0d02398d396ec9af13075ca3b5f2f', 'X-Amz-Date': '20191022T054008Z', 'User-Agent': 'aws-cli/1.16.263 Python/2.7.5 Linux/3.10.0-957.el7.x86_64 botocore/1.12.253'}> 2019-10-22 13:40:08,851 - ThreadPoolExecutor-0_0 - urllib3.util.retry - DEBUG - Converted retries value: False -> Retry(total=False, connect=None, read=None, redirect=0, status=None) 2019-10-22 13:40:08,851 - ThreadPoolExecutor-0_0 - urllib3.connectionpool - DEBUG - Starting new HTTP connection (1): localhost:8000 2019-10-22 13:40:08,852 - ThreadPoolExecutor-1_0 - s3transfer.utils - DEBUG - Releasing acquire 0/None 2019-10-22 13:40:08,868 - ThreadPoolExecutor-0_0 - botocore.hooks - DEBUG - Event needs-retry.s3.GetObject: calling handler <botocore.retryhandler.RetryHandler object at 0x7fc05f516f10> 2019-10-22 13:40:08,868 - ThreadPoolExecutor-0_0 - botocore.retryhandler - DEBUG - retry needed, retryable exception caught: Connection was closed before we received a valid response from endpoint URL: "http://localhost:8000/mybucket/test.txt". Traceback (most recent call last): File "/usr/local/aws/lib/python2.7/site-packages/botocore/retryhandler.py", line 269, in _should_retry return self._checker(attempt_number, response, caught_exception) File "/usr/local/aws/lib/python2.7/site-packages/botocore/retryhandler.py", line 317, in call caught_exception) File "/usr/local/aws/lib/python2.7/site-packages/botocore/retryhandler.py", line 223, in call attempt_number, caught_exception) File "/usr/local/aws/lib/python2.7/site-packages/botocore/retryhandler.py", line 359, in _check_caught_exception raise caught_exception ConnectionClosedError: Connection was closed before we received a valid response from endpoint URL: "http://localhost:8000/mybucket/test.txt".
I found these logs on the server:
{"name":"S3","bucketName":"mybucket","objectKey":"test.txt","bytesReceived":0,"bodyLength":0,"error":{"code":404,"description":"This object does not exist","ObjNotFound":true,"remote":true},"implName":"multipleBackends","time":1571722819036,"req_id":"92875d75994f9120c9e7","level":"error","message":"get error from datastore","hostname":"s3.host.com","pid":7828}
{"name":"S3","bucketName":"mybucket","objectKey":"test.txt","bytesReceived":0,"bodyLength":0,"error":{"code":503,"description":"The request has failed due to a temporary failure of the server.","ServiceUnavailable":true},"method":"retrieveData","time":1571722819036,"req_id":"92875d75994f9120c9e7","level":"error","message":"failed to get object","hostname":"s3.host.com","pid":7828}
Maybe I should change the ACL of the object, but I don't know how to set it. Could you please give me some help?
My aws cli config files:
[root@s3 ~]# cat .aws/credentials
[default]
aws_access_key_id = accessKey1
aws_secret_access_key = verySecretKey1
[root@s3 ~]# cat .aws/config
[default]
region = us-east-1
[root@s3 ~]# aws s3api get-object-acl --bucket mybucket --key text.txt --endpoint-url=http://localhost:8000
{
    "Owner": {
        "DisplayName": "Bart",
        "ID": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
    },
    "Grants": [
        {
            "Grantee": {
                "Type": "CanonicalUser",
                "DisplayName": "[email protected]",
                "ID": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
            },
            "Permission": "FULL_CONTROL"
        },
        {
            "Grantee": {
                "Type": "Group",
                "URI": "http://acs.amazonaws.com/groups/global/AllUsers"
            },
            "Permission": "READ"
        },
        {
            "Grantee": {
                "Type": "Group",
                "URI": "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"
            },
            "Permission": "READ"
        }
    ]
}
Hello @sunming2008, don't hesitate to paste this question into the "Questions and answers" category on the Zenko forum. We will try to debug it with you :)
Hi @sunming2008, thanks for the question!
I have a few more questions about your setup:
Are you using all of Zenko or just CloudServer?
If just CloudServer, were you able to install dependencies with yarn install?
And what command did you use to start CloudServer?
I ask because this doesn't look like an ACL issue - if it were, you would most likely be seeing an AccessDenied error.
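(For reference, if you did want to adjust an object's ACL, a command along these lines should work - public-read is just an example canned ACL:)
# example only: grant public read on the object; pick the ACL you actually need
aws s3api put-object-acl --bucket mybucket --key test.txt --acl public-read --endpoint-url=http://localhost:8000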
I wonder about your dependency install, though, because when I tried it using node v11.8.0 I was unable to install everything. You could try to reinstall your dependencies using node v10.x and see if that helps!
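A rough sketch of one way to do that, assuming you manage Node versions with nvm:
# switch to the Node 10 line and reinstall dependencies from scratch
nvm install 10
nvm use 10
rm -rf node_modules
yarn cache clean
yarn install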
@dora-korpar thank you for the reply. I just use CloudServer:
$ git clone https://github.com/scality/cloudserver.git
$ cd cloudserver; yarn install
yarn install v1.19.1
[1/5] Validating package.json...
[2/5] Resolving packages...
success Already up-to-date.
Done in 0.35s.
I tried downgrading Node.js and yarn to these versions:
[root@localhost cloudserver]# node --version
v10.7.0
[root@localhost cloudserver]# yarn --version
1.17.3
In the end I still cannot download the file from CloudServer.
@dashagurova you are right, I tried to paste my question onto the forum but failed. It said a new user can only submit a question with two links. Would you please help me paste it?
@sunming2008, you should be good now :)
@sunming2008 were you able to fix the issue?
If not, what command do you use to start CloudServer? Are you running yarn start?
@JianqinWang I didn't fix this issue. I run "yarn install" first and then run "yarn start".
@sunming2008 please try to start cloudserver with the following command:
REMOTE_MANAGEMENT_DISABLE=1 yarn start
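(Equivalently, you can export the variable once in your shell session before starting:)
# export once, then start as usual
export REMOTE_MANAGEMENT_DISABLE=1
yarn start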
Do you currently see any ECONNRESET errors in cloudserver while it is idle?
@JianqinWang many thanks. I already set this parameter in the environment. I didn't see any ECONNRESET errors. CloudServer starts fine: I can create buckets and upload files, but I can't rename or download them.
@sunming2008 That's extremely strange.
What other environment variables have you set? Did you update any variables related to storing metadata and data?
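If I remember the getting-started doc correctly, the file backend paths are controlled by S3DATAPATH and S3METADATAPATH, so something like this (the paths here are just examples) would pin them explicitly:
# example only: point data and metadata at explicit directories before starting
export S3DATAPATH=/root/cloudserver/localData
export S3METADATAPATH=/root/cloudserver/localMetadata
REMOTE_MANAGEMENT_DISABLE=1 yarn start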
@JianqinWang I added REMOTE_MANAGEMENT_DISABLE=1 to the environment and followed this getting-started document to start CloudServer: https://s3-server.readthedocs.io/en/latest/GETTING_STARTED.html
Can you send us the Cloudserver logs beginning from startup to when you try to get the object?
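One simple way to capture that, assuming you start it in the foreground, is to tee the output to a file and reproduce the failing download from another terminal:
# terminal 1: start cloudserver and keep a copy of its output
REMOTE_MANAGEMENT_DISABLE=1 yarn start 2>&1 | tee cloudserver.log
# terminal 2: reproduce the failure
aws s3 cp s3://mybucket/test.txt ./test.txt --endpoint-url=http://localhost:8000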
@dora-korpar This is the yarn install output:
[root@s3 cloudserver]# yarn install
yarn install v1.19.1
[1/5] Validating package.json...
[2/5] Resolving packages...
[3/5] Fetching packages...
warning Pattern ["arsenal@scality/Arsenal#c57cde8"] is trying to unpack in the same destination "/usr/local/share/.cache/yarn/v6/npm-arsenal-8.1.4/node_modules/arsenal" as pattern ["arsenal@github:scality/Arsenal#635d2fe"]. This could result in non-deterministic behavior, skipping.
warning Pattern ["arsenal@scality/Arsenal#b03f5b8"] is trying to unpack in the same destination "/usr/local/share/.cache/yarn/v6/npm-arsenal-7.4.3/node_modules/arsenal" as pattern ["arsenal@scality/Arsenal#32c895b"]. This could result in non-deterministic behavior, skipping.
warning Pattern ["werelogs@scality/werelogs#4e0d97c"] is trying to unpack in the same destination "/usr/local/share/.cache/yarn/v6/npm-werelogs-7.4.1/node_modules/werelogs" as pattern ["werelogs@scality/werelogs#0a4c576"]. This could result in non-deterministic behavior, skipping.
warning Pattern ["werelogs@scality/werelogs#22bca9c"] is trying to unpack in the same destination "/usr/local/share/.cache/yarn/v6/npm-werelogs-8.0.0/node_modules/werelogs" as pattern ["werelogs@scality/werelogs#351a2a3"]. This could result in non-deterministic behavior, skipping.
[4/5] Linking dependencies...
[5/5] Building fresh packages...
Done in 48.38s.
This is "yarn start" log.
[root@s3 cloudserver]# REMOTE_MANAGEMENT_DISABLE=1 yarn start
yarn run v1.19.1
$ npm-run-all --parallel start_dmd start_s3server
$ npm-run-all --parallel start_mdserver start_dataserver
$ node index.js
$ node mdserver.js
$ node dataserver.js
{"name":"MetadataFileServer","time":1573623117010,"error":"The value of "byteLength" is out of range. It must be >= 1 and <= 6. Received 8","errorStack":"RangeError [ERR_OUT_OF_RANGE]: The value of "byteLength" is out of range. It must be >= 1 and <= 6. Received 8\n at boundsError (internal/buffer.js:58:9)\n at Buffer.readUIntLE (internal/buffer.js:80:3)\n at trySetDirSyncFlag (/root/cloudserver/node_modules/arsenal/lib/storage/utils.js:18:33)\n at Object.setDirSyncFlag (/root/cloudserver/node_modules/arsenal/lib/storage/utils.js:46:13)\n at MetadataFileServer.startServer (/root/cloudserver/node_modules/arsenal/lib/storage/metadata/file/MetadataFileServer.js:125:22)\n at Object.
Any solution?
When starting the server, I am receiving:
$ npm-run-all --parallel start_dmd start_s3server
$ node index.js
$ npm-run-all --parallel start_mdserver start_dataserver
$ node dataserver.js
$ node mdserver.js
{"name":"S3","time":1583292339436,"level":"warn","message":"scality kms unavailable. Using file kms backend unless mem specified.","hostname":"ip-10-140-6-115","pid":12591}
{"name":"DataFileStore","time":1583292340598,"level":"info","message":"pre-creating 3511 subdirs...","hostname":"ip-10-140-6-115","pid":12632}
{"name":"DataFileStore","time":1583292340601,"error":"The value of "byteLength" is out of range. It must be >= 1 and <= 6. Received 8","errorStack":"RangeError [ERR_OUT_OF_RANGE]: The value of "byteLength" is out of range. It must be >= 1 and <= 6. Received 8\n at boundsError (internal/buffer.js:55:9)\n at Buffer.readUIntLE (internal/buffer.js:75:3)\n at trySetDirSyncFlag (/home/gcc/cloudserver/node_modules/arsenal/lib/storage/utils.js:18:33)\n at Object.setDirSyncFlag (/home/gcc/cloudserver/node_modules/arsenal/lib/storage/utils.js:46:13)\n at fs.access.err (/home/gcc/cloudserver/node_modules/arsenal/lib/storage/data/file/DataFileStore.js:92:30)\n at FSReqWrap.oncomplete (fs.js:145:20)","level":"warn","message":"WARNING: Synchronization directory updates are not supported on this platform. Newly written data could be lost if your system crashes before the operating system is able to write directory updates.","hostname":"ip-10-140-6-115","pid":12632}
{"name":"MetadataFileServer","time":1583292340696,"error":"The value of "byteLength" is out of range. It must be >= 1 and <= 6. Received 8","errorStack":"RangeError [ERR_OUT_OF_RANGE]: The value of "byteLength" is out of range. It must be >= 1 and <= 6. Received 8\n at boundsError (internal/buffer.js:55:9)\n at Buffer.readUIntLE (internal/buffer.js:75:3)\n at trySetDirSyncFlag (/home/gcc/cloudserver/node_modules/arsenal/lib/storage/utils.js:18:33)\n at Object.setDirSyncFlag (/home/gcc/cloudserver/node_modules/arsenal/lib/storage/utils.js:46:13)\n at MetadataFileServer.startServer (/home/gcc/cloudserver/node_modules/arsenal/lib/storage/metadata/file/MetadataFileServer.js:125:22)\n at Object.
Node.js version = 10.18.1
Yarn version = 1.17.3
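For what it's worth, the RangeError in these startup logs comes from Buffer.readUIntLE being asked to read 8 bytes in arsenal's trySetDirSyncFlag, while Node's readUIntLE only supports byteLength values from 1 to 6. A one-liner reproduces the same message on a recent Node:
# reproduces: RangeError [ERR_OUT_OF_RANGE]: The value of "byteLength" is out of range. It must be >= 1 and <= 6. Received 8
node -e "Buffer.alloc(8).readUIntLE(0, 8)"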