S3 Bucket not found
Hi @sicarul,
I have been working on a solution to push tables from R to Redshift. However, I have hit an issue while using the `rs_upsert_table` command: it fails with the error below. I have the credentials and access to the AWS bucket, yet the code still errors out.
```
**ERROR**
Client error: (403) Forbidden
Error in uploadToS3(df, bucket, split_files, access_key, secret_key, region) :
  Bucket does not exist
Calls: rs_upsert_table -> uploadToS3
Execution halted
***ERROR END***
```
Any inputs would be great here. Thanks!
Best, Aayush Sahni
I've run into this issue a couple of times, and fixed it by forking the package internally and slightly modifying the `uploadToS3` function to make it less stringent in its bucket checks. The changes that fixed the issue for me were:

- Setting the `check_region` kwarg to `FALSE`
- Splitting the `bucket` arg (which I need to pass a `bucket/subfolder` string into to get the tool to write files to the proper location) so that the existence check only grabs the bucket part
```r
uploadToS3 = function(data, bucket, split_files, key, secret, session, region) {
  # Random prefix for the temporary split files uploaded to S3
  prefix = paste0(sample(rep(letters, 10), 50), collapse = "")
  # Check only the bucket name (the part before the first "/"),
  # and skip region validation with check_region = FALSE
  if (!bucket_exists(strsplit(bucket, "/")[[1]][1], key = key, secret = secret,
                     session = session, region = region, check_region = FALSE)) {
    stop("Bucket does not exist")
  }
  # ... (rest of the function unchanged)
}
```
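The splitting step above, isolating the bucket name from a `bucket/subfolder` string, can be sketched in Python as well (a hypothetical helper, not part of either package):

```python
def split_bucket_path(bucket_arg):
    """Split a "bucket/subfolder" string into (bucket, key_prefix).

    key_prefix is "" when no subfolder is given, so callers can
    use it to build object keys unconditionally; the existence
    check should be run against the bucket part only.
    """
    bucket, _, prefix = bucket_arg.partition("/")
    return bucket, prefix

print(split_bucket_path("my-bucket/exports/2019"))  # ('my-bucket', 'exports/2019')
print(split_bucket_path("my-bucket"))               # ('my-bucket', '')
```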
I also cannot write to the main (root) bucket directory.
Would it be possible to specify a subfolder within the bucket? Otherwise I will have to use the `pandas_redshift` Python library :(
Thanks! :)
Hi everyone,
Ended up using pandas_redshift to solve my issue. Thanks @jtelleriar!
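For anyone landing here later: `pandas_redshift` exposes the bucket and the subfolder as separate arguments, which sidesteps the `bucket/subfolder` splitting problem entirely. A minimal sketch, with placeholder credentials and names (check the exact keyword names against the library's README; this needs a live Redshift cluster and S3 bucket to actually run):

```python
import pandas as pd
import pandas_redshift as pr

# Placeholder connection details -- substitute your own.
pr.connect_to_redshift(dbname='mydb',
                       host='example.redshift.amazonaws.com',
                       port=5439, user='me', password='secret')

# Unlike the R tool above, the subfolder is its own argument,
# so no bucket-string splitting is needed.
pr.connect_to_s3(aws_access_key_id='AKIA...',
                 aws_secret_access_key='...',
                 bucket='my-bucket',
                 subdirectory='exports')

df = pd.DataFrame({'id': [1, 2], 'value': ['a', 'b']})
pr.pandas_to_redshift(data_frame=df, redshift_table_name='public.my_table')
```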