Comments on 'BUG: Cover fails to upload to S3 storage'
  • I have tried all the endpoint configuration options using the command:

    s3cmd --configure
    

    However, it doesn’t connect successfully. Instead, I manually set up the configuration file and managed to load all the storage folders. Here’s the content of my configuration file:

    cat ~/.s3cfg 
    
    [default] 
    access_key = mykey 
    secret_key = mysecretkey 
    host_base = s3.us-east-005.backblazeb2.com 
    host_bucket = %(bucket)s.s3.us-east-005.backblazeb2.com 
    use_https = True 
    signature_v4 = True 
    bucket_name = mybucketname 
    bucket_id = mybucketid
    

    The problem I’m encountering is that the connection always uses v2, and after uploading the entire folder the server reports that v2 is not supported. Because s3cmd signs with v2, the signatures on the files are incorrect, which prevents media files from loading on the site.

    Having a working configuration file would be very helpful, which is why I am asking if anyone already has a functional example. Maybe something is missing; I'll try adding other variables if no one else does, and if I find a working version I'll post it here. According to the Backblaze documentation, it only supports v4.

    ERROR: S3 error: 400 (InvalidRequest): The V2 signature authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256

    I also added the option

    signature_v2 = False  
    

    in order to force v4, but the connection created by s3cmd still uses v2. The other problem is that the CLI that Backblaze provides cannot upload folders.
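As far as I can tell, s3cmd has no signature_v4 option (and no bucket_name or bucket_id keys either); the v2/v4 choice is controlled solely by signature_v2. A minimal sketch of a v4-only ~/.s3cfg, reusing the placeholder keys and region from above:

```ini
[default]
access_key = mykey
secret_key = mysecretkey
host_base = s3.us-east-005.backblazeb2.com
host_bucket = %(bucket)s.s3.us-east-005.backblazeb2.com
use_https = True
; setting this to False selects Signature v4; there is no separate v4 switch
signature_v2 = False
```

You would then test the connection with something like s3cmd ls s3://mybucketname before trying a full upload.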

    • For v4 signatures, the best option is the AWS CLI (aws s3):

      aws s3 cp /una/storage s3://yourbucketfolder --recursive   
      

      This works perfectly, without errors.
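One caveat worth noting: by default the AWS CLI targets Amazon's endpoints, so for Backblaze you have to point it at the B2 endpoint yourself, either per command with --endpoint-url or in the profile. A sketch using the placeholder credentials, bucket, and region from the thread (substitute your own):

```shell
# Placeholder B2 application key ID and key from the thread -- not real values.
export AWS_ACCESS_KEY_ID="mykey"
export AWS_SECRET_ACCESS_KEY="mysecretkey"
BUCKET="yourbucketfolder"
ENDPOINT="https://s3.us-east-005.backblazeb2.com"

# --endpoint-url points the AWS CLI at Backblaze instead of Amazon;
# the CLI signs with v4 by default, which is what B2 requires.
CMD="aws s3 cp /una/storage s3://$BUCKET --recursive --endpoint-url $ENDPOINT"
echo "$CMD"   # shown rather than executed here, since it needs real credentials
```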

      • I have no idea why you're having these issues. I have never used the aws tool; I used the simple configuration tool and connected without any problems.

        I think you tinker too much, and please don't take this the wrong way. You tend to take the hard way a lot. I had none of the issues you describe, and I just set it all up for someone else with zero issues. I'm glad I was reminded of this post before trying; I would have been wondering where the option was. 😁

        • When you already have files stored on the server, you need to upload them manually. The UNA application doesn't automatically upload files from the storage folder, so simply changing the setting from local to Backblaze won't make the files transfer on their own. S3cmd uses v2, while Backblaze only supports v4 signatures.

          You can use the native Backblaze B2 CLI, but it does not support uploading entire folders, only lists of files and single files. With the AWS CLI, however, you can upload or sync the entire storage folder in a single command; how long it takes depends on your data center's bandwidth and how much data you have in storage.

          Additionally, you can update the local storage folder on your server from a Backblaze bucket whenever you feel it's necessary. The UNA application may not always update files automatically, and if a file is missing you can manually sync your storage at any time with the AWS CLI. This way, you can ensure your files are always up to date.

          I prefer to write all my configuration files manually, as I don't like having them created automatically. I like to review every detail and check each step carefully to ensure everything works flawlessly; when I write the configuration files by hand, things work perfectly, like now. Thanks for the suggestion, but I don't understand which tool you are using ("the simple configuration tool")?
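The periodic manual sync described above can be sketched with aws s3 sync, which only transfers files that are new or changed; the bucket name, destination prefix, and endpoint below are the placeholders used earlier in the thread:

```shell
BUCKET="mybucketname"
ENDPOINT="https://s3.us-east-005.backblazeb2.com"

# sync skips files that already exist unchanged in the bucket, so it is
# safe to re-run whenever a file goes missing; --dryrun previews the
# transfer list without uploading anything.
CMD="aws s3 sync /una/storage s3://$BUCKET/storage --endpoint-url $ENDPOINT --dryrun"
echo "$CMD"   # printed rather than executed; real credentials are needed
```

Dropping --dryrun performs the actual transfer once the preview looks right.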