BUG: Cover fails to upload to S3 storage

I have S3v4 storage working perfectly across my site; everything works. But when I went to change the cover in Designer, the upload failed, and I wasn't sure why. I tried different things.

Changing `sys_images` to Local in the database fixed it: the upload went fine and the cover worked.

Videos, images, files, etc. all upload fine across the site; everything else works properly. Logic then tells me something is wrong between S3 storage and this option.

Replies (16)
    • Hello @Wise !

      Could you please check the error log in your S3 account? Maybe there is something interesting there.

      • @LeonidS hello!

        I responded under the other post, but I also sent you a message on Messenger. I think I found an amazing S3 provider for UNA-based sites; I just need to figure out why some things work while others don't. Hope the UNA team can help.

        • S3 providers differ a little, so we have several S3 libraries; one may work with one provider and not with another. I suggest trying the different ones (S3, S3v4, S3v4alt) to see which works better. Also, if you could specify the actual error, we could probably suggest something more specific.

          • I have tried them all. S3v4 was the one that worked everywhere on my site except in Messenger and the Studio add-cover option. There is no visible error message; it just doesn't work.

            Please ask @Alexey as he tried it on my site. He has credentials.

            • Ok, he said that you are using Backblaze S3 storage. It looks like private files on Backblaze S3 don't work with any of these engines, so the only option for now is to disable private file uploads. To do that, some customizations are needed:

              1) upload the attached file into the `inc/classes` folder under the name `BxDolStorageS3v4Backblaze.php`

              2) update the `sys_options` table so the new storage engine can be selected:

              UPDATE `sys_options` SET `extra` = 'Local,S3,S3v4,S3v4alt,S3v4Backblaze' WHERE `name` = 'sys_storage_default';
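              The UPDATE above can be staged in a file first and then fed to MySQL. A sketch only; the user and database names below are placeholders, not from this thread:

```shell
# Stage the engine-list update in a file, then apply it with the MySQL client.
printf '%s\n' \
  "UPDATE sys_options" \
  "SET extra = 'Local,S3,S3v4,S3v4alt,S3v4Backblaze'" \
  "WHERE name = 'sys_storage_default';" \
  > /tmp/una_storage_engines.sql
# Apply it (adjust user/database for your install):
# mysql -u una_user -p una_db < /tmp/una_storage_engines.sql
```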
              
              • @Alex T⚜️ @Alexey Are those running Wasabi Cloud Storage also affected by this issue? Do they also need to apply the modification you provided above? Does Wasabi support private file uploading? Thanks

                • @Alex T⚜️ all is working now. Thank you! I sent a message to you on messenger.

                  • Wasabi works fine with all storage engines except S3v4alt

                      • Thanks for sharing. Do you have a working example configuration file for s3cmd? I found something on GitHub, but I have to adapt it to UNA.

                        [default]
                        # Access key for authenticating to the storage service
                        access_key = 

                        # Access token for additional authentication (if required)
                        access_token = 

                        # File extensions to which encoding is applied
                        add_encoding_exts = 

                        # Extra headers that can be added to requests
                        add_headers = 

                        # Bucket location in AWS (or another S3-compatible service)
                        bucket_location = us-east-1

                        # CA certificates file for SSL validation
                        ca_certs_file = 

                        # File used to store cache data
                        cache_file = 

                        # Verify SSL certificates when connecting
                        check_ssl_certificate = True

                        # Verify the hostname in SSL certificates
                        check_ssl_hostname = True

                        # CloudFront host for content distribution
                        cloudfront_host = cloudfront.amazonaws.com

                        # Default MIME type for files
                        default_mime_type = binary/octet-stream

                        # Delay updates (True to wait)
                        delay_updates = False

                        # Delete files after transfer (True to enable)
                        delete_after = False

                        # Delete files after they have been fetched
                        delete_after_fetch = False

                        # Delete files that no longer exist in the source
                        delete_removed = False

                        # Simulate operations without actually performing them
                        dry_run = False

                        # Enable multipart upload for large files
                        enable_multipart = True

                        # Encoding for files
                        encoding = UTF-8

                        # Enable or disable encryption for files
                        encrypt = False

                        # Expiry date for files (if set)
                        expiry_date = 

                        # Number of days until files expire
                        expiry_days = 

                        # Prefix for expiry
                        expiry_prefix = 

                        # Follow symlinks during transfer operations
                        follow_symlinks = False

                        # Force delete operations
                        force = False

                        # Resume partially completed downloads
                        get_continue = False

                        # Command used for GPG
                        gpg_command = /usr/bin/gpg

                        # Command for decrypting GPG files
                        gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s

                        # Command for encrypting files with GPG
                        gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s

                        # Passphrase for GPG (if needed)
                        gpg_passphrase = 

                        # Guess the MIME type based on file extensions
                        guess_mime_type = True

                        # Limit on the number of files processed
                        limit = -1

                        # Rate limit for operations (if set)
                        limitrate = 0

                        # List file MD5 sums (if enabled)
                        list_md5 = False

                        # Allow listing files in unordered fashion
                        list_allow_unordered = False

                        # Prefix for target logs
                        log_target_prefix = 

                        # Enable detailed listing
                        long_listing = False

                        # Maximum number of files that can be deleted at once
                        max_delete = -1

                        # MIME type for uploaded files
                        mime_type = 

                        # Chunk size for multipart uploads (MB)
                        multipart_chunk_size_mb = 15

                        # Maximum number of chunks for multipart uploads
                        multipart_max_chunks = 10000

                        # Preserve the original file attributes
                        preserve_attrs = True

                        # Enable or disable the progress meter
                        progress_meter = True

                        # Proxy host (if used)
                        proxy_host = 

                        # Proxy port (if used)
                        proxy_port = 0

                        # Resume partially completed uploads
                        put_continue = False

                        # Chunk size for downloads
                        recv_chunk = 65536

                        # Allow Reduced Redundancy Storage
                        reduced_redundancy = False

                        # Enable or disable requester-pays buckets
                        requester_pays = False

                        # Number of days for restoring archived files
                        restore_days = 1

                        # Secret key for authentication
                        secret_key = zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG

                        # Chunk size for uploads
                        send_chunk = 65536

                        # Enable server-side encryption for files
                        server_side_encryption = False

                        # Signature version (True forces V2; False permits V4)
                        signature_v2 = False

                        # Host for SimpleDB (if used)
                        simpledb_host = sdb.amazonaws.com

                        # Skip existing files during transfer
                        skip_existing = False

                        # Socket timeout
                        socket_timeout = 300

                        # Statistics (enable or disable)
                        stats = False

                        # Stop the process on the first error
                        stop_on_error = False

                        # Storage class used (if specified)
                        storage_class = 

                        # URL-encoding mode
                        urlencoding_mode = normal

                        # Enable use of HTTP Expect
                        use_http_expect = False

                        # Use HTTPS
                        use_https = False

                        # Use MIME magic to guess MIME types
                        use_mime_magic = True

                        # Verbosity level for logging
                        verbosity = WARNING

                        # Website endpoint
                        website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/

                        # Website error page
                        website_error = 

                        # Website index page
                        website_index = index.html

                        
                        
                        • You don't need to create this manually; just type:

                          s3cmd --configure 
                          

                          and it will be created for you.

                          If you are going to work with Wasabi, you need to specify a custom endpoint URL. After the configure step is done, make sure you have the following line in your config:

                          website_endpoint = https://s3.us-east-2.wasabisys.com
                          

                          Please replace us-east-2 with your datacenter.
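                          To sketch the full picture, here is a minimal Wasabi-style config written to /tmp (keys are placeholders; `host_base` and `host_bucket` are my additions so that API calls, not just website URLs, go to Wasabi):

```shell
# Minimal Wasabi s3cmd config sketch; keys and region are placeholders.
printf '%s\n' \
  "[default]" \
  "access_key = YOUR_ACCESS_KEY" \
  "secret_key = YOUR_SECRET_KEY" \
  "host_base = s3.us-east-2.wasabisys.com" \
  "host_bucket = %(bucket)s.s3.us-east-2.wasabisys.com" \
  "website_endpoint = https://s3.us-east-2.wasabisys.com" \
  "use_https = True" \
  > /tmp/s3cfg.wasabi
# Try it without touching ~/.s3cfg:
#   s3cmd -c /tmp/s3cfg.wasabi ls
```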

                          • Absolutely. This makes it much easier, and it works smoothly with Backblaze.

                            • I have tried all the endpoint configuration options using the command:

                              s3cmd --configure
                              

                              However, it doesn’t connect successfully. Instead, I manually set up the configuration file and managed to load all the storage folders. Here’s the content of my configuration file:

                              cat ~/.s3cfg 
                              
                              [default] 
                              access_key = mykey 
                              secret_key = mysecretkey 
                              host_base = s3.us-east-005.backblazeb2.com 
                              host_bucket = %(bucket)s.s3.us-east-005.backblazeb2.com 
                              use_https = True 
                              signature_v4 = True 
                              bucket_name = mybucketname 
                              bucket_id = mybucketid
                              

                              The problem I'm encountering is that the connection always uses v2, and after loading the entire folder it tells me that it's not compatible with v2. So s3cmd connects using v2, the signatures on the files are incorrect, and media files fail to load on the site.

                              Having a working configuration file would be very helpful, which is why I am asking if you already have a functional example. Maybe something is missing; I'll try adding other variables if no one else does, and if I find a working version I'll post it here. According to the Backblaze documentation, it only supports v4.

                              ERROR: S3 error: 400 (InvalidRequest): The V2 signature authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256

                              I also added the line

                              signature_v2 = False  
                              

                              in order to force v4, but the connection created by s3cmd only uses v2, and the problem is that the CLI version Backblaze provides cannot upload whole folders.
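                              As far as I can tell, s3cmd has no `signature_v4` key at all (and `bucket_name`/`bucket_id` aren't s3cmd options either); v4 is the default signing method in recent s3cmd as long as `signature_v2` is not True. A minimal B2-oriented sketch with a self-check, using placeholder keys:

```shell
# Minimal Backblaze B2 s3cmd config sketch; keys and region are placeholders.
printf '%s\n' \
  "[default]" \
  "access_key = YOUR_KEY_ID" \
  "secret_key = YOUR_APPLICATION_KEY" \
  "host_base = s3.us-east-005.backblazeb2.com" \
  "host_bucket = %(bucket)s.s3.us-east-005.backblazeb2.com" \
  "use_https = True" \
  "signature_v2 = False" \
  > /tmp/s3cfg.b2
# B2 rejects v2 signatures, so make sure nothing forces them:
if ! grep -qi "^signature_v2 *= *True" /tmp/s3cfg.b2; then
  echo "config does not force v2 signing"
fi
```

                              Also check `s3cmd --version`; very old builds only sign with v2 regardless of the config.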

                              • For version v4 the best option is the AWS CLI:

                                aws s3 cp /una/storage s3://yourbucketfolder --recursive   
                                

                                This works perfectly, without errors.
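                                One note on the command above: against Backblaze the AWS CLI also needs `--endpoint-url`, otherwise it talks to AWS itself. A sketch with a placeholder bucket and region; the command is only printed here, not executed:

```shell
# The AWS CLI signs with v4 by default, which is what B2 requires.
ENDPOINT="https://s3.us-east-005.backblazeb2.com"
echo "aws s3 sync /una/storage s3://yourbucketfolder --endpoint-url $ENDPOINT" \
  | tee /tmp/b2_sync.cmd
# Run it for real with: sh /tmp/b2_sync.cmd  (add --dryrun to preview first)
```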

                                • I have no idea why you have the issues you have. I have never used the AWS tool; I used the simple configuration tool and connected without issue.

                                  I think you tinker too much, and please do not take this the wrong way: you tend to take the hard way a lot. I had none of the issues you had, and I just set it all up for someone else with zero issues. I am glad I was reminded of this post before trying; I would have been wondering where the option was. 😁

                                  • When you already have files stored on the server, you need to upload them manually. The UNA application doesn't automatically upload files from the storage folder, so simply changing the setting from Local to Backblaze won't make existing files transfer. S3cmd uses v2, while Backblaze only supports v4 signatures.

                                    You can use the native Backblaze B2 application, but it does not support uploading entire folders, only lists of files and single files. With the AWS CLI, however, you can upload or sync the entire storage folder in a single command; how long it takes depends on your datacenter's bandwidth and how much data is in storage.

                                    Additionally, you can refresh the local storage folder on your server from the Backblaze bucket whenever you feel it's necessary. The UNA application may not always update files automatically, and if a file is missing you can manually sync your storage at any time with the AWS CLI. This way your files are always up to date. I prefer to write all my configuration files by hand, as I don't like having them created automatically; I like to review every detail and check each step carefully so everything works flawlessly. When I write the configuration files manually, things always work perfectly, like now. Thanks for the suggestion, but I don't understand which tool you are using ("the simple configuration tool")?
