Album - token

Every time I upload several photos to an album, the same problem appears: after the photos finish uploading and I click the submit button, I get the error "Token expired. Please try again." If I want to upload 100 pictures and the user's bandwidth is not wide enough, posting becomes impossible. It is frustrating, especially after a long wait, to see that you waited for nothing. How do I remove or extend this expiration time? The same problem happens with videos. Trying again results in a double or triple post; the same thing happened here on UNA with my "Location field not working as expected" post. This happens when the mobile connection is very slow. Please review my "Location field not working as expected" post to investigate.

Replies (5)
    • Hello @Romulus !

      Could you please specify how many photos, and of what size, you tried to upload? From my side, I could add eight 2 MB photos without problems. Perhaps you need to increase the post_max_size parameter too.

      • post_max_size is already 2048 MB. I also have a post on UNA where the token expired and the same post was published three times, and there was no photo in it at all, so the real problem is preventing double posting when the user clicks to try again. Search UNA for "Location field not working as expected"; it has nothing to do with my php.ini or nginx conf settings. You can investigate the logs from my post on UNA if you want to find the problem and fix it. That post was posted three times due to token expiration. Here are my current php.ini settings:

        [memcached]
        ; Memcached extension configuration
        
        
        ; Specify the serializer for Memcached
        memcached.serializer = php
        
        
        ; Specify the session prefix for Memcached
        memcached.sess_prefix = memc.sess.key.
        
        
        ; Enable binary protocol for Memcached sessions
        memcached.sess_binary = On
        
        
        ; Use SASL authentication for Memcached (0 = disable, 1 = enable)
        memcached.use_sasl = 0
        
        
        ; Minimum waiting time for session lock acquisition (in milliseconds)
        memcached.sess_lock_wait_min = 150
        
        
        ; Maximum waiting time for session lock acquisition (in milliseconds)
        memcached.sess_lock_wait_max = 150
        
        
        ; Maximum size of POST data that PHP will accept.
        post_max_size = 2048M   ; Changed to 2 GB
        
        
        ; Maximum allowed size for uploaded files.
        upload_max_filesize = 2048M   ; Changed to 2 GB
        
        
        ; Maximum execution time of each script, in seconds
        max_execution_time = 7200   ; Changed to 2 hours
        
        
        ; Maximum amount of time each script may spend parsing request data
        max_input_time = 7200   ; Changed to 2 hours
        
        
        ; Maximum amount of memory a script may consume
        memory_limit = 32768M   ; Changed to 32 GB
        
        
        ; Maximum number of files that can be uploaded via a single request
        max_file_uploads = 50000
        
        
        ; Enable HTML error handling
        html_errors = On
        
        
        ; String to output before an error message
        error_prepend_string = "<pre style='white-space: pre-line'>"
        
        
        ; String to output after an error message
        error_append_string = "</pre>"
        
        
        [opcache]
        ; Enables the opcode cache for PHP
        opcache.enable = 1
        
        
        ; Frequency of checking for updated files (0 = never)
        opcache.revalidate_freq = 0
        
        
        ; Validates cached files with timestamp checks
        opcache.validate_timestamps = 1
        
        
        ; Maximum number of files that can be cached in memory
        opcache.max_accelerated_files = 200000   ; Changed to allow more cached files
        
        
        ; Memory consumption by the opcode cache (in MB)
        opcache.memory_consumption = 256   ; Changed to 256 MB
        
        
        ; Maximum wasted memory percentage before restart
        opcache.max_wasted_percentage = 20
        
        
        ; Size of the interned strings buffer (in MB)
        opcache.interned_strings_buffer = 16
        
        
        ; Allows for faster shutting down of the opcode cache
        opcache.fast_shutdown = 1
        
        
        
          • My latest try today: 35 photos, 2.4 MB (2,410,854 bytes) to 2.7 MB each. The photos load quickly, in 30 seconds at most, but I can't post them; the error is always the same: Token expired. Please try again. Although the post does go through in the end after roughly another 30 seconds, this message only brings confusion: if I click several times when I see it, several albums are created with different numbers of photos. Four clicks resulted in four albums containing 20, 18, 15, and 35 photos. The "Online status timeframe (minutes)" setting in my account is set to 60 minutes.

          Google Chrome console errors:

          • [Deprecation] Listener added for a 'DOMNodeInserted' mutation event. Support for this event type has been removed, and this event will no longer be fired. (A MutationObserver sketch follows after this list.)
          • Google Maps JavaScript API has been loaded directly without loading=async. This can result in suboptimal performance. For best-practice loading patterns please see https://goo.gle/js-api-loading (the Google Maps location field is not working on the post album form, only on the edit album form; already reported here: https://unacms.com/d/location-field-not-working-as-expected ; a loading=async sketch follows after this list)
          • Ufa @ js?libraries=places&language=en&key=A.........
          • Error handling response: TypeError: Cannot read properties of undefined (reading 'status')
          • Unchecked runtime.lastError: A listener indicated an asynchronous response by returning true, but the message channel closed before a response was received
          • Uncaught (in promise) Error: A listener indicated an asynchronous response by returning true, but the message channel closed before a response was received
          • It appears that there is a misunderstanding regarding the issue at hand. A solution needs to be implemented to prevent the creation of duplicate albums when clicked multiple times, regardless of any errors. The current script seems outdated, reminiscent of coding practices from over 20 years ago, rather than reflecting modern scripting standards. Honestly, until this version of UNA, I have not encountered a script that creates multiple albums from a single upload.
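
            One way to prevent duplicate albums, sketched below in TypeScript under stated assumptions: disable the submit control while a request is in flight and attach a client-generated idempotency key that the server can use to recognise retries. The endpoint /album/submit, the header X-Idempotency-Key, and the function submitAlbum are hypothetical names for illustration, not UNA's actual API.

            // Minimal sketch: block repeat clicks and tag the request with an
            // idempotency key; endpoint and header names are hypothetical.
            let inFlight = false;
            const idempotencyKey = crypto.randomUUID(); // one key per opened form
            
            async function submitAlbum(form: HTMLFormElement, button: HTMLButtonElement): Promise<void> {
              if (inFlight) return;   // ignore extra clicks while a request is running
              inFlight = true;
              button.disabled = true;
              try {
                const response = await fetch('/album/submit', {      // hypothetical endpoint
                  method: 'POST',
                  body: new FormData(form),
                  headers: { 'X-Idempotency-Key': idempotencyKey },   // server would deduplicate on this
                });
                if (!response.ok) {
                  throw new Error(`Submit failed with status ${response.status}`);
                }
              } finally {
                inFlight = false;
                button.disabled = false;
              }
            }

            For this to be complete, the server would also need to remember recently seen keys (for example in the session) and return the already-created album for a repeated key instead of creating a new one.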
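
            On the 'DOMNodeInserted' deprecation warning above: that mutation event has been removed from browsers, and the modern replacement is a MutationObserver. A minimal sketch of the equivalent pattern, with a placeholder selector rather than UNA's real markup:

            // Minimal sketch: react to newly inserted nodes with a MutationObserver
            // instead of the removed 'DOMNodeInserted' event; the selector is a placeholder.
            const container = document.querySelector('#album-list');
            if (container) {
              const observer = new MutationObserver((mutations) => {
                for (const mutation of mutations) {
                  mutation.addedNodes.forEach((node) => {
                    console.log('Inserted:', node); // handle the new node here
                  });
                }
              });
              observer.observe(container, { childList: true, subtree: true });
            }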
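
            And on the Maps warning: the loading=async bootstrap parameter the message asks for can be added when the script is injected. A minimal sketch, with a placeholder API key and callback name:

            // Minimal sketch: inject the Maps JavaScript API with loading=async and a
            // callback; the API key and callback name are placeholders.
            (window as any).initMap = () => {
              console.log('Google Maps API ready'); // map setup would go here
            };
            
            const script = document.createElement('script');
            script.src =
              'https://maps.googleapis.com/maps/api/js' +
              '?libraries=places&language=en&key=YOUR_KEY' +
              '&loading=async&callback=initMap';
            script.async = true;
            document.head.appendChild(script);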

            • Problem solved with help, Thanks!


              Resolving Upload Delays Related to ZFS Sync Settings

              When managing photo uploads, users may encounter delays or errors that can disrupt their experience. One common issue stems from the way the ZFS (Zettabyte File System) handles data synchronization, which can significantly impact upload performance. This article explores how adjusting the ZFS sync settings can resolve these upload-related errors.

              Understanding the Problem

              ZFS has a built-in synchronization mechanism that controls how data is committed to disk. The default setting is sync=standard, which honors applications' synchronous write requests and so preserves data integrity for those writes. However, synchronous writes can introduce latency, especially during bulk uploads, because each request must be committed to stable storage before the upload process can continue.

              Symptoms of the Issue:

              • Slow upload times when adding multiple photos to albums.
              • Delayed responses in the application during the upload process.
              • Possible notifications for each individual photo upload, causing notification overload.

              Adjusting ZFS Sync Settings

              To alleviate these issues, adjusting the ZFS sync setting for the relevant dataset can be beneficial. Setting sync=always ensures maximum data integrity but at the cost of performance. Conversely, setting it to sync=disabled allows for faster uploads, but it introduces a risk of data loss in the event of a crash.

              For scenarios where data integrity is not critical, you can set sync to disabled or standard for better performance. In this case, however, that caused errors during photo uploads (failed photos had to be re-uploaded because the data had not been synchronized), so the best option is 'always'. Here's how to set it:

              # Force synchronous writes on the dataset that holds the uploads
              zfs set sync=always rpool/USERDATA/user_dataset
              

              where 'user_dataset' is the dataset containing the data I want committed to disk synchronously. You can have multiple datasets, and you can check the current value with 'zfs get sync'. For more information, see the official documentation: https://docs.oracle.com/cd/E19253-01/819-5461/idx/index.html
