Performance problems

Hello everyone,

We have a UNA 12.1 installation that has been distributed over 2 instances. At the moment we have performance problems, which are primarily caused by a shared cache_public folder. When the cache is empty, it usually takes more than 10 seconds for a page to load.

The following questions came up:

* Is it possible to use one cache_public folder per instance, or will there be problems, e.g. because user-related data is cached?

* Can the cache_public folder also be moved to Memcache, for example, and if so, how would that work?

Are there any other ways to speed up the system, such as optimised OpCache parameters, switching to PHP 8+ etc.?
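On the OPcache question: these are the directives usually worth checking first. The values below are a common starting point, not official UNA recommendations, so please benchmark against your own workload:

```ini
; php.ini -- typical OPcache starting values (tune to your workload)
opcache.enable=1
opcache.memory_consumption=256        ; MB of shared memory for compiled scripts
opcache.interned_strings_buffer=16    ; MB for interned strings
opcache.max_accelerated_files=20000   ; should exceed the number of PHP files
opcache.validate_timestamps=1         ; set to 0 only if you restart PHP on deploy
opcache.revalidate_freq=60            ; seconds between file-change checks
```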

Unfortunately we cannot upgrade to UNA 13.1 at the moment.

Many thanks in advance for any ideas.

Replies (11)
    • Hello, that is very strange. Have you looked at the profiler report? You can use a CDN. Also consider a more powerful server, if it is not a software problem.

      • I had real problems with UNA loading speed. I couldn't work it out. Changed hosts and the problem magically went away.

        Michel - 🌎 Meta-Travel 🌴 Community Hub

        • Hello @Peter Fuchs !

          Could you please specify - do you have any actual errors in your server's error log? And, of course, do you see any red warnings in the Studio->Dashboard->Server audit area?

          • Hello Rocco,

            Thank you for your feedback. A CDN could possibly help. I will discuss this with our SysAdmin.

            • Hello Michel,

              Thanks for the tip. We host the UNA website ourselves, so I assume that the server setup should be fine. 🙂

              • Hello LeonidS,

                Thank you very much for your quick response. There is currently no error message in the log file.

                The server audit reports 'MySQL: thread_cache_size = 0 - FAIL (must be > 0)' and we still have to work on the two issues 'User-side caching for static content' and 'Server-side content compression'.
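Regarding the `thread_cache_size` warning: that is a MySQL server variable, and the fix is a one-line change in the MySQL configuration. A sketch for a typical my.cnf layout (the exact file location depends on your distribution):

```ini
# /etc/mysql/my.cnf (location varies by distribution)
[mysqld]
thread_cache_size = 8   # any value > 0 clears the audit warning; 8-16 is a common start
```

It can also be changed at runtime with `SET GLOBAL thread_cache_size = 8;`, but only the config entry survives a server restart.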

                But I don't think that explains the initially long time needed to create the cache files.

                Is there perhaps a tool that can be used to warm the cache after deletion?
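As far as I know, UNA 12 ships no built-in warm-up tool, but a minimal warmer is easy to sketch: request the most important pages once so the server regenerates its cache files before real visitors arrive. The URL list and the `warm_cache` helper below are illustrative, not part of UNA:

```python
# Minimal cache warmer: request key pages once so the server regenerates its
# cache_public files before real visitors hit them.  The helper name and the
# URL list are illustrative -- UNA 12 does not ship this tool.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def warm_cache(urls, fetch=None, workers=4):
    """Fetch each URL once; returns {url: HTTP status}.  `fetch` is
    injectable so the function can be tested without a live server."""
    if fetch is None:
        fetch = lambda url: urlopen(url, timeout=30).status
    with ThreadPoolExecutor(max_workers=max(1, workers)) as pool:
        return dict(zip(urls, pool.map(fetch, urls)))

if __name__ == "__main__":
    # Hypothetical page list -- replace with your site's key URLs.
    pages = ["https://example.com/", "https://example.com/page-1"]
    # print(warm_cache(pages))
```

Running something like this right after clearing the cache (e.g. from a deploy script) moves the >10 s first-hit cost off your visitors.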

                Best regards

                • There may be trouble with fast read/write access to the cache files on your server's hard drive. For a quick check, you can switch the cache engine from Files to Memcache in the Studio->Settings->Cache area (if you have Memcache ready on your server).
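Before switching the engine, it can help to confirm that memcached is actually reachable from the web server. A small sketch (the helper name and defaults are mine; 11211 is memcached's default port):

```python
# Quick reachability check for memcached before switching the cache engine.
# The helper name and defaults are illustrative; 11211 is memcached's default port.
import socket

def memcache_alive(host="127.0.0.1", port=11211, timeout=2.0):
    """Return True if a memcached server answers the `version` command."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as conn:
            conn.sendall(b"version\r\n")
            return conn.recv(64).startswith(b"VERSION")
    except OSError:
        return False
```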

                  • We already use Memcache as a cache engine. Nearly all files of the 'cache' folder are stored there. However, this has no effect on the 'cache_public' folder. CSS, LESS and JS files are still stored in this folder.

                    • I think there is no need to share the cache and/or cache_public folders; each server can have its own copy. It may be that the shared folder isn't fast enough, and keeping separate copies of the cache folders can speed up your site.

                      There are many optimization steps that can be implemented, such as:

                      • DB server optimization, and possibly separating it onto its own machine or, conversely, consolidating it onto the web server
                      • PHP optimization (configure the PHP-FPM pool, enable and configure the Zend OPcache)
                      • enabling remote storage
                      • enabling caching at different levels
                      • adding metrics to fine-tune the settings

                      But this is more custom work than general guidelines that fit every website.

                      • Hello @Alex T⚜️ ,

                        Thank you very much for your feedback. We will test this next and run each server with its own cache folders.

                        I will let you know if that works.

                          • An update on this topic: each server now has its own copy of the cache and cache_public folders. Initial tests show that this works well. The load balancer only has to ensure that a logged-in user always stays on the same server.
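For the session-affinity part, most load balancers have a built-in option. As one example, assuming an nginx load balancer (the upstream name and server addresses are placeholders), `ip_hash` pins each client IP to one backend:

```nginx
# nginx.conf -- sticky-by-client-IP load balancing (addresses are placeholders)
upstream una_backend {
    ip_hash;                  # same client IP always hits the same server
    server 10.0.0.11;
    server 10.0.0.12;
}
server {
    listen 80;
    location / {
        proxy_pass http://una_backend;
    }
}
```

Cookie-based persistence (e.g. HAProxy's `cookie` option) is more robust when many clients sit behind a shared NAT and thus share one IP.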

                          We will test this in more detail.
