gkuhnert

·
Added a discussion

Another ChatGPT-assisted bugfix. I've been getting errors daily from the cron job related to sys_seo_links duplicates; I hope this will be useful to anyone else with this issue. The error looks like this from cron:

Database error in sitename

MySQL error: Duplicate entry 'bx_persons-view-persons-profile-59007' for key 'module_page_param'

This happens when UNA attempts to generate SEO links in the sys_seo_links table.

It is not isolated to one module — the system was frequently trying to insert SEO links that already existed, causing site errors and breaking pages that render profile cards, search results, or timeline entries.

Cause of the Problem

The core function insertSeoLink() uses a plain SQL insert:

INSERT INTO sys_seo_links SET ...

The table sys_seo_links has a UNIQUE index on:

(module, page_uri, param_name, param_value)

When UNA tries to create an SEO link for an item that already has one, MySQL throws a duplicate-key error, which results in a PHP exception and interrupts the page load.

This can happen when:

  • Permalinks are regenerated
  • Slugs change (for example, profile name changes)
  • Cache returns outdated values
  • Modules request permalink creation multiple times
  • Search/timeline/pages re-trigger permalink logic

The result is frequent, noisy site errors.

The Fix — Update When Duplicate Exists

Instead of using a plain INSERT and letting MySQL halt the page, we can change the logic so that:

  • New SEO links are inserted normally
  • Existing SEO links are updated instead of causing errors
  • No exceptions break the page
  • SEO slugs stay current
  • No silent failures occur

This is done by using:

INSERT ... ON DUPLICATE KEY UPDATE

This tells MySQL:

  • Insert the row if it’s new
  • Update the existing row when a duplicate is detected
  • Avoid all duplicate-key errors

Existing (UNA core code)

static public function insertSeoLink($sModule, $sPageUri, $sSeoParamName, $sSeoParamValue, $sUri)
{
    return BxDolDb::getInstance()->query("INSERT INTO `sys_seo_links` SET `module` = :module, `page_uri` = :page_uri, `param_name` = :param_name, `param_value` = :param_value, `uri` = :uri, `added` = :ts", [
        'module' => $sModule,
        'page_uri' => $sPageUri,
        'param_name' => $sSeoParamName,
        'param_value' => $sSeoParamValue,
        'uri' => $sUri,
        'ts' => time(),
    ]);
}

This throws an error if the SEO link already exists.

Fixed (Improved version with update logic)

static public function insertSeoLink($sModule, $sPageUri, $sSeoParamName, $sSeoParamValue, $sUri)
{
    return BxDolDb::getInstance()->query("
        INSERT INTO `sys_seo_links`
            (`module`, `page_uri`, `param_name`, `param_value`, `uri`, `added`)
        VALUES
            (:module, :page_uri, :param_name, :param_value, :uri, :ts)
        ON DUPLICATE KEY UPDATE
            `uri` = VALUES(`uri`),
            `added` = VALUES(`added`)
    ", [
        'module' => $sModule,
        'page_uri' => $sPageUri,
        'param_name' => $sSeoParamName,
        'param_value' => $sSeoParamValue,
        'uri' => $sUri,
        'ts' => time(),
    ]);
}

Why not just ignore it?

Another solution might be to use

INSERT IGNORE

But that approach has major drawbacks:

  • It silently drops the insert
  • The slug never updates even if the content changes
  • Old or incorrect SEO URLs remain forever
  • Debugging becomes harder because errors are hidden

The improved solution updates only the values that should change:

  • uri (the SEO-friendly slug)
  • added (timestamp)

Nothing else in the unique key is modified.

ON DUPLICATE KEY UPDATE is the correct method because it:

  • Keeps SEO links current
  • Maintains consistency across all modules
  • Prevents site-breaking errors
  • Does not hide failures
  • Properly handles repeated insert attempts
  • Avoids creating a mess of stale or outdated slugs

This matches the recommended “upsert” pattern for MySQL.
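For anyone who wants to see the upsert semantics in isolation, here is a minimal Python sketch against a hypothetical, simplified version of the table. It uses SQLite (which spells the clause ON CONFLICT ... DO UPDATE rather than MySQL's ON DUPLICATE KEY UPDATE), so the syntax differs slightly but the behavior is the same:

```python
import sqlite3

# Hypothetical, simplified version of the sys_seo_links table, for illustration only.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE sys_seo_links (
        module TEXT, page_uri TEXT, param_name TEXT, param_value TEXT,
        uri TEXT, added INTEGER,
        UNIQUE (module, page_uri, param_name, param_value)
    )
""")

def upsert_seo_link(module, page_uri, param_name, param_value, uri, ts):
    # SQLite analogue of MySQL's INSERT ... ON DUPLICATE KEY UPDATE:
    # insert a new row, or refresh uri/added when the unique key already exists.
    con.execute("""
        INSERT INTO sys_seo_links (module, page_uri, param_name, param_value, uri, added)
        VALUES (?, ?, ?, ?, ?, ?)
        ON CONFLICT (module, page_uri, param_name, param_value)
        DO UPDATE SET uri = excluded.uri, added = excluded.added
    """, (module, page_uri, param_name, param_value, uri, ts))

# The second call hits the unique key: no error, the row is refreshed in place.
upsert_seo_link("bx_persons", "view-persons-profile", "id", "59007", "old-slug", 100)
upsert_seo_link("bx_persons", "view-persons-profile", "id", "59007", "new-slug", 200)

rows = con.execute("SELECT uri, added FROM sys_seo_links").fetchall()
print(rows)  # [('new-slug', 200)]
```

The second insert would have thrown a duplicate-key error under a plain INSERT; with the upsert clause it simply refreshes the slug and timestamp.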

·
Added a discussion

I’m running UNA 14.0.0 with the latest Payments module. Below is my analysis of a bug I've identified (with some formatting and research help from ChatGPT). This bug existed all the way back in the Dolphin days, and it still exists in UNA today.

PayPal successfully processes a payment and sends back an IPN with:

payment_status=Completed

But UNA sometimes logs:

_bx_payment_pp_err_wrong_transaction

and PayPal returns:

INVALID

during the IPN verification step.

After reviewing the UNA debug logs and the provider code, the cause can be reproduced and confirmed.

🔍 What the UNA logs show

  • UNA sends a "custom" value like:
custom = NHw4MQ%3D%3D
  • PayPal returns the IPN with the exact same value:
custom = NHw4MQ%3D%3D
  • But UNA’s validation request (cmd=_notify-validate) shows:
custom = NHw4MQ%253D%253D

The %3D sequences inside the custom value have been encoded twice.

PayPal requires the validation payload to match the original IPN exactly, so it responds:

INVALID

and UNA rejects an otherwise successful payment.

📘 PayPal IPN Specification Requirement

Direct quote from PayPal’s IPN documentation:

“You must send back the entire message exactly as PayPal sent it, in your (cmd=_notify-validate) message.
The message must match byte-for-byte, otherwise PayPal will return INVALID.”

UNA’s current PayPal provider violates this requirement by re-encoding POST values.

🎯 Root Cause

The issue is in:

modules/boonex/payment/classes/BxPaymentProviderPayPal.php

Inside:

protected function _validateCheckout(&$aData, &$aPending)

UNA builds the validation request like this:

$sRequest .= '&' . urlencode($sKey) . '=' . urlencode(bx_process_pass($sValue));

This always encodes the value again, even if PayPal already encoded it.

Example:

NHw4MQ%3D%3D → NHw4MQ%253D%253D

As soon as this happens, PayPal will always return INVALID.

This is not environment-related — the behavior is in the provider code.
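The double-encoding is easy to reproduce in isolation. The snippet below is not UNA code, just a minimal Python sketch; urllib.parse.quote behaves like PHP's urlencode() for this input (no spaces involved):

```python
from urllib.parse import quote

# The value exactly as PayPal sends it in the IPN POST (already URL-encoded once)
original = "NHw4MQ%3D%3D"

# Encoding it again, as the provider's urlencode() call does,
# turns each '%' into '%25'
reencoded = quote(original)

print(original)   # NHw4MQ%3D%3D
print(reencoded)  # NHw4MQ%253D%253D -- no longer byte-identical, so PayPal answers INVALID
```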

🛠 Fix / Patch

Modify the loop in _validateCheckout() so that the original POST data is passed back to PayPal without re-encoding the values.

Replace this:

$sRequest .= '&' . urlencode($sKey) . '=' . urlencode(bx_process_pass($sValue));

With this:

// Do not double-encode values — PayPal requires exact original values
$sRequest .= '&' . $sKey . '=' . $sValue;

This ensures:

  • UNA sends PayPal the exact original value
  • PayPal returns VERIFIED
  • Checkout completes successfully

🧪 Why this works

PayPal sends:

custom = NHw4MQ%3D%3D

If UNA returns:

custom = NHw4MQ%3D%3D

→ PayPal: VERIFIED

If UNA returns:

custom = NHw4MQ%253D%253D

→ PayPal: INVALID

The fix restores the required byte-for-byte match.
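An alternative, arguably even safer variant (my own suggestion, not part of the patch above) is to rebuild the validation request from the raw POST body, which in PHP would mean reading php://input; the raw body is by definition byte-identical to what PayPal sent. The idea, sketched in Python:

```python
def build_validate_request(raw_post_body: str) -> str:
    # PayPal's IPN handshake: echo the raw body back byte-for-byte,
    # with cmd=_notify-validate prepended; no decoding, no re-encoding.
    return "cmd=_notify-validate&" + raw_post_body

# In PHP this raw body would come from file_get_contents('php://input')
body = "custom=NHw4MQ%3D%3D&payment_status=Completed"
print(build_validate_request(body))
# cmd=_notify-validate&custom=NHw4MQ%3D%3D&payment_status=Completed
```

Because nothing is decoded and re-encoded, this approach cannot introduce a byte-level mismatch regardless of what characters PayPal sends.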

📝 Environment context

  • UNA 14.0.0
  • Payments module fully up to date
  • Apache + PHP-FPM 8.2.x
·
Added a discussion

I know a lot of applications out there do error reporting to the "mothership" to allow app developers to more rapidly troubleshoot and fix issues. Is this something the team has considered - with an option of course to opt in or opt out?

For example, right now I have the error below. If errors like this were automatically added to a central database, it would improve the workflow for improving UNA compared with individual reports, and the team could focus first on easy wins affecting the largest number of servers.

PHP Warning: Undefined array key "txt_sample_comment_single" in /home/.sites/site11/wwwroot/web/template/scripts/BxBaseCmtsServices.php on line 416

·
Added a discussion

The included ffmpeg.exe binary was built in 2021 and has known critical bugs and security issues, including memory corruption in some demuxers, non-thread-safe filters and format handlers, and use-after-free bugs (arbitrary code execution risks).

Additionally, although the binary is static, it relies on libnss for DNS resolution; if mismatched against the system libraries, this can cause crashes. This is relevant because __nss_readline is the cause of the core dumps observed. I have recently reported issues with the speed and performance of UNA; hypothetically, regular core dumping from the bundled static ffmpeg binary would be a significant bottleneck and may be a contributing factor.

Aug  6 16:58:45 hostname kernel: ffmpeg.exe[1162854]: segfault at 47 ip 00007fa1c6f39e54 sp 00007ffee0c3fbb0 error 4 in libc.so.6[7fa1c6e28000+175000] likely on CPU 7 (core 3, socket 1)
Aug  6 16:58:45 hostname kernel: Code: ff 48 85 c0 0f 84 99 00 00 00 41 80 3c 24 ff 0f 85 b4 00 00 00 48 8b 05 9a ef 0b 00 49 0f be 16 4c 89 f5 64 48 8b 08 48 89 d0 <f6> 44 51 01 20 74 18 0f 1f 44 00 00 48 0f be 55 01 48 83 c5 01
Aug  6 16:58:45 hostname systemd[1]: Started Process Core Dump (PID 1162855/UID 0).
Aug  6 16:58:45 hostname systemd-coredump[1162856]: Process 1162854 (ffmpeg.exe) of user 1001033 dumped core.#012#012Stack trace of thread 832570:#012#0  0x00007fa1c6f39e54 __nss_readline (/usr/lib64/libc.so.6 + 0x139e54)#012#1  
Aug  6 16:58:45 hostname systemd[1]: systemd-coredump@14697-1162855-0.service: Deactivated successfully.
Aug  6 16:58:45 hostname kernel: ffmpeg.exe[1162864]: segfault at 47 ip 00007f75d8d39e54 sp 00007ffc3bdd5d70 error 4 in libc.so.6[7f75d8c28000+175000] likely on CPU 11 (core 5, socket 1)
Aug  6 16:58:45 hostname kernel: Code: ff 48 85 c0 0f 84 99 00 00 00 41 80 3c 24 ff 0f 85 b4 00 00 00 48 8b 05 9a ef 0b 00 49 0f be 16 4c 89 f5 64 48 8b 08 48 89 d0 <f6> 44 51 01 20 74 18 0f 1f 44 00 00 48 0f be 55 01 48 83 c5 01
Aug  6 16:58:45 hostname systemd[1]: Started Process Core Dump (PID 1162865/UID 0).
Aug  6 16:58:45 hostname systemd-coredump[1162867]: Process 1162864 (ffmpeg.exe) of user 1001033 dumped core.#012#012Stack trace of thread 832572:#012#0  0x00007f75d8d39e54 __nss_readline (/usr/lib64/libc.so.6 + 0x139e54)#012#1  
Aug  6 16:58:45 hostname systemd[1]: systemd-coredump@14698-1162865-0.service: Deactivated successfully.


·
Added a discussion

One common point of feedback from users on my primary site is that it's slow. But the question is: why? Their point of reference is Dolphin; people thought that was slow, but the feedback is that they see UNA as slower, and not competitive in performance with other community-style applications. Google PageSpeed ranking is quite poor. In terms of bottlenecks, I have CPU and network headroom; the primary cause, as far as I can tell, is the large number of SQL queries. To make sure it was not just my deployment, I ran the same analysis against the UNA community server. Results below.

Can we look at starting an initiative to examine how we can improve performance for the stats below?

[Attached screenshot: SQL query statistics comparison]

·
Added a discussion

There is a field created by the migration tool, mig_id. Some parts of UNA don't like this field and generate errors. Once your migration is 100% complete, you can remove these extra fields. My solution is below; you may have more or fewer tables depending on what you migrated.

ALTER TABLE bx_convos_cmts DROP COLUMN mig_id;
ALTER TABLE bx_forum_discussions DROP COLUMN mig_id;
ALTER TABLE bx_photos_entries DROP COLUMN mig_id;
ALTER TABLE bx_timeline_events DROP COLUMN mig_id;
ALTER TABLE bx_videos_entries DROP COLUMN mig_id;
ALTER TABLE sys_accounts DROP COLUMN mig_id;
ALTER TABLE sys_acl_levels DROP COLUMN mig_id;
ALTER TABLE sys_form_inputs DROP COLUMN mig_id;
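If you are unsure which of your tables still carry the column, information_schema can list them; the query below is read-only and simply generates the matching DROP statements for your database:

```sql
-- List every table in the current database that still has a mig_id column
-- and emit a ready-to-run DROP COLUMN statement for each one.
SELECT CONCAT('ALTER TABLE `', TABLE_NAME, '` DROP COLUMN `mig_id`;') AS stmt
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = DATABASE()
  AND COLUMN_NAME = 'mig_id';
```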
·
Added a discussion

I am getting these errors frequently (multiple times per day). Any hints?

Database error in 

Query:
INSERT INTO `sys_accounts` SET `name` = 'redacted',`email` = 'redacted-email',`receive_news` = '0',`email_confirmed` = '0',`password` = 'redacted',`password_changed` = '1752692966',`salt` = 'redacted/W',`added` = '1752692966',`changed` = '1752692966',`ip` = 'redacted',`referred` = 'redacted/create-account',`phone` = ''

Mysql error:
Duplicate entry 'redacted-email' for key 'email'

Location:
The error was found in query function in the file /redacted/inc/classes/BxDolForm.php at line 910.

collation_connection:
utf8mb4_unicode_ci

·
Added a discussion

Cache to disk helps reduce load, but it is no substitute for an in-memory cache. Back with Dolphin, there were multiple cache options available. These options are still listed in UNA, but the various cache products no longer exist. As part of tuning my server, I wanted more performance, so I built something to use Memcached (different from Memcache). Below are my basic instructions and documentation to make it work.

UNA Memcached Integration Guide

This guide will walk you through installing, enabling, and validating Memcached as a cache backend for UNA CMS.

1. Install Memcached on the Server

Install the Memcached server daemon and PHP extension.

For CentOS / AlmaLinux / RHEL:

sudo dnf install memcached php-memcached
sudo systemctl enable --now memcached

For Ubuntu/Debian:

sudo apt-get install memcached php-memcached
sudo systemctl enable --now memcached

Verify Memcached is running:

systemctl status memcached

2. Validate Memcached Is Working

Quick PHP CLI Test:

Create a test.php file on your server and browse to it. This is a good check before changing UNA: it confirms that your PHP server is correctly configured and that memcached is operating as expected.

<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);
$mc->set('una_test', 'ok', 60);
echo "Value: " . $mc->get('una_test') . "\n";
?>

3. Add the UNA Memcached Cache Handler

Create the following file:

inc/classes/BxDolCacheMemcached.php

<?php defined('BX_DOL') or die('hack attempt');
/**
 * Memcached cache handler for UNA
 */

class BxDolCacheMemcached extends BxDolCache
{
  protected $iTTL = 3600;
  protected $oMemcached = null;

  function __construct()
  {
    parent::__construct();
    if (class_exists('Memcached')) {
      $this->oMemcached = new Memcached();
      $sHost = getParam('sys_cache_memcache_host');
      $iPort = getParam('sys_cache_memcache_port');
      if (false === strpos($sHost, ',')) {
        $this->oMemcached->addServer($sHost, $iPort);
      } else {
        $aHosts = explode(',', $sHost);
        foreach ($aHosts as $s)
          $this->oMemcached->addServer(trim($s), $iPort);
      }
    }
  }

  function getData($sKey, $iTTL = false)
  {
    if (!$this->oMemcached) return null;
    $mixedData = $this->oMemcached->get($sKey);
    // false may be a legitimately cached value; only treat it as a miss
    // when memcached reports the key as not found
    if (false === $mixedData && Memcached::RES_NOTFOUND === $this->oMemcached->getResultCode())
      return null;
    return $mixedData;
  }

  function setData($sKey, $mixedData, $iTTL = false)
  {
    if (!$this->oMemcached) return false;
    $ttl = $iTTL === false ? $this->iTTL : $iTTL;
    return $this->oMemcached->set($sKey, $mixedData, $ttl);
  }

  function delData($sKey)
  {
    if (!$this->oMemcached) return false;
    $this->oMemcached->delete($sKey);
    return true;
  }

  function isAvailable()
  {
    return $this->oMemcached !== null;
  }

  function isInstalled()
  {
    return extension_loaded('memcached');
  }

  function removeAllByPrefix($s)
  {
    if (!$this->oMemcached) return false;
    // memcached cannot enumerate keys by prefix, so flush the whole cache;
    // coarse, but safe for invalidation
    return $this->oMemcached->flush();
  }
}

4. Add "Memcached" as a Cache Option in UNA

To allow the admin panel to select Memcached as a backend, update the UNA config:

Run this SQL query in your UNA database:

UPDATE sys_options
SET extra = REPLACE(extra, 'Memcache', 'Memcache,Memcached')
WHERE extra LIKE '%Memcache%'
AND extra NOT LIKE '%Memcached%';

This will add "Memcached" to all picklists that currently offer "Memcache".

5. Enable Memcached in UNA Admin

5.1 Configure system wide Memcached options

  1. Go to Studio → Settings → Cache
  2. For each cache type (page, DB, templates, etc.), select `Memcached` from the storage engine dropdown.
  3. Set the Memcache Host (usually `127.0.0.1`) and Port (usually `11211`).
  4. Save changes.

**Note:** UNA uses the same host/port settings for both Memcache and Memcached PHP extensions.

5.2 Configure module specific settings

Some modules also require configuration to activate Memcached. Timeline is one such example. Check your installed modules.

  1. Go to Studio → Settings → Timeline → Cache
  2. Enable item cache, and select Memcached from the storage engine dropdown.

6. (Optional) Monitor Memcached Usage

  • Show cache keys:
memcached-tool 127.0.0.1 dump | grep -a ^add
  • Show stats:
memcached-tool 127.0.0.1 stats
  • Page count stats:
memcached-tool 10.0.0.5:11211 display
  • Settings:
memcached-tool 127.0.0.1 settings

7. Troubleshooting

  • Ensure php-memcached is installed (not just php-memcache).
  • Restart PHP-FPM/Apache after installing extensions.
  • Confirm Memcached server is running and reachable.
  • Watch error logs (php_errors.log or UNA Studio logs) for issues.

8. Other considerations

  • Security: If server is public-facing, firewall the Memcached port (`11211`). However, this should not be an issue if your RPM only installs a listener on the loopback address.
·
Added a discussion

I am getting errors like the one below multiple times per day. Is this a known issue? The related duplicate entries come from different modules all the time. The only common element is that it's always sys_seo_links.

Query:
INSERT INTO `sys_seo_links` SET `module` = :module, `page_uri` = :page_uri, `param_name` = :param_name, `param_value` = :param_value, `uri` = :uri, `added` = :ts

Mysql error:
Duplicate entry 'bx_persons-view-persons-profile-57819' for key 'module_page_param'

Location:
The error was found in query function in the file inc/classes/BxDolPageQuery.php at line 203.

collation_connection:
utf8mb4_unicode_ci


gkuhnert Discussions
Bug report: Duplicate SEO Link — Patch
UNA 14.0.0 — PayPal IPN “INVALID” Due to Double-Encoding Bug in BxPaymentProviderPayPal
Auto report errors
Segmentation fault from bundled ffmpeg.exe - SECURITY RISK
The need for speed
More database errors fixed (Dolphin migration related)