Jan 14, 2023

WordPress backend wp-admin running extremely slow

If your WordPress backend (wp-admin) is running extremely slowly, or takes a very long time (or forever) to load, the troubleshooting steps below should help:

  1. An upgrade step may not have completed. Manually call /wp-admin/upgrade.php in your browser to finish it.
  2. Try deactivating all plugins. If that resolves the issue, reactivate each one individually until you find the culprit.
  3. Try switching to a default WordPress theme (e.g. Twenty Fifteen) to rule out a theme-specific issue.
  4. Download a fresh copy of WordPress and replace your copies of the /wp-admin/ and /wp-includes/ directories with fresh copies from the download.
  5. Access your WordPress database via phpMyAdmin (most hosting providers offer this in their control panel), check all of the tables, and choose “Optimize tables” from the pull-down menu.
  6. Try installing the Heartbeat Control plugin.
  7. If nothing else works, try these methods to increase PHP’s memory allocation:
    1. If you have access to your php.ini file, change the line in php.ini
      If your line shows 32M, try 64M:
      memory_limit = 64M ; Maximum amount of memory a script may consume
    2. If you don’t have access to php.ini try adding this to an .htaccess file:
      php_value memory_limit 64M
    3. Try adding this line to your wp-config.php file:
      // Increase the memory allocated to PHP
      define('WP_MEMORY_LIMIT', '64M');
    4. Talk to your hosting service provider.
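
After changing the limit with any of these methods, it helps to confirm which value is actually in effect. A minimal check, assuming you have shell access (note that the PHP CLI may load a different php.ini than the web server, so a phpinfo() page is the more reliable check):

php -r 'echo ini_get("memory_limit"), PHP_EOL;'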

Please don’t forget to comment if this saves your day 🙂

After server migration only the homepage works - wordpress

On an Ubuntu server, if only the home page works after a server migration, the following troubleshooting steps should fix the issue:

In /etc/apache2/apache2.conf, change AllowOverride None to AllowOverride All for the /var/www/ directory block.
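
For reference, the relevant block should end up looking roughly like this (a sketch based on the stock Ubuntu layout; your file may differ slightly):

<Directory /var/www/>
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>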

Type in the following command to enable mod_rewrite for .htaccess

sudo a2enmod rewrite

Restart your apache server:

sudo service apache2 restart

In most cases this resolves the issue.

If you have access to wp-admin, then:

Try flushing your mod_rewrite rules by going to:
Dashboard -> Settings -> Permalinks
Save the settings (no need to make any changes).
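
Alternatively, if WP-CLI happens to be installed on the server (an assumption; it is not part of a default WordPress setup), the same flush can be done from the shell:

# Regenerate the rewrite rules and update the .htaccess file
wp rewrite flush --hard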

Install OPM (Open PostgreSQL Monitoring)

Install OPM (Open PostgreSQL Monitoring), a free software suite designed to help you manage your PostgreSQL servers. It is a flexible tool that follows the activity of each instance: it can gather statistics, display dashboards and send warnings when something goes wrong. The long-term goal of the project is to provide features similar to those of Oracle Grid Control or SQL Server Management Studio.

To install OPM, you need:

  • a PostgreSQL 9.3+ cluster,
  • standard compiling tools,
  • Nagios, and
  • pg_config

The PostgreSQL cluster and Nagios can be installed either on the same server or on different servers.

Prerequisites:

apt-get update

# Install required packages
apt-get -y install gcc make build-essential libgd2-xpm-dev openssl libssl-dev xinetd apache2-utils unzip

# Install Apache server
apt-get -y install apache2 apache2-utils

# Install PHP
apt-get -y install php5 libapache2-mod-php5 php5-mcrypt php5-curl

# Install PostgreSQL
apt-get -y install postgresql postgresql-contrib postgresql-server-dev-all postgresql-common

# Install pg_config and related tools
apt-get -y install libpq-dev python-dev pgtune

apt-get update

Click here to see Nagios Installation Guide

Install OPM Core

Run below commands as root user to install and configure opm-core:

# Change directory to /usr/local/src
cd /usr/local/src

# Create new directory opm and enter into it
mkdir opm
cd opm

# Download & unzip OPM
wget https://github.com/OPMDG/opmdg.github.io/releases/download/REL_2_3/OPM_2_3.zip
unzip OPM_2_3.zip

# Rename the extracted directories
mv opm-core-REL_2_3 opm-core
mv opm-wh_nagios-REL_2_3 opm-wh_nagios
mv check_pgactivity-REL1_13 check_pgactivity

# Install opm-core
cd opm-core/pg
make install

# Change to postgres user & login to postgresql database
su postgres
psql -U postgres -p 5432

postgres@postgres=# CREATE DATABASE opm;
postgres@postgres=# \c opm
postgres@opm=# CREATE EXTENSION opm_core;
postgres@opm=# SELECT create_admin('admin1', 'admin1');
postgres@opm=# \q
postgres:/usr/local/src/opm# exit
root:/usr/local/src/opm#

admin1 is the user you will be using to log on to the OPM user interface.

wh_nagios

To install wh_nagios module run below commands as root user:

# change to appropriate directory
cd /usr/local/src/opm/opm-wh_nagios/pg
make install

# Change to postgres user & login to postgresql database
su postgres
psql -U postgres -p 5432
postgres@postgres=# \c opm
postgres@opm=# CREATE EXTENSION hstore;
postgres@opm=# CREATE EXTENSION wh_nagios;

Then, create a crontab entry that will process incoming data and dispatch it.

* * * * * psql -c 'SELECT wh_nagios.dispatch_record()' opm  # trigger it every minute

This crontab can belong to any user, as long as it can connect to the PostgreSQL opm database with any PostgreSQL role. To allow a PostgreSQL role to import data into a warehouse, you need to call public.grant_dispatcher. For instance, if the PostgreSQL role is user1 and the warehouse is wh_nagios:

postgres@opm=# SELECT grant_dispatcher('wh_nagios', 'user1');
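
Putting the two pieces together, one possible way to install the dispatch entry is under the postgres OS user (an assumption; any user that can reach the opm database works, and local peer authentication is assumed):

( crontab -u postgres -l 2>/dev/null; echo "* * * * * psql -c 'SELECT wh_nagios.dispatch_record()' opm" ) | crontab -u postgres -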

Nagios & nagios_dispatcher

The nagios_dispatcher daemon dispatches perfdata from Nagios files to the wh_nagios warehouse. It requires the DBD::Pg Perl module.

# Install DBD::Pg perl
# For Debian
apt-get install libdbd-pg-perl

# For CentOS
yum install perl-DBD-Pg

Now you need to set up Nagios to create the perfdata files that nagios_dispatcher will poll and consume. As the root user, create the required command definitions and destination folder:

mkdir -p /var/lib/nagios4/spool/perfdata/
chown nagios:nagios /var/lib/nagios4/spool/perfdata/
chown -R nagios:nagios /var/lib/nagios4/
cat <<'EOF' >> /usr/local/nagios/etc/objects/commands.cfg
define command{
    command_name    process-service-perfdata-file
    command_line    /bin/mv /var/lib/nagios4/service-perfdata /var/lib/nagios4/spool/perfdata/service-perfdata.$TIMET$
}
define command{
    command_name    process-host-perfdata-file
    command_line    /bin/mv /var/lib/nagios4/host-perfdata /var/lib/nagios4/spool/perfdata/host-perfdata.$TIMET$
}
EOF

Then, in the Nagios main configuration file (nagios.cfg), set the following parameters accordingly:

process_performance_data=1
host_perfdata_file=/var/lib/nagios4/host-perfdata
service_perfdata_file=/var/lib/nagios4/service-perfdata
host_perfdata_file_processing_command=process-host-perfdata-file
service_perfdata_file_processing_command=process-service-perfdata-file
host_perfdata_file_template=DATATYPE::HOSTPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tHOSTPERFDATA::$HOSTPERFDATA$\tHOSTCHECKCOMMAND::$HOSTCHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$\tHOSTOUTPUT::$HOSTOUTPUT$
service_perfdata_file_template=DATATYPE::SERVICEPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tSERVICEDESC::$SERVICEDESC$\tSERVICEPERFDATA::$SERVICEPERFDATA$\tSERVICECHECKCOMMAND::$SERVICECHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$\tSERVICESTATE::$SERVICESTATE$\tSERVICESTATETYPE::$SERVICESTATETYPE$\tSERVICEOUTPUT::$SERVICEOUTPUT$
host_perfdata_file_mode=a
service_perfdata_file_mode=a
host_perfdata_file_processing_interval=15
service_perfdata_file_processing_interval=15
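
Before restarting Nagios, it is worth validating the configuration. A quick sanity check, assuming Nagios was installed from source under /usr/local/nagios as in the linked installation guide:

/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg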

Then create the dispatcher configuration file:

cd
mkdir -p /usr/local/etc/
cat <<EOF > /usr/local/etc/nagios_dispatcher.conf
daemon=1
directory=/var/lib/nagios4/spool/perfdata/
frequency=5
db_connection_string=dbi:Pg:dbname=opm host=127.0.0.1
db_user=YOUR_USER
db_password=YOUR_PASS
debug=0
syslog=1
hostname_filter = /^$/ # Empty hostname. Never happens
service_filter = /^$/ # Empty service
label_filter = /^$/ # Empty label
EOF

# Change the ownership of nagios_dispatcher file to nagios
chown nagios:nagios /usr/local/etc/nagios_dispatcher.conf

# Install the nagios_dispatcher.pl file into the /usr/local/bin/ directory:
cp /usr/local/src/opm/opm-wh_nagios/bin/nagios_dispatcher.pl /usr/local/bin/

Now, if your operating system uses inittab, add the following line at the end of the /etc/inittab file:

d1:23:respawn:/usr/bin/perl -w /usr/local/bin/nagios_dispatcher.pl --daemon --config /usr/local/etc/nagios_dispatcher.conf

Reload the /etc/inittab file:

init q

If your operating system uses upstart, create the file /etc/init/nagios_dispatcher.conf with the following content:

# This service maintains nagios_dispatcher

start on stopped rc RUNLEVEL=[2345]
stop on starting runlevel [016]

respawn
exec /usr/local/bin/nagios_dispatcher.pl -c /usr/local/etc/nagios_dispatcher.conf

And start the job:

initctl start nagios_dispatcher
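
You can then confirm that the dispatcher is actually running (an optional sanity check, not part of the official procedure):

initctl status nagios_dispatcher
ps -ef | grep '[n]agios_dispatcher.pl'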

User interface

The default user interface is based on the web framework Mojolicious. You need to install:

  • Perl (5.10 or above)
  • Mojolicious (4.63 or above, less than 5.0)
  • Mojolicious::Plugin::I18N (version 0.9)
  • DBD::Pg perl module
  • PostgreSQL (9.3 or above)
  • A CGI/Perl webserver

You can install Mojolicious using CPAN or your Linux distribution's package system, if available. Here is an example using cpanminus (cpanm):

curl -L cpanmin.us | perl - Mojolicious@4.99
curl -L cpanmin.us | perl - Mojolicious::Plugin::I18N@0.9
curl -L cpanmin.us | perl - DBI
curl -L cpanmin.us | perl - DBD::Pg
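
To double-check that the expected module versions landed (an optional sketch):

perl -MMojolicious -e 'print Mojolicious->VERSION, "\n"'
perl -MDBD::Pg -e 'print DBD::Pg->VERSION, "\n"'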

To install the UI plugin wh_nagios (or any other UI plugin), from your opm directory as user root:

cd /usr/local/src/opm/opm-core/ui/modules
ln -s /usr/local/src/opm/opm-wh_nagios/ui wh_nagios

Then, on OPM database side, you need to create an opm user for the UI:

postgres@opm=# CREATE USER opmui WITH ENCRYPTED PASSWORD 'opmui';
postgres@opm=# SELECT * from grant_appli('opmui');
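
You can verify the new role with a quick connection test (assuming pg_hba.conf allows password authentication from 127.0.0.1, which may need to be adjusted first):

psql -h 127.0.0.1 -p 5432 -U opmui -d opm -c 'SELECT 1;'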

Finally, in the directory /usr/local/src/opm/opm-core/ui, copy the opm.conf-dist file to opm.conf and edit it to suit your needs, for instance:

{
    ...
    "database" : {
        "dbname"   : "opm",
        "host"     : "127.0.0.1",
        "port"     : "5432",
        "user"     : "opmui",
        "password" : "opmui"
    },
    ...
    "plugins" : [ "wh_nagios" ]
}

This user is only needed for the connection between the UI and the database; you only have to reference it in the opm.conf file.

To test the web user interface quickly, you can use either morbo or hypnotoad, both installed with Mojolicious. Example with Morbo:

cd /usr/local/src/opm/opm-core/ui
morbo script/opm
[Sun Jan 31 13:45:41 2016] [debug] Reading configuration file "/usr/local/src/opm/opm-core/ui/opm.conf".
[Sun Jan 31 13:45:41 2016] [debug] Helper "url_for" already exists, replacing.
[Sun Jan 31 13:45:41 2016] [info] Listening at "http://*:3000".
Server available at http://127.0.0.1:3000.

Using hypnotoad, which is better suited for production:

user:/usr/local/src/opm/ui/opm/opm-core$ hypnotoad -f script/opm

Removing the -f flag makes hypnotoad daemonize.

Configure PostgreSQL services with Nagios:

cd ~
wget http://bucardo.org/downloads/check_postgres.tar.gz
tar xzf check_postgres.tar.gz
cp check_postgres-*/check_postgres.pl /usr/local/nagios/libexec/   # extracted directory name depends on the version
cd /usr/local/nagios/libexec/
perl check_postgres.pl --symlinks

cd /usr/local/nagios/etc/objects

vi pgcommands.cfg # add the postgresql commands 
vi pgservices.cfg # add the postgresql services

vi /usr/local/nagios/etc/nagios.cfg # reference the two files above (see the cfg_file lines below)
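
The nagios.cfg entries would look roughly like this (paths assume the two object files were created under /usr/local/nagios/etc/objects as shown above):

cfg_file=/usr/local/nagios/etc/objects/pgcommands.cfg
cfg_file=/usr/local/nagios/etc/objects/pgservices.cfg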

Sample PostgreSQL command:

define command {
    command_name    check_edb_bloat
    command_line    $USER1$/check_postgres.pl --host $HOSTADDRESS$ --dbuser=pgmonitor --dbpass=password -db postgres -p 5444 --action bloat
}

Sample PostgreSQL service:

define service {
    use                     generic-service
    host_name               localhost
    service_description     Postgres bloat
    is_volatile             0
    check_period            24x7
    max_check_attempts      3
    normal_check_interval   5
    retry_check_interval    1
    contact_groups          admins
    notification_interval   120
    notification_period     24x7
    notification_options    w,u,c,r
    check_command           check_edb_bloat!3000000!9000000!flr
}

Note: OPM depends heavily on Nagios, so you should be comfortable with Nagios services and commands. To monitor a PostgreSQL database, you need to install and configure the check_postgres plugin from Bucardo.

References:
  • NRPE_PostgreSQL_check
  • Monitor PostgreSQL with Nagios
  • Install and configure Nagios for PostgreSQL/PPAS on Linux

Your comments/suggestions will be highly appreciated.

Dec 28, 2016

PostgreSQL Architecture

Based on the PostgreSQL documentation and various articles, I understand the PostgreSQL architecture as described below:

[Figure: PostgreSQL architecture diagram]


A PostgreSQL instance consists of a set of processes and shared memory structures. PostgreSQL uses a simple "process per user" client/server model. The major processes are:
  1. The 'postmaster', which is:
    • the supervisory daemon process,
    • attached to the shared memory segment, although it refrains from accessing it directly,
    • always running, waiting for connection requests.
  2. Utility processes (bgwriter, walwriter, syslogger, archiver, stats collector and autovacuum launcher), and
  3. User backend processes (the postgres server processes themselves).

When a client requests a connection to the database, the request first reaches the postmaster daemon process. After performing authentication and authorization, the postmaster forks a new backend server process (postgres). From then on, the frontend process and the backend server communicate directly, without intervention by the postmaster. The postmaster is always running, waiting for connection requests, whereas frontend and backend processes come and go. The libpq library allows a single frontend to make multiple connections to backend processes.

However, each backend process is single-threaded and can execute only one query at a time, so the communication over any one frontend-to-backend connection is single-threaded.

The postmaster and the postgres server processes run with the user ID of the PostgreSQL "superuser". One postgres process exists for every open database session. Once the user connection is authenticated, the backend process attaches directly to shared memory.
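
The one-backend-per-session model is easy to observe on a running server. A minimal sketch, assuming a local installation whose processes run as the postgres OS user:

# One postmaster plus its utility processes, and one postgres backend per session
ps -fu postgres

# Each connected session appears as one backend row here
psql -U postgres -c "SELECT pid, usename, datname, state FROM pg_stat_activity;"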

POSTGRESQL SHARED MEMORY

Shared Buffers:

Sets the amount of memory the database server uses for shared memory buffers. The default is typically 32MB. Larger settings for shared_buffers usually require a corresponding increase in checkpoint_segments, in order to spread out the process of writing large quantities of new or changed data over a longer period of time. Three background-writer parameters, discussed further under BGWriter below, are closely related:
  • bgwriter_delay
  • bgwriter_lru_maxpages
  • bgwriter_lru_multiplier

WAL Buffers:

The amount of shared memory used for WAL data that has not yet been written to disk. The default setting of -1 selects a size equal to 1/32nd (about 3%) of shared_buffers, but not less than 64kB nor more than the size of one WAL segment, typically 16MB. This value can be set manually if the automatic choice is too large or too small, but any positive value less than 32kB will be treated as 32kB. This parameter can only be set at server start. The contents of the WAL buffers are written out to disk at every transaction commit, so extremely large values are unlikely to provide a significant benefit. However, setting this value to at least a few megabytes can improve write performance on a busy server where many clients are committing at once. The auto-tuning selected by the default setting of -1 should give reasonable results in most cases.

CLOG Buffers:

$PGDATA/pg_clog contains a log of transaction metadata. This log tells PostgreSQL which transactions completed and which did not. The clog is small and never has any reason to become bloated, so you should never have any reason to touch it.

POSTGRESQL PER BACKEND MEMORY

work_mem:

Specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. Default is 1M. Note that for a complex query, several sort or hash operations might be running in parallel; each operation will be allowed to use as much memory as this value specifies before it starts to write data into temporary files. Also, several running sessions could be doing such operations concurrently. Therefore, the total memory used could be many times the value of work_mem; it is necessary to keep this fact in mind when choosing the value.

temp_buffers:

Sets the maximum number of temporary buffers used by each database session. Default is 8M. The setting can be changed within individual sessions, but only before the first use of temporary tables within the session; subsequent attempts to change the value will have no effect on that session.

maintenance_work_mem:

Specifies the maximum amount of memory to be used by maintenance operations, such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY. Default is 16M. Since only one of these operations can be executed at a time by a database session, and an installation normally doesn’t have many of them running concurrently, it’s safe to set this value significantly larger than work_mem.
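
All of these memory settings can be inspected from any session, and work_mem can even be raised for a single session when a heavy sort or index build needs it. A sketch (values are purely illustrative):

SHOW shared_buffers;
SHOW wal_buffers;
SHOW work_mem;
SHOW maintenance_work_mem;
-- Raise work_mem for the current session only:
SET work_mem = '64MB';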

UTILITY PROCESSES:

Mandatory processes: these processes cannot be enabled or disabled.
  • BGWriter
  • WAL Writer
Optional Processes:
  • Stats-collector
  • Autovacuum launcher
  • Archiver
  • Syslogger
  • WAL Sender
  • WAL Receiver

BGWriter

The function of the background writer is to issue writes of "dirty" (new or modified) shared buffers. It writes shared buffers so that server processes handling user queries seldom or never need to wait for a write to occur. However, the background writer does cause a net overall increase in I/O load, because while a repeatedly-dirtied page might otherwise be written only once per checkpoint interval, the background writer might write it several times as it is dirtied within the same interval. The parameters below can be used to tune this behavior for local needs.
bgwriter_delay: Specifies the delay between activity rounds for the background writer. Default is 200ms.
bgwriter_lru_maxpages: In each round, no more than this many buffers will be written by the background writer. Setting this to zero disables background writing (except for checkpoint activity). The default value is 100 buffers.
bgwriter_lru_multiplier: The number of dirty buffers written in each round is based on the number of new buffers that have been needed by server processes during recent rounds. The default is 2.0.
Smaller values of bgwriter_lru_maxpages and bgwriter_lru_multiplier reduce the extra I/O load caused by the background writer, but make it more likely that server processes will have to issue writes for themselves, delaying interactive queries.
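
The effect of these settings can be observed in the pg_stat_bgwriter view (a sketch): buffers_clean counts buffers written by the background writer, while buffers_backend counts the writes that server processes had to issue for themselves.

SELECT buffers_checkpoint, buffers_clean, maxwritten_clean, buffers_backend
FROM pg_stat_bgwriter;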

WAL Writer

The WAL writer process writes and fsyncs WAL at convenient intervals. To guarantee transaction durability, the WAL buffers hold the changes made to the database in the transaction logs, and they are written out to disk at every transaction commit; the WAL writer process is responsible for writing them to disk. The wal_writer_delay parameter controls how often the WAL writer process wakes up, although other parameters also keep the WAL writer busy.

Stats Collector

The stats collector process collects information about server activity. It counts the number of accesses to tables and indexes, in terms of both disk blocks and individual rows. It also tracks the total number of rows in each table, and information about VACUUM and ANALYZE actions for each table. Collecting statistics adds some overhead to query execution, so parameters in the postgresql.conf file control which information the stats collector process gathers. Click here to know more about the stats collector process and its related parameters.
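
The collected statistics are exposed through the pg_stat_* views, for example (a sketch):

-- Per-table access counters and the last (auto)vacuum / analyze times
SELECT relname, seq_scan, idx_scan, n_live_tup, last_vacuum, last_autovacuum, last_analyze
FROM pg_stat_user_tables
ORDER BY seq_scan DESC;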

Autovacuum Launcher

To automate the execution of the VACUUM and ANALYZE commands, the autovacuum launcher is a daemon process that manages multiple worker processes called autovacuum workers. The launcher is in charge of starting autovacuum worker processes for all databases. It distributes the work across time, attempting to start one worker in each database once per interval set by the autovacuum_naptime parameter, with at most autovacuum_max_workers worker processes running at a time. Each worker process checks each table within its database and executes VACUUM or ANALYZE as needed. The main launcher parameters are sketched below.
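
A sketch of these parameters in postgresql.conf, shown with their usual default values (illustrative only):

# postgresql.conf: autovacuum launcher settings
autovacuum = on
autovacuum_naptime = 1min            # minimum delay between runs on any one database
autovacuum_max_workers = 3           # maximum number of worker processes at one time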

Syslogger (Logger Process)

As shown in the figure, all of the processes (the utility processes, the user backends and the postmaster daemon) are attached to the syslogger process, which logs information about their activities. This information is written under $PGDATA/pg_log in .log files.
Verbose debug logging of process activity adds overhead on the server, so minimal logging is generally recommended; increase the debug level only when required. Click Here for further details on the logging parameters.

Archiver

The archiver is an optional process; it is OFF by default.
Setting up the database in archive mode means capturing the WAL data of each segment file once it is filled, and saving that data somewhere before the segment file is recycled for reuse.
  1. When the database is in archive mode, once a WAL segment is filled, a status file named after that segment is created under $PGDATA/pg_xlog/archive_status with the suffix ".ready", i.e. "segment-filename.ready".
  2. The archiver process is triggered when it finds files in the ".ready" state. It takes the segment file name from the .ready file and copies the corresponding segment from $PGDATA/pg_xlog to the archive destination given in the archive_command parameter (postgresql.conf).
  3. On successful completion of the copy from source to destination, the archiver renames "segment-filename.ready" to "segment-filename.done". This completes the archiving of that segment.
It follows that any files named "segment-filename.ready" found in $PGDATA/pg_xlog/archive_status are pending segments that still have to be copied to the archive destination.
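
The corresponding postgresql.conf settings look roughly like this (a sketch; the archive_command shown is only an illustration, and a production command should normally test that the destination file does not already exist before copying):

wal_level = archive                  # 'replica' on newer PostgreSQL releases
archive_mode = on
archive_command = 'cp %p /mnt/server/archivedir/%f'
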
Click Here for more information on parameters and archiving.
Comments/suggestions would be greatly appreciated.

Nov 22, 2013

Trunk, Branch & Tags

In SVN the directory names themselves mean nothing; "trunk", "branches" and "tags" are simply a common convention used by most repositories. Not all projects use all of the directories (it is reasonably common not to use "tags" at all), and in fact nothing stops you from calling them anything you like, though breaking the convention is often confusing.
I'll describe probably the most common usage of branches and tags and give an example of how they are used; a command-line sketch follows the list.
  • Trunk: The main development area. This is where your next major release of the code lives, and generally has all the newest features.
  • Branches: Every time you release a major version, it gets a branch created. This allows you to do bug fixes and make a new release without having to release the newest - possibly unfinished or untested - features.
  • Tags: Every time you release a version (final release, release candidate (RC) or beta), you make a tag.
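
For example (a sketch with placeholder URLs), a release branch and a tag are both just cheap server-side copies:

# Branch trunk for the 1.0 release line
svn copy http://svn.example.com/repo/trunk \
         http://svn.example.com/repo/branches/1.0 \
         -m "Create 1.0 release branch"

# Tag the exact state that was shipped as 1.0.0
svn copy http://svn.example.com/repo/branches/1.0 \
         http://svn.example.com/repo/tags/1.0.0 \
         -m "Tag release 1.0.0"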

Nov 18, 2013

Release Management

1) What factors influence the opening of a feature branch?

Typically, feature branches are created when a new feature or enhancement involves broad-sweeping changes to the code base, such that introducing them in trunk would be too disruptive. Feature branches may also be used for prototyping or proof-of-concept work for code that may never end up in trunk. (A hypothetical SVN sketch is shown below.)
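
A hypothetical sketch of that workflow with SVN (the branch name is made up; the ^/ syntax resolves against the repository root and must be run inside a working copy):

# Open a feature branch off trunk
svn copy ^/trunk ^/branches/feature-new-search -m "Open feature branch for the new search"

# ...develop and commit on the branch, syncing from trunk periodically:
svn merge ^/trunk                                  # run inside the feature-branch working copy

# When the feature is ready, merge it back from a trunk working copy:
svn merge --reintegrate ^/branches/feature-new-search
svn commit -m "Reintegrate feature-new-search into trunk"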


2) What is the purpose of continuous integration for a development team?

The primary purpose of CI is to provide regular, fast feedback to developers as they commit changes to the shared code repository (VCS). The idea is that the code is integrated on every commit, so that when conflicts arise they can be addressed far more quickly and easily than if the changes had been made days, weeks, or even months ago.

Nov 9, 2013

Views


Views in a nutshell
  • virtual table
  • based on 1 or more tables or views
  • takes no storage space other than the definition of the view in the DD
  • contains no data
  • provides additional level of security
  • hides implementation complexity
  • lets you change the data you can access, applying operators, aggregation functions, filters etc. on the base table.
  • isolates application from changes
  • An updatable view allows you to insert, update, and delete rows by propagating the changes to the base table
  • The data dictionary view ALL_UPDATABLE_COLUMNS indicates which view columns are updatable.
  • Views that are not inherently updatable can still be modified through an INSTEAD OF trigger.
  • A view can be replaced with a CREATE OR REPLACE VIEW statement. The REPLACE option updates the current view definition but preserves the existing security authorizations.
  • This also lets you reorder columns easily with a CREATE OR REPLACE VIEW, rather than resorting to messy column drops on a base table that already contains data.
  • The underlying SQL definition of a view can be read by selecting TEXT from USER_VIEWS.
  • Oracle does not enforce constraints on views. Instead, views are subject to the constraints of their base tables.
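
A brief illustration in Oracle-flavored SQL (the table and view names are made up for the example):

-- A simple view that hides complexity and restricts the visible rows
CREATE OR REPLACE VIEW active_employees_v AS
  SELECT employee_id, first_name, last_name, department_id
  FROM   employees
  WHERE  status = 'ACTIVE';

-- Which columns of the view are updatable?
SELECT column_name, updatable, insertable, deletable
FROM   all_updatable_columns
WHERE  table_name = 'ACTIVE_EMPLOYEES_V';

-- Read back the view's defining query
SELECT text FROM user_views WHERE view_name = 'ACTIVE_EMPLOYEES_V';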