
Recording and Auditing SQL Queries in Postgres

Database query logging enables audit management and disaster recovery. Fortunately, the open-source PostgreSQL database is packed with logging mechanisms; however, most of them are not enabled by default.

You can turn on Postgres query logging and auditing using the default logging mechanism, community-provided solutions such as audit-trigger 91plus or pgaudit, or third-party commercial solutions.

In this article, we’ll focus on the default database query logging technique. We’ll demonstrate how to enable Postgres logging, discuss accessing and auditing the logs, and examine some of their features.

How to Enable Access Logging in Postgres

First, find the postgresql.conf config file and edit it to enable Postgres logging. Usually, the config file lives at a version-specific path such as:

    /etc/postgresql/9.3/main/postgresql.conf

However, if you are unsure, run the following command from a psql session to print the config file's path:

    SHOW config_file;
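If you are not already inside a psql session, you can run the same command from the shell. This is a minimal sketch that assumes a Debian-style installation with a local postgres superuser and peer authentication:

    sudo -u postgres psql -c "SHOW config_file;"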
Next, open the postgresql.conf file with your favorite editor, such as nano, using a command like this:

    nano /etc/postgresql/9.3/main/postgresql.conf

Then, find the logging_collector parameter, uncomment it if necessary, and set its value to on.

The file contains two more essential settings: log_destination and log_directory. As its name suggests, log_directory specifies where the log files are written; a relative path is interpreted relative to the data directory.

The default value of log_destination is stderr. Its other options include csvlog, syslog, and (on Windows) eventlog.

If you select csvlog, Postgres writes the output in CSV format. However, to use csvlog as the log_destination, you must first set logging_collector to on. You can learn more about these configuration settings on the Postgres website.

After adjusting these settings, your final step is to restart the PostgreSQL service so that the changes take effect:

    service postgresql restart
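On distributions that use systemd, the equivalent command is typically the following (assuming the service is named postgresql):

    sudo systemctl restart postgresql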

How to Access Postgres Logs

Once you have set up the config file, it’s easy to access the logs. For example, if we set the following configuration:

    log_destination = 'csvlog'
    logging_collector = on
    log_directory = 'demo_logs'

We find the logs in this directory:

    /var/lib/postgresql/13/main/demo_logs

Because we have selected log_destination as csvlog, we can view the logs in CSV format.
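Because the files are ordinary CSV, you can inspect them with standard command-line tools. Here is a minimal sketch, assuming the default log_filename pattern (which produces names like postgresql-YYYY-MM-DD_HHMMSS.csv):

    cd /var/lib/postgresql/13/main/demo_logs
    tail -n 20 "$(ls -t *.csv | head -n 1)"   # show the 20 most recent lines of the newest log file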

Exploring Example Logs and Features

Here are two entries from an example log file:

    2022-02-04 21:58:17.871 UTC,,,70,,61fda179.46,1,,2022-02-04 21:58:17 UTC,,0,LOG,00000,"database system was shut down at 2022-02-04 21:56:56 UTC",,,,,,,,,"","startup"

    2022-02-04 21:58:18.064 UTC,,,46,,61fda175.2e,6,,2022-02-04 21:58:13 UTC,,0,LOG,00000,"database system is ready to accept connections",,,,,,,,,"","postmaster"

Each log entry starts with a timestamp in the configured time zone, followed by session details and a human-readable message for the engineering team to read and debug.

The first entry reports that the database system was previously shut down at the time shown. The second indicates that the database is now ready to accept connections from external applications.

Postgres access logs can also store the entire SQL query, which makes it straightforward to debug when a statement causes an error.
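Exactly which statements end up in the log is controlled by the log_statement and log_min_duration_statement settings in postgresql.conf. A minimal sketch (the values shown here are illustrative, not taken from the example below):

    log_statement = 'all'                # log the text of every statement: none | ddl | mod | all
    log_min_duration_statement = 100     # also log the duration of statements that run for at least 100 ms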

 

Consider another example log:

    2022-02-04 21:58:47.936 UTC,"discourse","discourse",119,"[local]",61fda197.77,1,"SELECT",2022-02-04 21:58:47 UTC,3/0,0,LOG,00000,"duration: 111.410 ms  statement: SELECT a.attname
     FROM (
            SELECT indrelid, indkey, generate_subscripts(indkey, 1) idx
              FROM pg_index
             WHERE indrelid = '""allowed_pm_users""'::regclass
               AND indisprimary
          ) i
     JOIN pg_attribute a
       ON a.attrelid = i.indrelid
      AND a.attnum = i.indkey[i.idx]
    ORDER BY i.idx
    ",,,,,,,,,"/var/www/discourse/vendor/bundle/ruby/2.7.0/bin/unicorn","client backend"

The log captures the entire query, including the JOIN and ORDER BY clauses. It also includes a duration field that records the execution time, so you can see exactly how long the query ran in the database.
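Because the logs are plain CSV, you can also load them back into Postgres and audit them with ordinary SQL. The sketch below follows the csvlog column layout documented for PostgreSQL 13; the file path is a placeholder for one of your own log files:

    -- Table matching the PostgreSQL 13 csvlog column order
    CREATE TABLE postgres_log (
        log_time               timestamp(3) with time zone,
        user_name              text,
        database_name          text,
        process_id             integer,
        connection_from        text,
        session_id             text,
        session_line_num       bigint,
        command_tag            text,
        session_start_time     timestamp with time zone,
        virtual_transaction_id text,
        transaction_id         bigint,
        error_severity         text,
        sql_state_code         text,
        message                text,
        detail                 text,
        hint                   text,
        internal_query         text,
        internal_query_pos     integer,
        context                text,
        query                  text,
        query_pos              integer,
        location               text,
        application_name       text,
        backend_type           text,
        PRIMARY KEY (session_id, session_line_num)
    );

    -- Load a log file (server-side path, placeholder name), then query it like any other table
    COPY postgres_log FROM '/var/lib/postgresql/13/main/demo_logs/your-log-file.csv' WITH csv;

    SELECT log_time, user_name, message
      FROM postgres_log
     WHERE error_severity = 'LOG' AND command_tag = 'SELECT'
     ORDER BY log_time;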

Conclusion

Now that you know how to record and audit SQL queries in Postgres, you can set up logging for your own database. Monitor and debug your queries to keep your Postgres database running efficiently and securely.

The next step is to ensure that only the correct people can access your log data. For more information, visit Teleport to explore the benefits of enabling secure identity-based Postgres database access.

