PostgreSQL: Logs and Error Reporting
PostgreSQL is quietly writing a diary about everything that happens inside your database - every slow query, every failed connection, every lock that made something grind to a halt. The question is: are you reading it?
Most developers ignore PostgreSQL logs until something breaks in production, and by then it's often too late. The logs were sitting there, whispering warnings, and nobody was listening. This guide will show you how to configure PostgreSQL logging so it gives you exactly the right amount of information - not too noisy, not too quiet - and actually helps you solve real problems.
Why Logging Matters (More Than You Think)
Think of PostgreSQL logs as your database's black box recorder. When something goes wrong - and eventually something always does - the logs are the first place you look. But logs are not just for emergencies. Done right, they are an ongoing health report for your system.
Here are a few practical scenarios where good logging pays off immediately:
- A page on your app starts loading slowly. You suspect the database. Without logs, you're guessing. With log_min_duration_statement set, PostgreSQL has already written down every query that took longer than your threshold, including the exact SQL text.
- A developer accidentally runs a DROP TABLE in production. With statement logging enabled, you have a full audit trail of who ran what, and when.
- Your database server disk fills up. Temporary files from big sorts and hash joins pile up silently unless you have log_temp_files turned on. With it enabled, you'd have seen the warning signs days earlier.
- Something is mysteriously slow but not obviously broken. Checkpoint logging reveals whether PostgreSQL is doing heavy I/O bursts, and lock wait logging tells you if processes are blocking each other.
Step 1: Where Do the Logs Go? (log_destination)
Before you log anything, you need to decide what format you want your logs in, because that determines how you'll use them later.
PostgreSQL supports four destinations: stderr, csvlog, jsonlog, and syslog.
stderr is the default. Logs come out as plain text lines. It is perfectly fine for local development or simple setups where you just want to tail -f a file and read it yourself.
csvlog outputs every log entry as a structured row: timestamp, username, database name, query, error code, and more - all neatly comma-separated. This format shines when you want to answer analytical questions like "which user caused the most errors last week?" You can import a CSV log directly into a Postgres table with a single COPY command and run SQL against it.
jsonlog produces each log entry as a self-describing JSON object. If your infrastructure uses modern monitoring tools like Datadog, the ELK stack (Elasticsearch + Logstash + Kibana), or Splunk, this is the format to use. These tools can ingest and index JSON instantly without needing custom parsing rules.
syslog hands logs off to the operating system's logging daemon. Use this when your organization has a centralized security or compliance server that collects logs from every machine on the network.
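If you do route logs to syslog, two companion settings control how the entries are tagged so the daemon can route them; the values below are illustrative:

```
log_destination = 'syslog'
syslog_facility = 'LOCAL0'   # which syslog facility entries are logged under
syslog_ident = 'postgres'    # program name attached to each entry
```

Your syslog daemon's configuration then decides which file or remote collector receives messages tagged with that facility and identifier.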
You can even combine destinations. The following sends logs as both plain text and JSON simultaneously:
log_destination = 'stderr,jsonlog'

Step 2: Turn On the Logging Collector (logging_collector = on)
This is the single most important setting to get right before anything else.
The logging collector is a dedicated background process whose only job is to receive log messages from all other PostgreSQL processes and write them to files safely. Without it, if two backend processes try to write a log line at the exact same millisecond, they can corrupt each other's output - you end up with garbled, half-written log lines that are impossible to read.
More importantly, logging_collector is required if you want to use csvlog or jsonlog. Without it, those formats simply won't work.
logging_collector = on
log_directory = 'log' # relative to your data directory
log_filename = 'postgresql-%a.log' # one file per weekday: Mon, Tue, Wed...

The filename pattern %a generates files named postgresql-Mon.log, postgresql-Tue.log, and so on. Combined with the next setting, this gives you automatic 7-day log retention with zero manual cleanup.
Step 3: Control File Growth With Rotation
Log files that are never rotated will eventually fill your disk and crash your server. PostgreSQL handles this for you automatically, but you need to tell it how.
log_rotation_age controls how long to use a single file before switching to a new one. Set to 1440 (minutes) for a daily rotation, or 60 for hourly.
log_rotation_size sets a maximum file size. Once the log file hits this size, a new one starts. Set to 0 to disable size-based rotation if you prefer time-based only.
log_truncate_on_rotation is the setting that makes the 7-day rolling window work. When this is on, PostgreSQL doesn't append to an existing file with the same name - it wipes it and starts fresh. So on Monday, it overwrites last Monday's file. You always have exactly 7 days of history, and you never have to manually delete old files.
log_truncate_on_rotation = on
log_rotation_age = 1440 # New file every 24 hours
log_rotation_size = 0 # Don't rotate based on size

Step 4: Add Identity to Every Log Line (log_line_prefix)
Raw log files can be confusing. You see an error, but you have no idea which user triggered it, which database it came from, or which backend process was responsible. The log_line_prefix setting stamps every single log line with identifying information before the message.
The default prefix is '%m [%p] ', which gives you a timestamp and process ID. That is a good start, but this setting is far more useful:
log_line_prefix = '%m [%p] %u@%d '

This produces lines that look like:
2026-02-21 14:32:10.456 UTC [18442] alice@myapp ERROR: ...
Now when you see an error, you immediately know it was user alice on the myapp database, at exactly what time, from which process. The key escape codes to know are:
- %m - timestamp with milliseconds
- %p - process ID
- %u - database user name
- %d - database name
- %r - remote IP address and port (great for finding which app server caused a problem)
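For example, a prefix that also records the client address; the resulting line below uses made-up values:

```
log_line_prefix = '%m [%p] %u@%d %r '
# A resulting line might look like:
# 2026-02-21 14:32:10.456 UTC [18442] alice@myapp 10.0.3.17(54012) ERROR: ...
```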
Step 5: Hunt Down Slow Queries (log_min_duration_statement)
This is the most commonly useful logging feature for application developers, and it is surprisingly simple. You set a time threshold, and PostgreSQL logs any query that takes longer than that, including the full SQL text.
log_min_duration_statement = 500 # milliseconds

With this setting, every query slower than half a second gets logged automatically. You do not need to instrument your application code or install any plugins. The logs become your automatic slow-query report.
When you see a log entry, it looks like this:
2026-02-21 14:32:10.456 UTC [18442] alice@myapp LOG:
duration: 2341.678 ms statement: SELECT * FROM orders
JOIN customers ON orders.customer_id = customers.id
WHERE orders.created_at > '2026-01-01';
You now have the exact query, the exact duration, and the exact user - everything you need to open it in EXPLAIN ANALYZE and start optimizing.
Start with 500ms in production. For a well-tuned application, you might tighten that to 100ms over time. During active debugging, you can temporarily set it to 0 to log every single query.
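This threshold can be changed without a restart. A sketch of flipping the cluster-wide value during a debugging session (requires superuser privileges):

```sql
-- Log every statement while debugging (superuser only)
ALTER SYSTEM SET log_min_duration_statement = 0;
SELECT pg_reload_conf();

-- Revert to the value from postgresql.conf when done
ALTER SYSTEM RESET log_min_duration_statement;
SELECT pg_reload_conf();
```

ALTER SYSTEM writes the override to postgresql.auto.conf, and pg_reload_conf() makes it take effect immediately, so remember the RESET step or the debug setting will survive restarts.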
Step 6: Catch Disk-Hungry Queries (log_temp_files)
When PostgreSQL cannot fit a sort operation or hash join in memory, it spills data to temporary files on disk. This is normal and expected for large queries - but if it's happening constantly, it's a sign that your work_mem setting is too low or your queries are not optimized.
By default, PostgreSQL creates and deletes these temp files silently. You never know they existed. Turn on log_temp_files and PostgreSQL will report every temp file when it's deleted, including its size:
log_temp_files = 0 # Log all temp files, regardless of size

A value of 0 logs everything. You can set a threshold in kilobytes to only log files above a certain size - for example, log_temp_files = 10240 only logs temp files larger than 10MB.
When you see a flood of temp file log entries, it's a clear signal to either increase work_mem for those sessions or investigate why those queries are processing so much data.
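If the logs point to a specific workload, one option is to raise work_mem only where it is needed rather than globally; the role name below is hypothetical:

```sql
-- More sort/hash memory for the current session only
SET work_mem = '64MB';

-- Or persistently for one role (hypothetical name)
ALTER ROLE reporting_user SET work_mem = '64MB';
```

Keep in mind that work_mem is a per-operation limit, so a complex query with several sorts and hashes can use a multiple of it; raise it surgically, not cluster-wide.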
Step 7: Find What's Blocking What (log_lock_waits)
One of the most frustrating performance problems in a production database is when one process holds a lock and everything else queues up behind it. Your app slows to a crawl, users see timeouts, and you have no idea why.
log_lock_waits fixes this visibility problem. It logs a message any time a session has to wait longer than deadlock_timeout (the default is 1 second) to acquire a lock:
log_lock_waits = on

When a lock wait is logged, you see which process is waiting, which lock it needs, and the query it was trying to run. This turns a mysterious slowdown into a traceable event. You can then investigate what the other process (the one holding the lock) was doing - often a long-running transaction, a batch job, or a forgotten open connection.
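You can also inspect lock waits live, without waiting for the log entry. A sketch using pg_stat_activity and pg_blocking_pids():

```sql
-- Waiting sessions and the PIDs currently blocking them
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       state,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```

The blocked_by column is an array of process IDs; look those PIDs up in pg_stat_activity to see what the blocking sessions are running.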
Step 8: Monitor Database Health (log_checkpoints)
A checkpoint is when PostgreSQL flushes all pending changes from memory to disk to ensure data durability. It is a necessary operation, but it can cause brief I/O bursts that slow down queries if it happens too frequently or takes too long.
log_checkpoints = on

With this enabled, every checkpoint writes a log entry that includes how many buffers were written, how long it took, and whether it was triggered by a timer or by hitting the max_wal_size limit. If you see the words LOG: checkpoint complete: wrote 18423 buffers appearing very frequently, your checkpoint configuration needs tuning.
This setting has been on by default since PostgreSQL 15, and there is rarely a reason to turn it off.
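You can also check checkpoint frequency from the cumulative statistics views instead of grepping logs. A sketch for PostgreSQL 17 and later, where the counters live in pg_stat_checkpointer (earlier versions expose similar columns in pg_stat_bgwriter):

```sql
-- Timed vs. requested checkpoints since statistics were last reset
SELECT num_timed,       -- triggered by checkpoint_timeout
       num_requested,   -- forced early, e.g. by WAL volume
       stats_reset
FROM pg_stat_checkpointer;
```

If num_requested dominates, checkpoints are being forced before the timer fires, and raising max_wal_size is usually the first tuning step to try.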
Putting It All Together: A Recommended Configuration
Here is a practical starting point that covers the most important scenarios without overwhelming you with noise:
# Where logs go
log_destination = 'jsonlog' # or 'csvlog' if you prefer SQL analysis
logging_collector = on # Required for file-based logging
# File management
log_directory = 'log'
log_filename = 'postgresql-%a.log' # Rolling 7-day files (Mon, Tue, Wed...)
log_truncate_on_rotation = on
log_rotation_age = 1440
log_rotation_size = 0
# Line identity
log_line_prefix = '%m [%p] %u@%d ' # Timestamp, PID, user, database
# What to log
log_min_duration_statement = 500 # Slow queries over 500ms
log_temp_files = 10240 # Temp files over 10MB
log_lock_waits = on # Lock contention
log_checkpoints = on # Checkpoint activity
# Noise level (production default)
log_min_messages = WARNING

Analyzing CSV Logs in the Database
One of the nicest things about csvlog is that you can load it directly into PostgreSQL itself and query it with SQL. Create the table once:
CREATE TABLE postgres_log (
log_time timestamp(3) with time zone,
user_name text,
database_name text,
process_id integer,
connection_from text,
session_id text,
session_line_num bigint,
command_tag text,
session_start_time timestamp with time zone,
virtual_transaction_id text,
transaction_id bigint,
error_severity text,
sql_state_code text,
message text,
detail text,
hint text,
internal_query text,
internal_query_pos integer,
context text,
query text,
query_pos integer,
location text,
application_name text,
backend_type text,
leader_pid integer,
query_id bigint,
PRIMARY KEY (session_id, session_line_num)
);

Then import any log file with:
COPY postgres_log FROM '/var/log/postgresql/postgresql-Mon.csv' WITH csv;

Now you can answer questions that would be impossible to answer by reading plain text files:
-- Which users triggered the most errors today?
SELECT user_name, COUNT(*) AS error_count
FROM postgres_log
WHERE error_severity = 'ERROR'
AND log_time > now() - interval '24 hours'
GROUP BY user_name
ORDER BY error_count DESC;
-- What were the slowest queries in the last hour?
SELECT log_time, user_name, message
FROM postgres_log
WHERE message LIKE 'duration:%'
AND log_time > now() - interval '1 hour'
ORDER BY substring(message FROM 'duration: ([0-9.]+)')::numeric DESC;

Quick Note on Security
Log files can contain sensitive data. If you log all statements (log_statement = all) or include bind parameters, passwords and personal data can end up in plain text in your log files. Keep these principles in mind:
- Restrict access to log files with log_file_mode = 0600 (the default), so only the database owner can read them.
- Store log files outside your cluster data directory if you're sharing access with other system administrators.
- Be careful with log_statement = all in production - it is extremely useful for auditing, but creates large files and can expose sensitive query parameters.
Conclusion
We covered a lot! You now know how to choose the right log format for your setup: plain text for simplicity, CSV for analysis, JSON for monitoring tools. You know how to keep log files from growing out of control with rotation and truncation. You know how to stamp every log line with the user, database, and timestamp so errors are immediately traceable. And you know the five settings that matter most in practice: turning on the collector, catching slow queries, flagging temp file spills, surfacing lock contention, and keeping an eye on checkpoint activity.
Good logging is one of those things that feels unnecessary until the day you desperately need it. Set it up once, configure it thoughtfully, and your future self will thank you the next time something goes wrong at 2am.