Triggers: A Love/Hate Relationship

For this installment of PGSQL Phriday #007, Lætitia Avrot asked us to blog about triggers in PostgreSQL. Love them, like them, or hate them, triggers often… well… trigger strong reactions. 😂

To get an idea of why triggers can cause so many reactions, I want to give a really brief history of triggers (from my perspective) and how that impacted my experience with them early on.

Action… Reaction

We live in an action/reaction world – and how our data is processed is no different. As relational database systems have evolved and matured through the decades, various forms of programmatic features have been added to an otherwise declarative system.

By design, plain ANSI SQL is declarative (“hey database, this is the data I want, you figure out how to get it”), not procedural (“hey database, I want this data and I want you to retrieve it like this”). Early on, there was no standard for how to add procedural features on top, although that eventually came with the definition of SQL/PSM in the mid-90s.

However, through the late 80s and most of the 90s, database vendors were trying to keep pace with very quickly changing requirements and needs in the database space. Even though triggers weren’t officially added until the SQL:99 standard, databases like Oracle had already released their own procedural languages and features. Triggers may have been deferred in the SQL-92 standard, but the Standards team couldn’t ignore them (or the complexity that triggers add to transactional consistency).

This history matters, to some extent, because it means each relational database was implementing trigger functionality a bit differently with different levels of feature compatibility compared with SQL:99. While the concepts of how a trigger works in one database generally transfers to another, the devil is in the details. 😉

One other reason it matters is that a lot of what we call(ed) relational consistency was initially implemented as triggers. Foreign Key constraints? Triggers. Check constraints? Triggers.

And therein starts my lovely history with Triggers. (you knew we’d get there eventually, right?)

(In)Consistent Triggers

My main introduction to triggers started with SQL Server in a large (for its day) application schema. The development of the schema traced back to Sybase and eventually into SQL Server. Although foreign key and check constraints had been added to SQL Server along the way, the database was still using triggers for relational integrity rather than check or foreign key constraints. This was primarily because the team had used the same ER tooling for many years, and it still implemented constraints as triggers. 🤷‍♂️

On the surface it worked, but the query planner couldn’t do its best job, because declared constraints give the planner additional context about data guarantees and triggers don’t. A trigger might prevent duplicate data (like a unique constraint would), but the planner has no way of knowing that values in the column are guaranteed to be unique.
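
To make that concrete, here’s a small sketch in PostgreSQL syntax (the customers table and email column are made up for the example). The declarative constraint hands the planner a hard guarantee; a trigger that enforces “the same rule” is invisible to the planner, and isn’t even safe under concurrent inserts:

-- Declarative: the planner knows "email" is unique and can use that fact.
ALTER TABLE customers
    ADD CONSTRAINT customers_email_key UNIQUE (email);

-- Trigger-based check: enforced at write time, but the planner learns nothing,
-- and two concurrent inserts can still slip past the EXISTS test.
CREATE OR REPLACE FUNCTION reject_duplicate_email() RETURNS trigger AS $$
BEGIN
    IF EXISTS (SELECT 1 FROM customers WHERE email = NEW.email) THEN
        RAISE EXCEPTION 'duplicate email: %', NEW.email;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER customers_check_email
    BEFORE INSERT ON customers
    FOR EACH ROW EXECUTE FUNCTION reject_duplicate_email();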

To be honest, I think scenarios like this in older applications have given triggers a bad rap in more established organizations. Don’t get me wrong, there are other issues to be aware of, but in my opinion triggers hold a valuable spot in the DBA and database developer toolset, particularly if your mindset continues to be “keep logic as close to the data as possible.”

Problems With Triggers Beyond Constraints

Beyond constraints, triggers inherently bring some other complexities. A number of years ago I remember reading this article by Joe Celko and realizing there was a lot about triggers that I didn’t fully understand at the time. They add multiple layers of complexity, and, partially owing to their continued use for things like constraints, triggers are often used for the wrong purpose.

Of the issues that Joe points out, there are a few that I think are more compelling than others as reasons not to use triggers in most cases. I’ll throw in a bonus that he didn’t specifically mention (but that I hear most often).

  1. Order of execution: Each database implements a different method for controlling the order in which triggers fire for the same operation. In SQL Server you can designate a first and a last trigger for a specific operation, but the rest execute in an undefined order. In Postgres, multiple triggers for the same event fire in alphabetical order by trigger name (see the sketch after this list).
  2. Triggers are easy to mess up: I think this is often true, and something I’ve experienced myself. Particularly when triggers cause other triggers to fire, it’s very tricky (almost impossible sometimes) to debug what’s going on through the entire stack.
  3. Triggers impact performance: There’s no way around this. If you require something to happen as part of every transaction or modification, it will impact overall performance at some level. The “good” news here is that computer performance and overall database improvements have probably made this negligible for most viable use cases, but it’s still an unavoidable cost.
  4. (Bonus) Triggers hide logic: There are countless stories of software bugs hidden by triggers. When other developers or DBAs don’t know to look through the entire chain of modified data, it can be difficult to figure out what’s going on (see point #2 above). Particularly in a development landscape full of ORMs, triggers can make it seem impossible to follow exactly what’s happening as data is inserted and modified. Take a hard look at the value you get from implementing a trigger versus your team’s ability to really understand the full system.
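
Since Postgres resolves same-event triggers purely by name, a numeric prefix is a common way to control the order. A minimal sketch (the orders table is hypothetical; the function just reports which trigger called it):

-- A trigger function that reports which trigger invoked it.
CREATE OR REPLACE FUNCTION announce_trigger() RETURNS trigger AS $$
BEGIN
    RAISE NOTICE 'fired: %', TG_NAME;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Same event, same table: firing order is alphabetical by trigger name,
-- so the numeric prefixes make the intended order explicit.
CREATE TRIGGER trg_010_validate BEFORE INSERT ON orders
    FOR EACH ROW EXECUTE FUNCTION announce_trigger();
CREATE TRIGGER trg_020_audit BEFORE INSERT ON orders
    FOR EACH ROW EXECUTE FUNCTION announce_trigger();
-- Any INSERT now raises a NOTICE for trg_010_validate first, then trg_020_audit.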

Add to these the plethora of feature variations from database to database (row-level vs. statement-level? only AFTER triggers, or does the database support BEFORE triggers as well?), and things get pretty hairy.

But still…

I ❤️ Triggers… Usually

Throughout my career I’ve generally leaned into using triggers when it’s appropriate. And once I moved to PostgreSQL and understood some of the additional trigger features it provides (more in line with the standard), I actually found additional uses for them.

Here are four use cases that come to mind which I think are still viable options for triggers, at least in PostgreSQL. YMMV depending on your database server of choice, but if you have a solid process for tracking changes to your schema model (I happen to know a great tool for that, BTW! 😉) and a team that understands why they’re making this architectural choice, I think triggers are a fine option to consider.

Auditing Table

Many people will disagree about this, and that’s OK. Some databases have implemented other features that do a (maybe) better job of tracking data modifications. Still, auditing tables are a classic use case for triggers. What’s even better (in my opinion) is that PostgreSQL provides trigger filters (the WHEN clause), so the trigger won’t fire at all if the condition isn’t met.

Do you only need to log a row when a specific column is modified? Easy peasy.
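
As a rough sketch of what that looks like in PostgreSQL (the orders table, its status column, and the audit table are all hypothetical), the WHEN clause is the filter that keeps the trigger from firing unless the column actually changed:

-- Hypothetical audit table: log a row only when "status" actually changes.
CREATE TABLE orders_audit (
    order_id   bigint,
    old_status text,
    new_status text,
    changed_at timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION log_status_change() RETURNS trigger AS $$
BEGIN
    INSERT INTO orders_audit (order_id, old_status, new_status)
    VALUES (OLD.id, OLD.status, NEW.status);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- The WHEN clause is the "trigger filter": the function is never even
-- called unless the condition is true.
CREATE TRIGGER orders_status_audit
    AFTER UPDATE ON orders
    FOR EACH ROW
    WHEN (OLD.status IS DISTINCT FROM NEW.status)
    EXECUTE FUNCTION log_status_change();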

Propagating changes

There are times when a change to one table or column needs to propagate to other parts of the database. Many of the applications I worked on previously had large parent/child tree-like structures as part of the design.

While recursive queries or special indexes could make querying those relationships possible, they tend to pay off less and less as the tree grows. In these cases, a trigger on the main entity table can fire if (and only if) the parent changed. It materializes the relationship data only for the affected branches, which limits update activity and allows simpler, faster indexing.

Don’t get me wrong, understanding how this process worked wasn’t trivial. But the overall logic rarely changed once it was set up, and it “just worked”, efficiently.
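
A stripped-down sketch of the idea (the categories table and its columns are hypothetical): a materialized path of ancestor ids is maintained by a trigger that only runs when the parent is touched. Pushing the new path down to existing descendants would need an additional AFTER trigger or statement, which I’m leaving out here:

-- Hypothetical tree table with a materialized path of ancestor ids.
CREATE TABLE categories (
    id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    parent_id bigint REFERENCES categories (id),
    path      bigint[] NOT NULL DEFAULT '{}'
);

CREATE OR REPLACE FUNCTION refresh_category_path() RETURNS trigger AS $$
BEGIN
    IF NEW.parent_id IS NULL THEN
        NEW.path := '{}';
    ELSE
        -- New path = the parent's path plus the parent itself.
        SELECT p.path || p.id INTO NEW.path
          FROM categories p
         WHERE p.id = NEW.parent_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Runs on insert and on updates that touch parent_id, not on every write.
CREATE TRIGGER categories_path
    BEFORE INSERT OR UPDATE OF parent_id ON categories
    FOR EACH ROW EXECUTE FUNCTION refresh_category_path();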

Building a Queue

In SQL Server there is a built-in messaging queue called Service Broker. While the queue can be filled at any time with SQL statements, many applications use triggers to insert messages into the queue whenever data changes in a table. A secondary process can then pick messages off the queue for further processing in a new transaction.

PostgreSQL has LISTEN/NOTIFY functionality, which isn’t specifically a queue, but can serve a similar purpose. And like Service Broker in SQL Server, you can publish messages manually or using a trigger. This gives a (mostly) asynchronous feel to your work.

When data is modified, the transaction can notify any listeners before it completes; once it commits, a second process or application receives the message and does the additional work.
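
A minimal sketch of that pattern (the channel name, jobs table, and payload are all made up): the trigger publishes a notification as part of the transaction, and any session that has run LISTEN on the same channel receives it once the transaction commits:

-- Publish a notification whenever a row lands in the hypothetical "jobs" table.
CREATE OR REPLACE FUNCTION notify_new_job() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('new_job', NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER jobs_notify
    AFTER INSERT ON jobs
    FOR EACH ROW EXECUTE FUNCTION notify_new_job();

-- A worker session subscribes with:
--   LISTEN new_job;
-- and is handed the payload (the new row's id) after the insert commits.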

I think this is one of the more useful things applications could be using triggers for and yet many of them don’t. IMHO, it’s worth exploring.

Automatically updating a column

This one might be controversial, too. Sorry.

Remember that PostgreSQL (and most other relational databases) aren’t procedural by nature. With functionality like triggers, you can keep columns in a database updated automatically.

The classic example is a “last modified” timestamp column. PostgreSQL has no native feature that sets a column’s value automatically when a row is updated; a trigger makes that possible. Even better, if you use the same column (name and type) in every table you want to track, you can use one trigger function for all of them!

Should this be done from the application code? Maybe. That’s for you to decide. But it’s easy to establish naming and schema conventions that make this functionality simple to implement and maintain.
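
A minimal sketch, assuming every table you want to track has an updated_at column of type timestamptz (the column name and the orders table are conventions invented for the example):

-- One generic trigger function, reusable by every table that has
-- an "updated_at" timestamptz column.
CREATE OR REPLACE FUNCTION set_updated_at() RETURNS trigger AS $$
BEGIN
    NEW.updated_at := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Attach it to each table you want to track.
CREATE TRIGGER orders_set_updated_at
    BEFORE UPDATE ON orders
    FOR EACH ROW EXECUTE FUNCTION set_updated_at();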

When All You Have Is a Hammer…

… everything starts to look like a nail. As much as I believe there are valid and useful reasons to use triggers, I’ve inevitably seen them reached for as the default answer to problems that really belong in the application design. Any time your team decides that a trigger is the best approach for a task, take time to seriously consider how you can proactively make the data manipulation process as transparent as possible.

  • How will you document it?
  • How easy is it to “see” the model of your schema (including features like triggers)?
  • If possible (and your database supports it), are you limiting the surface area of your trigger activity through filters?
  • Do you have a plan to test the functionality to ensure it works as intended with each migration?

In my opinion, triggers are a valid and useful tool. Just know why you’re using them and the benefit they are providing for the problem at hand.

Would You Like to Host a PGSQL Phriday?

Thank you again to Lætitia for hosting the event this month. I think triggers made a great topic, and I love when there’s a somewhat controversial database topic that we can all think about and share our experiences on.

What about you? Do you have a PostgreSQL topic that you want to write about and see what others in the community have to say? Then please reach out through Twitter, the PostgreSQL Slack #pgsqlphriday channel, LinkedIn, or any other way you can find me. I’ll make sure you get signed up for a future month and we’d love to have your voice added to the discussion!
