Writing Well-Behaved Triggers

Triggers are an indispensable feature of SQL Server. Essentially stored procedures that are invoked automatically, triggers can be used for logging, data validation and cleaning, synchronization, and notification, and for enforcing referential integrity constraints that are too complicated for foreign keys, among, no doubt, plenty of other things.

But alas, any seasoned SQL Server DBA is probably familiar with the minefield of performance-related and other issues that can very easily stem from misuse or overuse of triggers. I've frequently been victim to (and perpetrator of) such issues myself.

So what does it take to write a good, well-behaved trigger? A surprising number of non-obvious things, which I'm going to address one at a time by taking an example trigger from garbage to civilized. In a nutshell, here are the major concerns:

  1. Cut the chatter. Use NOCOUNT.
  2. Make sure your trigger can handle multiple rows.
  3. Prevent unbounded trigger cascading and recursion.
  4. Avoid horrible performance of the INSERTED and DELETED virtual tables.

The Sample Trigger

This will be our contrived example: imagine there's a table of customers (not a big stretch of imagination, I know). Suppose we need to log changes to the customers’ assigned sales rep, automatically change the assigned sales rep of associated customer contacts, and also send an email whenever a customer is reassigned away from the sales manager (he's a bit paranoid). A pretty routine scenario, in other words.

This is how you might write such a trigger, assuming the table is named Customers, and the primary key is CustomerNumber (int):

CREATE TRIGGER tr_CustomerAudit ON Customers FOR UPDATE
AS
--Insert the audit record
INSERT INTO CustomerAuditLog (
    CustomerNumber,
    OldSalesRep,
    NewSalesRep
)
SELECT
    i.CustomerNumber,
    d.SalesRep,
    i.SalesRep
FROM INSERTED i
    INNER JOIN DELETED d
        ON i.CustomerNumber = d.CustomerNumber
        AND i.SalesRep != d.SalesRep

--Update the customer contacts
UPDATE con
SET SalesRep = i.SalesRep
FROM INSERTED i
    INNER JOIN DELETED d
        ON i.CustomerNumber = d.CustomerNumber
        AND i.SalesRep != d.SalesRep
    INNER JOIN CustomerContacts con
        ON i.CustomerNumber = con.CustomerNumber

--Call a procedure to send an email notification
DECLARE @CustomerNumber int
SELECT @CustomerNumber = CustomerNumber
FROM INSERTED i
    INNER JOIN DELETED d
        ON i.CustomerNumber = d.CustomerNumber
WHERE d.SalesRep = 'Sales Manager'
    AND i.SalesRep != 'Sales Manager'

IF @CustomerNumber IS NOT NULL
    EXEC SendCustomerChangeNotification @CustomerNumber

It gets the job done (mostly), but there are some big problems with it.

Cut the Chatter – Use NOCOUNT

At some point, you've probably issued a sizeable update/insert against a table with a trigger on it, and seen results that looked something like this:

(1496 row(s) affected)

(1 row(s) affected)

(731 row(s) affected)

(731 row(s) affected)

Huh? Just how many rows did I update anyway? All the work going on in the database triggers is leaking through. Now imagine that you've just run a long script manipulating data in a number of tables, with hundreds of extra messages like that. Your script output would be impenetrable.

This is a simple problem to deal with – just put SET NOCOUNT ON at the beginning of your trigger. The NOCOUNT setting is scoped to whatever procedure changed it, so once your trigger (or stored procedure) finishes executing, the setting will revert to its original state.

I'm sure you can visualize what this code change looks like, so I won't repeat the entire trigger here just to add a single line.
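Just the first few lines, for clarity:

```sql
CREATE TRIGGER tr_CustomerAudit ON Customers FOR UPDATE
AS
--Suppress the "N row(s) affected" messages from this trigger's own DML
SET NOCOUNT ON

--...rest of the trigger body, unchanged...
```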

Make Sure Your Trigger Can Handle Multiple Rows

In our example trigger, pay close attention to the portion that calls SendCustomerChangeNotification. Remember that a trigger executes only once per insert/update/delete operation, not once for each row that's changed. Thus, in this case, the query which sets @CustomerNumber will retrieve only a single value, even if the entire Customers table was just reassigned away from the sales manager. Rather than being notified of this massive (and questionable) change, the sales manager will only hear about a single change. (Though, in fairness, he'd probably find out about the other changes pretty quickly.)

Let's assume that SendCustomerChangeNotification is then modified to accept a list of customer numbers (delimited by commas) rather than a single customer number. Ideally you'd do this with a table parameter, but that's outside the scope of this discussion on triggers, so let's assume it's just a varchar(max) parameter. In order to change the trigger to properly handle multiple customer changes, you could do the following:

--Call a procedure to send an email notification
DECLARE @CustomerNumber varchar(max)
SELECT @CustomerNumber = ISNULL(@CustomerNumber + ',', '') + CAST(i.CustomerNumber AS varchar(12))
FROM INSERTED i
    INNER JOIN DELETED d
        ON i.CustomerNumber = d.CustomerNumber
WHERE d.SalesRep = 'Sales Manager'
    AND i.SalesRep != 'Sales Manager'

IF @CustomerNumber IS NOT NULL
    EXEC SendCustomerChangeNotification @CustomerNumber

Other situations may require the use of a cursor, or some different technique.
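For instance, if the notification procedure genuinely could only accept one customer number at a time, a cursor over the relevant INSERTED rows would be one way to handle it. A sketch, assuming the original single-int signature of SendCustomerChangeNotification:

```sql
DECLARE @CustomerNumber int

DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT i.CustomerNumber
    FROM INSERTED i
        INNER JOIN DELETED d
            ON i.CustomerNumber = d.CustomerNumber
    WHERE d.SalesRep = 'Sales Manager'
        AND i.SalesRep != 'Sales Manager'

OPEN cur
FETCH NEXT FROM cur INTO @CustomerNumber
WHILE @@FETCH_STATUS = 0
BEGIN
    --One notification per reassigned customer
    EXEC SendCustomerChangeNotification @CustomerNumber
    FETCH NEXT FROM cur INTO @CustomerNumber
END
CLOSE cur
DEALLOCATE cur
```

Cursors in triggers carry their own performance baggage, of course, so reach for a set-based approach first when you can.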

There are many clever ways to make a trigger function incorrectly by assuming you'll only ever have to deal with a single row being modified. It's important to always ask: what will happen in my trigger if multiple rows are updated, especially if not all of those rows should be acted upon by the trigger?

Prevent Unbounded Trigger Cascading and Recursion

This is a problem that generally only rears its head after you've gotten a few triggers in place. It also becomes much less fun to troubleshoot after you've gotten a few triggers in place.

By default, SQL Server will enable “nested triggers” for the server. What this means is that if the actions of one trigger would cause another trigger to fire, then that trigger will fire. This in itself is perfectly normal, and generally desirable behavior. Where things get ugly is if this second trigger then does something that would cause the first trigger to fire again – you get a tennis match between the two (or more) triggers that only stops when you hit the 32-level nesting limit, and the original query fails with an error. Not good.
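You can inspect (or change) this behavior at the server level with sp_configure, though disabling nested triggers globally is a blunt instrument compared to fixing the triggers themselves:

```sql
--Check whether nested triggers are enabled (run_value 1 = on, the default)
EXEC sp_configure 'nested triggers'

--To disable them server-wide (not recommended as a first resort):
--EXEC sp_configure 'nested triggers', 0
--RECONFIGURE
```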

So how could this be a problem with our example trigger? After all, the INSERT and UPDATE statements are filtered to only look at rows where INSERTED.SalesRep != DELETED.SalesRep. Sure, they do that, but an INSERT is still an INSERT, and an UPDATE is still an UPDATE, even if no rows are affected! If you run a nonsense query like UPDATE Customers SET SalesRep = 'N/A' WHERE 1=0, the UPDATE triggers defined on Customers will still fire. And if those triggers are written naively, they could then cause tr_CustomerAudit to run again. Back and forth…
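A cheap first line of defense is to bail out immediately when zero rows were affected. One caveat: @@ROWCOUNT must be the very first thing the trigger reads, because even a SET statement resets it to 0:

```sql
CREATE TRIGGER tr_CustomerAudit ON Customers FOR UPDATE
AS
--This check must come before SET NOCOUNT ON, which resets @@ROWCOUNT
IF @@ROWCOUNT = 0
    RETURN

SET NOCOUNT ON
--...rest of the trigger...
```

This handles the zero-row case; it does nothing for updates that touch rows without actually changing SalesRep, which is what the check below addresses.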

Fortunately, this problem is easy to mitigate – simply check that you actually need to make changes before doing it. Here's what that looks like for our example:

CREATE TRIGGER tr_CustomerAudit ON Customers FOR UPDATE
AS

SET NOCOUNT ON

IF EXISTS (
        SELECT TOP 1 i.CustomerNumber
        FROM INSERTED i
            INNER JOIN DELETED d
                ON i.CustomerNumber = d.CustomerNumber
                AND i.SalesRep != d.SalesRep
    )
BEGIN
    --Insert the audit record
    INSERT INTO CustomerAuditLog (
        CustomerNumber,
        OldSalesRep,
        NewSalesRep
    )
    SELECT
        i.CustomerNumber,
        d.SalesRep,
        i.SalesRep
    FROM INSERTED i
        INNER JOIN DELETED d
            ON i.CustomerNumber = d.CustomerNumber
            AND i.SalesRep != d.SalesRep

    --Update the customer contacts
    UPDATE con
    SET SalesRep = i.SalesRep
    FROM INSERTED i
        INNER JOIN DELETED d
            ON i.CustomerNumber = d.CustomerNumber
            AND i.SalesRep != d.SalesRep
        INNER JOIN CustomerContacts con
            ON i.CustomerNumber = con.CustomerNumber
END

--Call a procedure to send an email notification
DECLARE @CustomerNumber varchar(max)
SELECT @CustomerNumber = ISNULL(@CustomerNumber + ',', '') + CAST(i.CustomerNumber AS varchar(12))
FROM INSERTED i
    INNER JOIN DELETED d
        ON i.CustomerNumber = d.CustomerNumber
WHERE d.SalesRep = 'Sales Manager'
    AND i.SalesRep != 'Sales Manager'

IF @CustomerNumber IS NOT NULL
    EXEC SendCustomerChangeNotification @CustomerNumber

This way, if the table is updated, but no rows have had SalesRep changed, the INSERT and UPDATE statements never even execute. Thus, any triggers on the CustomerContacts table won't be executed. Life seems good.

Except, this leads us into our next issue…

Avoid Horrible Performance of the INSERTED and DELETED Virtual Tables

This is an issue that only came to my attention recently, largely because I never expected such severe performance problems from these tables. I had wrongly assumed the performance problems were coming from excessive trigger cascading and inefficient user-defined functions.

The core of the problem is that INSERTED and DELETED have no indexes on them. None. They don't inherit any from the base tables. If your trigger joins INSERTED against DELETED, you can be in for a nasty surprise, on the order of O(n²).

We're talking stupefyingly bad performance. In one case, I had a trigger that would use the IF EXISTS (...) pattern, joining INSERTED to DELETED and checking for changes. The bulk of ERP queries would only update a single row, in which case everything was fine. But update 5,000 rows in the base table, and you would get execution plans from the trigger that were utterly absurd. I saw nested-loop joins with 1,800,000 rows by 5,000 rows. It would take several minutes just to complete a simple update of a few thousand rows in this table. I never waited long enough to see how long it would have taken to update all 20,000+ rows that I needed to modify.

Lesson learned: you need indexes. But you can't directly index INSERTED and DELETED – they're virtual tables. What you can do is copy INSERTED and DELETED into either temporary tables or table variables. There are pros and cons to both. And while it sounds like a lot of extra overhead for each trigger execution, it's nowhere near as bad as the overhead you get from joining a large INSERTED and DELETED. As long as you only copy the columns that you're interested in, you should be fine.

Temporary Table

Pro:

  • Can create with a simple SELECT ... INTO statement.
  • Allows the full range of indexing.

Con:

  • Names must be unique to the current connection.

Table Variable

Pro:

  • Names are scoped locally to the batch/procedure.

Con:

  • Must manually declare, and specify column names/data types.
  • Can only create indexes by way of declaring PRIMARY KEY or UNIQUE constraints (inline index syntax didn't arrive until SQL Server 2014).

So it sounds like temporary tables are the more convenient option. They are. Unfortunately, that name scoping issue can have some irritating consequences. When I was first experimenting with the technique, I daftly named my tables #i and #d in all the triggers. And that works fine, as long as you don't have any cascading triggers (but you probably will).

If one trigger creates #i, then ends up causing another trigger to run, that second trigger will fail with some very unexpected compile errors if it also attempts to use #i. What do you mean CustomerNumber isn't a valid column name? It's right there in Customers! But the error is in fact referring to the structure of #i that was created in the first trigger, which is probably based on a different table from Customers – one which probably doesn't have a CustomerNumber column.

To work around this, you'll need to use unique temporary table names in each trigger that creates them. For simplicity, I recommend naming them like #trigger_name_i, or #trigger_name_d. Of course, if you have recursive triggers enabled for a database, in which a trigger can cause itself to fire again, this can still bite you in the ass.
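With those naming rules in mind, the temp-table version of the copy step might look like this (columns trimmed to just what our example trigger uses):

```sql
--Copy only the columns the trigger needs, using trigger-specific names
SELECT CustomerNumber, SalesRep
INTO #tr_CustomerAudit_i
FROM INSERTED

SELECT CustomerNumber, SalesRep
INTO #tr_CustomerAudit_d
FROM DELETED

--Now we can index the join key, which the virtual tables never allow
CREATE CLUSTERED INDEX IX_i ON #tr_CustomerAudit_i (CustomerNumber)
CREATE CLUSTERED INDEX IX_d ON #tr_CustomerAudit_d (CustomerNumber)
```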

For full name-collision safety, you'll need to use table variables, which are scoped only to the trigger that creates them. Here's what our example trigger looks like with INSERTED and DELETED copied into table variables:

CREATE TRIGGER tr_CustomerAudit ON Customers FOR UPDATE
AS

SET NOCOUNT ON

DECLARE @i TABLE (
    CustomerNumber int PRIMARY KEY CLUSTERED,
    SalesRep varchar(32)
)
INSERT INTO @i
SELECT CustomerNumber, SalesRep FROM INSERTED

DECLARE @d TABLE (
    CustomerNumber int PRIMARY KEY CLUSTERED,
    SalesRep varchar(32)
)
INSERT INTO @d
SELECT CustomerNumber, SalesRep FROM DELETED

IF EXISTS (
        SELECT TOP 1 i.CustomerNumber
        FROM @i i
            INNER JOIN @d d
                ON i.CustomerNumber = d.CustomerNumber
                AND i.SalesRep != d.SalesRep
    )
BEGIN
    --Insert the audit record
    INSERT INTO CustomerAuditLog (
        CustomerNumber,
        OldSalesRep,
        NewSalesRep
    )
    SELECT
        i.CustomerNumber,
        d.SalesRep,
        i.SalesRep
    FROM @i i
        INNER JOIN @d d
            ON i.CustomerNumber = d.CustomerNumber
            AND i.SalesRep != d.SalesRep

    --Update the customer contacts
    UPDATE con
    SET SalesRep = i.SalesRep
    FROM @i i
        INNER JOIN @d d
            ON i.CustomerNumber = d.CustomerNumber
            AND i.SalesRep != d.SalesRep
        INNER JOIN CustomerContacts con
            ON i.CustomerNumber = con.CustomerNumber
END

--Call a procedure to send an email notification
DECLARE @CustomerNumber varchar(max)
SELECT @CustomerNumber = ISNULL(@CustomerNumber + ',', '') + CAST(i.CustomerNumber AS varchar(12))
FROM @i i
    INNER JOIN @d d
        ON i.CustomerNumber = d.CustomerNumber
WHERE d.SalesRep = 'Sales Manager'
    AND i.SalesRep != 'Sales Manager'

IF @CustomerNumber IS NOT NULL
    EXEC SendCustomerChangeNotification @CustomerNumber

Don't forget to update your references to INSERTED and DELETED to whatever temporary table/variable names you've chosen.

The Last Word

You'll notice that the final version of our trigger is quite a bit longer than the original. Triggers are very powerful, but also a very easy way to shoot yourself in the foot if you don't code defensively. There's a lot to consider, and a fair amount of boilerplate code to keep straight. But the headaches caused by writing a well-behaved trigger are nothing compared to the headaches caused by quelling the anarchy of 20 triggers that were written with no regard for performance and safety.

Trust me. I speak from experience on this.
