
XenDesktop SQL Transaction Log usage

Recently I was asked what the growth rate of the transaction log should be when a XenDesktop environment is idle. I felt it's an area worth blogging about, so this post covers a little on what the transaction log is and how much it grows when an environment is idle.

In some ways this is a supplement to the DB Sizing document, and is likely to be rolled into a future version of that document.

What is the transaction log?

SQL Server uses the transaction log to store changes to the database. It is a continuous stream of database changes, in contrast to the database files, which store the actual rows and tables.

This means that there is a steady stream of writes to the transaction log file all the time. Note that with database mirroring and availability groups this log growth occurs on all replicas.

The log contains the before and after state of the rows being changed; in general only the columns that were changed are logged. However, SQL Server has no concept of skipping an update when the new data matches the old: if you update a column it will do so, no matter what the previous contents were. So it's quite possible to cause excess log growth by repeatedly setting a column to the same value.
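As a small illustration (the table and column names here are made up for the example, not part of the XenDesktop schema), the first statement below generates log records even when the stored value already matches; only the guarded form skips the write when nothing has changed:

-- Hypothetical table: this update is logged even if Status is already 'Idle'
UPDATE dbo.WorkerState SET Status = 'Idle' WHERE WorkerId = 42;
-- Guarded form: no row qualifies when the value is unchanged, so no row change is logged
UPDATE dbo.WorkerState SET Status = 'Idle' WHERE WorkerId = 42 AND Status <> 'Idle';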

Until the log has been hardened (i.e. flushed to disk) a transaction isn't committed (assuming it doesn't deadlock or conflict in some way). This means that the write performance of the transaction log disk is more important in XenDesktop than the read performance, as even a large site tends to fit the site database into memory.

What’s the impact on the disk I/O?

The log is written to all the time, and if you place the database and the log files on different volumes you can see the difference in their I/O behaviour.

The main difference (and this surprised me) is that the database file is only updated when it's checkpointed, which in SQL Server happens roughly once a minute by default. This shows up as a regular write spike on the DB volume. The log file, in contrast, receives continuous writes, so it sees a high number of writes per second.
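If you want to check the checkpoint cadence on your own instance, the 'recovery interval (min)' setting can be inspected; a value of 0 (the default) means SQL Server checkpoints automatically, roughly once a minute. For example:

-- 'recovery interval (min)' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- config_value of 0 (the default) = automatic checkpoints, roughly once a minute
EXEC sp_configure 'recovery interval (min)';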

Recommendations on Transaction log setup

SQL Express or no replication

For those using SQL Express, or no form of replication, I'd suggest using the Simple Recovery Model. The transaction log is then continuously cleared out, which requires little maintenance.
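As a sketch (substitute your own site database name for the placeholder), switching to the Simple Recovery Model is a one-line change:

-- Replace [CitrixXenDesktopSite] with the name of your site database
ALTER DATABASE [CitrixXenDesktopSite] SET RECOVERY SIMPLE;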

Replication

For customers using mirroring or availability groups the database has to be running in the Full Recovery Model, for which I have a few suggestions.

Use a fixed size transaction log

This stops it from ever filling the disk. It also has the advantage that the transaction log is pre-zeroed and won't auto-grow. Each time the log auto-grows the new space has to be zeroed, and until the zeroing completes the database can't carry out any more transactions. If you're mirroring, the primary and the mirror have to grow and zero that space sequentially. This can badly stall the database when auto-grow is left at the default of 10%, because as the log gets bigger, so does that 10%.
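For example, a fixed-size, non-growing log can be configured along these lines (the database name, logical log file name and size below are placeholders; the logical name can be looked up in sys.database_files, and the size should come from the DB Sizing guidance for your environment):

-- Replace the database name, logical log file name and size with your own values
ALTER DATABASE [CitrixXenDesktopSite]
MODIFY FILE (NAME = N'CitrixXenDesktopSite_log', SIZE = 4096MB, FILEGROWTH = 0);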

Setup a job and an alert

With a fixed-size transaction log you'll need to maintain the log and keep it backed up and trimmed to size. SQL Agent can be configured to run a job when a perfmon counter passes a threshold, so I normally use an alert that fires when the following counter goes over 80%:
Databases -- Percent Log Used -- Database Name

You then need a job for the alert to call that will back up the transaction log. This job may contain an SQL statement similar to:
BACKUP LOG [chrisg-XenDesktop] TO DISK = N'D:\Log_Backup\ChrisGXenDesktop.bak' WITH NOFORMAT, NOINIT, COMPRESSION, NAME = N'XD-Transaction Log Backup', SKIP, NOREWIND, NOUNLOAD
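To tie the two together, the alert can be created with sp_add_alert along these lines. This is a sketch: it assumes a default SQL Server instance (a named instance uses MSSQL$InstanceName:Databases as the perfmon object), the chrisg-XenDesktop database from the example above, and a pre-created SQL Agent job called 'XD Log Backup' containing the BACKUP LOG step:

-- Sketch: fire the 'XD Log Backup' job when the log passes 80% used
USE msdb;
EXEC dbo.sp_add_alert
    @name = N'XD Log 80 Percent Full',
    @performance_condition = N'SQLServer:Databases|Percent Log Used|chrisg-XenDesktop|>|80',
    @enabled = 1,
    @delay_between_responses = 300,
    @job_name = N'XD Log Backup';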

How much activity should the transaction log see?

This is a tricky question, as the exact activity depends on what's going on. However, I'll attempt to give some baseline information; this is the expected growth when your environment is in a steady state.

Some of the numbers are approximate, and most of them are based on data retrieved from looking at the transaction log with:
select * from fn_dblog(null,null)
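If you want to see where the log volume on your own site is coming from, a rough breakdown by operation can be pulled from the same function (fn_dblog is undocumented and its output varies by version, so treat this as a diagnostic sketch rather than something to rely on):

-- Rough diagnostic: log bytes by operation since the last log truncation
SELECT [Operation], COUNT(*) AS LogRecords, SUM([Log Record Length]) AS TotalBytes
FROM fn_dblog(NULL, NULL)
GROUP BY [Operation]
ORDER BY TotalBytes DESC;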

Services

Each DDC Windows service sends a simple heartbeat every 30 seconds to indicate it's still alive and running.

Currently in XenDesktop 7.5 there are 10 services, and each heartbeat is 606 bytes, so each DDC writes 6060 bytes per heartbeat interval. With 120 heartbeats an hour, that's 727200 bytes per hour per DDC.

Site Services

Within some of the services are "site services": items that only run on one DDC, which also heartbeat to maintain ownership. Each service's site-service heartbeats are done in one transaction, so the size varies. The list looks like this:

Monitor: 6 site services = 384 header + 6 * 190 = 1524 bytes
Delegated Admin: 1 site service = 384 header + 1 * 190 = 574 bytes
Broker: 16 site services = 384 header + 16 * 190 = 3424 bytes
Hosting: 1 site service = 384 header + 1 * 190 = 574 bytes
Desktop Update Manager: 1 site service = 384 header + 1 * 190 = 574 bytes
Config Logging: 2 site services = 384 header + 2 * 190 = 764 bytes
AD Identity: 1 site service = 384 header + 1 * 190 = 574 bytes

So in total the site services consume 8008 bytes per heartbeat, or 960960 bytes per hour. This is not per DDC, as only the owning DDC updates the site services it owns.

Note that the site services will also vary with the number of hypervisor connections; however, the impact is relatively small compared to the worker heartbeats.

Worker heartbeats

Each worker also heartbeats once every 5 minutes. This is the same for VDI and HSD.

The heartbeat is 1150 bytes in size (note this grew in 7.x, and should hopefully be shrunk in a future XenDesktop version down to approximately half the size).

At 12 heartbeats an hour, the heartbeat will consume 13800 bytes per worker per hour.

So how much growth per hour should I expect?

DDC Heartbeats = “Number of DDCs” * 727200 bytes
Site Services = 960960 bytes
VM growth = “Number of workers” * 13800 bytes
Total idle growth per hour = “DDC Heartbeats” + “Site Services” + “VM Growth”

E.g. for a 10k VDI environment with 2 DDCs, the minimum growth would be:

DDC Heartbeats = 2 * 727200 bytes = 1454400 bytes
Site Services = 960960 bytes
VM growth = 10000 * 13800 bytes = 138000000 bytes
Total idle growth per hour = 1454400 + 960960 + 138000000 = 140,415,360 bytes

Or: ~134 MB per hour, ~3.2 GB per day.
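If it helps to plug in your own numbers, the same formula can be run as a short calculation; the constants are the per-hour figures from this post, and the two variables at the top are the only values to change:

-- Idle transaction log growth estimate, using the per-hour figures from this post
DECLARE @DDCs int = 2, @Workers int = 10000;
DECLARE @DDCHeartbeats bigint = @DDCs * 727200;       -- 10 services * 606 bytes * 120 heartbeats/hour
DECLARE @SiteServices bigint = 960960;                -- 8008 bytes * 120 heartbeats/hour
DECLARE @WorkerHeartbeats bigint = @Workers * 13800;  -- 1150 bytes * 12 heartbeats/hour
SELECT (@DDCHeartbeats + @SiteServices + @WorkerHeartbeats) / 1048576.0 AS IdleGrowthMBPerHour;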
