Batch Apex Query Behaviour

How fresh is your batch?

When running a Batch Apex job that implements Database.Batchable<sObject>, you specify a query in the start method (which can select up to 50 million records) and define the work to be performed on those records in the execute method.

The platform runs the start method, queries the data, and feeds it to your execute method in batches.
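For reference, a minimal skeleton of such a class might look like the following; the class name and query are purely illustrative, not taken from any real job:

    public class ExampleBatch implements Database.Batchable<sObject> {

        public Database.QueryLocator start(Database.BatchableContext bc) {
            // The query locator returned here may select up to 50 million records.
            return Database.getQueryLocator('SELECT Id, Name FROM Account');
        }

        public void execute(Database.BatchableContext bc, List<sObject> scope) {
            // Called once per batch, with up to 200 records per batch by default.
            for (Account acc : (List<Account>) scope) {
                // ...do the per-record work here...
            }
        }

        public void finish(Database.BatchableContext bc) {
            // Runs once after the final batch has been processed.
        }
    }

The job is then started with Database.executeBatch(new ExampleBatch()).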

Ch-Ch-Changes

Originally, the way this worked was that the data was queried in the start method and the entire query results (all the queried fields) were put into temporary storage, to be chopped into batches and processed in the execute method (stale batches).

However, some time ago (around 2012) this was changed so that only the IDs of the records are put into temporary storage. The platform then retrieves the records by ID just in time to be processed for each batch.

Apex developers need to be aware of what this means for them. If any of the records are modified by another user or process while the batch job is running, it is likely to be the modified (fresh) version of the records that is passed into the execute method, not the (stale) version of the records as they were when the start method was called.

I have seen some discussion about this recently, so I set about creating a test to verify the behaviour. I have shared my test code on GitHub.

Testing 1… 2… 3…

In my test I created 5,000 test Account records with sequential AccountNumbers and then ran two concurrent batch jobs over the same records. In one job the records were sorted in ascending order of AccountNumber; in the other, in descending order.

Both jobs simply update the Site (text) field, appending a tag to identify the batch process (whether it is the ascending or descending job) along with a batch number, as sketched below.
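The actual test code is on GitHub, but a rough sketch of what each job might look like follows; the class name, tag format, and query are illustrative assumptions rather than the real code:

    public class TaggingBatch implements Database.Batchable<sObject>, Database.Stateful {

        private final String tag;        // identifies the job, e.g. 'ASC' or 'DESC' (illustrative)
        private final String sortOrder;  // 'ASC' or 'DESC'
        private Integer batchNumber = 0; // Database.Stateful keeps this counter across batches

        public TaggingBatch(String tag, String sortOrder) {
            this.tag = tag;
            this.sortOrder = sortOrder;
        }

        public Database.QueryLocator start(Database.BatchableContext bc) {
            // One job sorts the Accounts ascending, the other descending.
            return Database.getQueryLocator(
                'SELECT Id, Site FROM Account ORDER BY AccountNumber ' + sortOrder);
        }

        public void execute(Database.BatchableContext bc, List<sObject> scope) {
            batchNumber++;
            for (Account acc : (List<Account>) scope) {
                // Append this job's tag and batch number to whatever is already in Site.
                acc.Site = (acc.Site == null ? '' : acc.Site + ' ') + tag + batchNumber;
            }
            update scope;
        }

        public void finish(Database.BatchableContext bc) {}
    }

    // Kicked off together so the two jobs run concurrently:
    // Database.executeBatch(new TaggingBatch('ASC', 'ASC'));
    // Database.executeBatch(new TaggingBatch('DESC', 'DESC'));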

Once both processes had finished, if the full records had been read into temporary storage by the start method, then the records with the lowest AccountNumbers would show only the descending job's tag in the Site field. These “low” records would have been updated by the ascending batch job first, and then that change would appear to have been overwritten by the descending batch job, because the descending job would be working from stale data that did not contain the ascending tag.

On reviewing the Account records I found that the records at the start of the set showed both tags – the ascending tag first, and then the descending tag. This means that the descending job must have selected these records after the ascending job had already updated them. Similarly, the records at the end of the set showed the descending tag followed by the ascending tag.

In the middle of the record set were some records which had only one tag – either ascending or descending. These records would have been read at the same time by both processes and so each process appended its tag to an empty field (the initial state).
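If you want to eyeball the results yourself, an anonymous Apex snippet along these lines would do it (the record limit and expected tag order here are assumptions based on the description above):

    // Records at the low end of the range are expected to show the ascending
    // tag first, followed by the descending tag.
    for (Account acc : [SELECT AccountNumber, Site
                        FROM Account
                        ORDER BY AccountNumber
                        LIMIT 20]) {
        System.debug(acc.AccountNumber + ' -> ' + acc.Site);
    }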

[Image: Contrary Batch Processes]

Conclusions… this slice of data is only fit for toasting

The test would appear to confirm that the full record data is not read into temporary storage at the start of the job.

What was interesting was that, by using a smaller batch size, I found that these single-tagged records spanned multiple batches, which suggests that the platform queries the records in larger chunks than the batch size and then breaks those chunks into batches.

This is also quite important, as it implies that sometimes, but not always, the platform will read ahead, so your batch data may or may not be stale.
