A Tale of Two Platforms

It will be 10 years ago next month that I wrote my first line of Apex code. Visualforce was in beta (though not yet packageable), the only valid use of the “extends” keyword in Apex was for custom exceptions, and S-Controls were still alive and kicking… happy days.

Earlier in 2007, and of slightly (British understatement) more significance than my first Apex trigger handler, Heroku was founded.

By 2009 my trigger handler class belonged to a FinancialForce product, and in the same year Heroku launched commercially as a deployment platform for Ruby apps.

In 2010 Salesforce acquired Heroku, then in the following year Heroku added support for Node.js, Clojure, Java, Python and Scala. The acquisition of Heroku by Salesforce was interesting. Heroku and the Lightning Platform or “the platform formerly known as Force.com”, are very different beasts.

The Lightning Platform is a high productivity Application Platform as a Service (aPaaS) designed for “power admins” and “citizen developers” and is primarily model-driven – based around Salesforce Objects (SObjects). Apex, Visualforce and Lightning Components provide coding capabilities for advanced requirements, but this is a “low-code” platform in technology and culture.

Heroku on the other hand is a high-control Platform as a Service (PaaS) designed for developers, and which firmly embraces a message-driven architecture. We might call it a “high-code” platform, designed to be elastically scalable to very high throughputs.

It was a few years before we saw any real bridge between Heroku and the Lightning Platform. Heroku Connect was introduced in 2014 as an automated data synchronisation service, often used by freestanding Heroku web apps or services to bring data into Salesforce without the Heroku developer needing to know too much about the Lightning Platform.

Lightning Platform apps have become bigger and much more complex, to the point where “low-code” is perhaps no longer a very good description of what is going on. Apex governors are a “one size fits all” way of managing platform resources, and may well be suited to a “low-code” platform. However, with the very complex logic now being written in Apex, governors are driving engineers to extreme lengths to get the job done.

Some Apex developers have looked enviously at the Heroku environment which not only offers computational power which can be scaled up to the needs of the task, but the ability to choose the most suitable language, and the availability of a plethora of libraries and frameworks which mean that the developer need not reinvent the wheel on a daily basis.

Some intrepid developers have decided to straddle the gap between the Lightning Platform and Heroku to take advantage of Heroku compute power within their applications – embedding Heroku power within their Salesforce applications.

However, despite the advantages of using Heroku compute with the Lightning Platform, there are challenges to be overcome.

Identification and Authentication

When starting a Heroku compute process from within the Lightning Platform, your Heroku process will most likely want to check the identity of the caller – the Salesforce Organization and User – and whether the Org Admin has authorised the user to perform the action.

Similarly, when the Heroku process needs to call back into the Lightning Platform, Salesforce will likewise need to authenticate the request and identify the Salesforce User – and Heroku will need to provide the necessary credentials. This ensures that the data written back to Salesforce is tied to an appropriate Salesforce user – which will often be the user who initiated the process.

Asynchronicity

To a large extent, Heroku’s scalability comes from the way it can handle processes asynchronously – using a message-driven architecture. Ok, so you don’t have to write asynchronous services in Heroku, but you’ll be limiting your application’s scalability if you don’t. Worker Dynos, Background Jobs and Queueing does a good job of explaining this, but here’s a real world illustration:

Imagine you walk into a pizza place, you make your order and the person who takes it pops on an apron, washes their hands and heads back into the kitchen to prepare your food. What happens to the next person who comes in? There’s no one to take their order! That’s synchronous processing. There’s a lot of waiting. I want to get my order taken right away, and maybe step next door to pick up some gelato while my pizza is being prepared. Maybe I’ll pop in to ask how it’s going – or better yet get a notification on my phone when my order is ready.

With asynchronous processing, work is put into a queue. In Heroku if the queue gets too long, then you can either increase the power available for your process, or use a greater number of “processors” to get through the work faster. Either way you are not causing anything upstream to wait while the work is done.

The downside of running asynchronous processes is that it involves a different way of looking at the implementation and a little more effort – publishing messages to a queue and subscribing to them, and considering how you’re going to respond to the process ending.

Orizuru

FinancialForce has recently released the Orizuru open-source toolkit to help to overcome these challenges.

For Identification and Authentication there’s a package which makes the necessary calls to the Salesforce Identity Provider, to validate the requests coming from Salesforce, and also to authenticate with Salesforce in order to call back in when needed. There’s also a command line tool which configures both the Salesforce org (with a Connected App) and the Heroku app – wiring up the authentication both ways.

With the asynchronous processing side of things, Orizuru aims to make this as simple as writing Batch Apex. You, the developer, write the code that does the actual work you want done in Heroku – maybe pulling in some super libraries in Node or Java, and the Orizuru toolkit does the plumbing: Apex API client, API service, validation of requests, publishing messages to a queue and subscribing from the queue.

It’s early days, so there are bound to be some gaps, but it’s an open source project, and the team would welcome both constructive criticism and contributions!



https://orizuru.github.io/
https://twitter.com/OrizuruCode/

https://www.heroku.com/about
https://www.salesforce.com/blog/2017/10/forrester-leader-low-code-app-development.html
https://devcenter.heroku.com/articles/background-jobs-queueing


Batch Apex Query Behaviour

How fresh is your batch?

When running a Batch Apex job which implements Database.Batchable<sObject>, you specify a query in the start method (in which you can select up to 50M records) and define an activity to be performed based on these records in the execute method.

The platform runs the start method, queries the data, and feeds the data to your execute method, in batches.
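
For reference, a minimal skeleton of such a job might look like this (object and field names are illustrative):

public class AccountNumberBatch implements Database.Batchable<sObject>
{
  public Database.QueryLocator start(Database.BatchableContext bc)
  {
    // defines the scope of the job - up to 50M records can be selected here
    return Database.getQueryLocator([SELECT Id, AccountNumber FROM Account]);
  }
  public void execute(Database.BatchableContext bc, List<sObject> scope)
  {
    // called repeatedly, once per batch of records taken from the start query
  }
  public void finish(Database.BatchableContext bc) {}
}
// Usage:
// Database.executeBatch(new AccountNumberBatch(), 200);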

Ch-Ch-Changes

Originally, the way this worked was that the data was queried in the start, and the entire query results (all the queried fields) put into temporary storage, to be chopped into batches and processed in the execute method (stale batches).

However, some time ago (around 2012) this was changed so that only the IDs of the records were put into temporary storage. The platform will retrieve the records by ID just-in-time to be processed for each batch.

Apex developers need to be aware of what this means for them. If any of the records are modified by another user or process while the Batch job is running, it is likely that it will be the modified (fresh) version of the records which will be passed into the execute, and not the (stale) version of the records as they were when the start method was called.

I have seen some discussion about this recently and so set about creating a test to verify this behaviour. I have shared my test code on github.

Testing 1… 2…3…

In my test I created 5,000 test Account records with sequential AccountNumbers, and then ran two concurrent Batch jobs over the same records. In one job the records were sorted in ascending order of AccountNumber, and in the other, in descending order.

Both jobs simply update the Site (text) field, appending a simple tag to identify the batch process (whether it’s the ascending or descending job) and also adding a batch number.
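
A simplified sketch of one of these jobs might look something like this (class and field choices here are illustrative – the published test code on github is the authoritative version):

public class SiteTaggingBatch implements Database.Batchable<sObject>, Database.Stateful
{
  String tag;              // 'ASC' or 'DESC' - doubles as the sort direction
  Integer batchNumber = 0; // retained across batches thanks to Database.Stateful

  public SiteTaggingBatch(String tag)
  {
    this.tag = tag;
  }
  public Database.QueryLocator start(Database.BatchableContext bc)
  {
    return Database.getQueryLocator('SELECT Site FROM Account ORDER BY AccountNumber ' + tag);
  }
  public void execute(Database.BatchableContext bc, List<sObject> scope)
  {
    batchNumber++;
    for (sObject s : scope)
    {
      Account a = (Account) s;
      // append this job's tag and batch number to whatever is already in the field
      a.Site = (a.Site == null ? '' : a.Site + ' ') + tag + batchNumber;
    }
    update scope;
  }
  public void finish(Database.BatchableContext bc) {}
}
// Usage (two contrary jobs over the same records):
// Database.executeBatch(new SiteTaggingBatch('ASC'), 200);
// Database.executeBatch(new SiteTaggingBatch('DESC'), 200);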

Once both processes are finished, if the records had been read into temporary storage from the start method, then when examining the records with the lowest AccountNumbers the Site field would show the tag for the descending batch job. The “low” records would have been updated by the ascending batch job first, and then later on the changes would appear to have been overwritten by the descending batch job – because the descending job would be using stale data which does not contain the ascending tag.

On reviewing the Account records I found that the records at the start of the set showed both tags – the ascending tag first, and then the descending tag. This means that the descending job must have selected these records after the ascending job had already updated them. Similarly, the records at the end of the set showed the descending tag followed by the ascending tag.

In the middle of the record set were some records which had only one tag – either ascending or descending. These records would have been read at the same time by both processes and so each process appended its tag to an empty field (the initial state).

[Diagram: contrary batch processes]

Conclusions… this slice of data is only fit for toasting

The test would appear to confirm that the records are not passed into temporary storage at the start of the job.

What was interesting was that by using a smaller batch size I found that these single-tagged records spanned multiple batches, which would indicate that the platform queries the records in larger chunks than the batch size and breaks these query chunks into batches.

This is also quite important, as it implies that sometimes, but not always, the platform will read-ahead and so your batch data may or may not be stale.


FlexQueue and the Evolution of Asynchronous Apex

Governor Grappling

Sooner or later (okay, sooner rather than later) when working in Apex we will need to grapple with Apex Governor Limits:

Because Apex runs in a multitenant environment, the Apex runtime engine strictly enforces limits to ensure that runaway Apex doesn’t monopolize shared resources

This is a good thing – it means that a bad tenant (who thinks that bulkification is a dietary term and object orientation refers to feng shui for the desktop) doesn’t affect the good tenants – that’s you and me.

Making Light Work of Heavy Lifting

Sometimes though, the limits may be too restrictive – when we have some heavy lifting to do. Here Asynchronous Apex comes to the rescue, which makes light work of heavy jobs.

As the name implies, Asynchronous Apex means that the work is not done immediately / synchronously, as it is in the normal interactive context for your users. There’s an exchange, a compromise to be made – you accept that you will be prepared to wait a little for the work to be done, which makes it easier for Salesforce to manage demand on resources, and in return Salesforce will give you higher governor limits, as the resources are easier for the platform to allocate. Currently with Async Apex the governor benefits are:

  • x6 CPU time
  • x2 SOQL queries
  • x2 heap size

Now, because the work isn’t done immediately, you need to put some more thought into what happens when the work is finished. With synchronous Apex you can provide immediate feedback to your user – for example if you’re working in a Visualforce page or a Lightning Component. But with async Apex the execution has split off from the user’s path and there’s no direct way of providing them with a response.

You can of course send user notifications by email or Chatter, or otherwise you would have to provide some means in your application to check back on the work being done in the async process.

Future Methods

The future method was the first means provided by Salesforce to do Asynchronous Apex. These methods are really simple to use – annotate a static method in Apex, and when you call it, the method returns immediately, but the work will be done at some point in the not too distant future:

public with sharing class FutureClass
{
  @future(callout=true) 
  public static void myMethod(String s, Integer i)
  {
    System.debug('Processing primitive variables ' + s + ' and ' + i + ' asynchronously');
    // do your stuff 
  }
}

// usage:
// FutureClass.myMethod('foo',1);

We get the higher async governor limits within the execution of the static method, but there are some limitations / considerations:

  • we cannot directly monitor the status of future method execution once we’ve fired it off – we’d have to look for the effects of the method on our data (but what if it fails? – there would be no effects – queue tumbleweed…)
  • we cannot be sure of the order that multiple future methods are executed
  • parameters can only be primitive types (or collections of primitives)
  • there is a limit of 50 future method calls in a single Apex execution
  • recommended best practice for future methods is that methods should be designed to execute quickly; therefore if you need to make a (potentially slow) HTTP callout from a future method, you need to declare it in the annotation with “callout=true”

Future Method Documentation

Batch Apex

Batch Apex was provided to handle situations where a similar operation needs to be executed iteratively many times, most typically an operation to be performed on a large set of records (up to 50M records).

A batch job has three parts:

  1. start by defining the overall scope of the job – typically (but not exclusively) using a SOQL query to select your records
  2. execute a method repeatedly, each iteration needs to handle a batch of the work that was scoped in the start (again, typically a batch will be a list of sObjects)
  3. finish by doing any work you need to be done after the last batch

public class MyBatch implements Database.Batchable<sObject>
{
  public Database.QueryLocator start(Database.BatchableContext BC)
  {
    return Database.getQueryLocator('select MyField__c from MySObject__c limit 50000000');
  }
  public void execute(Database.BatchableContext BC, List<sObject> scope)
  {
    for(sObject s : scope)
    {
      // do something to each record
    }
    update scope; // update the records
  }
  public void finish(Database.BatchableContext BC) {}
}
// Usage:
// Integer batchSize = 2000;
// ID batchprocessid = Database.executeBatch(new MyBatch(), batchSize);

You fill out the three methods required in a Batch job, and the platform will call them for you, once you’ve submitted the job with the executeBatch call.

Each batch can process between 1 and 2,000 items of work (you choose). The governor limits of course are applied to each batch – that is to a single execute method (rather than to each item in the batch).

You might therefore think that a batch size of 1 would be perfect, as you get the highest governor limits per item of work, but you should avoid really small batch sizes – as the overall job will take much longer to process owing to the “overhead” in starting each batch.

We can also programmatically monitor a batch job while it is running, and abort it if we need to:

Integer batchSize = 2000;
ID batchProcessId = Database.executeBatch(new MyBatch(), batchSize);
AsyncApexJob aaj = [SELECT Id, Status, JobItemsProcessed, TotalJobItems, NumberOfErrors
                    FROM AsyncApexJob WHERE Id = :batchProcessId];
Boolean itMakesSense = false;
// make a decision
if (itMakesSense)
{
  System.abortJob(aaj);
}

Whilst batch Apex is most often used to process many sObject records via a SOQL query, it can also be used with an Iterable class so that you can design how you want to define the job and divide it into batches. In this case there’s no batch size limit, but you will likely be restricted by what can be accomplished within the start method (in which you construct your Iterable).
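
A minimal sketch of the Iterable flavour might look like this (names are illustrative; here the “work items” are simply Accounts constructed in memory):

public class InMemoryIterableBatch implements Database.Batchable<sObject>
{
  public Iterable<sObject> start(Database.BatchableContext bc)
  {
    // build the work however you like - a List is an Iterable
    List<sObject> work = new List<sObject>();
    for (Integer i = 0; i < 10; i++)
    {
      work.add(new Account(Name = 'Work item ' + i));
    }
    return work;
  }
  public void execute(Database.BatchableContext bc, List<sObject> scope)
  {
    insert scope; // process a batch of the work items
  }
  public void finish(Database.BatchableContext bc) {}
}
// Usage:
// Database.executeBatch(new InMemoryIterableBatch(), 200);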

Batch Chaining

But what if 50M records isn’t enough? Or, you have a composite process in mind where you need to first process Opportunities and then Accounts? We will want to create several batch jobs and chain them together, so that one starts on completion of another – by calling Database.executeBatch within the finish method of a batch job.

When batch Apex was first made available we had to cheat to achieve this – we couldn’t do it directly. We would use the Schedulable interface and call System.schedule to kick off the next batch process, say in 5 minutes time.

Today however, we don’t need this workaround; we can chain jobs together by calling Database.executeBatch for one job in the finish method of another.
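
In practice the chain might look like this (class names are illustrative) – FirstBatch processes Opportunities and then kicks off SecondBatch over Accounts from its finish method:

public class FirstBatch implements Database.Batchable<sObject>
{
  public Database.QueryLocator start(Database.BatchableContext bc)
  {
    return Database.getQueryLocator([SELECT Id FROM Opportunity]);
  }
  public void execute(Database.BatchableContext bc, List<sObject> scope)
  {
    // process the Opportunities
  }
  public void finish(Database.BatchableContext bc)
  {
    // kick off the next job in the chain once this one has completed
    Database.executeBatch(new SecondBatch());
  }
}
public class SecondBatch implements Database.Batchable<sObject>
{
  public Database.QueryLocator start(Database.BatchableContext bc)
  {
    return Database.getQueryLocator([SELECT Id FROM Account]);
  }
  public void execute(Database.BatchableContext bc, List<sObject> scope)
  {
    // process the Accounts
  }
  public void finish(Database.BatchableContext bc) {}
}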

Whoah there…

The main limitation of batch Apex is that there can be only 5 concurrent batch jobs running in an organisation; this should be a major consideration when developing batch processes as this limit applies across all users and applications.

Given this, we might infer that batch Apex is provided primarily for administrative processes rather than end-user processes, but it is very common to use Batch Apex for routine rather than admin processes.

Batch Apex Examples
Batch Apex Documentation

Queueable Apex – Winter 15

Queueable is the future of future (future^2 ?) and was introduced in the Winter 15 release. Its implementation looks a lot more like batch Apex (or Schedulable) than future methods:

public class MyQueueable implements Queueable, Database.AllowsCallouts
{
  public MyQueueable(Account a)
  {
    newAccount = a;
  }

  Account newAccount;
  
  public void execute(QueueableContext context)
  {
    insert newAccount;
  }
}
// Usage:
// ID jobID = System.enqueueJob(new MyQueueable(new Account(Name='Foo')));
// system.debug([SELECT Status,NumberOfErrors FROM AsyncApexJob WHERE Id=:jobID]);

The key features of Queueable Apex are:

  • unlike future methods, execution can be monitored: when a job is submitted, an ID is returned to you which you can use to query the AsyncApexJob table in the same way as batch Apex
  • future methods are static and take only primitive arguments, but a queueable Apex implementation can effectively pass in complex types like sObjects and custom Apex types (an Account in the above example)
  • in common with future methods, there is a limit of 50 executions enqueued within a single Apex execution
  • as of Spring 15, queueable executions can be chained together (although in DE orgs the chain limit is 5 jobs) – see the sketch after this list
  • although not documented at the time of writing, as with future methods and batch Apex we must declare when we intend to make HTTP callouts – with Database.AllowsCallouts (same syntax as batch Apex)
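
A minimal chaining sketch (class names are illustrative), with each job enqueuing the next from its execute method:

public class FirstJob implements Queueable
{
  public void execute(QueueableContext context)
  {
    // do this job's work, then chain the next job
    System.enqueueJob(new SecondJob());
  }
}
public class SecondJob implements Queueable
{
  public void execute(QueueableContext context)
  {
    // the next link in the chain
  }
}
// Usage:
// ID jobID = System.enqueueJob(new FirstJob());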

Queueable Apex Documentation
The New Apex Queueable Interface (Josh Kaplan)
Winter 15 Release Notes

FlexQueue – Spring 15

Hot on the heels of queueable Apex, in the Spring 15 release we have the FlexQueue. At the time of writing, this feature can be seen purely as an enhancement to batch Apex – but at some point in the future we should see queueable Apex join the FlexQueue (see Josh Kaplan’s blog post, linked below).

The FlexQueue provides you with a backlog of up to 100 batch Apex processes, in addition to the 5 “live” concurrent batch Apex jobs.

Say you have 5 batch jobs preparing or processing and nothing in the FlexQueue: you can submit 100 new batch jobs and they will go straight into the FlexQueue – previously this would have caused an error.

When jobs are picked from the FlexQueue to be processed, this frees up space in the FlexQueue for further jobs to be submitted.

Jobs in the FlexQueue can be seen in the existing Apex Jobs admin page, and from here they can also be aborted. They can also be seen programmatically by querying AsyncApexJob, and aborted programmatically in the same way as any running batch job.
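
For example, a quick way to list what is waiting in the FlexQueue – jobs held in the FlexQueue appear in AsyncApexJob with a Status of 'Holding' – and the abort call is the same one used for running jobs:

List<AsyncApexJob> heldJobs = [SELECT Id, ApexClass.Name, CreatedDate
                               FROM AsyncApexJob
                               WHERE Status = 'Holding'
                               ORDER BY CreatedDate];
for (AsyncApexJob job : heldJobs)
{
  System.debug(job.ApexClass.Name + ' submitted ' + job.CreatedDate);
}
// abort a held job in the same way as a running one:
// System.abortJob(heldJobs[0].Id);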

Note that we should take care when aborting a job, as the same method aborts a job whether it is waiting in the FlexQueue or currently running – you may set out to abort a job waiting in the FlexQueue, but by the time you make the call the job has started processing, and you end up cancelling a part-complete running batch job.

There’s also a new FlexQueue admin page which can be used to change the order of jobs that aren’t yet being processed.

Flex Your Batch Apex Muscles with FlexQueue (Josh Kaplan)
Spring 15 Release Notes

FlexQueue programmatic control – Winter 16

In the Summer 15 release, a new system method was piloted for controlling the order of jobs in the FlexQueue programmatically. This mirrored the functionality already provided in the FlexQueue UI – specifying a new position for a job in the FlexQueue as an integer:

Boolean isSuccess = System.moveFlexQueueJob(jobId, positionNumber);

The weakness of this is that position numbers change as jobs are picked off the queue – so the method call may not produce the expected results. As a result, this method was not made generally available.

Instead, in the Winter 16 release we have several methods on a new FlexQueue class which provide for control over the order of items by relative rather than absolute position, which is much better:

Boolean isSuccess = FlexQueue.moveBeforeJob(jobToMoveId, jobInQueueId);
Boolean isSuccess = FlexQueue.moveAfterJob(jobToMoveId, jobInQueueId);
Boolean isSuccess = FlexQueue.moveJobToEnd(jobId);
Boolean isSuccess = FlexQueue.moveJobToFront(jobId);

Unfortunately, there is something missing: the ability to determine programmatically the current order of the FlexQueue. This severely limits our ability to build functionality to manage the ordering of jobs. However, from what I understand, this gap is due to be plugged soon.

Winter 16 Release Notes – Reorder Your Batch Jobs in the Flex Queue Programmatically
Winter 16 Release Notes – New Classes and Methods

Enhanced Futures – in some future release

One final feature worth mentioning in passing is Enhanced Futures. This is not yet generally available, but it has been in pilot since the Summer 14 release. Enhanced Futures allow a specific future method to increase one (and only one) governor limit, doubling or tripling it. That’s doubling/tripling the already superior async governor limits. The enhanceable governors are:

  • Heap size
  • CPU time
  • Number of SOQL queries
  • Number of DML statements
  • Number of DML records

There is a fuzzy warning linked to this feature: “Running future methods with higher limits might slow down the execution of all your future methods”. I imagine we will get more detail when the feature is released, but clearly the intention is that enhanced limits should be used sparingly, when you really really need them.

Bigger Apex Limits with Enhanced Futures (Josh Kaplan)
Summer 14 Release Notes

Global Async Limit

And finally… in addition to the limits which apply to each async type, there is an overarching limit on the number of asynchronous method executions which can be made in a 24 hour period. This limit is typically 250,000 calls / 24 hours, but higher in large organisations. See Apex Governor Limits for more details.

All async methods come under this limit: future methods, execute Queueable, execute Schedulable, start Batchable, execute Batchable and finish Batchable.

Note that this limit encourages us to obtain high value from each call – rather than firing off a higher number of lower value calls. For example – avoiding small / overly pessimistic batch sizes for Batch Apex jobs.


Most of the content for this post came from the Dreamforce 15 DevZone session: Apex Liberation: The evolution of Flex Queue which I had the great pleasure of presenting with Carolina Ruiz Medina. The session slides are available on SlideShare.


Q&A with AMsource Technology

Reblog: AMsource Technology

Q&A WITH STEPHEN WILLCOCK OF FINANCIALFORCE.COM

Monday 27th April 2015

We talked with Stephen Willcock from our client FinancialForce.com and had the opportunity to discuss a number of subjects, such as his role as Director of Product Innovation and what it’s like to be part of a technology company going through hyper-growth, and he fills us in on his favourite tech gadgets.

With 16 years’ experience in consulting and software development, including 7 years developing and architecting Force.com applications, in his role as Director of Product Innovation Stephen has a remit to ensure that product development effectively harnesses the Salesforce1 platform and complementary technologies to solve business challenges through delivery of innovative products.


Demoing a Salesforce integration with Google Glass at Dreamforce 2014 – photo courtesy of @CarolEnLaNube

HI STEPHEN, THANKS FOR TAKING THE TIME TO HAVE A CHAT WITH US TODAY.  SO, YOUR CURRENT ROLE IS “DIRECTOR, PRODUCT INNOVATION” WHICH SOUNDS GREAT – WHAT ARE YOU RESPONSIBLE FOR IN THIS ROLE?

Not a problem! Well, it’s genuinely a “dream job”. I have a small team of extraordinary developers, and we get to build prototypes for new products or product features, experimenting with the very latest Salesforce technologies. The work is varied; we might be presented with a tricky problem to solve, or given a new technology or feature to evaluate and come up with ideas around how we can put them to best use in our products. Continue reading


Apex Method of the Day – JSON.serialize(Object)

Sometimes using JSON.serialize on a custom Apex type does not provide sufficient control over how the JSON is serialized. For example, serializing an Apex type will include null values for all fields that haven’t been set, when you might prefer to omit these properties from the JSON altogether. Or perhaps you need to use a property name in the JSON which is invalid as a class member name in Apex. Fortunately JSON.serialize works on any Object, so you can serialize JSON from any structure you can assemble in Apex, for example:

String jsonString = JSON.serialize(new Map<String,Object> {
	'datetimevalue' => System.now(),
	'somelist' => new List<Object> {
		new Map<String,Object> {
			'name' => 'Mac',
			'os' => 'OS X 10.9'
		},
		new Map<String,Object> {
			'name' => 'PC',
			'os' => 'Windows'
		}
	}
});

JSON:

{
    "somelist": [
        {
            "name": "Mac",
            "os": "OS X 10.9"
        },
        {
            "name": "PC",
            "os": "Windows"
        }
    ],
    "datetimevalue": "2014-07-25T10:47:34.981Z"
}

Links:
Salesforce Stackexchange Question
JSON class


One controller to rule them all…

I recently wanted to create an extension controller for a custom object which I could use with both single records and a set of records. In other words, a list and detail controller extension, with actions that could be applied to one or many records.

I started out by creating my controller with two constructors, one which accepted a standard controller and a second that accepted a standard set controller so that it could be bound to Visualforce pages for either list or detail pages:

public with sharing class BumpControllerExtension {

	ApexPages.StandardSetController m_setController;
	ApexPages.StandardController m_controller;

	public BumpControllerExtension(ApexPages.StandardSetController controller) {
		m_setController = controller;
	}

	public BumpControllerExtension(ApexPages.StandardController controller) {
		m_controller = controller;
	}

	public ApexPages.PageReference myActionMethod() {

		// do some stuff...

		if(m_setController==null) {
			return m_controller.save();
		}
		else {
			return m_setController.save();
		}
	}
}

So far, so good. But before using any methods on either standard controller I need to check which controller has been assigned, which I think is a little messy, and also seems like hard work if the same method exists on both controllers – save() in this case.
Continue reading


IP Address Ranges when Logging in to Salesforce

Salesforce user security. It’s great. As well as being one of the things customers value highly, it’s a massive advantage for application developers to be building upon a trusted platform with robust and well considered security features.

Restricting logins to specific IP addresses or ranges of addresses is a valuable user security feature, although it can be slightly confusing.

Salesforce help explains:

… Salesforce then checks whether the user’s profile has IP address restrictions. If IP address restrictions are defined for the user’s profile, any login from an undesignated IP address is denied, and any login from a specified IP address is allowed.

If profile-based IP address restrictions are not set, Salesforce checks whether the user is logging in from an IP address they have not used to access Salesforce before:

  • If the user’s login is from a browser that includes a Salesforce cookie, the login is allowed. The browser will have the Salesforce cookie if the user has previously used that browser to log in to Salesforce, and has not cleared the browser cookies.
  • If the user’s login is from an IP address in your organization’s trusted IP address list, the login is allowed.
  • If the user’s login is from neither a trusted IP address nor a browser with a Salesforce cookie, the login is blocked.

Whenever a login is blocked or returns an API login fault, Salesforce must verify the user’s identity:

For access via the user interface, the user is prompted to enter a token (also called a verification code) to confirm the user’s identity.

I have diagrammed the flow to (hopefully) make this a little easier to follow, focussing on a user logging in via a web browser (rather than API access):

[Flow diagram: IP address restrictions during login]

A few points that I think are worth mentioning:

There are two places where an admin can specify IP addresses, or ranges of addresses, for the login process. Whilst these look very similar, their purposes are distinct.

User Profile – Login IP Ranges

We can set up IP ranges (or individual addresses) on each user profile. When no IP address ranges are specified, the user’s profile does not restrict IP addresses in any way. However as soon as we specify any IP addresses as login IP ranges, then all other IP addresses become invalid for users with the profile and will result in access being denied.

Network Access – Trusted IP Ranges

In Security Settings, we can also set up trusted IP addresses for the entire org.

When a user logs in from an IP address they haven’t previously used to log in to Salesforce, they will need to go through a verification process. This is where they are challenged and need to ask for a token to be sent to them, which they can use to gain access.

Trusted IP Ranges simply allow users to bypass the verification process when logging in from a trusted IP address.

Each trusted IP range is limited in size, however, so 0.0.0.0 – 2.0.0.0 is acceptable, but 0.0.0.0 – 3.0.0.0 is not. Larger groups of trusted IP addresses would need to be specified by adding several ranges.

“Login IP Range” beats “Trusted IP Range”

From my flow diagram we can see that any profile login IP ranges are enforced before trusted IPs are considered. This means that “trusting” your IP address has no effect if your profile blocks it (by specifying some ranges, but not one that includes your address).

Engage Demo Mode

IP address restrictions and the verification process are generally a good thing. However, there may be times where the feature is unnecessary and perhaps frustrating – where you are giving demonstrations from a developer edition org of work in progress, or of new Salesforce features, for example.

In order to disable the verification process, we cannot add all IP addresses to a single Trusted IP Range, as the size of the range is limited. We could add many ranges, e.g. 0.0.0.0 – 0.255.255.255, 1.0.0.0 – 1.255.255.255, 2.0.0.0 – 2.255.255.255, etc.

However, there is no limit on the size of a Login IP Range, so you could specify all IP addresses: 0.0.0.0 – 255.255.255.255. In a demo org you may have one, or at worst a handful of, profiles that you will be using, so it would be easy to add an all-IP-addresses Login IP Range to each profile.

This will prevent login challenges and verification in the middle of a demo, but is not good practice in production.

 

 


Apex Method of the Day – String.format(String value, List<String> args)

String value = 'This message has {0} and {1}', f = 'foo', b = 'bar';
String message = String.format(value, new List<String>{f,b});
System.assertEquals('This message has foo and bar',message);

The above is an example of simple token replacement and is equivalent to:

String f = 'foo', b = 'bar';
String message = 'This message has ' + f + ' and ' + b;
System.assertEquals('This message has foo and bar',message);

Why bother? Well, for more complex strings use of String.format can be easier to write and maintain than concatenation, but a great benefit comes from its combination with Custom Labels. Messages with tokens can be stored as Custom Labels and merged with values at run-time. Custom Labels are translatable, and as different languages may use different word order, String.format becomes the sensible choice.
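
For example, with a hypothetical Custom Label named Welcome_Message holding the text 'Welcome {0}, you have {1} new notifications':

// Label.Welcome_Message = 'Welcome {0}, you have {1} new notifications'
String message = String.format(Label.Welcome_Message, new List<String>{'Stephen', '3'});
System.assertEquals('Welcome Stephen, you have 3 new notifications', message);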

String.format works in a similar way to the Visualforce apex:outputText component, and the value being formatted uses the same syntax as the MessageFormat class in Java.

If you read the Java documentation you will find that single quotes have a special meaning in format strings. By enclosing text in single quotes, any special meaning of the enclosed text is ignored. For example, braces { } were used in the above example to create tokens to be replaced with values from the args parameter. You will therefore need to use single quotes if you want to include braces in your formatted text:

String value = 'Braces \'{ we want braces }\' and substitution of {0} and {1}', f = 'foo', b = 'bar';
String message = String.format(value, new List<String>{f,b});
System.assertEquals('Braces { we want braces } and substitution of foo and bar',message);

Of course Apex uses single quotes to enclose literal strings, so we have to use backslashes in order to embed single quotes in our string text.

But if single quotes have a special meaning in a format string, how can we include a single quote in our formatted text? Here’s a typical scenario:

String value = 'I do not recognise the term \'{0}\'', term = 'foo';
String message = String.format(value, new List<String>{term});
System.assertEquals('I do not recognise the term \'foo\'',message);

This doesn’t work as intended:

Assertion Failed: 
Expected: I do not recognise the term 'foo', 
Actual: I do not recognise the term {0}

Our substitution hasn’t been done.

From the Java documentation we find that we need two consecutive single quotes to be interpreted as a literal single quote:

String value = 'I do not recognise the term \'\'{0}\'\'', term = 'foo';
String message = String.format(value, new List<String>{term});
System.assertEquals('I do not recognise the term \'foo\'',message);

Links:
Apex String Methods
Visualforce apex:outputText
Java MessageFormat class


Constructive Forces at Work

A constructor in Apex is called when an instance of a class is created. Code-wise constructors might look like member methods, but they differ in a couple of significant ways.

First, constructors can’t be called any other time than when creating an instance of a class, and second, constructors do not return values. Ever had the build error “Invalid constructor name: foo”? We soon learn that this actually means “Doh! You forgot to provide a return type for your method”.

This does mean, as it does in Java, that Apex constructors are not members that can be inherited:

A subclass inherits all the members (fields, methods, and nested classes) from its superclass. Constructors are not members, so they are not inherited by subclasses, but the constructor of the superclass can be invoked from the subclass.

http://docs.oracle.com/javase/tutorial/java/IandI/subclasses.html

So, a subclass constructor can call a constructor of its superclass. Actually, constructors call superclass constructors automatically in Apex if we don’t do it explicitly. Try running this anonymous code block:

public class ClassOne {
    ClassOne() {
        system.debug('1');
    }
}
public class ClassTwo extends ClassOne {
    ClassTwo() {
        system.debug('2');
    }
}
public class ClassThree extends ClassTwo {
    ClassThree() {
        system.debug('3');
    }
}
ClassThree c = new ClassThree();

Note that class definitions in anonymous blocks are virtual by default, so we don’t need to supply the virtual modifier here.

And here’s a part of my debug log:

19:19:35.059 (59838092)|SYSTEM_CONSTRUCTOR_ENTRY|[16]|<init>()
19:19:35.059 (59944318)|SYSTEM_CONSTRUCTOR_ENTRY|[12]|<init>()
19:19:35.060 (60019420)|SYSTEM_CONSTRUCTOR_ENTRY|[7]|<init>()
19:19:35.062 (62725333)|USER_DEBUG|[3]|DEBUG|1
19:19:35.062 (62740937)|SYSTEM_CONSTRUCTOR_EXIT|[7]|<init>()
19:19:35.062 (62772600)|USER_DEBUG|[8]|DEBUG|2
19:19:35.062 (62780817)|SYSTEM_CONSTRUCTOR_EXIT|[12]|<init>()
19:19:35.062 (62807978)|USER_DEBUG|[13]|DEBUG|3
19:19:35.062 (62816118)|SYSTEM_CONSTRUCTOR_EXIT|[16]|<init>()

We have defined three classes where the second extends the first and the third extends the second. In each case we have defined a constructor, for debugging the order of execution, but we haven’t explicitly called a superclass constructor.

We find that each constructor implicitly invokes the superclass constructor before executing its own body. This works recursively until there are no more superclasses, so that the constructor method bodies are executed in order going down the inheritance chain from most-super-class to the actual class we are constructing.

We can influence the chaining behaviour to some degree by calling specific constructors with this and super.

If we have a class with several constructors, each with different arguments, we can delegate from one constructor to another of the same class using this(…), or select a particular constructor of a superclass using super(…):

public class ClassOne {
    ClassOne() {
        system.debug('1');
    }
}
public class ClassTwo extends ClassOne {
    ClassTwo() {
        system.debug('2');
    }
    ClassTwo(String value) {
        this();
        system.debug('2 ' + value);
    }
}
public class ClassThree extends ClassTwo {
    ClassThree() {
        system.debug('3');
    }
    ClassThree(String value) {
        super(value);
        system.debug('3 ' + value);
    }
}
ClassThree c = new ClassThree('foo');

And here’s the debug log this time:

19:39:29.054 (54167458)|SYSTEM_CONSTRUCTOR_ENTRY|[24]|<init>(String)
19:39:29.054 (54305037)|SYSTEM_CONSTRUCTOR_ENTRY|[20]|<init>(String)
19:39:29.054 (54384541)|SYSTEM_CONSTRUCTOR_ENTRY|[11]|<init>()
19:39:29.054 (54460886)|SYSTEM_CONSTRUCTOR_ENTRY|[7]|<init>()
19:39:29.057 (57619207)|USER_DEBUG|[3]|DEBUG|1
19:39:29.057 (57636883)|SYSTEM_CONSTRUCTOR_EXIT|[7]|<init>()
19:39:29.057 (57673115)|USER_DEBUG|[8]|DEBUG|2
19:39:29.057 (57684149)|SYSTEM_CONSTRUCTOR_EXIT|[11]|<init>()
19:39:29.057 (57743445)|USER_DEBUG|[12]|DEBUG|2 foo
19:39:29.057 (57766853)|SYSTEM_CONSTRUCTOR_EXIT|[20]|<init>(String)
19:39:29.057 (57829601)|USER_DEBUG|[21]|DEBUG|3 foo
19:39:29.057 (57848548)|SYSTEM_CONSTRUCTOR_EXIT|[24]|<init>(String)

It is important to note that in order to preserve the order of execution of the chain of constructors, super or this must be the first line of the body of a constructor.

Constructors however are optional, and so to preserve the constructor chain, the compiler will provide a default zero-argument constructor for us just as long as we don’t provide any constructors ourselves:

If you write a constructor that takes arguments, you can then use that constructor to create an object using those arguments. If you create a constructor that takes arguments, and you still want to use a no-argument constructor, you must include one in your code. Once you create a constructor for a class, you no longer have access to the default, no-argument public constructor. You must create your own.

http://www.salesforce.com/us/developer/docs/apexcode/Content/apex_classes_constructors.htm

Which also means, of course, that if you do create a constructor in an abstract/virtual class that takes argument(s), you must also provide a zero-argument constructor in order to extend the class, even if you don’t need one (it can have an empty body). If you don’t, you will get a build error when you extend the class:

Parent class has no 0-argument constructor for implicit construction
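
A minimal illustration, runnable as an anonymous block like the earlier examples (class names are illustrative):

public class Parent {
    Parent(String value) {
        system.debug('parent: ' + value);
    }
    // without this zero-argument constructor, Child below would not compile -
    // "Parent class has no 0-argument constructor for implicit construction"
    Parent() {}
}
public class Child extends Parent {
    Child() {
        system.debug('child');
    }
}
Child c = new Child();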

Links

http://salesforce.stackexchange.com/questions/24275/inheriting-non-implicit-constructors-on-apex-classes

Force.com Apex Code Developer’s Guide – Using Constructors


Dynamic Field Sets

One of the things I love about Force.com is how easy it can be to stitch together blocks of functionality provided in the platform, saving you development effort. Spending a little extra time working out how to best use platform features can also save effort downstream – when you align yourself with the platform you can usually benefit from enhancements and additional functionality very quickly as it is added by Salesforce.

I was recently asked whether we could assign Field Sets for Visualforce pages to a user’s profile (without using Field Level Security to limit access to the fields entirely). Although this is not out-of-the-box functionality, I felt sure that this should be possible with a little effort by stitching together Field Sets and Hierarchical Custom Settings.
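
As a flavour of the stitching involved – and not necessarily the approach taken in the full post – imagine a hypothetical hierarchical custom setting, FieldSetAssignment__c, with a text field FieldSetName__c naming the Field Set each user (or profile, or the org) should see:

// resolve the setting for the running user (falling back to profile and org defaults)
String fieldSetName = FieldSetAssignment__c.getInstance().FieldSetName__c;

// look up the Field Set dynamically from the describe information
Schema.FieldSet fs = Schema.SObjectType.Contact.fieldSets.getMap().get(fieldSetName);
for (Schema.FieldSetMember member : fs.getFields())
{
    System.debug(member.getFieldPath() + ' (' + member.getLabel() + ')');
}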

Continue reading
