Salesforce Connector

Salesforce Destination

Be aware that if you are importing into a Salesforce Sandbox, storage space is limited. If you import somewhere around 150,000 records, you will run out of space, so import only what is necessary for testing.

Standard

Input your Username and Password, making sure to append your user's API Security Token to the end of your password. If you are not using Bulk Mode, be careful not to run too many transactions or you could hit your rolling 24-hour limit. To get your API Token, open your Personal Settings and click Reset My Security Token. (The Salesforce UI changes over time, so you may have to search for the current method of getting your API Token.)
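As a minimal illustration of how the credentials combine (both values below are made-up placeholders, not real credentials), the token is simply appended to the password with no separator:

```python
# Sketch only: Salesforce's login expects <password><security token> as a
# single string. These values are placeholders.
def build_login_password(password: str, security_token: str) -> str:
    """Concatenate the password and the API security token, no separator."""
    return password + security_token

combined = build_login_password("MyP@ssw0rd", "AbCdEfGh12345")
```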

Bulk

Read the Bulk API Limits Documentation. Skip this document at your own peril. https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_concepts_limits.htm

Check the "Use Bulk Mode" checkbox to use Bulk mode. This creates local CSV files of your changes and uploads them to Salesforce to be processed in batches.
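The batching idea can be sketched as follows. This is an illustrative Python sketch of the CSV-per-batch approach, not Starfish's actual implementation (Starfish handles this internally); the 5,000-record default matches the batch size mentioned later in this article:

```python
import csv
import io

def records_to_csv_batches(records, fieldnames, batch_size=5000):
    """Split records into CSV strings, one per Bulk API batch.

    Each batch gets its own header row, as the Bulk API expects.
    """
    batches = []
    for start in range(0, len(records), batch_size):
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(records[start:start + batch_size])
        batches.append(buf.getvalue())
    return batches
```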

YOU CAN ONLY HAVE ONE STAGE IN A BULK JOB. If you need more than one stage, you will have to break it up into multiple jobs. Write anything you need to carry from one stage to the next into a local SQLite database or XREF.
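A minimal sketch of the local SQLite XREF idea mentioned above, in Python (the table and column names here are made up for illustration):

```python
import sqlite3

# A local cross-reference table for carrying IDs from one bulk job to the next.
conn = sqlite3.connect(":memory:")  # use a file path in a real job
conn.execute("CREATE TABLE IF NOT EXISTS xref (source_id TEXT PRIMARY KEY, sf_id TEXT)")

def remember(source_id, sf_id):
    """Record the Salesforce Id assigned to a source record in job 1."""
    conn.execute("INSERT OR REPLACE INTO xref VALUES (?, ?)", (source_id, sf_id))

def lookup(source_id):
    """Look the Salesforce Id back up in job 2, or None if not found."""
    row = conn.execute("SELECT sf_id FROM xref WHERE source_id = ?", (source_id,)).fetchone()
    return row[0] if row else None
```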

Bulk Attachments

To insert attachments in bulk, you must have "Use Bulk Mode" checked on the Destination tab. If "Use Bulk Mode" is checked, you can use the "@@Bulk Attachments" "table" in your stage. Note that the Bulk API can only handle attachments up to 10 MB, among other limitations. I have always used the standard API for importing attachments, even if I have to do it over the course of several days; the standard API for attachments seems more stable.
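If you do use bulk attachments, a pre-flight size check can save a failed batch. A Python sketch, assuming a flat folder of attachment files (the 10 MB cap is read as 10 × 1024 × 1024 bytes here, which may differ slightly from Salesforce's own accounting):

```python
import os

# Assumed reading of the Bulk API's 10 MB attachment cap.
BULK_ATTACHMENT_LIMIT = 10 * 1024 * 1024

def fits_bulk_limit(size_bytes: int) -> bool:
    """True if a file of this size fits under the attachment cap."""
    return size_bytes <= BULK_ATTACHMENT_LIMIT

def oversized_attachments(folder: str):
    """List paths in folder whose size exceeds the attachment cap."""
    return [
        os.path.join(folder, name)
        for name in os.listdir(folder)
        if not fits_bulk_limit(os.path.getsize(os.path.join(folder, name)))
    ]
```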

Bulk Update vs Bulk Update/Insert

When performing Bulk Updates or Updates/Inserts, you must match on the Id field or on any text field that is marked as an ExternalID.
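For reference, this is how an upsert keyed on an External ID field is addressed in the Salesforce REST API. A sketch only; the API version, instance name, and field name below are illustrative:

```python
# Sketch: a REST upsert addresses a record by External ID field and value
# rather than by Salesforce Id. All names here are placeholders.
def upsert_url(instance, sobject, ext_field, ext_value, version="v52.0"):
    """Build the URL you would PATCH to upsert by External ID."""
    return (f"https://{instance}/services/data/{version}"
            f"/sobjects/{sobject}/{ext_field}/{ext_value}")

url = upsert_url("example.my.salesforce.com", "Contact", "Legacy_Id__c", "A-1001")
```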

Concurrency

Parallel vs. Serial: Parallel is a Salesforce API option that tells Salesforce to process multiple records at the same time on its side; Serial processes them one at a time. Parallel mode doesn't seem very stable, so we recommend Serial mode.

Mapping

Stages

Update

When running an Update stage in standard mode, you can match on any field. When running a Bulk Update stage, you must match on the Id or on a field marked as an External ID.

Field Types

Datetime Fields

You must format datetime values to a specific format for Salesforce to accept them.

Function ScriptedField
  ScriptedField=FormatDate("@@ORG:CREATEDATE@@", "yyyy-MM-ddTHH:mm:ss")
End Function

You can enable the StarfishETL stage option Skip Blank Writes to get past errors on Salesforce date or numeric fields for which some source values are blank.
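As a cross-check on the format string above, the same conversion in Python (Starfish scripted fields use VBScript, so this is for illustration only):

```python
from datetime import datetime

def to_salesforce_datetime(value: datetime) -> str:
    """Format a datetime the way Salesforce expects: yyyy-MM-ddTHH:mm:ss."""
    return value.strftime("%Y-%m-%dT%H:%M:%S")

to_salesforce_datetime(datetime(2021, 3, 15, 9, 30, 0))  # "2021-03-15T09:30:00"
```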

FAQ

How can I see the results of a Bulk Job?

In Salesforce, navigate to Setup -> Jobs -> Bulk Data Load Jobs. This page lists your recent bulk jobs. Then open the Starfish job's Destination tab and click the Job Results button on the Salesforce Connector tab. Paste the Job ID into the prompt, and Starfish will save and open an Excel file of the errors.

Settings

Password Never Expires

For an integration user, you probably don't want the password to expire, or you'll have to keep updating your integration. To set this up, clone the System Administrator profile and rename it whatever you'd like ("API User" works well); you can then check the Password Never Expires box. You can't do this on standard profiles.

I can't write CreatedDate or CreatedBy.

You can only write to CreatedDate and CreatedBy if you turn on "Create Audit Fields" for your instance. This allows you to set the CreatedBy and CreatedDate fields when importing; it does not work when updating records. For more information about enabling Create Audit Fields, see Salesforce Knowledge Base article #000232909: https://help.salesforce.com/articleView?id=000232909&type=1&language=en_US

How can I import into the audit fields, such as Created By and Last Modified By?

A System Administrator can enable the feature via the click-path below: Setup -> Customize -> User Interface -> enable the "Set Audit Fields upon Record Creation" and "Update Records with Inactive Owners" user permissions.

After enabling this organization-wide permission, you still need to grant the corresponding permissions on the user's profile.

https://help.salesforce.com/apex/HTViewSolution?urlname=Enabling-Create-Audit-Fields&language=en_US

How can I assign records to inactive Users?

A System Administrator can enable the feature via the same click-path: Setup -> Customize -> User Interface -> enable the "Set Audit Fields upon Record Creation" and "Update Records with Inactive Owners" user permissions.

After enabling this organization-wide permission, you still need to grant the corresponding permissions on the user's profile.

https://help.salesforce.com/apex/HTViewSolution?urlname=Enabling-Create-Audit-Fields&language=en_US

Records fail due to State / Country mismatches

Salesforce has a feature that forces users to select a matching State and Country combination from dropdowns. We recommend disabling this feature during imports: source data is rarely clean, and mismatches cause records to fail to import. After the import, the feature can be turned back on, and a wizard will help clean up existing data.

See this page for information about enabling and disabling State and Country picklists: https://help.salesforce.com/articleView?id=admin_state_country_picklist_enable.htm&type=5. This page also includes a link to information about how to convert existing text based data to picklist values, which can be useful after an import.

Errors

SFBulkPost operation: /job error: System.Net.WebException: The remote server returned an error: (400) Bad Request.

When I received this error, I was matching against a standard text field. You must go into Setup and check the "External ID" checkbox on the field you are matching on.

System.Web.Services.Protocols.SoapException: REQUEST_LIMIT_EXCEEDED: TotalRequests Limit exceeded.

When I received this error, I was using the non-bulk API and upserting Contacts one at a time, which very quickly put me over my rolling 24-hour limit. We contacted Salesforce and were able to raise our API limit for two weeks. It's best to use the non-bulk API for troubleshooting and then switch to the bulk API for mass imports. To check your API usage, open Setup -> System Overview and look at API Usage.
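Some back-of-the-envelope arithmetic shows why: one-at-a-time upserts cost one API call per record, while bulk batches cost roughly one call per batch. The limit value below is illustrative; check your org's actual allowance under API Usage:

```python
# Illustrative arithmetic only; real orgs have varying daily limits, and a
# bulk job also spends a few calls on job creation and polling.
def api_calls_needed(records, batch_size=1):
    """Ceiling division: calls required to push `records` in `batch_size` chunks."""
    return -(-records // batch_size)

daily_limit = 15_000  # assumed allowance for illustration
standard_calls = api_calls_needed(100_000)               # one call per record
bulk_calls = api_calls_needed(100_000, batch_size=5000)  # one call per batch
```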

The Bulk API seems to hang.

When using the Bulk API, Starfish builds a CSV file locally, which is then uploaded to Salesforce and processed there. Salesforce then returns the results of the process. Sometimes Salesforce never responds, or the response doesn't match what Starfish is expecting; when this happens, Starfish continues to wait. If this occurs, check the results of the data load inside Salesforce by going to Setup -> Monitor -> Jobs -> Bulk Data Load Jobs. Based on the results shown there, you can decide whether to cancel the job in Starfish and how to proceed.

The Bulk API seems to fail all rows after processing tens of thousands of records.

When using Parallel Concurrency, Salesforce sometimes begins throwing errors due to internal page locking or other exceptions. It is recommended to use the Serial Concurrency setting.

invalid cross reference id

This occurs when you are populating a reference (18-character ID) field with an invalid value. For example, if you are inserting into Users and you put an invalid Profile Id into the ProfileId field, you will see this error.
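For context, reference fields hold 18-character, case-safe Salesforce IDs. If your source data carries 15-character IDs, the 3-character checksum suffix can be derived; a sketch of the standard conversion algorithm:

```python
def to_18_char_id(sf_id_15: str) -> str:
    """Convert a 15-character case-sensitive Salesforce ID to its 18-character form."""
    if len(sf_id_15) != 15:
        raise ValueError("expected a 15-character Salesforce ID")
    alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"
    suffix = ""
    for chunk_start in (0, 5, 10):
        chunk = sf_id_15[chunk_start:chunk_start + 5]
        # Each uppercase letter in the 5-character chunk sets one bit.
        bits = sum(1 << i for i, ch in enumerate(chunk) if ch.isupper())
        suffix += alphabet[bits]
    return sf_id_15 + suffix
```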

Bulk API Fails with 1 error for each batch to upload

You can find this error in your StarfishEngine's ExceptionCSV folder. Error text:

  "SFBulkPost operation: /job/75037000000TphJAAS/batch error: System.Net.WebException: The remote server returned an error: (400) Bad Request.
  at System.Net.HttpWebRequest.GetResponse()
  at StarfishEngine.StarfishService.SFBulkPost(String operation, String dataType, Byte[] requestData, String responseElement)"

This error could be caused by any limit in the Bulk API Limits documentation. (You did read the Bulk API Limits documentation, didn't you?) I ran into this issue when importing history from Act! to Salesforce: the history had huge bodies that added up to more than the 10,000,000-character maximum for all data in a batch. I had to make my batches smaller than the default 5,000 records; 312 records worked.
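You can estimate a safe batch size from your own data. A sketch; the ~32,000-character average below is just one plausible value that would reproduce the 312-record figure:

```python
# The 10,000,000-character-per-batch cap comes from the Bulk API limits.
BATCH_CHAR_LIMIT = 10_000_000

def max_batch_size(avg_chars_per_record, default=5000):
    """Largest batch that stays under the character cap, capped at the default."""
    by_chars = BATCH_CHAR_LIMIT // avg_chars_per_record
    return min(default, by_chars)
```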

System.Exception: [500] Could not execute the specified command: DUPLICATES_DETECTED:It looks like you are creating a duplicate Account. We recommend you use an existing Account instead..

This is caused by a Duplicate Rule. To disable Duplicate Rules, open Setup and type Duplicate into the quick search. Then open Duplicate Rules. From there, click on the Rule name and click Deactivate.


Parameter        Description
Security Token   The user's API security token (see Standard above).
Use Sandbox      Check this when connecting to a Salesforce Sandbox instead of production.

(Image: Salesforce Connection Edit Screen)

