Salesforce said no to Data Loader for custom metadata. Here is what actually works.
The first time most admins try to load custom metadata type records in bulk, they open Data Loader out of habit. Data Loader does not support custom metadata types. It never has.
That is not an oversight. Custom metadata lives in the metadata layer, not the data layer, which means the tools built for data records simply do not apply. The good news is that there are three approaches that do work, and choosing the right one depends on who is doing the loading and how many records are involved.
Why custom metadata types are different
Custom metadata types store configuration, not transactional data. Validation rules, routing logic, feature flags, mapping tables, rate cards, and similar reference data all live there. Because they are part of the metadata layer, they are deployable between environments, version-controllable, and accessible in formula fields and flows without additional SOQL queries.
That architecture is what makes them useful. It is also what makes bulk loading feel counterintuitive at first. You are not inserting records into a database table. You are deploying metadata through the Metadata API. Once that distinction is clear, the available approaches make considerably more sense.
There are three reliable ways to load custom metadata type records from a CSV file: the Custom Metadata Loader app, the Salesforce CLI, and a Flow-based screen component. Each has a different profile in terms of setup effort, required permissions, and practical limits.
Option one: the Custom Metadata Loader app
The Custom Metadata Loader is a Salesforce-built tool available on GitHub. It was the standard approach before CLI commands became generally available, and it remains the most admin-friendly option for teams not using a developer toolchain.
Setup requires a one-time deployment to the org, after which admins with the correct permission set can load records directly from the UI without touching a terminal. The tool uses the Metadata API in the background and can process up to 200 records per call.
The setup process follows these steps:
- Download the Custom Metadata Loader from the Salesforce GitHub repository and create a zip file from the contents of the custom_md_loader directory. The package.xml file should sit at the top level of the zip, not inside a subfolder.
- Log in to Workbench with the target org credentials, navigate to Migration and then Deploy, and upload the zip file.
- Once deployed, go to Setup and assign the Custom Metadata Loader permission set to anyone who will use the tool.
- Open the Custom Metadata Loader app from the App Picker and configure Remote Site Settings if prompted.
To load records, prepare a CSV file where the header row contains the API names of the custom metadata type fields. Either Label or DeveloperName is required in every file; one of them is enough to identify new records or to match existing ones for update.
If the org has a namespace, include the namespace prefix in the field API names in the CSV header. Duplicate Label or DeveloperName entries in the file will result in only the last row being processed.
Upload the CSV file, select the corresponding custom metadata type from the dropdown, and click Create/Update. The tool will confirm how many records were processed and flag any errors in the output.
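As a sketch, a valid file for a hypothetical CountryMapping__mdt type with two custom fields could be built like this (the type name, field names, and values are illustrative, not taken from the Loader's documentation):

```shell
# Create a sample CSV for a hypothetical CountryMapping__mdt type.
# The header uses field API names, not labels; Label identifies each record.
cat > CountryMapping.csv <<'EOF'
Label,Country_Code__c,Region__c
Germany,DE,EMEA
Japan,JP,APAC
Brazil,BR,LATAM
EOF

# Quick sanity check: the header row should show API names, not UI labels.
head -1 CountryMapping.csv
```

In a namespaced org, the two custom field headers would become namespace__Country_Code__c and namespace__Region__c, per the rule above.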
The 200-record limit per call is worth noting. For larger datasets, the file needs to be split. For very large migrations, the CLI approach removes this constraint entirely.
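The splitting itself can be scripted so that each chunk keeps the header row and loads independently. A minimal sketch, assuming an input file named records.csv (illustrative name; the sample data is generated inline so the commands are self-contained):

```shell
# Generate a sample CSV with 450 data rows (names are illustrative).
{ echo "Label,Code__c"; for i in $(seq 1 450); do echo "Rec$i,C$i"; done; } > records.csv

# Split into chunks of at most 200 data rows, repeating the header in
# each chunk so every file can be loaded in a separate call.
head -1 records.csv > header.tmp
tail -n +2 records.csv | split -l 200 - chunk_
for f in chunk_*; do
  cat header.tmp "$f" > "load_$f.csv"
  rm "$f"
done
rm header.tmp
```

With 450 data rows this produces three files of 200, 200, and 50 records, each with its own header row.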
Option two: Salesforce CLI
As of Summer 2020, the Salesforce CLI includes dedicated commands for custom metadata types. This is the approach Salesforce now recommends for development workflows, and it has no record limit.
The relevant command for inserting records from a CSV file is:
sf cmdt generate records --csv CountryMapping.csv --type-name CountryMapping__mdt
This command generates the custom metadata record files locally in the project directory. The records then need to be deployed to the org using the standard deploy command:
sf project deploy start
The CSV file format follows the same rules as the Loader approach. The header row must contain field API names, and either Label or DeveloperName is required. The DeveloperName value can only contain alphanumeric characters and underscores, must begin with a letter, and cannot end with an underscore or contain two consecutive underscores. Spaces in name values should be replaced with underscores.
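These naming rules can be checked before generating record files. A small shell sketch (the helper names here are made up for illustration, not part of any Salesforce tooling):

```shell
# Check whether a string is a valid DeveloperName:
# letters, digits, underscores only; must start with a letter;
# no trailing underscore; no consecutive underscores.
valid_devname() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z][A-Za-z0-9_]*$' \
    && ! printf '%s' "$1" | grep -q '__' \
    && ! printf '%s' "$1" | grep -q '_$'
}

# Turn a label into a candidate DeveloperName by replacing spaces.
to_devname() {
  printf '%s' "$1" | tr ' ' '_'
}

valid_devname "Rate_Card_2024" && echo "valid"
valid_devname "Bad__Name" || echo "invalid"
```

Running the check over a CSV's name column before deploying catches the most common rejections early.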
The CLI approach fits naturally into a DevOps pipeline. Records can be committed to version control, reviewed before deployment, and promoted through environments using the same workflow as any other metadata change. For teams already running a source-driven development model, this is the more sustainable long-term approach.
The steps for a first-time setup follow this sequence:
- Install the Salesforce CLI and authenticate with the target org using sf org login web.
- Create or open an existing SFDX project in VS Code.
- Retrieve the custom metadata type definition from the org so the project is aware of its field structure.
- Prepare the CSV file with the correct field API names in the header row.
- Run the cmdt generate records command, review the generated files in the customMetadata folder, and deploy them to the org.
Option three: a Flow-based screen component
For orgs where neither GitHub deployment nor CLI access is practical, a third option exists in the form of a community-built Flow screen component. This approach allows admins to upload a CSV directly from a screen flow in the org, with no external tooling required.
The component was created by Salesforce MVP Narender Singh and is available through the UnofficialSF community. It handles the Metadata API calls internally, so the user experience is simply uploading a file and selecting the metadata type.
This approach is most appropriate for one-off loads in environments with restricted developer access or where deploying external tools to the org is not straightforward. It is less suitable for recurring bulk operations or CI/CD pipelines.
Getting the CSV format right
Regardless of which tool is used, the CSV format follows the same rules and the same common mistakes appear frequently.
- The header row must contain field API names, not field labels. A field called Billing Region in the UI has an API name like Billing_Region__c. The header must use the API name.
- Either Label or DeveloperName is required in every row. Both can be included. If updating existing records, the value in one of these fields must match the existing record identifier exactly.
- DeveloperName values follow strict naming rules. No spaces, no special characters other than underscores, must start with a letter, cannot end with an underscore, and cannot contain two consecutive underscores.
- Long text area fields do not support line breaks within a CSV cell. If source data contains newlines inside a field value, those need to be cleaned before the file is loaded.
- For namespaced orgs, field API names in the header must include the namespace prefix in the format namespace__FieldName__c.
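The embedded-newline cleanup mentioned above can be done with a short awk pass that joins lines until the double quotes in a record balance out. This is a heuristic for simple CSVs (it also tolerates standard doubled-quote escaping, since "" keeps the quote count even); the file names are illustrative:

```shell
# Sample file where one quoted field spans two lines (embedded newline).
printf 'Label,Notes__c\nAlpha,"first line\nsecond line"\nBeta,"single line"\n' > input.csv

# Join records whose quoted fields contain embedded newlines: a buffered
# record with an odd number of double quotes is still open, so keep
# appending lines (newline replaced by a space) until quotes balance.
awk '{
  buf = (buf == "" ? $0 : buf " " $0)
  if (gsub(/"/, "\"", buf) % 2 == 0) { print buf; buf = "" }
}' input.csv > cleaned.csv

cat cleaned.csv
```

After the pass, the Alpha record sits on a single line with the newline replaced by a space, which the loaders accept.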
Testing the CSV on a small subset of records in a sandbox before loading the full dataset is worth the extra ten minutes. Error messages from the Metadata API are specific enough to identify the problem, but they are less pleasant to read through when 180 out of 200 records have already been processed.
Which approach to choose
The Custom Metadata Loader is the right starting point for admin-led operations, especially for teams without an established CLI workflow. It requires a one-time setup investment and covers the majority of bulk loading scenarios within the 200-record-per-call limit.
The Salesforce CLI is the better long-term choice for any organisation running a DevOps pipeline. It has no record limit, fits version control, and treats metadata changes with the same rigour as code changes.
The Flow component is a practical fallback for specific environments where external tool deployment is not feasible and the volume of records is manageable.
All three approaches produce the same outcome. The choice is about how the work fits into the existing team workflow and what the ongoing maintenance model looks like after the initial load.
Custom metadata types exist to keep configuration out of code. Loading them well is the last step in making that architecture actually work.
TrueSolv helps teams structure the Salesforce configuration layer correctly from the start.



