The Ultimate Guide to the Salesforce Screen Flow File Preview Component

The Spring ’26 Release introduced the File Preview Screen Flow Component. This native tool allows Admins to embed document viewing directly into the flow of work. In this post, we’ll explore the technical requirements, real-world observations, and the strategic implications of this functionality.

Beyond the “Files” Tab: Why This Matters

Historically, viewing a file in Salesforce required navigating to the “Files” related list, clicking the file, and waiting for the standard previewer to launch in a separate overlay. If you were in the middle of a Screen Flow, perhaps a guided survey or a lead conversion process, leaving that flow to check a document meant breaking your concentration.

Salesforce introduced a file thumbnail preview that shows at a glance what is in a file without having to click into it. Please note that the thumbnails show beautifully in the Single Related List component on Lightning record pages; in the multiple related list view, I did not see the thumbnails.

In addition to the Lightning record page and related list functionality, Salesforce introduced a file preview component that lets users preview a file they have just uploaded, or one they find attached to an object record in Salesforce.

Technical Blueprint: Configuring the Component

Setting up this component requires a shift in how Admins think about file data. The Files data model is unique: to make the component work, you need to navigate the relationships between ContentDocumentLink, ContentDocument, and ContentVersion.

Core Attribute Requirements

 When you drag the File Preview component onto a screen in Flow Builder, you must configure the following:

  • Content Document ID (Required): This is the most critical field. The component needs the unique 18-character ID of the ContentDocument record. It will not accept the ContentVersion ID (which represents a specific iteration) or the Attachment ID (the legacy file format). Please note: the preview component always shows the latest version of the file.
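Since the component rejects anything other than a ContentDocument ID, a simple guard on the ID's key prefix can catch misconfigured inputs upstream (for example, in a validation step before the screen). This is an illustrative Python sketch, not part of the component; it assumes the standard Salesforce key prefixes 069 (ContentDocument), 068 (ContentVersion), and 00P (legacy Attachment).

```python
# Sketch: classify a Salesforce record ID by its 3-character key prefix.
# Prefix values are the commonly documented standard prefixes (assumption).
PREFIXES = {"069": "ContentDocument", "068": "ContentVersion", "00P": "Attachment"}

def classify_file_id(record_id: str) -> str:
    """Return the object type implied by the first 3 characters of an ID."""
    if len(record_id) not in (15, 18):  # Salesforce IDs are 15 or 18 chars
        return "Invalid"
    return PREFIXES.get(record_id[:3], "Unknown")

def is_valid_preview_input(record_id: str) -> bool:
    # The File Preview component accepts only ContentDocument IDs.
    return classify_file_id(record_id) == "ContentDocument"
```

A decision element in your flow could apply the same prefix check with a formula before the screen is shown.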

  • Label: This attribute allows you to provide instructions above the preview window. This is highly effective for compliance-heavy roles, where the label can say: “Verify that the signature on this ID matches the physical application.”

  • API Name: The unique identifier for the element within your flow logic, following standard alphanumeric naming conventions.

Using Conditional Visibility

Because the preview window takes up significant screen real estate, it should not be set to “Always Display” if it is driven reactively by a data table. Salesforce lets you specify logic that determines when the component appears. For example, you can display it only when a specific file type is selected in the collection, and hide it when the ContentDocumentId variable is null to avoid showing an empty box.

Lessons from the Field: Our “Around the Block” Test

In our recent hands-on testing, we put the component through its paces to see where it shines and where its boundaries lie.

[youtube https://www.youtube.com/watch?v=_k3F2eX4rdM?version=3&rel=1&showsearch=0&showinfo=1&iv_load_policy=1&fs=1&hl=en-US&autohide=2&wmode=transparent&w=640&h=360]

The File Extension

The previewer is highly dependent on the browser’s ability to interpret file headers and extensions. During our test, we uploaded a standard log file. While the content was technically plain text, the file had a .log extension. The component struggled to render this because it didn’t recognize it as a standard format. However, once we switched to a .txt extension, the preview was crisp and readable. The admin takeaway here is that if your business process involves non-standard file types, you may need to implement a naming convention to ensure files are saved in formats the previewer can handle: primarily .pdf, .jpg, .png, and .txt.
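If your org regularly receives text-like files with non-standard extensions, the renaming convention mentioned above can be automated before upload. A minimal sketch, assuming a hypothetical list of text-like extensions you want normalized to .txt:

```python
# Sketch: normalize known text-like extensions to .txt before upload so the
# previewer can render them. TEXT_LIKE is an assumption for illustration.
from pathlib import Path

PREVIEWABLE = {".pdf", ".jpg", ".png", ".txt"}  # formats the previewer handles
TEXT_LIKE = {".log", ".csv", ".md"}             # hypothetical list for your org

def normalized_upload_name(filename: str) -> str:
    path = Path(filename)
    ext = path.suffix.lower()
    if ext in PREVIEWABLE:
        return filename                       # already previewable, keep as-is
    if ext in TEXT_LIKE:
        return str(path.with_suffix(".txt"))  # rename so the previewer can read it
    return filename                           # leave unknown types untouched
```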

Real-World Use Case

How can you use this component in a live production environment? Here is a scenario where the File Preview component adds immediate value:

Imagine a customer service representative handling a shipping insurance claim. The customer has uploaded a photo of a broken item. Instead of the agent navigating to the “Files” tab, the Screen Flow surfaces the photo on the “Review Claim” screen. The agent sees the damage, verifies the details, and clicks “Approve” all on one page.

Conclusion: A New Era of Flow

The File Preview component represents Salesforce’s evolution into a holistic workspace. By integrating document viewing into the automation engine of Flow, Salesforce has empowered Admins to build tools that feel like custom-coded applications without writing a single line of Apex. As we saw in our testing, the component is robust and user-friendly. Most importantly, it keeps users focused. Whether you are streamlining an approval process or simplifying a complex data entry task, the ability to see what you are working on without leaving the screen is *chef’s kiss.*

Explore related content:

What’s New With Salesforce’s Agentblazer Status in 2026

Add Salesforce Files and Attachments to Multiple Related Lists On Content Document Trigger

Profiles and Permissions in Salesforce: The Simple Guide for Admins

#Automation #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceHowTo #SalesforceTutorials #Spring26 #Winter25

Salesforce Spring ’26 Brings Major Debug Improvements to Flow Builder

If you’ve been building flows for any length of time, you already know this: a lot of the real work and time goes into debugging. It’s re-running the same automation over and over. Swapping out record IDs. Resetting input values. Clicking Debug, making a small change, saving, and sometimes starting the whole setup again. That loop is where Flow builders spend a lot of their time, especially once flows get even moderately complex.

Salesforce’s Spring ’26 release finally takes aim at that reality. Instead of piling on new features, this update focuses on removing friction from the debugging experience itself. The result is a Flow Builder that feels faster, less disruptive, and much closer to a modern development environment.

Debug Sessions That Don’t Forget Everything

One of the most impactful improvements in Spring ’26 is also one of the simplest: Flow Builder now remembers your debug configuration while you’re actively editing a flow. When you debug a flow, make a change, and save, Salesforce preserves the triggering record you used, your debug options, and your input variable values. That means no more losing your setup every time you click Save, no more re-pasting record IDs, and no more rebuilding your test scenario from scratch.

Your debug session stays intact until you refresh your browser, close Flow Builder, or manually click Reset Debug Settings. This is a big quality-of-life upgrade, especially if you work with record-triggered flows that have edge cases, complex decision logic, multi-screen flows with test data, or anything that requires several small iterations to get right. The practical impact is simple: you can now fix, save, and re-run flows much faster, without constantly breaking your momentum.

Flow Tests Are No Longer “Latest Version Only”

Spring ’26 also changes how flow tests work behind the scenes.

Previously, flow tests were tied only to the latest version of a flow. As soon as you created a new version, older tests were essentially left behind. If a test no longer applied, you deleted it. If it still applied, you recreated it. Now, tests can be associated with specific flow versions.

Source: https://help.salesforce.com/s/articleView?id=release-notes.rn_automate_flow_debug_test_versions.htm&release=260&type=5

You can now reuse the same test across multiple flow versions or limit it to only the versions it truly belongs to, and when you create a new version, Salesforce automatically carries those tests forward from the version you cloned. This gives you much tighter control over scenarios like preserving regression tests for older logic, maintaining multiple supported versions, validating breaking changes, and keeping historical test coverage intact. Instead of treating tests as disposable, they become part of your flow’s lifecycle. This is a foundational shift for teams building mission-critical automation.

Compare Screen Flow Versions to See What Changed

Salesforce has had version comparison in other areas of the platform, but Spring ’26 brings it to screen flows. You can now compare any two versions of a screen flow and instantly see what changed across elements, resources, fields, components, properties, and styles.

This makes it much easier to answer the first question most debugging starts with: what changed? Instead of manually opening versions side by side, you get a clear view of differences, helping you pinpoint where issues may have been introduced and focus your testing where it actually matters.

Source: https://help.salesforce.com/s/articleView?id=release-notes.rn_automate_flow_mgmt_compare_screen_flow_versions.htm&release=260&type=5

More Control When Debugging Approvals and Orchestrations

Debugging long approval chains or orchestrations has always been painful. You’d often have to run the entire thing just to test one step. Spring ’26 introduces several upgrades that make this far more surgical.

Complete work items directly in Flow Builder

You can now complete orchestration and approval work items without leaving Flow Builder.

While debugging, interactive steps can be opened directly on the canvas. Once completed, the orchestration or approval process resumes immediately.

This keeps the entire test cycle inside the builder instead of bouncing between apps, emails, and work queues.

Debug only the part you care about

You can now define a start point, an end point, or both when debugging orchestration and approval flows, which gives you much more control over what actually runs. Instead of being forced to execute the entire automation, you can skip earlier stages, stop before downstream logic, isolate a single phase, or focus on one problematic section. When you skip steps, you can also provide test inputs to simulate outputs from earlier stages. In other words, you no longer have to run the whole machine just to test one gear.

Selectively control which steps execute

Salesforce has expanded test output controls beyond rollback mode.

You can now decide which orchestration or approval steps should run while debugging, and which should be skipped, directly from the new Configure Test Output experience.

This makes it much easier to validate edge cases, exception handling, and conditional behavior without unnecessary noise.

Smarter Debugging for More Advanced Flow Types

Spring ’26 also delivers improvements for more specialized use cases.

Segment-Triggered Flows: Testing multiple records at once

For segment-triggered flows, you can now debug up to ten records at the same time instead of testing one record after another. You can select multiple segment members, run the debugger, and cycle through each result to see exactly how different records move through your flow.

The canvas highlights the active path for the selected record, and you can filter results by successes or failures, making it much easier to spot inconsistencies. This is especially useful when validating logic across different customer types, messy or incomplete data, and edge cases that would normally take many separate test runs to uncover.

Why This Release Actually Matters

It’s easy to skim release notes and see “debug improvements” as minor polish, but debugging speed directly affects how confidently people build automation, how complex flows can realistically become, how quickly teams can ship fixes, and how much risk is involved in every change.

With these changes, you can rerun the same scenarios without constantly rebuilding your debug setup, test individual flow versions with far more precision, and isolate only the parts of your logic you actually care about. You can walk through approvals and orchestrations directly inside Flow Builder instead of jumping between tools, and even validate how a flow behaves across multiple records in a single debug run. This is the kind of release that changes how Flow Builder feels to use.

Conclusion

Salesforce has spent the last few releases expanding what Flow can do, and Spring ’26 is about improving how Flow is built. Persistent debug sessions, version-aware tests, selective execution, in-builder work items, and targeted debugging all point in the same direction. Flow Builder is evolving from a configuration tool into a true development environment.

If you build anything non-trivial in Flow, these changes will save you time immediately. And if you teach, support, or scale Flow across teams, they open the door to far better testing practices going forward.

Explore related content:

Top Spring ’26 Salesforce Flow Features

Add Salesforce Files and Attachments to Multiple Related Lists On Content Document Trigger

Spring ’26 Release Notes: Highlights for Admins and Developers

#FlowBuilder #LowCode #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials

Add Salesforce Files and Attachments to Multiple Related Lists On Content Document Trigger

Flow builders, rejoice! With the Spring ’26 Release you can now trigger your flow automations on the ContentDocument and ContentVersion objects for Files and Attachments. Salesforce had delivered a new event type in the previous release that supported flow triggers for standard-object files and attachments, but the functionality was limited. In this release, Salesforce gives us the ability to trigger on all new files/attachments and their updates, for all objects.

Use case: When a document is uploaded to a custom object with lookups to other objects such as Contact and Account, add links to those records so the same file is visible and listed under their related lists.

You could easily expand this use case to add additional sharing to the uploaded file, which is also a common pain point in many organizations. I will leave that use case out for now; you can easily explore it by extending the functionality of this flow.

Objects that are involved when you upload a file

In Salesforce, three objects work together to manage files: ContentDocument, ContentVersion and ContentDocumentLink.

Think of them as a hierarchy that separates the file record, the actual data, and the location where it is shared. The definitions of these three core objects are:

ContentDocument: Represents the “shell” or the permanent ID of a file. It doesn’t store the data itself but acts as a parent container that remains constant even if you upload new versions.
ContentVersion: This is where the actual file data (the “meat”) lives. Every time you upload a new version of a file, a new ContentVersion record is created. It tracks the size, extension, and the binary data.
ContentDocumentLink: This is a junction object that links a file to other records (like an Account, Opportunity, or Case) or users. It defines who can see the file and what their permissions are.

Object Relationships:

The relationship is structured to allow for version control and many-to-many sharing:
ContentDocument > ContentVersion: One-to-Many. One document can have many versions, but only one is the “Latest Published Version.”
ContentDocument > ContentDocumentLink: One-to-Many. One document can be linked to many different records or users simultaneously.

ContentDocumentLink is a junction object that does not allow duplicates. If you attempt to create the relationship between a linked entity and the content document when it already exists, your attempt will fail.
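Because the junction object rejects duplicates, it is worth filtering out pairs that already exist before the create call. A Python sketch of that dedupe logic, with record shapes simplified to dictionaries for illustration (the real flow would work with ContentDocumentLink records):

```python
# Sketch: skip (document, record) pairs that already have a ContentDocumentLink,
# since attempting to create a duplicate link fails.
def links_to_create(existing_links, content_document_id, target_record_ids):
    """Return link records only for targets not already linked to the document."""
    existing = {(l["ContentDocumentId"], l["LinkedEntityId"]) for l in existing_links}
    return [
        {
            "ContentDocumentId": content_document_id,
            "LinkedEntityId": rid,
            "ShareType": "V",  # "V" = viewer access
        }
        for rid in target_record_ids
        if (content_document_id, rid) not in existing
    ]
```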

What happens when a file is uploaded to the files related list under an object?

Salesforce creates the ContentDocument and ContentVersion records. Salesforce also creates the necessary ContentDocumentLink records; typically one for the object record relationship and one for the user who uploaded the file.

For each new file (not a new version of the same file) a new ContentDocument record will be created. You can trigger your automation based on this record being created, and then create additional ContentDocumentLink records to expand relationships and sharing.

Building Blocks of the Content Document Triggered Automation

For this use case I used a custom object named Staging Record which has dedicated fields for Contact and Account (both lookups). This method of uploading new documents and updating new field values to a custom record is often used when dealing with integrations and digital experience users. You can easily build a similar automation if a ContentDocumentLink for the Account needs to be created when the file is uploaded to a standard object like Contact.

Follow these steps to build your flow:

  • Trigger your record-triggered flow when a ContentDocument record is created (no criteria)
  • Add a scheduled path to your flow and set it up to execute with a 0-minute delay. Under advanced settings, set the batch size to 1. An async path seems to work as well. I will explain the reason for this at the end of the post.
  • Get all ContentDocumentLink records for the ContentDocument
  • Check null for the get in the previous step (may not be necessary, but for good measure)
  • If not null, use a collection filter to filter for all records where the LinkedEntity Id starts with the prefix of your custom object record (I pasted the 3 character prefix into a constant and referenced it). Here is the formula I used: LEFT({!currentItem_Filter_Staging.LinkedEntityId},3)= {!ObjectPrefixConstant}
  • Loop through the filtered records. There should be at most one, but you have to loop because the collection filter element outputs a collection even for a single record.
  • Inside the loop, get the staging record. I know, it is a get inside the loop, but this will execute once. You can add a counter and a decision to execute it only in the first iteration if you want.
  • Build two ContentDocumentLink records using an assignment. One between the ContentDocument and the Contact on the staging record, the other one between the ContentDocument and the Account. You could add additional records here for sharing.
  • Add your ContentDocumentLink records to a collection.
  • Exit the loop and create the ContentDocumentLink records using the collection you built in one shot.
  • Here is a screenshot of the resulting flow.
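The collection-filter step above can be sketched in Python to show what the formula is doing. This mirrors LEFT(LinkedEntityId, 3) = {!ObjectPrefixConstant}; the "a0X" prefix is hypothetical, standing in for your custom object's actual 3-character key prefix:

```python
# Sketch: keep only ContentDocumentLink records whose LinkedEntityId starts
# with the custom object's key prefix (mirrors the LEFT(...) = prefix formula).
OBJECT_PREFIX = "a0X"  # hypothetical prefix for the Staging Record object

def filter_staging_links(links, prefix=OBJECT_PREFIX):
    return [l for l in links if l["LinkedEntityId"][:3] == prefix]
```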

    Here is what happens when you create a staging record and upload a file to Salesforce using the related list under this record.

    Here is the resulting view on the Contact and Account records.

    Why is the Scheduled Path or Async Path Necessary?

    When a file is uploaded, a ContentDocument record and a ContentVersion record are created. The ContentDocumentLink junction record must be created after these records exist, because the relationship is established by populating their Ids on the link record. When you build the automation on the immediate path, your Get will not find the ContentDocumentLink record. To ensure the flow can find the record, use either an async path or a scheduled path.

    When you build the automation on the immediate path, the ContentDocumentLink records are not created. You don’t receive a fault email, either, although the automation runs fine in debug mode. I wanted to observe this behavior in detail, so I set up a user trace to log the steps involved. This is the message I found that stops the flow from executing:
    (248995872)|FLOW_BULK_ELEMENT_NOT_SUPPORTED|FlowRecordLookup|Get_Contact_Document_Links|ContentDocumentLink
    According to this, the Get step for ContentDocumentLink records cannot be bulkified, and therefore the flow cannot execute. The flow engine always attempts to bulkify Gets, and there is nothing fancy about the Get criteria here. What is likely giving us trouble is the unique nature of the ContentDocumentLink object.

    The async path seems to bypass this issue. However, if you want to ensure this element is never executed in bulk, the better approach is to use a scheduled path with zero delay and set the batch size to one record in advanced settings. I have communicated this message to the product team.

    Please note that the scheduled path takes a minute to execute in my preview org. Be patient and check back if you don’t initially see the new ContentDocumentLink records.

    Conclusion

    In the past, handling file uploads gave flow builders a lot of trouble, because the related objects did not support flow triggers.

    Now that we have this functionality rolling out in the latest release, the opportunities are pretty much limitless. The functionality still has its quirks as you can see above.

    I would recommend that you set up a custom metadata kill switch for this automation so that it can easily be turned off for bulk upload scenarios.

    Watch the video on our YouTube channel.

    [youtube https://www.youtube.com/watch?v=Gl0XCtMAhmc?feature=oembed&w=800&h=450]

    Explore related content:

    Top Spring 26 Salesforce Flow Features

    Should You Use Fault Paths in Salesforce Flows?

    How to Use Custom Metadata Types in Flow

    See the Spring 26 Release Notes HERE.

    #Automation #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials #Spring26 #UseCases

    Top Spring ’26 Salesforce Flow Features

    What are the new features about? Spring 26 brings new screen, usability and platform enhancement features. Let’s dive into the details.

    Top Screen Flow Spring 26 Features

    It seems like most of the new features involve screen flows.

    I will not go into further detail, but this release introduces yet another file upload component for screen flows: LWR File Upload Component for Experience Cloud.

    Here are the rest of the screen flow improvements.

    Screen Flow Screen Element and Component Style Enhancements

    The screen flow Screen element gets features that allow you to set the background, text, and border colors. Border weight and radius can be adjusted. For input components, the in-focus text color can be differentiated. Flow buttons get similar adjustments, gaining the ability to change colors on hover.

    Any styling changes you set override your org or Experience Cloud site’s default theme.

    Remember to keep your color and contrast choices in check for accessibility. Don’t do as I did below. Go to the WebAIM contrast checker website and plug in your color codes to verify that their contrast is sufficient for accessibility.
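For the curious, here is the WCAG 2.x contrast-ratio math that checkers like WebAIM implement, sketched in Python. A ratio of 4.5:1 or higher passes AA for normal text:

```python
# Sketch: WCAG 2.x relative luminance and contrast ratio for two hex colors.
def _channel(c: int) -> float:
    # Linearize an 8-bit sRGB channel per the WCAG relative-luminance formula.
    s = c / 255
    return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)  # WCAG contrast ratio, 1.0 to 21.0
```

Black on white yields the maximum ratio of 21:1; anything below 4.5:1 fails AA for normal-size text.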

    Screen Flow Message Element

    The screen flow Message element leverages the new styling options to display a message on the screen. A pulldown lets you create an information, success, warning, or error message. These come with standard color sets, which will guide flow developers toward a standard visual language.

    This functionality is compliant with accessibility (a11y) standards.

    See all the four types on the same screen below.

    Screen Flow Kanban Component (Beta)

    The new Kanban component allows you to organize records into cards and columns. This is particularly useful for visualizing process phases and managing transitions across your workflow.

    Use the new Kanban Board component to show records as cards in columns that represent workflow stages, without custom Lightning implementations. The Kanban Board is read-only, so users can’t drag cards between stages at run time.

    Data Table Column Sort and Row Value Edit (TBD)

    Now the user can sort the data table by columns and edit text fields in rows. This feature is not yet available in the preview orgs; the product team is working hard in the background, and the functionality is slated to make it into the Spring 26 release at the last minute.

    Preview Files Natively in Screen Flows

    Elevate document-based processes by enabling your users to review file content directly within a screen flow. The new File Preview screen component removes the requirement to download files externally, ensuring easier document review and approval workflows.

    This component seems to be already in production.

    Open Screen Flows in Lightning Experience with a URL

    Previously, when you opened a flow via URL, it did not launch in Lightning Experience. Now it will, preserving the experience your users are used to, especially when they are working in a customized Lightning console app.

    I will quote the release notes for this one.

    “To open a flow in Lightning Experience, append /lightning/flow/YourFlowNameHere to your URL. To run a specific flow version, append /lightning/flow/YourFlowNameHere/versionId to your URL. Flows that open in Lightning Experience have improved performance because most required Lightning components are already loaded into the browser session. In Lightning console apps, your tabs are preserved when a flow opens, and you can switch to other tabs while the flow is working. Using the new URL format also ensures that your browser behaves consistently, with forward, back, and your browser history working as expected.

    To pass data into a flow through its URL, append ?flow__variableIdHere=value to the end of your URL. For example, to pass a case number into a flow, /lightning/flow/YourFlowNameHere?flow__variableIdHereID={!Case.CaseNumber}.

    Use & to append multiple variables into a flow. For example, /lightning/flow/YourFlowNameHere?flow__varUserFirst={!$User.FirstName}&flow__varUserLast={!$User.LastName} passes both the user first name and last name into the flow.”

    Usability and Platform Features

    I listed all of the screen flow features above. The following two items are huge usability improvements that involve screen management for the entire flow canvas, not just for screen flows.

    Collapse and Expand Decision and Loop Elements

    When your flow gets too big and you need to Marie Kondo (tidy up) your flow canvas, you can collapse the decision and loop elements that take up a lot of real estate. You can always expand them back when needed.

    Now you can collapse and expand branching elements with Flow Builder, including Wait, Decision, Loop, Path Experiment, and Async Actions, helping you focus on the key parts of your flow.

    This layout is saved automatically and locally in your browser, making it easier to return to your work without changing the view for other users.

    Mouse, Trackpad and Keyboard Scroll

    Now you don’t have to drag or use the scroll bar to move the flow around on the canvas. You can use the vertical and horizontal wheels on your mouse, the arrow keys on your keyboard, or your trackpad if you have one.

    No need to use Salesforce Inspector Reloaded to get this functionality anymore. Thanks to Salesforce Inspector Reloaded for filling the gap in the meantime.

    Content Document and Content Version Flow Triggers for Files and Attachments (Beta)

    Salesforce delivered a new event type in the last release that could trigger flows for standard object files and attachments. The functionality was limited. In this release, Salesforce gave us the ability to trigger on all new files/attachments and their updates for all objects.

    I was told by the product team that this functionality will be released as beta.

    Flow Logging

    I am not exactly sure what has been improved here. Salesforce had previously announced additional flow logging capabilities leveraging Data Cloud. Now, a new flow logging tab has been added to the Automation Lightning App.

    Debug Improvements

    Debug in Flow Builder will now remember the record it ran on and the updated field values in an update scenario. Debug inputs such as triggering record values, debug options, and input variable values now remain set when you save flow changes within your Flow Builder session. The user needs to click a reset button to disassociate the debug run from the inputs of the last run. This change is intended to make debug reruns faster.

    Flow builder will preserve debug configurations when you save changes to your flow. Refreshing your browser or closing Flow Builder clears all debug settings.

    Conclusion

    Salesforce product teams work hard delivering new features every release. The Spring 26 release brings significant improvements to Flow Builder. I would have liked to see additional capabilities for flow types other than screen flows; this seems to be a lighter release in that area.

    Additional bonus features include request for approval component for lightning page layouts (highly-requested feature), compare screen flow versions, and associating flow tests with flow versions.

    The release notes are still in preview. And we could still have new functionalities removed or added in the release cycle.

    This post will be updated as additional details are made available.

    [youtube https://www.youtube.com/watch?v=eZC_8W1IbUs?feature=oembed&w=800&h=450]

    Explore related content:

    Salesforce Optimizer Is Retired: Meet Org Check

    One Simple Salesforce Flow Hack That Will Change Your Workflow Forever!

    Automate Permissions in Salesforce with User Access Policies

    Spring ’26 Release Notes: Highlights for Admins and Developers

    What Is Vibe Coding? And What’s New in Agentforce Vibes for Developers?

    #Kanban #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials #SalesforceUpdate #ScreenFlow #Spring26

    Should You Use Fault Paths in Salesforce Flows?

    If you build enough Flows, you’ll eventually see the dreaded flow fault email. Maybe a record you tried to update was locked, a required field value was not set in a create operation, or a validation rule tripped your commit. Regardless of the root cause, the impact on your users is the same: confusion, broken trust, and a support ticket. The good news is you can catch your faults using the fault path functionality. In this post, we’ll walk through practical patterns for fault handling, show how and when to use the Custom Error element, and explain why a dedicated error screen in screen flows is worth the extra minute to build. We’ll also touch on the Roll Back Records element for screen flows, where this functionality can make a difference.

    Why Fault Paths Matter

    Faults are opportunities for your Salesforce Org automation to improve. While unhandled faults are almost always trouble, handled faults do not have to be a huge pain in our necks.

    The Core Building Blocks of Flow Fault Handling

    1) Fault paths
    Get elements (SOQL queries), DML elements (Create, Update, and Delete), and actions support fault paths. Fault paths give the developer a way to determine what to do in the event of an error.
    2) Fault actions
    You can add elements to your fault path to determine the next steps. You can also add a Custom Error element in record-triggered flows or error screens in screen flows for user interactivity. Multiple fault paths in the flow can be connected to the same element executing the same logic. A subflow can be used to standardize and maintain the fault actions, such as temporarily logging the fault events.

    Logging Errors

    Here is a list of data that may be important to include in your fault communications and logging:

    • Flow label
    • User Name
    • Date/Time
    • Technical details (e.g. $Flow.FaultMessage)
    • Record Id(s) and business context (e.g., Opportunity Id, Stage)
    • User-friendly message (plain English)
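The fields listed above translate naturally into a log-record shape. A Python sketch of what the payload could look like; the Flow_Error__c object and its field API names are illustrative, not a standard Salesforce schema:

```python
# Sketch: build a fault-log payload from the data points listed above.
# Object and field names (Flow_Error__c-style) are hypothetical examples.
from datetime import datetime, timezone

def build_fault_log(flow_label, user_name, fault_message, record_id, friendly_msg):
    return {
        "Flow_Label__c": flow_label,
        "User_Name__c": user_name,
        "Logged_At__c": datetime.now(timezone.utc).isoformat(),
        "Fault_Message__c": fault_message,   # e.g. the value of {!$Flow.FaultMessage}
        "Record_Id__c": record_id,           # business context, e.g. Opportunity Id
        "Friendly_Message__c": friendly_msg, # plain-English summary for admins
    }
```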

    Subflow Solution

    The advantage of a subflow when dealing with fault paths is that you can modify the logic once on a central location. If you want to start logging temporarily, you can do that without modifying tons of flows. If you want to stop logging, this change can be completed fairly easily, as well.

    Inside the subflow, decide whether to:

    • Log to a custom object (e.g., Flow_Error__c)
    • Notify admins via Email/Slack

    Meet the Custom Error Element

    The Custom Error element in Salesforce Flow is a powerful yet often underutilized tool that allows administrators and developers to implement robust error handling and create more user-friendly experiences. Unlike system-generated errors that can be cryptic or technical, the Custom Error element gives you complete control over when to halt flow execution and what message to display to your users.

    The Custom Error element lets you intentionally raise a validation-style error from inside your flow, without causing a system fault, so you can keep users on the same screen, highlight what needs fixing, and block navigation until it’s resolved. Think of it as flow-native inline validation.

    What The Custom Error Element Does

    It displays a message at a specific location (the entire screen or a specific field) and stops the user from moving forward. One caveat: when the change comes from a picklist updated through the Path component, the error appears as a red banner that disappears on its own, which is easy to miss. Refrain from using custom error messages in those situations.

    The unique thing about the custom error message is that it throws an intentional exception to stop the user from proceeding. In these use cases, it works very similarly to a validation rule on the object.

    This becomes particularly valuable in complex business processes where you need to validate data against specific business rules that can’t be easily captured in standard validation rules. For instance, you might use a Custom Error to prevent a case from being closed if certain required child records haven’t been created, or to stop an approval process if budget thresholds are exceeded.

    Please note that a custom error blocks the transaction from committing (the triggering DML is rolled back), while a fault path connected to any other element allows the original, triggering DML to complete even when the record-triggered automation fails.

    Custom Error Screen in Screen Flows

    Incorporating a dedicated custom error screen in your screen flows dramatically improves the user experience by transforming potentially frustrating dead-ends into helpful, actionable moments. When users encounter an error in a screen flow without a custom error screen, they’re often left with generic system messages that don’t explain what went wrong in business terms or what they should do next, leading to confusion, repeated help desk tickets, and abandoned processes.

    A well-designed custom error screen, however, allows you to explain the specific issue in plain language that resonates with your users’ understanding of the business process. Beyond clear messaging, custom error screens give you the opportunity to provide contextual guidance, such as directing users to the right person or department for exceptions, offering alternative paths forward, or explaining the underlying business rule that triggered the error. You can also leverage display text components with dynamic merge fields to show users what caused the problem, turning the error into a learning moment rather than a roadblock. Additionally, custom error screens maintain your organization’s branding and tone of voice, include helpful links to documentation or knowledge articles, and pair with logging actions to give you valuable insights into potential process improvements or additional training needs.

    Here is an example custom error screen element format (customize to your liking):

    Error

    Your transaction has not been completed successfully. Everything has been rolled back. Please try again or contact your admin with the detailed information below.

    Account Id: {!recordId}
    Time and Date: {!$Flow.CurrentDateTime}
    User: {!$User.Username}
    System fault message: {!$Flow.FaultMessage}
    Flow Label: Account - XPR - Opportunity Task Error Screen Flow

    The “Roll Back Records” Element

    There are use cases in screen flows where you create a record and then update this record based on follow-up screen actions. You could be creating related records for a newly created record, which would require you to create the parent record first to get the record Id. If you experience a fault in your screen flow, record(s) can remain in your system that are not usable. In these situations, the Roll Back Records element lets you undo database changes made earlier in the same transaction. Roll Back Records does not roll everything back to its original state; it only rolls back the current, uncommitted transaction in a series of transactions (reaching a screen ends a transaction and commits the pending changes).
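To make the transaction boundaries concrete, here is a toy simulation (not Salesforce code) of how Roll Back Records behaves under the description above: screens commit pending changes, and the rollback discards only what has not yet been committed:

```python
class FlowTransactionSim:
    """Toy model of screen-flow transaction boundaries: each screen commits
    pending changes; Roll Back Records discards only uncommitted ones."""
    def __init__(self):
        self.committed = []   # survives a rollback
        self.pending = []     # current transaction only

    def dml(self, change):
        self.pending.append(change)

    def screen(self):
        # Reaching a screen ends the transaction and commits pending DML.
        self.committed.extend(self.pending)
        self.pending.clear()

    def roll_back_records(self):
        # Undoes only the current (uncommitted) transaction.
        self.pending.clear()

sim = FlowTransactionSim()
sim.dml("create parent Account")
sim.screen()                      # transaction 1 committed
sim.dml("create child Contact")
sim.roll_back_records()           # only the Contact creation is undone
```

The parent Account from the earlier transaction survives; only the in-flight child record is discarded.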

    Tips for fewer faults in the first place

    Here are some practical tips:

    • Validate early on screens with input rules (Required, min/max, regex).
    • Use Decisions to catch known conflicts before DML.
    • Place DMLs strategically in screen flows: Near the end so success is all-or-nothing (plus Roll Back Records if needed) or after each screen to record the progress without loss.

    The fewer faults you surface, the more your users will trust your flows.

    Putting it all together

    Here’s a checklist you can apply to your next Screen Flow:

    • Every DML/Callout element has a Fault connector.
    • A reusable Fault Handler subflow logs & standardizes messages.
    • Custom Error is used for predictable, user-fixable issues on screens.
    • A custom error screen presents clear actions and preserves inputs.
    • Technical details are available, not imposed (display only if helpful).
    • Roll Back Records is used when it matters.
    • Prevention first: validate and decide before you write.

    Other Considerations

    When you use a fault path on a record-triggered flow Create element and the create fails, keep in mind that you can get a partial commit. This means the records that fail won’t be created, while the others may be.

    Example: You are creating three tasks in a case record-triggered flow. If one of your record field assignments writes a string longer than the text field’s max length (for example, Subject) and you use a fault path on that create element, one task fails while the other two create successfully.
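A toy simulation of that partial-commit behavior, assuming a 255-character Subject limit, looks like this (illustrative only, not Salesforce code):

```python
MAX_SUBJECT_LENGTH = 255  # standard Task Subject length limit

def create_tasks_with_fault_path(subjects):
    """Simulate a Create element with a fault path: records that violate a
    field limit fail individually while the valid ones still commit."""
    created, failed = [], []
    for subject in subjects:
        if len(subject) > MAX_SUBJECT_LENGTH:
            failed.append(subject)   # this record goes down the fault path
        else:
            created.append(subject)
    return created, failed

created, failed = create_tasks_with_fault_path(
    ["Call the customer", "Send the proposal", "X" * 300]
)
# Two tasks succeed; the over-length one fails.
```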

    Conclusion

    My philosophy regarding fault paths is to add them to your flows, but aim never to go down them. When you find yourself going down fault paths, that means you have an opportunity to improve your automation design.

    Every fault you handle offers insight into how your flow behaves in the real world. Each one reveals something about the assumptions built into your automation, the data quality in your org, or the user experience you’ve designed. Treating faults as signals rather than setbacks helps you evolve your automations into resilient, reliable tools your users can trust. Over time, these lessons refine both your technical build patterns and your understanding of how people interact with automation inside Salesforce.

    Explore related content:

    How to Use a Salesforce Action Button to Validate Lookup Fields in Screen Flows

    Should You Leave Unused Input and Output Flow Variables?

    How To Build Inline Editing for Screen Flow Data Tables in Salesforce

    Salesforce Flow Best Practices

    Add Salesforce Files and Attachments to Multiple Related Lists On Content Document Trigger

    #CustomErrors #FaultHandling #FaultPath #SalesforceAdmins #SalesforceDevelopers #SalesforceHowTo #SalesforceTutorials #ScreenFlows

    One Simple Salesforce Flow Hack That Will Change Your Workflow Forever!

    What if I told you that the Flow you’ve been building could secretly hold the key to a much bigger impact in the automation world? A world where you don’t rebuild logic over and over… where one Flow powers multiple flows.

    Sounds dramatic, right? But once you learn this trick, it will be an invaluable addition to your flow arsenal that will superpower your workflows going forward.

    Use case: When the stage is updated to Proposal (and on create), create a follow-up task due in seven days for the proposal step, if there is no existing open task with the same subject.

    Let’s start by building this use case. Then we will get to the hack part.

    Step 1. Build the Original Record-Triggered Flow

    We’ll start with something simple: a record-triggered Flow on Opportunity that creates a Task when the Opportunity hits a certain stage. Check whether there is an open task already with the same subject related to the opportunity, before creating another one. If there is an open task already, skip the create.

    • Trigger: Opportunity → when Stage = “Proposal/Quote”
    • Action: Create Task → Assigned to Opportunity Owner
    • Due date: 7 days from the current date
    • WhatId (Related to Id) set as the triggering Opportunity

    Straightforward.

    But here’s the catch: this logic lives in a record-triggered flow. What if I wanted to leverage the task creation logic in multiple record-triggered flows (including scheduled paths), in schedule-triggered flows, and possibly in screen flows as well? And could I leverage the same flow for records of other objects besides opportunities? Good food for thought.

    Step 2. Save As an Autolaunched Flow

    Here’s where the hack begins.

    From the Flow Builder menu, click Save As → choose A New Flow → Autolaunched Flow (No Trigger).

    Now we have the same logic, but free from the record trigger.

    Step 3. Replace $Record With Input Variables

    The Autolaunched Flow still references $Record from the Opportunity. That won’t work anymore. Time to swap those out. The references are listed under Errors. The flow cannot be saved until these Errors are fixed.

    • Create Input Variables for everything your logic needs; e.g., recordId (WhatId), OwnerUserIdVar, DelayInDaysVar.

      • Update your Create Task, Get Task elements and the Due Date formula to reference those input variables instead of the $Record.

      Boom. Your Flow is now a Subflow – it can take in data from anywhere and run its magic.
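If it helps, the refactor is analogous to extracting a reusable function from trigger-specific code. The sketch below, with hypothetical names, shows how the subflow’s input variables replace the $Record context:

```python
from datetime import date, timedelta

def create_followup_task(record_id, owner_id, delay_in_days, existing_open_subjects,
                         subject="Follow up on proposal"):
    """Reusable 'subflow': everything it needs arrives as parameters instead of
    being read from a $Record trigger context. Returns the task dict, or None
    when an open task with the same subject already exists (the duplicate check)."""
    if subject in existing_open_subjects:
        return None  # skip the create
    return {
        "WhatId": record_id,
        "OwnerId": owner_id,
        "Subject": subject,
        "ActivityDate": (date.today() + timedelta(days=delay_in_days)).isoformat(),
    }

# Any caller (record-triggered, scheduled, or screen flow) maps its own
# context into these inputs:
task = create_followup_task("006xx0000000001", "005xx0000000001", 7, set())
```

The caller supplies the context; the logic itself no longer cares which object or trigger invoked it.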

      Step 4. Refactor the Original Record-Triggered Flow

      Time to circle back to the original record-triggered Flow.

      • Open the Flow, Save As a New Version.

      • Delete all the elements. (Yes, all. Feels risky, but trust me.)

      • Add a Subflow element.

      • Select your new Autolaunched Flow.

      • Map the input variables to $Record fields, and provide the delay in days parameter value.

      Now, instead of directly creating the Task, your record-triggered Flow just hands $Record data to the Subflow – which does the real work.

      Here is how the debug run works.

      Why This Hack Changes Everything

      This one move unlocks a whole new way of thinking about Flows:

      • Reusability – Logic built once, used anywhere.

      • Maintainability – Update the Subflow, and every Flow that calls it stays consistent.

      • Scalability – Build a library of Subflows and assemble them like Lego pieces.

      • Testing Ease – Some flow types are hard to test. Your autolaunched subflow takes in all the necessary parameters in the debug mode, and rolls back or commits the changes based on your preference.

      Suddenly, your automation isn’t a patchwork of disconnected Flows – it’s a modular, scalable system.

      The Secret’s Out

      I call this the “Save As Subflow” hack. It’s hiding in plain sight, but most builders never use it. Once you do, your workflow will never be the same.

      Remember, you can make your subflow logic as flexible as you want. You can add input variables for subject and description. This would make your task creation even more flexible so that it can be used for other objects like Case and Custom objects.

      Try it today – and the next time you find yourself rebuilding logic, remember: you don’t have to. Just save it, strip $Record, add input variables, and let your Subflows do the heavy lifting.

      Explore related content:

      Automate Permissions in Salesforce with User Access Policies

      When Your DMLs Have Criteria Conditions Other Than Id

      Display Product and Price Book Entry Fields in the Same Flow Data Table

      How to Use a Salesforce Action Button to Validate Lookup Fields in Screen Flows

      #Hack #HowTo #RecordTriggered #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials #Subflow

      Should You Leave Unused Input and Output Flow Variables?

      In Salesforce Flow, input variables are special placeholders that allow data to be passed into a flow from an external source, such as a Lightning page, a button, another flow, or even an Apex class, so that the flow can use that data during its execution. When you create an input variable in Flow Builder, you mark it as Available for Input, which makes it visible and ready to receive values from outside the flow. Output variables, on the other hand, are used to send data out of a flow so it can be consumed by whatever triggered or called the flow, such as another flow, a Lightning web component, or an Apex class. When you create a variable and mark it as Available for Output, the flow can pass its final or intermediate values back to the caller once it finishes running.

      Input variables are especially useful for building modular, reusable flows. You can design them to handle different scenarios based on the values provided at runtime. For example, a record ID provided as an input variable can help the flow retrieve and update that specific record without needing user input. By leveraging input variables, you can keep flows flexible, reduce duplication, and make them easier to maintain.

      Similarly, output variables are powerful when building modular, subflow-based solutions. The parent flow can feed inputs to the subflow, receive outputs in return, and then continue processing without extra queries or logic. For example, a subflow might calculate a discount amount or generate a new record ID. It can then return it as an output variable for the parent flow to use. Output variables make flows more reusable, keep processes streamlined, and allow different automation components to share data seamlessly.

      Security Implications of Variables Available for Input and Output

      In programming, a variable’s scope defines the region of code where it exists and can be used, such as within a specific method, a class, or an entire module. For example, a variable defined inside a method is local to that method and cannot be seen or changed by code outside it, much like keeping notes in your own locked desk drawer. This “privacy” ensures that internal details remain protected from unintended interference, which is a key aspect of encapsulation in programming. If you want other parts of the program to access the data, you must explicitly expose it through return values, public properties, parameters, or other controlled interfaces. This principle not only prevents accidental bugs but also supports security. Sensitive data and logic remain inaccessible unless intentionally shared, helping keep the system stable, predictable, and easier to maintain.

      When you allow input variables for your flow, you allow external environments that run this flow to pass parameters into it. This potentially makes your flow vulnerable to outside attacks. When you configure output variables for your flow, you are creating a risk of external environments accessing flow output data. This is often data recorded in your Salesforce org. This data may include personally identifiable information or sensitive data.

      In addition, avoid using inputs that are easy to guess. If you look up a contact record based on their email address, attackers may guess the email address after a few tries (firstname.lastname@gmail.com, for example).
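One common mitigation, sketched below, is to key the lookup on a high-entropy token stored on the record (the `Access_Token__c` field name is hypothetical) rather than on a guessable value like an email address:

```python
import secrets

def issue_access_token():
    """Generate a high-entropy, URL-safe token to use as the flow's lookup key,
    for example stored in a hypothetical Access_Token__c field on the contact,
    instead of an easily guessed value such as an email address."""
    return secrets.token_urlsafe(32)  # ~256 bits of entropy

token = issue_access_token()
```

An attacker who can run the flow still cannot enumerate records, because the input is unguessable.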

      What About Flows Built for Digital Experience Guest Users?

      When you build a flow and deploy it on a digital experience site, where the guest user can execute it without logging in, you are exposing your flow to the outside world. This scenario makes your flow even more vulnerable to outside attacks.

      Guest User Means Anybody Can Access Any Time

      First of all, please know that this is a very risky approach. You should assume anybody can run that flow at any time, because that is what you allowed. Make sure that only limited inputs and outputs are defined and used. The flow should only execute the limited scope it absolutely needs. Do not allow the flow to perform a multitude of operations just because you aim for flexibility. Test many scenarios to ensure attacks cannot derail your flow or trick it into performing operations it is not intended to perform.

      Limit the Data

      Furthermore, you should not allow the flow to access any information it does not need to see. If you are dealing with records or record collections, make sure your Get elements specify only the fields that are absolutely necessary. Do not get the driver’s license number for the contact when you just need the name. In this scenario, do not let Salesforce automatically decide which fields to get. Also, when performing updates, do not update all the field values on the record. Update only the fields that matter for your process.

      Isolate the Elevated Functionality

      Finally, you may be tempted to set your flow to run in system context without sharing, or to allow a guest user to view records in the org through sharing rules. Both scenarios introduce additional risks that must be carefully considered.

      When allowing your automation to run in system context without sharing, isolate the necessary part into a subflow. Ensure that logic is tightened well from a security standpoint. Do not run the whole flow in system context without sharing mode. Just run the necessary part in a subflow using this elevated setting.

      Screen Flows and Reactivity

      Whether you allow elevated access or not, screen flows present a couple of inherent risks.

      When you pass information to a data table, a Lightning web component, or a screen action, that information is accessed locally by your browser. If you feed a collection of contact records to a data table and get all field values before you reach the data table screen, the browser will see all the field values on the records, before the user ever interacts with the table. The user can see these values.

      Recent developments of reactivity for screen flows are fantastic from a UI standpoint, but further complicate the security risks. The more reactive functionality you use in your flow, the more data you handle locally in your browser.

      Conclusion

      When flow builders, especially new starters, create flow variables, they often freely check the Available for Input and Available for Output checkboxes, thinking the alternative would limit them. This is risky and not necessary. You can change these settings at any time without having to recreate the variables.

      Always plan your inputs and outputs carefully and review them at the end of development. Make sure you don’t have any unused variables still accepting inputs or producing outputs.

      In this era, when the Salesforce name comes up in client data security breach incidents, apply extreme caution when dealing with automation.

      This post is part of our Flow Best Practice series. See the other posts HERE.

      Sources and references:

      Building Secure Screen Flows For External User Access by Adam White

      Data Safety When Running Screen and Autolaunched Flows in System Context – Salesforce Help

      Explore related content:

      How To Attach Files Using the Flow Email Action in Salesforce

      Getting Started with Salesforce Data Cloud: Your Roadmap to Unified Customer Insights

      How To Build Flex and Field Generation Prompt Templates in the Prompt Builder

      #Apex #BestPractices #InputVariables #LowCode #OutputVariables #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials #ScreenFlow #Security

      How to Build Custom Flow Approval Submission Related Lists

      In the Spring ’25 release, Salesforce introduced Flow Approvals to replace the legacy approval processes. This approval platform is based on the orchestration functionality. I recorded and released two videos and posts to share this functionality on Salesforce Break. The videos saw great interest from the community and are about to reach 20K views. So, why is everyone talking about flow approvals?

      There are multiple reasons:

    • Flow approvals are orchestration-based, but they are entirely free, unlike other orchestrations.
    • Legacy approvals are really old. Salesforce has not been investing in them. They are past due for a remake.
    • Legacy approvals are limited. To enhance the functionality, clients had to use AppExchange solutions or paid alternatives by Salesforce like Advanced Approvals for CPQ.
    • Flow approvals allow for parallel approvals, dynamic steps, and flexibility in the approval process.

      This is why I decided to create more content in this area, starting with:

    • A live course that teaches Flow Approval processes in depth, with hands-on practice. See the details here, and reach out if you’re interested.
    • Additional resources focused on solutions that bridge the gaps between Flow Approvals and Legacy Approvals, addressing the limitations of the new platform.

      Here is the first post detailing a solution that fills one of those gaps.

      Flow Approvals Don’t Provide Sufficient Detail In The Related Lists

      Here is the first point I would like to address: flow approvals don’t provide the kind of detailed information in the related lists of the object record that the legacy approvals did.

      Solution: Build a screen flow with reactive data tables to show the approval submission records and their related records. Add the screen flow to a tab on the record page.

      Salesforce provides a component that can be added to the record page, called the Approval Trace component. It provides some information about the approval process, but it is not customizable. I asked myself how I could go beyond that, and decided to build a reactive screen flow with data tables to fill this functionality gap. Here is what the output looks like:

      To build and deploy this flow, you need to follow these steps:

    • Build the screen flow.
    • Build the autolaunched flow that will fetch the data you will need. This flow will be used as the screen action in step one.
    • After testing and activation, add the screen flow to the record page.

      If you have never built a screen flow with screen actions before, let me be the first one to tell you that steps one and two are not really completed in sequence. You go back and forth building these two flows.

      Let’s get started.

      Build the Flow Approval Submission Screen Flow

      What I usually do when building these flows is get the screen flow started first. Then I build the autolaunched flow and go back to the screen flow to build out the rest of the functionality. The reason is that the screen flow data tables need the outputs from the autolaunched flow to be fully configured.

      This is what the screen flow looks like, once it is completed.

      For now, you can just ignore the loop section. This is there to ensure that there is a default selection for the first data table, when the flow first runs.

      This is the structure of the flow excluding that part:

    • Get all approval submission records for the recordId that will be provided as input into the flow.
    • Check if there are approval submissions found.
    • Display a screen saying “no records were found,” if the get returns null.
    • Display a reactive screen mainly consisting of three data tables with conditional visibility, calling an autolaunched flow as a screen action.

      Here is what this screen looks like:

      After you build, test, and activate the autolaunched flow, configure the screen action under the screen properties as shown below.

      How the Loop Section Works

      The first data table has an input parameter that determines the default selection when the flow first runs. This is a record variable representing one member of the collection record variable that supplies the data. You need to loop over the collection to get to that record variable. Follow these steps:

    • Loop over the collection record variable, which is the output of your get step. Sort the data by last modified date in your get step.
    • Assign the first member to a record variable.
    • Exit the loop without a condition. Connect the path to the next element outside the loop.
    • Add the resulting record variable to the default selection parameter under the configure rows section of your data table.

      This loop always runs once, setting the default selection to the most recent approval submission. This populates the related data tables when the flow first runs.
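Conceptually, the loop-once pattern just picks the first member of the sorted collection. A sketch, with hypothetical field names:

```python
def default_selection(submissions):
    """Mimic the 'loop once and exit' pattern: order by LastModifiedDate
    descending (as the Get element would) and take the first member as the
    data table's default selection. Returns None for an empty collection."""
    ordered = sorted(submissions, key=lambda s: s["LastModifiedDate"], reverse=True)
    return ordered[0] if ordered else None

rows = [
    {"Id": "a1", "LastModifiedDate": "2025-05-01T10:00:00Z"},
    {"Id": "a2", "LastModifiedDate": "2025-06-15T10:00:00Z"},
]
selected = default_selection(rows)  # the most recent submission
```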

      Build the Screen Action Autolaunched Flow for Related Lists

      The autolaunched flow receives a single approval submission recordId as input. Then it gets the related records and data the screen flow needs, and returns the data as output.

      Here is a screenshot of the autolaunched flow.

      This flow executes the following steps:

    • Gets the approval submission data.
    • Gets the user data for the submitter to resolve the full name.
    • Gets approval work items.
    • Checks for null and sets a boolean (checkbox) variable when the get returns null. The output uses this variable to control conditional visibility of the relevant data table. I found this method yields the best results.
    • Gets approval submission details.
    • Checks for null and sets a boolean variable when the get returns null. This variable is then used in the output to drive conditional visibility of the relevant data table.
    • Assigns the get results to output collection record variables.

      Final Deployment Steps

      After testing and activating the autolaunched flow, add it to the screen flow as the screen action. The flow input will be fed from the selection of the first data table. This step makes all the outputs of the autolaunched flow available to the screen flow. Using these outputs, build the additional two data tables and configure the conditional visibility.

      After testing and activating your screen flow, add the flow to the record page on a dedicated new tab (or to a section on an existing tab). Select the checkbox to pass the recordId to the flow. Note that this flow will work with any record for any object.

      Limitations and Suggested Improvements

      While this screen flow provides a lot of detail and customization options, it has two limitations:

    • By default, the data table does not resolve and display record names in lookup fields when you add these fields as columns. To address this, I added the submitter’s full name in a read-only text field for display on the screen. Workaround: create formula fields on the object and display those in the data table.
    • The data tables do not provide a clickable link. Combined with the limitation above, you can create a formula field on the object to address both of these gaps: show the record name and make it a clickable link. Here is the formula you need (shout out to Brad Weller for his contribution): HYPERLINK("/" & Id, Name, "_self")

      While I wanted to make these additions to these flows, I did not want to add custom fields to the objects. It should be your decision whether you want to do that or not.

      Install the Package to Your Dev Org

      Here is the second generation unprotected package for these two flows that you can install in your Dev Org:

      Install the Unprotected 2GP

      For a more visual walk through of how these flows are built, watch the Salesforce Break YouTube video below.

      Watch the video: https://www.youtube.com/watch?v=0QDdUNBh3qo

      With Salesforce phasing out legacy approvals, mastering Flow Approvals is essential to keep your org’s processes modern, flexible, and future-ready. Gain the confidence to handle any approval challenge with solutions that work seamlessly in real-world Salesforce environments HERE.

      Explore related content:

      Supercharge Your Approvals with Salesforce Flow Approval Processes

      When Your DMLs Have Criteria Conditions Other Than Id

      Start Autolaunched Flow Approvals From A Button

      Get Ready for the New Time Data Type – Summer ‘25 Flow Goodness

      #AutolaunchedFlow #FlowApprovals #FlowBuilder #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials

      Simplify Salesforce Integrations with Declarative Webhooks

      Salesforce continues to invest in tools that simplify integration tasks for administrators. Low-code setup for integrations is possible on the Salesforce platform today. However, the functionality is still dispersed across the platform, spanning several tools, which keeps the difficulty of setup relatively high. This is where Declarative Webhooks comes in. This platform makes inbound and outbound integrations easy and keeps all your configurations together in one single app.

      What is Declarative Webhooks?

      Declarative Webhooks is a native Salesforce application developed by Omnitoria. It allows admins to build significant, scalable integrations with external platforms without writing code. Built to work with REST APIs that use JSON or x-www-form-urlencoded data, the app makes it possible to configure both outbound and inbound connections from within Salesforce. It’s ideal for admins, developers, and operations teams looking to connect Salesforce to third-party tools quickly and securely.

      Declarative Webhooks currently holds a 5-star rating on the AppExchange, with positive feedback from users across industries.

      Key Declarative Webhooks Features

      Declarative Webhooks enables bidirectional integrations. You can send data out of Salesforce (outbound) by triggering callouts through Flow, Process Builder, Apex, custom buttons, or scheduled batches. You can also receive data from external systems (inbound) by defining endpoints within Salesforce that respond to external webhooks.

      Unlike standard Salesforce tools, Declarative Webhooks actually creates and hosts inbound URLs, eliminating the need for middleware and enabling real-time sync with external systems directly from your org.

      The interface is entirely point-and-click, making setup approachable even for non-developers. The app includes template-based configurations that streamline implementation without the need for custom Apex. Help and guidance are provided throughout the UI, each step of the way.

      Security and flexibility are top priorities. Declarative Webhooks supports a variety of authentication methods, including OAuth and Basic Authentication, and allows you to configure secure handling of credentials and external tokens.

      For more advanced use cases, the app includes features like retry logic, callout sequences, and detailed error handling. You can tailor integrations to your needs using scheduling tools or triggering logic from inside Salesforce.

      Real-World Use Cases

      Slack Webhook

      Simple use case: Trigger Slack actions via Slack workflows from Salesforce – Send a message to a channel and add a name to a Slack list.

      Now granted, this can also be achieved with Salesforce-Slack actions; however, I wanted to take this opportunity to trigger Slack workflows with webhooks and demo the Declarative Webhooks functionality with a simple use case.

I set up a Slack workflow that triggers based on a webhook. This workflow posts a message to a channel and adds the name passed into the workflow via the webhook to a list of contacts.

      You can see the configuration of the Slack workflow and the Slack output results below.

      How Did I Configure Declarative Webhooks to Achieve This Result?

First, you need to install Declarative Webhooks from the Salesforce AppExchange; I will give you the link further down in this post. The app is free to install and try.

• Complete the Slack configuration of the workflow. Slack will give you a webhook URL.
• Configure Declarative Webhooks and add the URL to the configuration page. Make sure you also add the domain URL to Salesforce Remote Site Settings.
• Test and activate your callout.
• Use one of the many methods available in Declarative Webhooks to trigger the callout from Salesforce.

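Under the hood, the callout Declarative Webhooks fires for this use case is just an HTTP POST of a JSON body to the URL Slack generated. Here is a rough Python sketch of that request; the URL and the "name" payload key are placeholders, and the real values come from your Slack workflow's webhook trigger configuration:

```python
import json
import urllib.request

# Hypothetical Slack workflow webhook URL. Slack generates the real one
# when you configure the workflow's webhook trigger.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/triggers/T0000/123/abcdef"

def build_slack_request(name: str) -> urllib.request.Request:
    """Build the POST request the callout would send.

    The payload key ("name" here) must match the variable defined in the
    Slack workflow's webhook trigger.
    """
    payload = json.dumps({"name": name}).encode("utf-8")
    return urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_slack_request("Jane Doe")
# urllib.request.urlopen(req)  # uncomment to actually fire the webhook
```

In Declarative Webhooks itself you never write this code; the template builds the equivalent request for you from your point-and-click mapping.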
      Inbound Call Template for Zoho Campaigns

      Use case: Zoho Campaigns can generate a webhook callout when a contact unsubscribes from an email list. When a contact unsubscribes from the list, make a callout to Salesforce and activate the Do Not Call checkbox on the contact.

      How Did I Configure Declarative Webhooks and Zoho Campaigns to Achieve This Outcome?

• Set up an Inbound Call Template on Declarative Webhooks. The magic of this platform is that it generates an external endpoint URL for you. You can choose whether or not to require authentication.

      • Create a webhook on the Zoho Campaign side and pass the Name and Email of the contact to Salesforce. Enter the URL generated by Declarative Webhooks here.

      • Build an autolaunched flow to update the checkbox on the matching record.

      • Test and activate your flow and Declarative Webhooks template.
      • Unsubscribe the contact from the list on the Zoho Campaigns side and see the magic unfold.
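To picture what the inbound side is doing: Zoho Campaigns posts a small JSON body to the generated endpoint, and the flow uses the email to find the matching contact and flip the checkbox. A minimal Python sketch of that handoff follows; the "name" and "email" keys are assumptions based on the mapping described above, not a fixed Zoho format:

```python
import json

def handle_unsubscribe(body: str) -> dict:
    """Parse a hypothetical inbound unsubscribe payload and return the
    update the autolaunched flow would apply.

    The flow itself would query Contact WHERE Email = :email and set the
    Do Not Call checkbox on the matching record.
    """
    payload = json.loads(body)
    return {
        "match_on": {"Email": payload["email"]},
        "update": {"DoNotCall": True},
    }

result = handle_unsubscribe('{"name": "Jane Doe", "email": "jane@example.com"}')
```

Again, Declarative Webhooks handles the endpoint and parsing declaratively; this sketch only shows the shape of the data moving through it.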

I really liked this functionality. The logs show whether the flow executed successfully. As a future enhancement, I would like the Declarative Webhooks logs to also show the output variable values coming from the flow.

      Pricing Overview

      Declarative Webhooks is free to install and use in Salesforce sandbox environments indefinitely. In a production or developer org, users get a 30-day free trial. After that, the app remains free for basic use, up to 100 inbound and 100 outbound calls per month, using one outbound and one inbound template.

      For organizations that need more capacity or advanced functionality, paid plans are available. These plans scale with usage and support additional templates, retries, and enhanced features. Nonprofit discounts are available, making the app accessible to mission-driven organizations.

      Follow this link to find out more about the product and try it yourself.

      Why Declarative Webhooks?

      This app removes the need for manual data entry and reduces the likelihood of human error. It lets teams centralize their business operations within Salesforce, replacing disconnected workflows with streamlined automations. Whether you’re connecting to popular SaaS tools or custom-built systems, Declarative Webhooks empowers teams of all skill levels to build reliable integrations that scale with their business.

      How to Get Started

      You can install Declarative Webhooks directly from the AppExchange. The installation process is quick, and the setup guide walks you through each step. Start experimenting in a sandbox or production trial, and configure your first outbound or inbound connection using the built-in templates. Whether you’re an admin looking to eliminate duplicate entries or a developer needing a fast integration framework, this tool provides the support you need to get started quickly. 

      Final Thoughts

I liked how Declarative Webhooks brings various integration methods together in one app, and I especially liked the inbound call functionality. Ease of setup, flexible pricing, and native integration with Salesforce automation tools are attractive features for Salesforce Admins. If you are in the market for integration solutions, I recommend you check out Declarative Webhooks by Omnitoria here.

      This post was sponsored by Omnitoria.

      Explore related content:

      Getting Started with Salesforce Data Cloud: Your Roadmap to Unified Customer Insights

      How To Use Custom Permissions In Salesforce Flow

      Create Document Zip Archives in Salesforce Flow

      Dynamically Create Documents Using PDF Butler

      #DeclarativeWebhooks #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials

      When Your DMLs Have Criteria Conditions Other Than Id

      The Update Records element in Salesforce Flow is a powerful tool that allows you to modify existing records without writing any code. It’s commonly used to change field values and update statuses. You can configure it to update a specific record (like a record from the trigger or a record you’ve retrieved in a prior element), or you can set conditions to update multiple records that meet certain criteria. Best practice is to keep your updates efficient. Limit the number of records updated when possible, and always ensure that your flow logic avoids unnecessary updates to prevent hitting governor limits or creating infinite loops. Use it thoughtfully to streamline processes and maintain clean, accurate data.

      Update Records

      When you update records, there are three ways you can configure the update element:

• Update using Id(s): Your update element can point to one record Id, or to multiple record Ids using the IN operator. This is efficient, as the record(s) are uniquely identified, and it consumes one DML against your governor limit.
• Update using a collection: This method is efficient because the update element always consumes one DML against your governor limit, regardless of how many records you are updating in one shot. You can update up to 10,000 records in one update element.
• Update using criteria conditions for field values other than Id: When updating multiple records, you can also set conditions and update all the records that meet them. In this case, Salesforce queries the database for the records to update, then performs the update. This method therefore consumes one SOQL and one DML against your governor limits. Note that the conditions may match one record or none at all.

Update Using Criteria Conditions for Field Values Other Than Id

      Let’s expand on the last method. For an inactive account, you may want to update all open cases to closed status. In a flow we could configure the update element with the following conditions:

      • AccountId = Inactive Account
      • Closed = false (case status is not closed)

      And for these accounts the field update that will be performed is as follows:

      Status = Closed (set status to closed)

      In this scenario, what Salesforce will do is query and find the records using the two conditions listed above (SOQL) and set the Status field on these records to Closed (DML).
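As a rough mental model, here is that criteria-based update sketched in Python, with a toy counter for what each configuration consumes against your limits. The record shapes and the limit accounting are illustrative only, not real Salesforce APIs:

```python
# Toy model of Flow governor-limit consumption for the three Update Records
# configurations described above.

def update_by_ids(ids):
    # Records are uniquely identified, so no query is needed.
    return {"soql": 0, "dml": 1}

def update_collection(records):
    # One bulk DML regardless of collection size (up to 10,000 records).
    return {"soql": 0, "dml": 1}

def update_by_criteria(all_records, predicate, field_updates):
    # Salesforce first queries for the matching records (1 SOQL)...
    matches = [r for r in all_records if predicate(r)]
    # ...then applies the field updates in one bulk DML.
    for r in matches:
        r.update(field_updates)
    return {"soql": 1, "dml": 1}

cases = [
    {"Id": "500x1", "AccountId": "001A", "IsClosed": False, "Status": "New"},
    {"Id": "500x2", "AccountId": "001B", "IsClosed": False, "Status": "New"},
]
# Close all open cases on the inactive account "001A".
usage = update_by_criteria(
    cases,
    lambda c: c["AccountId"] == "001A" and not c["IsClosed"],
    {"Status": "Closed"},
)
```

The point of the sketch is the return values: only the criteria-based path pays the extra SOQL on top of the DML.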

Now, is this a bad thing? Not necessarily. It is a little-known fact that you should keep in mind when optimizing your flow for governor limit usage.

What is the alternative? You could perform the update using one of the other methods listed above. Let’s look at these alternatives in detail:

      Update Using Id(s)

If you wanted to use this method, you could get the records according to the criteria conditions, extract the Ids into a text collection using the Transform element, and perform the update using the IN operator. This alternative is more complicated and brings no efficiencies: the Get element still consumes one SOQL, and the update still consumes one DML.

      Update Using a Collection

You could get a collection of records using the conditions, loop through each item to update the case status (or possibly use the Transform element to update the status in one shot, depending on your use case), then update using the processed collection. This is too complicated, and it still consumes one SOQL and one DML.

      Conclusion

      Updates that include conditions beyond specifying the Id of the record consume one SOQL and one DML against your execution governor limits; make sure you check and control your governor limit usage.

      Explore related content:

      Salesforce Flow Best Practices

      Flow Naming Convention Tips

      Can You Start With a Decision Inside Your Record-Triggered Flow?

      How Many Flows Per Object?

      #Automation #DML #Salesforce #SalesforceAdmins #SalesforceDevelopers #SalesforceTutorials #UpdateElement