Amazon AppFlow: bidirectional sync between 2 Salesforce orgs

Posted 2025-02-04 20:15:49

I need to create a bidirectional sync between 2 Salesforce orgs using Amazon AppFlow. All the relationship records need to stay in sync between these orgs as well. I'm thinking of creating an External ID on each record, for every object a flow is created for, to make sure the relationships are preserved across orgs. What is the best way to do a bidirectional sync?


Comments (1)

坚持沉默 2025-02-11 20:15:49

AWS already has fairly similar article on this: https://aws.amazon.com/blogs/apn/using-amazon-appflow-to-achieve-bi-directional-sync-between-salesforce-and-amazon-rds-for-postgresql/

That said, my architecture below is serverless, since I'm cheap and don't want to pay EC2 costs.

(architecture diagram)

I recommend one source of truth in whatever you're doing. I'd personally use a centralized DynamoDB table with all the fields/values you intend to sync for each object. Then you can have event-driven Lambdas push the data to S3 as CSVs, and those CSV updates get pushed on to Salesforce by AppFlow for you.

You should have a single DynamoDB table for all of this, or a separate table per object, but I don't see the advantage of multiple tables. You only need one S3 bucket; just use multiple folders.

Your DB structure would be something like below:

{
    "ID": "<randomly generated GUID>",
    "SF_ID": "<Salesforce ID>",
    "DEST_SF_ID": "<SF ID when created in the other org>",
    "SOURCE_ORG": "<SOURCE_ORG_ID>",
    "record_details": {
        "...": "<all the SF fields>"
    }
}
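
As a minimal sketch of writing that structure, assuming a table named SalesforceSyncRecords with "ID" as the partition key (the table name and function name are mine, not from the answer), a Lambda or script could upsert records like this:

import uuid
import boto3

# "SalesforceSyncRecords" is an assumed table name; "ID" is the partition key.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SalesforceSyncRecords")

def upsert_record(sf_id, source_org_id, fields, dest_sf_id=""):
    """Store one Salesforce record in the central table."""
    table.put_item(
        Item={
            "ID": str(uuid.uuid4()),        # randomly generated GUID
            "SF_ID": sf_id,                 # ID in the org the record came from
            "DEST_SF_ID": dest_sf_id,       # filled in once the other org creates it
            "SOURCE_ORG": source_org_id,
            "record_details": fields,       # all the SF fields for this object
        }
    )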

S3 Folder Structure:

root/
    SF_ORG_1 /
        Inbound
        Outbound
    SF_ORG_2 /
        Inbound
        Outbound

You'd need a Lambda to consume the DynamoDB trigger events and know which S3 bucket folder to push to.
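
A rough sketch of that first Lambda, assuming the DynamoDB stream is enabled with NEW_IMAGE, a bucket named sf-sync-bucket, and the folder layout above; the bucket name and org IDs are placeholders, and I'm reading "Inbound" as the prefix the AppFlow flow into that org picks up from:

import csv
import io
import boto3

s3 = boto3.client("s3")
BUCKET = "sf-sync-bucket"  # assumed bucket name

def handler(event, context):
    """DynamoDB stream handler: route each change to the other org's Inbound folder."""
    for record in event.get("Records", []):
        image = record["dynamodb"].get("NewImage")
        if not image:
            continue  # e.g. a REMOVE event carries no new image
        source_org = image["SOURCE_ORG"]["S"]
        # Push to the org that did NOT originate the change.
        dest_folder = "SF_ORG_2" if source_org == "ORG_1_ID" else "SF_ORG_1"
        fields = {k: v.get("S", "") for k, v in image["record_details"]["M"].items()}

        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=list(fields.keys()))
        writer.writeheader()
        writer.writerow(fields)

        key = f"{dest_folder}/Inbound/{image['ID']['S']}.csv"
        s3.put_object(Bucket=BUCKET, Key=key, Body=buf.getvalue().encode("utf-8"))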

You'd need another Lambda to consume events of the S3 buckets. You can have simple branching in one lambda to know if S3_Bucket_Folder_1 is from Org_1 and S3_Bucket_Folder_2 is from Org_2. This would sync up the DynamoDB and know to push a CSV to the other bucket folder.
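
A sketch of that second Lambda, with the branching done on the object key prefix. The org IDs, table name, and keying the item off the row's Id are my assumptions on top of the answer; here the Lambda only updates DynamoDB, and the stream Lambda above then does the actual forwarding to the other org's Inbound folder:

import csv
import io
import urllib.parse
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("SalesforceSyncRecords")  # assumed table name

def handler(event, context):
    """S3 ObjectCreated handler for the Outbound prefixes written by AppFlow."""
    for rec in event["Records"]:
        bucket = rec["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(rec["s3"]["object"]["key"])

        # Simple branching on the folder prefix to work out which org the file came from.
        if key.startswith("SF_ORG_1/Outbound/"):
            source_org = "ORG_1_ID"
        elif key.startswith("SF_ORG_2/Outbound/"):
            source_org = "ORG_2_ID"
        else:
            continue  # ignore anything outside the Outbound folders

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        for row in csv.DictReader(io.StringIO(body)):
            # Sync the central table; a real version would look up the existing item
            # (e.g. by SF_ID via a GSI) instead of keying off the row's Id like this.
            table.put_item(Item={
                "ID": row["Id"],
                "SF_ID": row["Id"],
                "SOURCE_ORG": source_org,
                "record_details": row,
            })
        # The DynamoDB stream Lambda above then pushes a CSV to the other org's Inbound folder.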

To make sure you don't have cyclical calls on the Lambdas, make sure you have directories for inbound and outbound pushes. The Flows allow you to set the Bucket prefix.
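
One way to wire that up (the bucket name and Lambda ARN are placeholders) is to subscribe the S3 Lambda only to ObjectCreated events under the Outbound prefixes, so the CSVs you drop into Inbound for AppFlow never re-trigger it:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="sf-sync-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:s3-sync-handler",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "prefix", "Value": f"{org}/Outbound/"}]}
                },
            }
            for org in ("SF_ORG_1", "SF_ORG_2")
        ]
    },
)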

Then you just listen for create, update, and delete events. I personally haven't dealt with deletion events in AppFlow, but worst case you just make a Connected App and use the Salesforce REST API to call delete.
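
For that delete case, a minimal sketch of calling the standard sObject REST endpoint with an OAuth token from the Connected App (the instance URL, API version, and token handling are placeholders):

import requests

INSTANCE_URL = "https://yourorg.my.salesforce.com"   # target org's instance URL
ACCESS_TOKEN = "<OAuth access token from the Connected App>"

def delete_record(sobject, record_id):
    """Delete one record in the other org via the sObject REST endpoint."""
    resp = requests.delete(
        f"{INSTANCE_URL}/services/data/v59.0/sobjects/{sobject}/{record_id}",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    resp.raise_for_status()  # returns 204 No Content on success

# e.g. delete_record("Account", "001XXXXXXXXXXXXXXX")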
