- 1. Introduction
- 2. About Data Gateway
- 3. Key Features
- 4. Glossary
- 5. System Requirements
- 6. Application Access
- 7. Roles
- 8. Dashboard Reports (Statistics)
- 9. Cloud Configurations
- 10. Access Management
- 11. Endpoint Management Module
- 11.1 Create Endpoint
- 11.2 Manage Endpoint
- 11.3 Protocols
- 11.3.1 FTP (File Transfer Protocol)
- Pull-Push
- Push-Pull
- Push-Push Scenario
- 11.3.2 FTPS (FTP Secure)
- Pull-Push
- Push-Pull
- Push-Push Scenario
- 11.3.3 SFTP (SSH File Transfer Protocol)
- Pull-Push
- Push-Pull
- Push-Push Scenario
- 11.3.4 API Based File Transfers
- 11.3.4.1 Pull-Push
- 11.3.4.2 Scenario: File Transfer through API, where You connect to Remote Server
- 11.3.4.3 Scenario: File Transfer through API, where Partner connects to Your Server
- 11.3.4.4 Push-Pull
- 11.3.5 AS2 (Applicability Statement 2)
- 11.3.5.1 AS2 Organizations
- 11.3.5.2 AS2 Endpoints
- 11.3.5.3 AS2 Relationships
- 11.4 GUID
- 12. File Management Module
- 13. Settings
- 13.1 Scheduler Configuration
- 13.2 PGP Manager
- 13.3 Application Configuration
- 13.4 Queue Management
- 13.4.1 Queue Management – Field Descriptions
- 13.4.2 Operational Summary
- 13.4.3 Key Benefits
- 13.5 Priority Handling
- 13.5.1 Priority Handling – Field Descriptions
- 13.5.2 Operational Summary
- 13.5.3 Key Benefits
- 13.6 Adapter Configuration
- 13.6.1 Adapter Configurations – Field Descriptions
- 13.6.2 Operational Behavior Example
- 13.6.3 Key Benefits
- 13.7 License Module
- 13.7.1 License Management – Field Descriptions
- 13.7.2 Operational Workflow
- 13.7.3 Key Benefits
- 14. Data Gateway Components
- 15. Connectivity and Authentication
- 15.1 Scenario: File Transfer through File Client, where Partner Connects to Your Server
- 15.2 Scenario: File Transfer through File Client, where You connect to Partner’s Remote Server
- 15.3 Push-Push Scenario
- 15.4 Scenario: File Transfer through AS2, push to partner and push to gateway
- 15.5 IP Allowlist & Rate Limiting
- 15.5.1 IP allowlisting
- 15.5.2 Rate Limiting
- 16. SAML Authentication and Authorization with Okta
- 16.1 What is SAML?
- 16.2 What is SAML Used For?
- 16.3 How SAML Works
- 16.4 Configuring SAML Authentication and Authorization in Okta
- 16.4.1 Prerequisites
- 16.4.2 Steps to Configure SAML in Okta
- 16.4.3 Download Identity Provider Metadata
- 16.4.4 Application Configuration (application.yml)
- 16.5 User Management for IDP Users
- 16.6 Common Troubleshooting Issues
- 17. Alert Management
- 17.1 File Not Received (FNR) Alert
- 17.2 File Not Received (FNR) Alert Timing Options
- 17.2.1 FNR Current Day Minutes
- 17.2.2 FNR Current Day Hours Scenario
- 17.2.3 FNR Daily Days Scenario
- 17.2.4 FNR Daily Weekdays Scenario
- 17.2.5 FNR Weekly Between Scenario
- 17.2.6 FNR Weekly Day of Week Scenario
- 17.2.7 FNR Monthly Specific Day Scenario
- 17.2.8 FNR Monthly On Scenario
- 17.2.9 FNR Monthly Interval Check Scenario
- 17.2.10 FNR Quarterly Scenario
- 17.2.11 FNR Yearly Every Scenario
- 17.2.12 FNR Yearly On The Scenario
- 17.3 File Load Alert (FLA Alert)
- 17.4 Manage Alerts
- 18. Cloud-Cloud File Transfer
- 19. OAuth 2.0 Authentication
- 20. ICAP Integration
- 21. Data Gateway APIs
9. Cloud Configurations
Cloud Configurations are a pivotal component of the Data Gateway, providing users with the ability to seamlessly integrate with various cloud providers for efficient storage and retrieval of files. This section guides users through the configuration process for different cloud providers, ensuring a secure and reliable data transfer experience.
9.1 GCS Cloud Configurations
9.1.1 Service account creation
1. Sign in to the [Google Cloud Console](https://console.cloud.google.com/).
2. Select an existing project or create a new Google Cloud project.




6. Create and download a JSON key file.
7. Go to KEYS -> ADD KEY.

8. Clicking ADD KEY shows two options:
- Create new key
- Upload existing key
9. The key can be generated as a .JSON or .P12 file, as shown below.

10. Select JSON as the key type and click Create; a JSON key file is downloaded, as shown below.


Example .json file
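The downloaded key file is a small JSON document. As a hedged sketch (the field names below are the standard ones Google emits; all values are placeholders, not real credentials), a loader might validate the file before uploading it to the gateway:

```python
import json

# Fields a Google service-account key file is expected to contain.
REQUIRED_FIELDS = {
    "type", "project_id", "private_key_id", "private_key",
    "client_email", "client_id", "token_uri",
}

def validate_service_account_key(raw: str) -> dict:
    """Parse a service-account key file and check the expected fields."""
    key = json.loads(raw)
    missing = REQUIRED_FIELDS - key.keys()
    if missing:
        raise ValueError(f"key file is missing fields: {sorted(missing)}")
    if key["type"] != "service_account":
        raise ValueError("not a service-account key")
    return key

# Illustrative sample with placeholder values.
sample = json.dumps({
    "type": "service_account",
    "project_id": "my-gcp-project",
    "private_key_id": "0123abcd",
    "private_key": "-----BEGIN PRIVATE KEY-----\nMIIB...\n-----END PRIVATE KEY-----\n",
    "client_email": "dg-sa@my-gcp-project.iam.gserviceaccount.com",
    "client_id": "123456789",
    "token_uri": "https://oauth2.googleapis.com/token",
})
key = validate_service_account_key(sample)
```

Keep the real file out of version control; it is the credential the Auth File field in section 9.1.4 consumes.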
9.1.2 Pub/Sub Creation
In the Google Cloud console, open the Pub/Sub section and set up the topics and subscriptions.

9.1.2.1 Topic Creation:
1. Click Topics -> Create Topic and provide the Topic ID.

2. When you create or update a topic, you must specify its properties.
- Add a default subscription. Adds a default subscription to the Pub/Sub topic. You can create another subscription for the topic after the topic is created. The default subscription has the following properties:
- A subscription ID made of the topic ID with a -sub suffix
- Pull delivery type
- Message retention duration of seven days
- Expiration after 31 days (about 1 month) of inactivity
- Acknowledgment deadline of 10 seconds
- Immediate retry policy
- Use a customer-managed encryption key (CMEK). Specifies if the topic is encrypted with a CMEK. Pub/Sub encrypts messages with Google-owned and Google-managed keys by default. If you specify this option, Pub/Sub uses the envelope encryption pattern with CMEK. In this approach, Cloud KMS does not encrypt the messages. Instead, Cloud KMS encrypts the Data Encryption Keys (DEKs) that Pub/Sub creates for each topic. Pub/Sub encrypts the messages using the newest DEK that was generated for the topic. Pub/Sub decrypts the messages shortly before they are delivered to subscribers.
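The default-subscription properties listed above can be summarized in code. This is an illustrative sketch of the documented defaults, not a Pub/Sub API call; the topic ID is an example:

```python
# Sketch of the subscription Pub/Sub attaches when "Add a default
# subscription" is checked. Property names/values mirror the list above.
def default_subscription(topic_id: str) -> dict:
    return {
        "subscription_id": f"{topic_id}-sub",  # topic ID plus a -sub suffix
        "delivery_type": "pull",
        "message_retention_days": 7,
        "expiration_days": 31,                 # after 31 days of inactivity
        "ack_deadline_seconds": 10,
        "retry_policy": "immediate",
    }

sub = default_subscription("dg-file-events")
```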
9.1.2.2 Create Subscription:
1. Click Subscriptions -> Create Subscription and provide the Subscription ID.



4. Schema. A schema is a format that the message data field must follow. A schema is a contract between the publisher and subscriber that Pub/Sub enforces. Topic schemas help standardize message types and permissions to allow them to be consumed by different teams in your organization. Pub/Sub creates a central authority for message types and permissions.
- Enable ingestion. Enabling this property lets you ingest streaming data from external sources into a topic so that you can leverage the features of Google Cloud.
- Message retention duration. Specifies how long the Pub/Sub topic retains messages after publication. After the message retention duration is over, Pub/Sub might discard the message regardless of its acknowledgment state. Message storage fees are charged for storing all messages published on the topic.
- Default = Not enabled
- Minimum value = 10 minutes
- Maximum value = 31 days (about 1 month)
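The retention bounds above (10 minutes to 31 days) lend themselves to a small validation helper. A minimal sketch, assuming durations are handled as `timedelta` values:

```python
from datetime import timedelta

# Bounds from the list above: minimum 10 minutes, maximum 31 days.
MIN_RETENTION = timedelta(minutes=10)
MAX_RETENTION = timedelta(days=31)

def validate_retention(duration: timedelta) -> timedelta:
    """Reject retention durations outside the documented Pub/Sub bounds."""
    if not (MIN_RETENTION <= duration <= MAX_RETENTION):
        raise ValueError(
            f"retention must be between {MIN_RETENTION} and {MAX_RETENTION}"
        )
    return duration

validate_retention(timedelta(days=7))  # within bounds
```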
9.1.3 Bucket Creation
Step 1: Log in to Google Cloud Console
1. Open your web browser and go to Google Cloud Console.
2. Log in with your Google account.
Step 2: Select Your Project
1. Click the project selector dropdown at the top.
2. Choose an existing project or create a new project.
3. Ensure billing is enabled for your project.
Step 3: Navigate to Cloud Storage
- In the left sidebar menu, go to Buckets and click “Create” at the top.
Step 4: Enter a Unique Bucket Name
- The bucket name must be globally unique across all of Google Cloud.
- It must:
- Contain only lowercase letters, numbers, dashes (-), and underscores (_).
- Be 3-63 characters long.
- Not start or end with a dash (-).
- Example: my-gcs-bucket-123
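The naming rules in Step 4 can be checked programmatically. A minimal sketch covering only the rules listed above (Google enforces further restrictions, such as reserved prefixes, that are out of scope here):

```python
import re

# Allowed characters and length per the rules above:
# lowercase letters, numbers, dashes, underscores; 3-63 characters.
BUCKET_NAME_RE = re.compile(r"[a-z0-9_-]{3,63}")

def is_valid_bucket_name(name: str) -> bool:
    return (
        bool(BUCKET_NAME_RE.fullmatch(name))
        and not name.startswith("-")   # must not start with a dash
        and not name.endswith("-")     # must not end with a dash
    )

print(is_valid_bucket_name("my-gcs-bucket-123"))  # True
```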
Step 5: Select a Storage Location
The location determines where your data is stored and affects performance and cost. You have three options:
- Region (e.g., us-central1): Data is stored in a single region.
- Dual-region (e.g., nam4): Data is replicated in two locations.
- Multi-region (e.g., US): Data is stored across multiple locations for higher availability.
Step 6: Choose a Storage Class
The storage class determines the cost and retrieval speed:
- Standard (default) – Best for frequently accessed data.
- Nearline – Best for data accessed once a month.
- Coldline – Best for data accessed once a year.
- Archive – Best for long-term storage (10+ years).
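Choosing among these classes is a cost/access-frequency tradeoff. A hedged sketch of a chooser; the numeric thresholds are illustrative interpretations of the guidance above, not billing rules:

```python
def suggest_storage_class(accesses_per_year: float) -> str:
    """Map expected access frequency to a GCS storage class."""
    if accesses_per_year >= 52:   # roughly weekly or more: frequently accessed
        return "STANDARD"
    if accesses_per_year >= 12:   # about once a month
        return "NEARLINE"
    if accesses_per_year >= 1:    # about once a year
        return "COLDLINE"
    return "ARCHIVE"              # long-term, rarely touched data

suggestion = suggest_storage_class(12)  # monthly access
```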
Step 7: Configure Access Control
You can choose between:
- Uniform (Recommended) – Permissions apply at the bucket level.
- Fine-grained – Permissions apply to individual objects.
Step 8: Set Advanced Settings (Optional)
8.1. Soft-delete policy (for data recovery)
- When enabled, objects deleted from the bucket are not immediately erased. Instead, they are retained for a specified period before permanent deletion.
- This allows recovery of mistakenly deleted objects within the retention period.
Options:
- Use default retention duration -> The default retention period is 7 days unless changed by an administrator.
- Set custom retention duration -> Allows you to specify a custom duration for how long deleted objects should be retained.
8.2. Object Versioning (for version control)
- When enabled, Cloud Storage keeps multiple versions of an object, allowing rollback to previous versions.
- Useful for protecting against accidental overwrites or deletions.
Fields:
- Max. number of versions per object -> Defines how many previous versions of an object are stored.
- Setting this to 1 allows only one previous version (not recommended for full overwrite protection).
- Increasing this number ensures more backup versions are kept.
- Expire non-current versions after X days -> Automatically deletes older object versions after the specified period.
- Helps manage storage costs.
- Recommended: 7 days for Standard storage class.
8.3. Retention (for compliance)
- Prevents objects from being deleted or modified for a specific period.
- Useful for regulatory compliance and data protection policies.
Options:
- Set bucket retention policy -> Applies a uniform retention period for all objects in the bucket.
- Enable object retention -> Allows retention settings at the individual object level.
Step 9: Enable Encryption (GCP-managed or Customer-managed keys) and Create the Bucket
- By default, Google provides a Google-managed encryption key. Click “Create” to finalize the setup.
9.1.4 Google Cloud Configuration in Data Gateway
Cloud provider configurations are set up as part of partner onboarding. The Data Gateway allows users to store partner-related keys or passwords in cloud-native secret managers, enhancing security and flexibility. In the absence of specific configurations for cloud secret managers, the keys or passwords are encrypted and stored in the database by default.
To leverage cloud-native Secret Managers, users need to enable the service by clicking on the ‘Secret Manager’ checkbox. Authentication is then conducted based on the configured settings. It’s important to note that, currently, Data Gateway supports single user accounts for cloud providers.
1. Navigate to Cloud Configurations.
2. Click the add icon (+) displayed on the GCS Configurations tab.
3. Select the GCS Storage Service checkbox.
- This option is to enable Google Cloud Storage (GCS) service integration.
- When checked, it allows users to configure storage buckets and Pub/Sub topics in Google Cloud.
- Use this when your service needs to store data in Google Cloud Storage.
4. Optionally, select the GCS Secret Manager Service checkbox.
- This option enables the Google Cloud Secret Manager service.
- It allows users to store, manage, and access sensitive information such as API keys, passwords, or certificates securely.
- Only select this option if secrets should be kept in the cloud secret manager rather than the database.
5. Configuration Details:
- Storage Service Name: Provide any unique name for the identification of the GCS Storage Service.
- Auth File: Users need to provide the Google-Auth.json file.
- Project ID: Provide the Project ID which was created in GCS.
- Topic Name: Provide the Pub/Sub Topic Name where messages are published whenever an event happens in the bucket (such as object creation, update, or deletion). Currently, only object-creation events are monitored.
- Subscription Name: Provide the Subscription Name associated with the Pub/Sub topic. Subscriptions allow services to receive messages from the topic.
- Region: Provide the region where the GCS Storage is hosted.
6. Enable Versioning:
- You can select this to enable Object Versioning in GCS.
- It retains older versions of objects whenever they are updated or deleted.
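Because only object-creation events are acted on (step 5 above), incoming Pub/Sub notifications have to be filtered. GCS notifications carry an `eventType` attribute, and new objects arrive as `OBJECT_FINALIZE`; the message dictionaries below are illustrative samples, not real notifications:

```python
# Sketch: keep only object-creation notifications from the configured
# Pub/Sub subscription; everything else (updates, deletions) is ignored.
def is_object_created(message_attributes: dict) -> bool:
    return message_attributes.get("eventType") == "OBJECT_FINALIZE"

created = {"eventType": "OBJECT_FINALIZE", "objectId": "inbound/file.csv"}
deleted = {"eventType": "OBJECT_DELETE", "objectId": "inbound/file.csv"}
```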
9.2 Amazon Web Services (AWS) Configuration
9.2.1 IAM User Creation
1. Sign in to the [AWS Management Console](https://aws.amazon.com/console/).
2. Navigate to “IAM” -> “Users”.

3. Click “Add user” and provide a username.

4. Assign “Programmatic access” and click Next.

5. Attach necessary policies (e.g., AmazonS3FullAccess).

6. Once the user is created, open the user details page and navigate to the Security credentials tab. Scroll down to the Access keys section and click Create access key.


7. Confirm the access key creation and download the key file containing the Access Key ID and Secret Access Key.

9.2.2 Steps to Configure AWS
1. Navigate to Cloud Configurations.
2. Click the add icon (+) displayed on the AWS Configurations tab.
3. Select the AWS Storage Service checkbox.
- This option is to enable AWS Cloud Storage service integration.
- When checked, it allows users to configure storage buckets in AWS.
4. Optionally, select the AWS Secret Manager Service checkbox.
- This option enables the AWS Secrets Manager service.
- It allows users to store, manage, and access sensitive information such as API keys, passwords, or certificates securely.
- Only select this option if secrets should be kept in the cloud secret manager rather than the database.
5. Configuration Details:
- Bucket Name: Provide the name of the AWS S3 bucket where files will be stored. This bucket acts as a container for data that needs to be archived or backed up. You can provide a new bucket name or can provide the existing bucket name.
- Access Key: The unique AWS Access Key ID used to authenticate API requests. It is required to grant programmatic access to AWS services.
- Secret Key: The secret key associated with the Access Key ID. This key ensures secure authentication when accessing AWS resources.
- Region: Provide the AWS region where the S3 bucket is hosted. The region selection ensures the data is stored in the correct geographical location based on performance and compliance requirements.
- Account Id: The unique AWS account ID to identify the AWS user or account. It helps to differentiate between multiple AWS accounts.
- Queue Name: The name of the AWS SQS queue used for processing messages or events between services. This queue handles asynchronous tasks related to file storage; any custom name can be provided.
- Queue URL: The unique URL endpoint for the queue. This URL is used by applications to send and receive messages from the queue. It is automatically generated when the queue is created in services like AWS SQS.
- Queue ARN: The Amazon Resource Name (ARN) is a globally unique identifier assigned to the queue in AWS. It provides a standardized way to reference the queue across AWS services and policies, ensuring proper access control and integration.
6. Archive File Interval:
Defines the lifecycle policy for AWS: the number of days after which files are automatically archived from the storage bucket. This bounds the period within which files can be replayed.
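The Queue URL and Queue ARN fields follow AWS's standard formats, so they can be derived from the region, account ID, and queue name; the Archive File Interval corresponds to an S3 lifecycle transition rule. A sketch with placeholder account and region values:

```python
# AWS's documented formats for SQS queue URLs and ARNs.
def sqs_queue_url(region: str, account_id: str, queue_name: str) -> str:
    return f"https://sqs.{region}.amazonaws.com/{account_id}/{queue_name}"

def sqs_queue_arn(region: str, account_id: str, queue_name: str) -> str:
    return f"arn:aws:sqs:{region}:{account_id}:{queue_name}"

# Sketch of an S3 lifecycle configuration that transitions objects to
# Glacier after the given number of days (the shape S3's lifecycle API
# expects); the rule ID is a hypothetical name.
def archive_lifecycle_rule(days: int) -> dict:
    return {
        "Rules": [{
            "ID": "data-gateway-archive",
            "Status": "Enabled",
            "Filter": {},
            "Transitions": [{"Days": days, "StorageClass": "GLACIER"}],
        }]
    }

url = sqs_queue_url("us-east-1", "123456789012", "dg-file-events")
arn = sqs_queue_arn("us-east-1", "123456789012", "dg-file-events")
```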
9.3 Microsoft Azure Configuration
9.3.1 Storage Account Creation
1. Sign in to the [Azure Portal](https://portal.azure.com/).
2. Navigate to “Storage accounts”.

3. Click “+ Add” and fill in the required details to create a new storage account, or choose an existing one.
4. Navigate to the storage account -> “Access keys”.

5. Copy the connection string shown under “Access keys”.

9.3.2 Steps to Configure Azure:
1. Navigate to Cloud Configurations.
2. Click the add icon (+) displayed on the Azure Configurations tab.

3. Select the Azure Storage Service checkbox.
- This option enables Azure Blob Storage service integration.
- When checked, it allows users to configure storage containers in Azure.
4. Storage Service Name:
- Provide any unique name to identify the storage service configuration.
5. Connection String:
- The Azure Storage Account Connection String, which is required to authenticate and connect to the Azure Blob Storage service.
6. Container Name:
- The name of the Azure Blob Storage Container where the files will be stored.
7. Region:
- Specifies the Azure Region where the storage account is hosted, such as East US, West Europe, or Central India.
8. Archive File Interval:
- Provide the lifecycle policy for Azure Storage, facilitating the replay of files within a specified period. Specifies the number of days after which files will be automatically archived from the storage bucket.
9. Event Hub:
- The Event Hub Name that will be used to publish or subscribe to events.
10. Event Connection String:
- The Azure Event Hub Connection String used to connect to the Event Hub for monitoring file activities or real-time data streaming.
11. Optionally, select the Azure Secret Manager Service checkbox.
- This option enables Azure's secret management service (Key Vault).
- It allows users to store, manage, and access sensitive information such as API keys, passwords, or certificates securely.
- Only select this option if secrets should be kept in the cloud secret manager rather than the database.
12. Key Vault Name:
- Provide the Azure Key Vault where sensitive information like secrets, certificates, and keys are securely stored.
13. Subscription Id:
- The Subscription Id is a unique identifier assigned to your Azure subscription.
- This ID links the Key Vault service to the specific Azure subscription under which the services are being used.
- You can find this Subscription Id in the Azure Portal under Subscriptions.
14. Resource Group Name:
- A Resource Group Name is a container that holds related Azure resources for your application.
- It organizes all the Azure services like Key Vault, Storage Accounts, and Virtual Machines into a group.
- Enter the exact resource group name under which the Key Vault is created.
15. App Client Id:
- The App Client Id is the Application (Client) ID registered in Azure Active Directory.
- This ID identifies the application that needs to access Azure resources.
- You can get this ID from the Azure App Registration under Azure Active Directory.
16. Client Secret:
- The Client Secret is a secret key generated during the App Registration process.
- It acts like a password and is used to authenticate the application to access the Azure services.
- This value needs to be copied while generating the secret in the Azure portal, as it will not be visible again.
17. Tenant Id:
- The Tenant Id is the Directory ID of your Azure Active Directory.
- It identifies the Azure AD instance where your application is registered.
- You can find this value in the Azure Active Directory Overview page.
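The Connection String in step 5 above is a semicolon-separated list of key=value pairs. A minimal parser sketch, useful for checking the account name and key before saving the configuration; the sample values are fake:

```python
def parse_connection_string(conn: str) -> dict:
    """Split an Azure Storage connection string into its key=value parts."""
    parts = {}
    for segment in conn.split(";"):
        if not segment:
            continue
        # partition on the FIRST "=" so base64 keys ending in "=" survive
        key, _, value = segment.partition("=")
        parts[key] = value
    return parts

sample = (
    "DefaultEndpointsProtocol=https;"
    "AccountName=mystorageacct;"
    "AccountKey=abc123==;"
    "EndpointSuffix=core.windows.net"
)
cfg = parse_connection_string(sample)
```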
9.4 IBM Cloud Configuration
9.4.1 Service Credential Creation
1. Sign in to the [IBM Cloud Console](https://cloud.ibm.com/).
2. Navigate to “Resource List” -> “Storage” –> “Cloud Object Storage”.

3. Click on your instance and navigate to **Service credentials**.

4. Click “New credential” and provide a name.

5. After clicking New credential, a popup window will appear as shown below. Fill in the required details and click Add to proceed.

6. Save the generated credentials (API key, Access Key, Secret Key, and service instance ID); they will be required at a later stage.
9.4.2 Steps to Configure IBM Cloud:
1. Navigate to Cloud Configurations.
2. Click the add icon (+) displayed on the IBM-Cloud Configurations tab.
3. Select the IBM Storage Service checkbox.
- This option enables IBM Cloud Object Storage service integration.
- When checked, it allows users to configure storage buckets in IBM Cloud.
4. Access Key:
- The Access Key generated from the IBM Cloud Object Storage service, used to authenticate API requests.
5. Secret Key:
- The Secret Key associated with the Access Key, providing secure authentication.
6. Endpoint URL:
- The API Endpoint URL for the IBM Cloud Object Storage service, which specifies the storage service location (e.g., https://s3.us-south.cloud-object-storage.appdomain.cloud).
7. Bucket Name:
- The name of the IBM Cloud Object Storage Bucket where files will be stored.
8. Region:
- The IBM Cloud Region where the object storage service is hosted.
9. Archive File Age:
- Like other providers, this parameter enables the lifecycle policy for IBM Cloud, facilitating the replay of files within a specified period.
10. Optionally, select the IBM Secret Manager Service checkbox.
- This option enables the IBM Secrets Manager service.
- It allows users to store, manage, and access sensitive information such as API keys, passwords, or certificates securely.
- Only select this option if secrets should be kept in the cloud secret manager rather than the database.
11. API Key:
- The API Key is a unique identifier that provides secure access to IBM Cloud services.
- This key is used to authenticate the application or service when connecting to IBM Secret Manager.
- It is generated from the IBM Cloud Console while creating service credentials under the Secret Manager Service.
- Without the API key, the system will not allow access to the secret manager.
12. Service URL:
- The Service URL is the endpoint URL of the IBM Secret Manager Service.
- It specifies the location where the API requests are sent.
- The URL typically follows the format https://<region>.secrets-manager.appdomain.cloud.
- The correct service URL is mandatory to connect to the respective IBM Cloud Secret Manager service.
13. Secret Type:
- This field defines the type of secret stored in the IBM Secret Manager.
- Supported secret types include:
- UsernamePassword: Store credentials like username and password.
- IAM Credentials: Store IBM Cloud IAM access credentials.
- API Key: Store API keys for various services.
- Certificates: Store SSL/TLS certificates.
- Based on the selected secret type, the application will fetch and use the respective secret details.
14. Secret Group Name:
- A Secret Group Name is a logical container to organize multiple secrets within the IBM Secret Manager.
- It helps in categorizing secrets based on environment, project, or application.
- For example, separate secret groups for Development, Testing, and Production environments.
- This field ensures that secrets are accessed from the correct group.
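The endpoint formats in steps 6 and 12 and the secret types in step 13 can be captured in a few helpers. A sketch following the URL patterns shown above; the region values are examples:

```python
# Secret types listed in step 13.
SUPPORTED_SECRET_TYPES = {
    "UsernamePassword", "IAM Credentials", "API Key", "Certificates",
}

def cos_endpoint(region: str) -> str:
    """Object Storage endpoint, per the example URL in step 6."""
    return f"https://s3.{region}.cloud-object-storage.appdomain.cloud"

def secrets_manager_url(region: str) -> str:
    """Secret Manager service URL, per the format in step 12."""
    return f"https://{region}.secrets-manager.appdomain.cloud"

def check_secret_type(secret_type: str) -> str:
    """Reject secret types the configuration does not support."""
    if secret_type not in SUPPORTED_SECRET_TYPES:
        raise ValueError(f"unsupported secret type: {secret_type}")
    return secret_type

endpoint = cos_endpoint("us-south")
```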

