- Latest version: v2.1.0
- Compatible with Fusion version: 5.2.0 and later

Supported formats for document parsing include:
- CSV
- JSON
- Word docs
- Other rich text formats
Features
- Service account authentication
- Full crawl of storage buckets and objects
- Recrawl buckets and objects
- Remove deleted objects
- Update objects
- Cascade deletion of objects in deleted buckets
- Document parsing support
- Bucket and object filtering
Prerequisites
Perform these prerequisites to ensure the connector can reliably access, crawl, and index your data. Proper setup helps avoid configuration or permission errors, so use the following guidelines to keep your content available for discovery and search in Fusion.
- Set up a Google Cloud project with access to the target GCS bucket.
- Enable the Google Cloud Storage API.
- Create a service account with appropriate permissions:
  - The minimum permissions are `storage.objects.list` and `storage.objects.get`.
  - For bucket metadata access, also add the permission `storage.buckets.get`.
- Download a JSON key file for that service account to use in the connector authentication properties in Fusion.
- Fusion must have outbound internet access to reach `https://storage.googleapis.com`.
- If you are using a proxy or VPC, make sure traffic to GCS is permitted.
- A Fusion user with the `remote-connectors` or `admin` role for gRPC authentication.
- The `connector-plugin-standalone.jar` alongside the plugin ZIP on the remote host.
- A configured connector backend gRPC endpoint (`hostname:port`) in your YAML.
- If the remote host doesn’t trust Fusion’s TLS certificate, point to a truststore file path in your config.
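For reference, the service account and key from the list above can also be provisioned from the command line. This is an illustrative sketch; the project, bucket, and account names are placeholders:

```sh
# Create a service account for the crawler (names are placeholders)
gcloud iam service-accounts create fusion-gcs-crawler --project=my-project

# Grant read-only object access on the target bucket
gcloud storage buckets add-iam-policy-binding gs://my-bucket \
  --member="serviceAccount:fusion-gcs-crawler@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"

# Download a JSON key file for the connector authentication properties
gcloud iam service-accounts keys create key.json \
  --iam-account=fusion-gcs-crawler@my-project.iam.gserviceaccount.com
```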
Authentication
Setting up the correct authentication according to your organization’s data governance policies helps keep sensitive data secure while allowing authorized indexing. The connector uses Google Cloud service account credentials to access buckets through JSON key authentication. The service account must have the `storage.objects.list` and `storage.objects.get` permissions, and the full contents of the JSON key must be pasted into the Service account JSON key field.
You can restrict the service account from accessing content you don’t want indexed using IAM roles or signed URLs.
Create a service account in Google Cloud and authenticate the GCS V2 Connector:
- Go to the Google Cloud Console and navigate to IAM & Admin > Service Accounts.
- Click Create Service Account.
- Give the service account a name such as `fusion-gcs-crawler`.
- Grant the service account `roles/storage.objectViewer`, or use custom roles with `storage.objects.list` and `storage.objects.get`.
- Click Create Key and choose JSON. Download the JSON key file so you can paste its contents into Fusion under Authentication Settings.
- Verify access and bucket permissions to ensure:
- The service account has access to the buckets and objects you’re trying to crawl.
- The service account is authorized in the GCS project to access the buckets.
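One way to verify access before configuring the connector is to authenticate as the service account locally and list the target bucket. A sketch with placeholder names:

```sh
# Authenticate as the service account using the downloaded key
gcloud auth activate-service-account --key-file=key.json

# A successful listing confirms storage.objects.list on the bucket
gcloud storage ls gs://my-bucket/
```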
Crawl specific buckets
If the account used has limited permissions, or if you want to crawl only specific buckets, use the Specify buckets to crawl setting. Add the names of the buckets you want to crawl; the connector downloads objects and metadata only for those buckets.
Recrawl
The GCS V2 connector updates the Solr index with any content changes since the previous crawl, covering items added, updated, or deleted.
Remote connectors
V2 connectors support running remotely in Fusion versions 5.7.1 and later.
Configure remote V2 connectors
If you need to index data from behind a firewall, you can configure a V2 connector to run remotely on-premises using TLS-enabled gRPC.
The gRPC connector backend is not supported in Fusion environments deployed on AWS.
Prerequisites
Before you can set up an on-prem V2 connector, you must configure the egress from your network to allow HTTP/2 communication into the Fusion cloud. You can use a forward proxy server to act as an intermediary between the connector and Fusion.
The following is required to run V2 connectors remotely:
- The plugin ZIP file and the connector-plugin-standalone JAR.
- A configured connector backend gRPC endpoint.
- Username and password of a user with a `remote-connectors` or `admin` role.
- If the host where the remote connector is running is not configured to trust the server’s TLS certificate, you must configure the file path of the trust certificate collection.
If your version of Fusion doesn’t have the `remote-connectors` role by default, you can create one. No API or UI permissions are required for the role.
Connector compatibility
Only V2 connectors can run remotely on-premises. You also need the remote connector client JAR file that matches your Fusion version; you can download the latest files at V2 Connectors Downloads. Whenever you upgrade Fusion, you must also update your remote connectors to match the new version.
System requirements
The following is required for the on-prem host of the remote connector:
- (Fusion 5.9.0-5.9.10) JVM version 11
- (Fusion 5.9.11) JVM version 17
- Minimum of 2 CPUs
- 4GB Memory
Enable backend ingress
In your `values.yaml` file, configure this section as needed:
- Set `enabled` to `true` to enable the backend ingress.
- Set `pathtype` to `Prefix` or `Exact`.
- Set `path` to the path where the backend will be available.
- Set `host` to the host where the backend will be available.
- In Fusion 5.9.6 only, you can set `ingressClassName` to one of the following:
  - `nginx` for Nginx Ingress Controller
  - `alb` for AWS Application Load Balancer (ALB)
- Configure TLS and certificates according to your CA’s procedures and policies.
TLS must be enabled in order to use AWS ALB for ingress.
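For illustration, the settings above might look like the following in `values.yaml`. The top-level `connectors-backend` key and the TLS layout are assumptions; follow the actual structure of your chart:

```yaml
connectors-backend:            # assumed top-level key; check your chart
  ingress:
    enabled: true              # enable the backend ingress
    pathtype: Prefix           # or Exact
    path: /
    host: connectors.example.com
    ingressClassName: nginx    # Fusion 5.9.6 only; alb requires TLS
    tls:                       # per your CA's procedures and policies
      - secretName: connectors-tls
        hosts:
          - connectors.example.com
```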
Connector configuration example
Minimal example
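The following sketch shows the general shape of a standalone connector configuration, assembled from the parameters described in this article. Key names and grouping beyond those parameters (for example, the `kafka-bridge` block) are assumptions and may differ by Fusion version:

```yaml
kafka-bridge:                       # assumed grouping for backend settings
  target: fusion.example.com:443    # connector backend gRPC endpoint (hostname:port)
  plain-text: false                 # keep TLS enabled outside of testing
  trust:
    cert-collection-filepath: /etc/ssl/fusion-ca.pem  # only if the host does not trust Fusion's cert
proxy:
  user: remote-user                 # user with the remote-connectors or admin role
  password: secret
plugin:
  path: ./gcs-connector-plugin.zip  # plugin ZIP alongside the standalone JAR
```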
Logback XML configuration file example
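A minimal, generic Logback configuration that writes to the console; adjust appenders and levels to your needs:

```xml
<configuration>
  <!-- Console appender with a simple timestamped pattern -->
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```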
Run the remote connector
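An illustrative invocation, assuming a connector configuration YAML in the working directory; exact arguments may vary by release:

```sh
# -Dlogging.config is optional and points at a Logback file like the one above;
# without it, log messages go to the console.
java -Dlogging.config=./logback.xml \
  -jar connector-plugin-standalone.jar ./connector-config.yaml
```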
The `logging.config` property is optional. If it is not set, logging messages are sent to the console.
Test communication
You can run the connector in communication testing mode. This mode tests the communication with the backend without running the plugin, reports the result, and exits.
Encryption
In a deployment, communication to the connector’s backend server is encrypted using TLS. You should only run this configuration without TLS in a testing scenario. To disable TLS, set `plain-text` to `true`.
Egress and proxy server configuration
One of the methods you can use to allow outbound communication from behind a firewall is a proxy server. You can configure a proxy server to allow certain communication traffic while blocking unauthorized communication. If you use a proxy server at the site where the connector is running, you must configure the following properties:
- Host. The host where the proxy server is running.
- Port. The port the proxy server listens on for communication requests.
- Credentials. Optional proxy server user and password.
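A sketch of how these properties might appear in the connector configuration YAML; the exact key names are assumptions:

```yaml
proxy:
  host: proxy.example.com    # host where the proxy server is running
  port: 3128                 # port the proxy server listens on for requests
  user: proxy-user           # optional credentials
  password: proxy-password
```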
Password encryption
If you use a login name and password in your configuration, run the following utility to encrypt the password:
- Enter a user name and password in the connector configuration YAML.
- Run the standalone JAR with this property (see the sketch after this list).
- Retrieve the encrypted passwords from the log that is created.
- Replace the clear password in the configuration YAML with the encrypted password.
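A sketch of the second step; `encrypt-passwords` is a hypothetical property name, so check the documentation for your Fusion version for the exact property:

```sh
# Run the standalone JAR with the encryption property enabled.
# "encrypt-passwords" is hypothetical; the encrypted values are then
# written to the log for copying back into the YAML.
java -Dencrypt-passwords=true -jar connector-plugin-standalone.jar ./connector-config.yaml
```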
Connector restart (5.7 and earlier)
The connector shuts down automatically whenever the connection to the server is disrupted, to prevent it from getting into a bad state. Communication disruption can happen, for example, when the server running in the `connectors-backend` pod shuts down and is replaced by a new pod. Once the connector shuts down, connector configuration and job execution are disabled, so you should restart the connector as soon as possible. You can use Linux scripts and utilities, such as Monit, to restart the connector automatically.
Recoverable bridge (5.8 and later)
If communication to the remote connector is disrupted, the connector tries to recover communication and gRPC calls. By default, six attempts are made to recover each gRPC call. The number of attempts can be configured with the `max-grpc-retries` bridge parameter.
Job expiration duration (5.9.5 only)
The timeout value for unresponsive backend jobs can be configured with the `job-expiration-duration-seconds` parameter. The default value is `120` seconds.
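As a sketch, both tuning parameters might sit with the other bridge settings in the connector configuration YAML; the surrounding key names are assumptions:

```yaml
kafka-bridge:                           # assumed parent key; see the minimal example above
  max-grpc-retries: 6                   # attempts to recover each gRPC call (default: 6)
  job-expiration-duration-seconds: 120  # timeout for unresponsive backend jobs (5.9.5 only; default: 120)
```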
Use the remote connector
Once the connector is running, it is available in the Datasources dropdown. If the standalone connector terminates, it disappears from the list of available connectors. Once it is re-run, it becomes available again, and configured connector instances are not lost.
Enable asynchronous parsing (5.9 and later)
To separate document crawling from document parsing, enable Tika Asynchronous Parsing on remote V2 connectors. Configure this under the `connector-plugins` entry in your `values.yaml` file.
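A hedged sketch of what that entry might look like; the `async-parsing` flag name is an assumption:

```yaml
connector-plugins:
  # "async-parsing" is an assumed flag name; consult the asynchronous
  # Tika parsing documentation for your Fusion version
  async-parsing: true
```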
Configuration
| Name | Title | Description |
|---|---|---|
| authenticationProperties | Authentication settings | Connect to the bucket store using a service account. The service account requires the following permissions: `storage.buckets.list` to crawl all the available buckets, and `storage.objects.list` and `storage.objects.get` to access the objects in the buckets. |
| applicationProperties | Limit documents | Bucket and object filtering options. |
| jsonKey | Service account Json key | JSON key contents from the authorized service account. |
| buckets | Bucket list | The bucket names to crawl. Leave blank to crawl all the available buckets. |
| includedFileExtensions | Included file extensions | Set of file extensions to be fetched. If specified, all non-matching files are skipped. |
| excludedFileExtensions | Excluded file extensions | Set of file extensions to be skipped from the fetch. |
| inclusiveRegexes | Inclusive regexes | Regular expressions for bucket or object name patterns to include. This limits the datasource to items that match the regular expressions. |
| exclusiveRegexes | Exclusive regexes | Regular expressions for bucket or object name patterns to exclude. This limits the datasource to items that do not match the regular expressions. |
| maxSizeBytes | Maximum File Size | Excludes objects whose size, in bytes, is larger than the configured value. |
| minSizeBytes | Minimum File Size | Excludes objects whose size, in bytes, is smaller than the configured value. |
| bucketPrefix | Bucket prefix | Filters results to buckets whose names begin with this prefix. Useful only when the ‘Bucket list’ property is empty. |
| blobsPrefix | Object prefix | Filters results to objects whose names begin with this prefix. |
| pageSize | Buckets and Objects page size | Maximum number of buckets or objects returned per page. |
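To tie the table together, a hypothetical datasource configuration using these properties might look like the following; the nesting and values are illustrative, not a verbatim API payload:

```json
{
  "id": "gcs-example",
  "properties": {
    "authenticationProperties": {
      "jsonKey": "<paste the full contents of key.json here>"
    },
    "applicationProperties": {
      "buckets": ["my-bucket"],
      "includedFileExtensions": ["csv", "json", "docx"],
      "maxSizeBytes": 10485760,
      "pageSize": 100
    }
  }
}
```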