
      Fusion Server 4.1.1 Release Notes

      Release date: 7 November 2018

      Component versions:

      • Solr 7.4.0

      • ZooKeeper 3.4.13

      • Spark 2.3.1

      • Jetty 9.3.25.v20180904

      • Ignite 2.3.0

      As of Fusion 4.1.1, the SSL configuration procedure has changed. See SSL security for Unix and SSL security for Windows for updated instructions.

      New features


      • Jetty has been upgraded from version 9.3.8.v20160314 to 9.3.25.v20180904.

      • Two new blob types were added to support the Managed JavaScript index stage and query stage:

        • file:js-index

        • file:js-query

      • Web connector improvements:

        • The connector can now send custom headers with a new addedHeaders property.

        • When the Web connector crawls a website with Server Name Indication (SNI) enabled but the site does not support SNI, you may receive an unrecognized_name error. A new property, useIpAddressForSslConnections, configures the connector to use the site’s IP address instead of its hostname.

        • Kerberos is now supported with these new properties:

          • kerberosEnabled - Boolean; default false. If true, the connector attempts Kerberos/SPNEGO authentication when a Web request returns a 401 response with a WWW-Authenticate: Negotiate challenge. If false, the connector does not attempt Kerberos/SPNEGO authentication.

          • kerberosPrincipalName - Optional - Use this principal name as the logged-in Kerberos user instead of the environment’s default. If set, you must also specify a keytab with either kerberosKeytabFile or kerberosKeytabBase64 (see below).

          • kerberosKeytabFile - Optional - The path to the Kerberos keytab file that contains the credentials.

          • kerberosKeytabBase64 - Optional - A base64-encoded Kerberos keytab file that contains the credentials.

          See the main topic for additional configuration details.
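
        As an illustration, a Web datasource configuration using these properties might look like the following sketch. The property layout, connector identifiers, and values here are hypothetical; see the main topic for the exact schema.

```json
{
  "id": "my-web-crawl",
  "connector": "lucid.anda",
  "type": "web",
  "properties": {
    "startLinks": ["https://intranet.example.com/"],
    "useIpAddressForSslConnections": false,
    "kerberosEnabled": true,
    "kerberosPrincipalName": "crawler@EXAMPLE.COM",
    "kerberosKeytabFile": "/opt/fusion/conf/crawler.keytab"
  }
}
```

        Only one of kerberosKeytabFile or kerberosKeytabBase64 would be set; the base64 variant is useful when the connector host has no shared filesystem path for the keytab.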

      • The JDBC connector has a new convert_type parameter for binary streaming data from SQL Server.

        By default, a JDBC column uses the Solr field type that matches the column’s type in the underlying database. When convert_type is enabled, the connector instead uses the column’s field name to choose the Solr field type.
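
        For example, a JDBC datasource enabling the option might include the following (a sketch; every property name other than convert_type is assumed, not confirmed):

```json
{
  "id": "sqlserver-docs",
  "connector": "lucid.jdbc",
  "properties": {
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
    "connection": "jdbc:sqlserver://dbhost:1433;databaseName=docs",
    "sql_statement": "SELECT id, title, body FROM documents",
    "convert_type": true
  }
}
```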

      • The Slack connector can now filter by channel names using a new channel_filters property.
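
        A sketch of the new property (the value format shown here is assumed):

```json
{
  "properties": {
    "channel_filters": ["general", "engineering"]
  }
}
```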

      • When you import objects using the Objects API, a new context parameter specifies the name of an existing app that becomes the context for the imported objects. To support this, the Links API has two new linkType values:

        • inContextOf

        • hasContext
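
        For illustration, an import call that places the objects in the context of an existing app might look like the following. The endpoint path, credentials, parameter value, and file name are hypothetical; consult the Objects API documentation for the exact form.

```shell
curl -u admin:password -X POST \
  "https://FUSION_HOST:8764/api/objects/import?context=my-app" \
  -F 'importData=@exported-objects.zip'
```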

      • Performance improvements in the proxy service.

        The proxy service now reuses HTTP client instances for better throughput. Customers with high QPS requirements (around 1000 QPS) are strongly encouraged to upgrade to 4.1.1.

      Other changes

      • The Fusion SQL Service (bin/sql) has been added to the default group in conf/fusion.properties. If you do not intend to run the SQL service for self-service analytics, you can remove it from the default group. See Fusion SQL service for more information on using the SQL service.

      • When exporting and importing objects with secret keys, the format is now secret.{object_type}.{object_id}.{number}.password. For datasource objects, the format is secret.{object_type}.{object_id}.{datasource_type}.{number}.password. For example, a datasource secret might be exported as secret.datasource.my-datasource.web.0.password.

      • JavaScript is now thread-safe in pipeline stages.

        This resolves an issue in previous releases where variables in JavaScript stages that were not declared with var were shared between threads, and between other stages using variables with the same name.
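
        The distinction can be illustrated with a minimal sketch of a stage function (the function itself is hypothetical, not Fusion’s stage API):

```javascript
// Minimal sketch: why locals in a JavaScript stage must be declared
// with `var`. Without `var`, `count` would become an implicit global,
// shared across threads and stages in pre-4.1.1 releases.
function processDocument(doc) {
  var count = 0;   // declared with var: local to each invocation
  count += 1;
  return count;    // always 1, regardless of concurrent calls
}
```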

      • Headers that were missing from request.headers in the trusted-http security realm are available again. These include:

        • fusion-user-realm-name

        • fusion-user-realm-type

        • fusion-user-name

        • In addition, fusion-user-permissions now includes the role restrictions assigned to the security realm.

      Known issues

      • The Return parsed content as XML option in the Tika parser and Tika stage works only if the input document is HTML or XML.

      • Vulnerability scanners are reporting false positives for security vulnerabilities.

      • Fusion may not redirect to the login page for users of Google Chrome version 73.0.3683.75.

      • When exporting configuration data during a Fusion upgrade, the migrator may fail to log the complete process.

      • Fusion UI may not display the complete job history, despite the history being complete in the jobs endpoint.

      • In Red Hat Enterprise Linux (RHEL) 7.6, running the install-high-perf-web-deps.sh script fails to install Google Chrome.

      • Fusion Proxy service’s SAML library does not fully support Windows.

      • The responseHeaders_ss field may be missing from PDF documents acquired by a web crawl search.

      • In Fusion UI, the job history status icons may not accurately show the job status.

      • Running count(distinct query) in the Fusion SQL service returns an inaccurate count. As a workaround, use a subquery, such as select count(1) from (select distinct query from table) a.

      • When using the ZooKeeper Import/Export API to export an app, the process may fail if the app contains large blobs.

      • Using JavaScript dedupe may result in the creation of a ScriptRunner with a large default cache size. This may cause the connectors to run out of heap space during large crawls.

      • Attempting to use JSONPath $.id in the JSON Parser to parse all document IDs only processes the first document in the list.

      • Some Fusion SQL queries do not work with time-based partitions of collections.

      • Clicking "Ok" while Fusion UI is in the process of deleting an app will cause a 503 Service Unavailable error.

      • In Fusion UI, deleting an app that has a datasource and then creating a new app with the same name may result in 404 Not Found and 500 Server Error errors when uploading a datasource. Reloading the page before uploading the datasource resolves this issue.

      • Updating a datasource may result in the loss of unsaved changes to pipeline or parser stages.

      • User-created index pipeline or parser fields are not included in the Index Profile search’s autocomplete function.

      • When deleting a collection, selecting the option "Switch to another collection", not making a choice, and re-selecting "Or return to the launcher to pick a different app" results in the "Proceed to Delete Collection" button being disabled.

      • In Fusion UI’s Collection Manager, unchecking "Enable Search Logs" in Manage All Collections > Collection Features before saving results in a 500 Server Error.

      • In Fusion UI’s Collection Manager, unchecking "Enable Signals" in Manage All Collections > Collection Features when creating a collection does not result in signals being disabled.

      • Starting and stopping Fusion services while in the Fusion UI may result in the components and fields being inaccessible.

      • Setting a parser’s maxParserRecursionDepth field to a negative value does not produce an error until the parser is used in the Index Workbench.

      • The SQL service will not initialize by default unless the API is located on the same host.

      • Indexing an Oracle JDBC datasource may fail due to an inability to update ConvertType.

      • The Hadoop Authentication Plugin hadoop-auth-3.1.1.jar is incompatible with Jetty 9.4.12.v20180830.jar. Reverting to hadoop-auth-2.7.1.jar inside ROOT.war fixes the issue.

      • Users in a JSON Web Token (JWT) realm in Fusion may fail to authenticate. Using a base64-encoded token may fix this issue.

      • Stopping Fusion with a TERM signal fails to stop all Fusion services.

      • Deleting folders from the fusion/var/log/ directory causes Fusion services to fail during start.

      • Creating an app with the same name as a previously deleted app may result in the error "Datasource ID does not exist".

      • Fusion UI’s Index Workbench may fail to display the correct job status until the page is reloaded.

      • When performing a Head/Tail Analysis Job, the user may receive a Job execution error: Error during job execution: empty collection error, even if there are documents in the app collections.

      • Some connectors are not visible in the datasource panel until after they have been installed.

      • In Windows, SDK connectors may time out during the first run. Subsequent runs are successful.

      • Blob types file:js-index and file:js-query cannot be added in Fusion UI but can be added with the API.

      • If an incorrectly configured stage is disabled, results are not generated even though the active stages are valid.

      • A signal aggregation job produces no error message if the specified field name is incorrect or does not exist.