Fusion SQL Overview

Most organizations that deploy Fusion also have SQL-compliant business intelligence (BI) or dashboarding tools to facilitate self-service analytics.

The Fusion SQL service:

  • Lets organizations leverage their investments in BI tools by using JDBC and SQL to analyze data managed by Fusion. For example, Tableau is a popular data analytics tool that connects to Fusion SQL using JDBC to enable self-service analytics.

  • Helps business users access important data sets in Fusion without having to know how to query Solr.

Important
In addition to the specified System Requirements, Fusion on Windows requires Visual C++ Redistributable for Visual Studio 2015 to start the SQL service successfully.

Fusion SQL architecture



The following diagram depicts a common Fusion SQL service deployment scenario using the Kerberos network authentication protocol for single sign-on. Integration with Kerberos is optional. By default, the Fusion SQL service uses Fusion security for authentication and authorization.

Fusion SQL service architecture

The numbered steps in the diagram are:

  1. The JDBC/ODBC client application (for example, TIBCO Spotfire or Tableau) uses Kerberos to authenticate a Fusion data analyst.

  2. After authentication, the JDBC/ODBC client application sends the user’s SQL query to the Fusion SQL Thrift Server over HTTP.

  3. The SQL Thrift Server uses the keytab of the Kerberos service principal to validate the incoming user identity.

    The Fusion SQL Thrift Server is a Spark application with a specific number of CPU cores and memory allocated from the pool of Spark resources. You can scale out the number of Spark worker nodes to increase available memory and CPU resources to the Fusion SQL service.

  4. The Thrift Server sends the query to Spark to be parsed into a logical plan.

  5. During the query planning stage, Spark sends the logical plan to Fusion’s pushdown strategy component.

  6. During pushdown analysis, Fusion calls out to the registered AuthZ FilterProvider implementation to get a filter query to perform row-level filtering for the Kerberos-authenticated user.

    By default, there is no row-level security provider, but users can install their own implementation using the Fusion SQL service API.

  7. Spark executes a distributed Solr query to return documents that satisfy the SQL query criteria and row-level security filter. To leverage the distributed nature of Spark and Solr, Fusion SQL sends a query to all replicas for each shard in a Solr collection. Consequently, you can scale out SQL query performance by adding more Spark and/or Solr resources to your cluster.

Fusion pushdown strategy

The pushdown strategy analyzes the query plan to determine if there is an optimal Solr query or streaming expression that can push down aggregations into Solr to improve performance and scalability. For example, the following SQL query can be translated into a Solr facet query by the Fusion pushdown strategy:

select count(1) as the_count, movie_id from ratings group by movie_id

The basic idea behind Fusion’s pushdown strategy is that it is much faster to let Solr facets perform basic aggregations than it is to export raw documents from Solr and have Spark perform the aggregation. Put simply, the Fusion SQL service tries to translate each SQL query into an optimized Solr query. If an optimal pushdown query is not possible, the service reads all documents that match the query into Spark and then performs the required joins, aggregations, and other SQL execution logic across the Spark cluster.
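
For example, a join between two collections cannot be fully pushed down, so Spark performs the join after pulling the matching documents from Solr. A minimal sketch, assuming hypothetical orders and customers collections that share a customer_id field:

select c.region, sum(o.total) as revenue
from orders o join customers c on o.customer_id = c.customer_id
group by c.region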

Which collections are registered

By default, all Fusion collections except system collections are registered in the Fusion SQL service so you can query them without any additional setup. However, empty collections cannot be queried, or even described, from SQL, so empty collections do not show up in the Fusion SQL service until they contain data. In addition, any fields with a dot in the name are ignored when tables are auto-registered. You can use the Catalog API to alias fields with dots in their names so that these fields are included.

If you add data to a previously empty collection, then you can execute either of the following SQL commands to ensure that the data gets added as a table:

show tables

show tables in `default`

The Fusion SQL service checks previously empty collections every minute and automatically registers recently populated collections as tables.

You can describe any table using:

describe table-name
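
For example, assuming the ratings collection from the movielens data set is registered as a table:

describe ratings

This returns the name and data type of each column in the table.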

See the movielens lab in the Fusion Spark Bootcamp for a complete example of working with the Fusion Catalog API and Fusion SQL service. Also read about the Catalog API.

Hive configuration

Behind the scenes, the Fusion SQL service is based on Hive. Use the hive-site.xml file in /opt/fusion/4.2.x/conf/ (on Unix) or C:\lucidworks\fusion\4.2.x\conf\ (on Windows) to configure Hive settings.

If you change hive-site.xml, you must restart the Fusion SQL service with ./sql restart (on Unix) or sql.cmd restart (on Windows).

Key features

Searching and Sorting

Scoring

The WHERE and ORDER BY clauses can be used to search and sort results using the underlying search engine. The score (lower case) keyword can be used to sort by the relevance score of a full text query.

An example of a query that uses the score keyword is below:

select id, title, score from books where abstract = 'hello world' order by score desc

Searching

Search predicates are specified in the WHERE clause. Search predicates on text fields will perform full text searches. Search predicates on string fields will perform exact matches unless the LIKE expression is used.

By default, all multi-term predicates are sent to the search engine as phrase queries. In the example above, 'hello world' is searched as a phrase query.

To stop the auto-phrasing of multi-term predicates, wrap parentheses around the terms. For example:

select id, title, score from books where abstract = '(hello world)' order by score desc

In the example above, the '(hello world)' search predicate is sent to the search engine without phrasing and performs the query hello OR world.

When parentheses are used, the search expression is sent to Solr unchanged. This allows for richer search predicates such as proximity search.

The example below performs a proximity search:

select id, title, score from books where abstract = '("hello world"~4)' order by score desc

Lucene/Solr wildcards can be sent to the search engine directly using this syntax:

select id, title from books where abstract = '(he?lo)'

The LIKE clause can be used to perform wildcard searches with either the Solr wildcard symbol * or the SQL wildcard symbol %.

When using the traditional SQL % wildcard, only leading and trailing wildcards are supported. Use Lucene/Solr wildcards as described above for more complex wildcards.

The example below shows a LIKE query with a trailing % wildcard.

select id, title from books where abstract like 'worl%'
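
The same search can be written with the Solr * wildcard instead:

select id, title from books where abstract like 'worl*'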

The following operators are supported for numeric and datetime predicates: <, >, >=, <=, =, !=.

Both the IN and BETWEEN clauses can be used to specify predicates.
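
For example, two sketches that reuse the products table from the examples below (the specific values are illustrative):

select id from products where price between 50 and 125

select id from products where prod_name in ('bike', 'helmet')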

Boolean predicates can be used and are translated to boolean search queries.

The example below specifies a boolean query:

select id from products where prod_desc = 'bike' and price < 125 order by price asc

Sorting

Numeric, datetime, and string fields can be sorted using the ORDER BY clause. The sort is pushed down to the search engine for optimal performance. Multiple sorts can be specified using the standard SQL syntax.

The example below sorts on a numeric field:

select id, prod_name, price_f from products where prod_desc = 'bike' order by price_f desc
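
Multiple sort fields can be combined. For example, a sketch that extends the query above to break price ties by product name:

select id, prod_name, price_f from products where prod_desc = 'bike' order by price_f desc, prod_name asc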

Single and Multi-dimension SQL aggregations

SQL aggregations are translated to Solr facet queries to take advantage of Solr’s distributed aggregation capabilities. This allows for interactive data analysis over large data sets.

Single and multi-dimension aggregations using supported aggregation functions operate over the entire query result and are designed to return accurate results. The supported aggregation functions that are fully pushed down to the search engine are: count(*), count(distinct), sum, avg, min, max.

An example of a SQL aggregation that is translated to a Solr facet query is below:

select company_name, count(*) as cnt from orders group by company_name
order by cnt desc

Having Clause

A HAVING clause can also be applied to single and multi-dimension aggregations.
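
For example, a sketch that extends the aggregation above to return only companies with more than 100 orders (the threshold is illustrative):

select company_name, count(*) as cnt from orders group by company_name
having count(*) > 100 order by cnt desc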

Time series aggregations

Fusion SQL provides powerful and flexible time series aggregations through the use of the date_format function. Aggregations that group by a date_format are translated to a Solr range facet query. This allows for fast, interactive time series reporting over large data sets.

An example of a time series aggregation is shown below:

select date_format(rec_time, 'yyyy-MM') as month, count(*) as cnt
from logrecords where rec_time > '2000-01-01' and rec_time < '2010-01-01'
group by month

The date_format function is used to specify both the output format and the time interval in one compact pattern as specified by the Java SimpleDateFormat class.

The example above performs a monthly time series aggregation over the rec_time field, which is a datetime field.

To switch to a daily time series aggregation, all that is needed is to change the date pattern:

select date_format(rec_time, 'yyyy-MM-dd') as day, count(*) as cnt
from logrecords where rec_time > '2000-01-01' and rec_time < '2000-12-31'
group by day

Date math predicates

Fusion SQL also supports date math predicates through the date_add, date_sub, and current_date functions.

Below is an example of the use of date math predicates.

select date_format(rec_time, 'yyyy-MM-dd') as day, count(*) as cnt
from logrecords where rec_time > date_sub(current_date(), 30)
group by day
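
date_add can be used in the same way. A sketch that restricts the report to a window between 60 and 30 days ago, assuming date_add accepts a negative offset as it does in Spark SQL (the offsets are illustrative):

select date_format(rec_time, 'yyyy-MM-dd') as day, count(*) as cnt
from logrecords where rec_time > date_sub(current_date(), 60)
and rec_time < date_add(current_date(), -30)
group by day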

Auto-filling of time intervals

Fusion SQL automatically fills any time interval that does not contain data with zeroes. This ensures that the full time range is included in the output, which makes the time series results easy to visualize in charts.

Sort Order

Time series aggregations are sorted by default in time ascending order. The ORDER BY clause can be used to sort time series aggregation results in a different order.
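
For example, a sketch that sorts the monthly aggregation from the earlier example by count instead of by time:

select date_format(rec_time, 'yyyy-MM') as month, count(*) as cnt
from logrecords where rec_time > '2000-01-01' and rec_time < '2010-01-01'
group by month
order by cnt desc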

Having Clause

A HAVING clause can also be applied to a time series query to limit the results to rows that meet specific criteria.
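
For example, a sketch that keeps only the months with more than 100 records (the threshold is illustrative):

select date_format(rec_time, 'yyyy-MM') as month, count(*) as cnt
from logrecords where rec_time > '2000-01-01' and rec_time < '2010-01-01'
group by month
having count(*) > 100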

Sampling and Statistics

Sampling is often used in statistical analysis to gain an understanding of the distribution, shape and dispersion of a variable or the relationship between variables.

Fusion SQL returns a random sample for all basic selects that do not contain an ORDER BY clause. The random sample is designed to return a uniform distribution of samples that match a query. The sample can be used to infer statistical information about the larger result set.

The example below returns a random sample of a single field:

select filesize_d from logs where year_i = 2019

If no limit is specified, the sample size will be 25000. To increase the sample size, add a limit larger than 25000.

select filesize_d from logs where year_i = 2019 limit 50000

The ability to subset the data with a query and then sample from that subset is called Stratified Random Sampling. Stratified Random Sampling is an important statistical technique used to better understand sub-populations of a larger data set.
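
For example, a sketch that draws a random sample from the stratum of 2019 log records with large files (the filesize_d threshold is illustrative):

select response_d from logs where year_i = 2019 and filesize_d > 100000 limit 50000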

Descriptive Statistics

Sub-queries can be used to return random samples for fast, often sub-second, statistical analysis. For example:

select count(*) as samplesize,
       mean(filesize_d) as mean,
       min(filesize_d) as min,
       max(filesize_d) as max,
       approx_percentile(filesize_d, .50) as median,
       variance(filesize_d) as variance,
       std(filesize_d) as standard_dev,
       skewness(filesize_d) as skewness,
       kurtosis(filesize_d) as kurtosis,
       sum(filesize_d) as sum
           from (select filesize_d from logs where year_i = 2019 limit 50000)

In the example above, the sub-query returns a random sample of 50000 results, which the main statistical query then operates on. The statistical query returns aggregations that describe the distribution, shape, and dispersion of the sample set.

Correlation and Covariance

Sub-queries can be used to provide random samples for correlation and covariance:

select corr(filesize_d, response_d) as correlation,
       covar_samp(filesize_d, response_d) as covariance
            from (select filesize_d, response_d from logs limit 50000)

In the example above, the random sample returns two fields to the corr and covar_samp functions in the main query. Correlation and covariance are used to show the strength of the linear relationship between two variables.

Numeric Histograms

Sub-queries can be used to provide random samples as input for numeric histograms:

select histogram_numeric(filesize_d, 12) as hist
    from (select filesize_d from testapp limit 50000)

In the example above, the random sample is operated on by the histogram_numeric function, which returns a histogram with 12 bins. Histograms are used to visualize the shape of a distribution.

The histogram_numeric function returns an array containing a struct for each bin. For visualization tools to display the histogram, it often needs to be exploded into a result table. The explode function can be combined with the LATERAL VIEW clause to return the histogram as a table.

SELECT CAST(hist.x as double) as bin_center,
       CAST(hist.y as double) as bin_height
FROM (select histogram_numeric(filesize_d, 12) as response_hist from (select filesize_d from testapp limit 50000)) a
LATERAL VIEW explode(response_hist) exploded_table as hist

Pushed Down Statistical Queries

A narrower set of statistical aggregations can be pushed down to the search engine and operate over entire result sets. These functions are: count(*), count(distinct), sum, min, max, avg and approx_percentile.

Below is an example of a fully pushed down statistical query:

select count(*) as cnt, avg(filesize_d) as avg, approx_percentile(filesize_d, .50) as median from logs where year_i = 2018

Statistical queries that contain a mix of the pushdown functions above and non-pushdown functions such as skewness or kurtosis will operate over a random sample that matches the query.

Below is an example of a statistical query that operates over a random sample:

select count(*) as cnt,  skewness(filesize_d) as skewness from logs where year_i = 2018