Perform the default SQL aggregation
This is the default SQL aggregation of signals for a base collection named `products`. It produces the same results as legacy aggregation:
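The full default aggregation query is not reproduced here; the following is a minimal sketch reconstructed from the clauses described in the list below. It assumes the raw signals live in a `products_signals` collection and omits any additional fields the real default aggregation may project.

```sql
-- Sketch of the default signals aggregation (assumes a products_signals collection).
SELECT query,
       doc_id,
       SUM(count_i) AS aggr_count_i,
       time_decay(count_i, date) AS weight_d
  FROM products_signals
 GROUP BY query, doc_id
```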
- `SELECT SUM(count_i) AS aggr_count_i`: `count_i` is summed as `aggr_count_i`.
- `time_decay(count_i, date) AS weight_d`: The `time_decay` function computes the aggregated `weight_d` field. This function is a Spark UserDefinedAggregateFunction (UDAF) that is built into Fusion. It computes a weight for each aggregation group, using the count and an exponential decay on the signal timestamp with a 30-day half-life.
- `GROUP BY query, doc_id`: The GROUP BY clause defines the fields used to compute aggregate metrics, typically `query`, `doc_id`, and any filters. With SQL, you have more options for computing aggregated metrics without having to write custom JavaScript functions (which would be needed to supplement legacy aggregations). You can also use standard WHERE clause semantics, for example `WHERE type_s = 'add'`, to provide fine-grained filters.
- The `time_decay` function uses an abbreviated function signature, `time_decay(count_i, timestamp_tdt)`, instead of the full function signature shown in Use Different Weights Based on Signal Types.
For example, raw signals recorded for query `q1` and document `1` are rolled up into a single aggregated document for `q1`.
Use different weights based on signal types
This is a slightly more complex example that uses a subquery to compute a custom weight for each signal based on the signal type (`add` vs. `click`):
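The original query is not included here. The sketch below illustrates the idea under a few assumptions: the signal type is stored in `type_s`, the source collection is `products_signals`, and the per-type weights (5.0 for `add`, 1.0 for `click`) are hypothetical values.

```sql
-- Sketch only: the collection name, type_s values, and per-type weights are assumptions.
SELECT query,
       doc_id,
       SUM(count_i)               AS aggr_count_i,
       SUM(count_i * type_weight) AS weight_d
  FROM (
        SELECT query, doc_id, count_i,
               CASE WHEN type_s = 'add' THEN 5.0 ELSE 1.0 END AS type_weight
          FROM products_signals
         WHERE type_s IN ('add', 'click')
       ) typed_signals
 GROUP BY query, doc_id
```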
Compute metrics for sessions
This aggregation query uses a number of Spark SQL functions to compute some metrics for sessions:
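The original query is not shown here. As an illustration, a query along the following lines uses standard Spark SQL functions (`min`, `max`, `unix_timestamp`) to derive per-session metrics; the `session_id` and `timestamp_tdt` field names and the source collection are assumptions.

```sql
-- Sketch of per-session metrics; field and collection names are assumptions.
SELECT session_id,
       COUNT(1)                            AS signal_count,
       SUM(count_i)                        AS total_count,
       MIN(timestamp_tdt)                  AS session_start,
       MAX(timestamp_tdt)                  AS session_end,
       unix_timestamp(MAX(timestamp_tdt)) -
       unix_timestamp(MIN(timestamp_tdt))  AS session_duration_secs
  FROM products_signals
 GROUP BY session_id
```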
Increase SQL Resource Allocations
The Fusion SQL service is designed for executing analytics-style queries over large data sets. You need to provide ample CPU and memory so that queries execute efficiently and can leverage Spark's in-memory caching for joins and aggregations.

So as to not conflict with the CPU and memory settings used for Fusion driver applications (default and script), the Fusion SQL service uses its own set of configuration properties for granting CPU and memory to SQL query execution. You can use the Configurations API to override the default values shown in the table below.
| Configuration Property | Default | Description |
|---|---|---|
| `fusion.sql.cores` | 1 | Maximum number of cores to use across the entire cluster to execute SQL queries. Give as many as possible while still leaving CPU available for other Fusion jobs. |
| `fusion.sql.executor.cores` | 1 | Number of cores to use per executor. |
| `fusion.sql.memory` | 1g | Memory per executor to use for executing SQL queries. |
| `fusion.sql.default.shuffle.partitions` | 20 | Default number of partitions when performing a distributed group-by-type operation, such as a JOIN. |
| `fusion.sql.bucket_size_limit.threshold` | 30,000,000 | Threshold that determines when to use Solr streaming rollup instead of facet when computing aggregations. Rollup can handle high-cardinality dimensions but is much slower than using facets to compute aggregate measures. |
| `fusion.sql.max.no.limit.threshold` | 10,000 | Limit for SQL queries that select all fields and all rows, that is, `select * from table-name`. |
| `fusion.sql.max.cache.rows` | 5,000,000 | Do not cache tables bigger than this threshold. If a user sends the cache-table command for a large collection whose row count exceeds this value, the cache operation fails. |
| `fusion.sql.max_scan_rows` | 2,000,000 | Safeguard that prevents queries from reading too many rows from large tables. Queries that read more than this many rows from Solr fail; increase this threshold for larger Solr clusters that can handle streaming more rows concurrently. |
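For example, to increase the resources for the Fusion SQL service, you might issue requests along these lines. This is a sketch only: the Configurations API path, port, credentials, and request body format are assumptions that vary by Fusion version and deployment.

```bash
# Sketch only; verify the Configurations API endpoint for your Fusion version.
curl -u USERNAME:PASSWORD -X PUT -H 'Content-type: application/json' -d '6' \
  "https://FUSION_HOST:8764/api/configurations/fusion.sql.cores"
curl -u USERNAME:PASSWORD -X PUT -H 'Content-type: application/json' -d '6' \
  "https://FUSION_HOST:8764/api/configurations/fusion.sql.executor.cores"
curl -u USERNAME:PASSWORD -X PUT -H 'Content-type: application/json' -d '"2g"' \
  "https://FUSION_HOST:8764/api/configurations/fusion.sql.memory"
```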
If you change any of these settings, you must restart the Fusion SQL service with `./sql restart` (on Unix) or `sql.cmd restart` (on Windows).

The Fusion SQL service is a long-running Spark application and, as such, it holds on to the resources (CPU and memory) allocated to it using the settings above. Consequently, you might need to reconfigure the CPU and memory allocations for other Fusion Spark jobs to account for the resources given to the Fusion SQL service. In other words, any resources you give to the Fusion SQL service are no longer available for running other Fusion Spark jobs. For more information on adjusting the CPU and memory settings for Fusion Spark jobs, see the Spark configuration settings.
Use Virtual Tables with a Common Join Key
With Solr, you can index different document types into the same shard using the composite ID router, based on a common route key field. For example, a customer 360 application can index different customer-related document types (contacts, apps, support requests, and so forth) into the same collection, each with a common `customer_id` field. This lets Solr perform optimized joins between the document types using the route key field. This configuration uses Solr's composite ID routing, which ensures that all documents with the same join key field end up in the same shard. See Document Routing.

Providing a compositeIdSpec for the Fusion collection
Before indexing, you need to provide a `compositeIdSpec` for the Fusion collection. For example, the request sketched below creates a collection named `customer` with the route key field set to `customer_id_s`.
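The original request is not reproduced here. The following is a sketch of creating such a collection through the Fusion Collections API; the endpoint path, port, and the exact JSON shape of `compositeIdSpec` (shown with a `routeKey1Field` property) are assumptions to verify against your Fusion version.

```bash
# Sketch only; verify the Collections API path and compositeIdSpec schema
# for your Fusion version.
curl -u USERNAME:PASSWORD -X POST -H 'Content-type: application/json' \
  "https://FUSION_HOST:8764/api/collections" -d '{
  "id": "customer",
  "solrParams": { "numShards": 1, "replicationFactor": 1 },
  "compositeIdSpec": { "routeKey1Field": "customer_id_s" }
}'
```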
When documents are indexed through Fusion, the Solr Index pipeline stage uses the `compositeIdSpec` to create a composite document ID, so documents get routed to the correct shard.

Exposing document types as virtual tables
If you configure your Fusion collection to use a route key field to route different document types to the same shard, then the Fusion SQL service can expose each document type as a virtual table and perform optimized joins between these virtual tables using the route key. To create virtual tables, use the Fusion Catalog API on the data asset for the main collection to set the name of the field that determines the document type. For example, if you have a collection named `customer` that contains different document types (contacts, support tickets, sales contracts, and so forth), you would set up virtual tables with a Catalog API update request that sets the `virtualTableField` to `doc_type_s`.
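The original Catalog API request is not shown here. A sketch of the update follows; the endpoint path and the surrounding data-asset fields are assumptions, while `virtualTableField` is the property named in the text.

```bash
# Sketch only; verify the Catalog API path and data asset schema for your
# Fusion version.
curl -u USERNAME:PASSWORD -X PUT -H 'Content-type: application/json' \
  "https://FUSION_HOST:8764/api/catalog/fusion/assets/customer" -d '{
  "name": "customer",
  "assetType": "table",
  "projectId": "fusion",
  "format": "solr",
  "options": ["collection -> customer"],
  "virtualTableField": "doc_type_s"
}'
```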
Fusion sends a facet request to the `customer` collection to get the unique values of the `doc_type_s` field and creates a data asset for each unique value. Each virtual table is registered in the Fusion SQL service as a table.

Performing optimized joins in SQL
After you have virtual tables configured and documents routed to the same shard using a `compositeIdSpec`, you can perform optimized joins in SQL that take advantage of Solr's domain-join facet feature. For example, a SQL statement like the one sketched below results in a JSON facet request to Solr to perform the aggregation. It computes the number of feature enhancement requests for Search applications from customers in the US-East region by performing a 3-way join between the `customer`, `support`, and `apps` virtual tables using the `customer_id` join key.
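The original statement is not included here; the sketch below shows the shape of such a join. The join key and group-by fields (`customer_id`, `industry`, `app_id`, `feature_id`) come from the text, while the filter field names and values are hypothetical.

```sql
-- Sketch only: region_s, app_type_s, and request_type_s are hypothetical
-- field names; adjust to match your schema.
SELECT c.industry, a.app_id, s.feature_id, COUNT(*) AS cnt
  FROM customer c
  JOIN apps a    ON c.customer_id = a.customer_id
  JOIN support s ON c.customer_id = s.customer_id
 WHERE c.region_s = 'US-East'
   AND a.app_type_s = 'Search'
   AND s.request_type_s = 'feature-enhancement'
 GROUP BY c.industry, a.app_id, s.feature_id
```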
Behind the scenes, Fusion SQL issues a JSON facet query that exploits the fact that all documents with the same `customer_id` value are in the same shard. This lets Solr compute the count for the `industry`, `app_id`, `feature_id` group-by key more efficiently than is possible using table scans in Spark.
Write SQL Aggregations
In this article, we provide guidance to help you make the most of the Fusion SQL aggregation engine.
Project fields into the signals_aggr collection
For legacy reasons, the `COLLECTION_NAME_signals_aggr` collection relies on dynamic field names, such as `doc_id_s` and `query_s` instead of `doc_id` and `query`. Consequently, when you project fields to be written to the `COLLECTION_NAME_signals_aggr` collection, you should use dynamic field suffixes, as shown in the SQL snippet below.
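The original snippet is not reproduced here. The following sketch shows the idea: project each aggregated field under a dynamic-field name (`query_s`, `doc_id_s`, and so on) so the results land in the `COLLECTION_NAME_signals_aggr` collection with the field names the boosting stages expect. The `products_signals` source collection is an assumption.

```sql
-- Sketch only: project aggregated fields using dynamic-field suffixes.
SELECT query        AS query_s,
       doc_id       AS doc_id_s,
       SUM(count_i) AS aggr_count_i,
       time_decay(count_i, date) AS weight_d
  FROM products_signals
 GROUP BY query, doc_id
```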
You are not required to use this approach, but if you do not use dynamic field suffixes as shown above, you will need to change the boosting stages in Fusion to work with different field names.

Use WITH to organize complex queries
A common pattern in SQL aggregation queries is the use of subqueries to break the logic up into comprehensible units. For more information about the WITH clause, see https://modern-sql.com/feature/with. Let us work through an example to illustrate the key points:
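The original numbered snippet is not included here. The following reconstruction is a sketch laid out so that its line numbers match the references in the list below (line 1 is the WITH clause, line 7 the FROM, line 15 the final GROUP BY); the projected field names in the outer query are assumptions.

```sql
WITH signal_type_groups AS (
    SELECT SUM(count_i) AS typed_aggr_count_i,
           doc_id,
           user_id,
           type,
           time_decay(count_i, timestamp_tdt) AS typed_weight_d
      FROM product_signals
     WHERE type IN ('click', 'cart', 'purchase')
  GROUP BY user_id, doc_id, type)
    SELECT SUM(typed_aggr_count_i) AS aggr_count_i,
           doc_id AS doc_id_s,
           user_id AS user_id_s,
           SUM(typed_weight_d) AS weight_d
      FROM signal_type_groups
  GROUP BY doc_id, user_id
```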
- At line 1, we declare a statement-scoped view named `signal_type_groups` using the WITH keyword.
- Lines 2-9 define the subquery for the `signal_type_groups` view.
- At line 7, we read from the `product_signals` collection in Fusion.
- Line 8 filters the input to include only `click`, `cart`, and `purchase` signals. Behind the scenes, Fusion translates this WHERE IN clause to a Solr filter query, e.g. `fq=type:(click OR cart OR purchase)`.
- The `signal_type_groups` view produces rows grouped by `user_id`, `doc_id`, and `type` (line 9).
- Starting at line 10, we define a subquery that performs a rollup over the rows in the `signal_type_groups` view by grouping on `doc_id` and `user_id` (line 15).

Notice how the WITH statement breaks this complex query into two units that make aggregation queries easier to comprehend. You are encouraged to adopt this pattern in your own SQL aggregation queries.