Machine Learning

Machine learning with Spark

Apache Spark is an open-source cluster-computing framework and a fast, general execution engine for large-scale data processing. Jobs are decomposed into stepwise tasks, which are distributed across a cluster of networked computers.

Spark improves on previous MapReduce implementations by using resilient distributed datasets (RDDs), a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner.
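
As a brief illustration, the sketch below uses a local PySpark session (the session setup and sample data are illustrative, not part of Fusion) to show how RDD transformations build a lineage that Spark can replay to recover lost partitions:

```python
# A minimal RDD sketch, assuming a local PySpark session and toy data.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-sketch").getOrCreate()
sc = spark.sparkContext

# Parallelize a small dataset across the cluster; partitions are held in memory.
clicks = sc.parallelize([("query-a", 1), ("query-b", 1), ("query-a", 1)])

# Transformations are lazy and recorded as a lineage graph; if an executor
# fails, Spark recomputes only the lost partitions from that lineage.
counts = clicks.reduceByKey(lambda a, b: a + b)

print(counts.collect())  # e.g. [('query-a', 2), ('query-b', 1)]
spark.stop()
```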

Fusion manages a Spark cluster that is used for all signal aggregation processes.

With a Fusion AI license, you can also use the Spark cluster to train and compile machine learning models, as well as to run experiments via the Fusion UI or the Spark Jobs API.
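
For a sense of what model training on Spark looks like, here is a minimal sketch using Spark MLlib with a standalone local session and toy data. This is an assumption-laden illustration, not a Fusion job definition; a real Fusion job would read its training data from a collection rather than an inline DataFrame:

```python
# A minimal MLlib training sketch, assuming a local PySpark session and toy data.
from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler

spark = SparkSession.builder.appName("train-sketch").getOrCreate()

# Toy labeled data: two numeric features and a binary label.
df = spark.createDataFrame(
    [(0.0, 1.1, 0.5), (1.0, 2.3, 1.9), (0.0, 0.4, 0.2), (1.0, 3.1, 2.2)],
    ["label", "f1", "f2"],
)

# Assemble the feature columns into the vector column MLlib expects.
features = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
model = LogisticRegression(maxIter=10).fit(features.transform(df))

print(model.coefficients)
spark.stop()
```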

See Machine Learning Jobs for details about each pre-defined machine learning job in Fusion.

Spark in Fusion on Kubernetes

These topics explain Spark administration in Fusion on Kubernetes:

Spark in Fusion On-Prem

These topics provide information about Spark administration in Fusion Server on premises:

Additionally, you can configure and run Spark jobs in Fusion using the Spark Jobs API or the Fusion UI.
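
For example, starting an existing job over the REST API might look like the following sketch using Python's `requests` library. The host, port, credentials, job ID, and endpoint path are assumptions for illustration; consult the Spark Jobs API reference for the exact calls supported by your Fusion version:

```python
# A hedged sketch of starting a Spark job over the Spark Jobs API.
import requests

FUSION_HOST = "https://localhost:6764"  # assumption: local Fusion instance
JOB_ID = "my-aggregation-job"           # assumption: an existing Spark job ID
AUTH = ("admin", "password123")         # assumption: basic-auth credentials

# Start the job (endpoint shape is an assumption; see the API reference).
resp = requests.post(f"{FUSION_HOST}/api/spark/jobs/{JOB_ID}", auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.json())  # job status payload
```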

The Data Science Toolkit Integration (DSTI)

Beginning with Fusion 5.0, data scientists and machine learning engineers can deploy their own trained Python machine learning models to Fusion using the Data Science Toolkit Integration (DSTI). This enables real-time prediction and seamless integration with query and index pipelines.
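
As a rough sketch of the first half of that workflow, the following trains and serializes a toy model. The use of scikit-learn, pickle, and this file layout are assumptions for illustration, not the DSTI packaging contract; the actual packaging and deployment steps are described in the DSTI documentation:

```python
# A hypothetical sketch: train a toy Python model and serialize it so a
# deployment step can load it for real-time prediction.
import pickle

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: two numeric features and a binary label.
X = np.array([[0.1, 0.2], [0.9, 1.1], [0.2, 0.1], [1.0, 0.8]])
y = np.array([0, 1, 0, 1])
model = LogisticRegression().fit(X, y)

# Serialize the trained model to disk (file name is an assumption).
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)
```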