Fusion 5.9

    Outlier Detection job configuration specifications

    Outlier detection jobs run against your data collections and perform the following actions:

    • Identify information that significantly differs from other data in the collection

    • Attach labels to designate each outlier group

    To create an Outlier Detection job, sign in to Managed Fusion and click Collections > Jobs. Then click Add+ and, in the Clustering and Outlier Analysis Jobs section, select Outlier Detection. You can enter basic and advanced parameters to configure the job. If a field has a default value, it is populated when you add the job.

    Basic parameters

    To enter advanced parameters in the UI, click Advanced. Those parameters are described in the advanced parameters section.
    • Spark job ID. The unique ID for the Spark job that references this job in the API. This is the id field in the configuration file. Required field.

    • Input/Output Parameters. This section includes these parameters:

      • Training collection. The Solr collection that contains documents that will be clustered. The job will be run against this information. This is the trainingCollection field in the configuration file. Required field.

      • Output collection. The Solr collection where the job output is stored. The job will write the output to this collection. This is the outputCollection field in the configuration file. Required field.

      • Data format. The format that contains training data. The format must be compatible with Spark and options include solr, parquet, and orc. This is the dataFormat field in the configuration file. Required field.

    • Only save outliers? If this checkbox is selected (set to true), only outliers are saved in the job’s output collection. If not selected (set to false), the entire dataset is saved in the job’s output collection. This is the outputOutliersOnly field in the configuration file. Optional field.

    • Field Parameters. This section includes these parameters:

      • Field to vectorize. The Solr field that contains text training data. To combine data from multiple fields with different weights, enter field1:weight1,field2:weight2, etc. This is the fieldToVectorize field in the configuration file. Required field.

      • ID field name. The unique ID for each document. This is the uidField field in the configuration file. Required field.

      • Output field name for outlier group ID. The field that contains the ID for the outlier group. This is the outlierGroupIdField field in the configuration file. Optional field.

      • Top unique terms field name. The field where the job output stores the top frequent terms that, for the most part, are unique for each outlier group. The information is computed based on term frequency-inverse document frequency (TF-IDF) and group ID. This is the outlierGroupLabelField field in the configuration file. Optional field.

      • Top frequent terms field name. The field where the job output stores top frequent terms in each cluster. Terms may overlap with other clusters. This is the freqTermField field in the configuration file. Optional field.

      • Output field name for doc distance to its corresponding cluster center. The field that contains the document’s distance from the center of its cluster. The cluster center is the arithmetic mean of all documents in the cluster, so this distance indicates how representative the document is of its cluster. This is the distToCenterField field in the configuration file. Optional field.

    • Model Tuning Parameters. This section includes these parameters:

      • Max doc support. The maximum number of documents that can contain the term. Values that are <1.0 indicate a percentage, 1.0 is 100 percent, and >1.0 indicates the exact number. This is the maxDF field in the configuration file. Optional field.

      • Min doc support. The minimum number of documents that must contain the term. Values that are <1.0 indicate a percentage, 1.0 is 100 percent, and >1.0 indicates the exact number. This is the minDF field in the configuration file. Optional field.

      • Number of keywords for each cluster. The number of keywords required to label each cluster. This is the numKeywordsPerLabel field in the configuration file. Optional field.

    • Featurization Parameters. This section includes the following parameter:

      • Lucene analyzer schema. This is the JSON-encoded Lucene text analyzer schema used for tokenization. This is the analyzerConfig field in the configuration file. Optional field.
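
    The basic parameters above map directly onto fields in the job's JSON configuration file. The following is a minimal sketch of such a configuration; the collection and field names (products, products_outliers, description_t) are hypothetical placeholders, while the field names and default values come from the parameter descriptions above.

    {
      "type": "outlier_detection",
      "id": "outlier-detection-products",
      "trainingCollection": "products",
      "outputCollection": "products_outliers",
      "dataFormat": "solr",
      "fieldToVectorize": "description_t",
      "uidField": "id",
      "outputOutliersOnly": false,
      "outlierGroupIdField": "outlier_group_id",
      "outlierGroupLabelField": "outlier_group_label",
      "freqTermField": "freq_terms",
      "distToCenterField": "dist_to_center"
    }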

    Advanced parameters

    If you click the Advanced toggle, the following optional fields are displayed in the UI.

    • Spark Settings. The Spark configuration settings include the following:

      • Spark SQL filter query. This field contains a Spark SQL query that filters your input data. The input data is registered as the table spark_input, so the default query SELECT * from spark_input selects all of it. This is the sparkSQL field in the configuration file.

      • Data output format. The format for the job output. The format must be compatible with Spark and options include solr and parquet. This is the dataOutputFormat field in the configuration file.

      • Partition fields. If the job output is written to non-Solr sources, this field contains a comma-delimited list of column names that partition the dataframe before the external output is written. This is the partitionCols field in the configuration file.

    • Input/Output Parameters. This advanced option adds these parameters:

      • Training data filter query. If Solr is used, this field contains the Solr query executed to load training data. This is the trainingDataFilterQuery field in the configuration file.

    • Read Options. This section lets you enter parameter name:parameter value options to use when reading input from Solr or other sources. This is the readOptions field in the configuration file.

    • Write Options. This section lets you enter parameter name:parameter value options to use when writing output to Solr or other sources. This is the writeOptions field in the configuration file.

    • Dataframe config options. This section includes these parameters:

      • Property name:property value. Each entry defines an additional Spark dataframe loading configuration option. This is the trainingDataFrameConfigOptions field in the configuration file.

      • Training data sampling fraction. This is the fractional amount of the training data the job will use. This is the trainingDataSamplingFraction field in the configuration file.

      • Random seed. This value seeds the deterministic pseudorandom number generation used when grouping documents into clusters based on similarities in their content. This is the randomSeed field in the configuration file.

    • Field Parameters. The advanced option adds this parameter:

      • Fields to load. This field contains a comma-delimited list of Solr fields to load. If blank, the job selects the required fields to load at runtime. This is the sourceFields field in the configuration file.

    • Model Tuning Parameters. The advanced option adds these parameters:

      • Number of outlier groups. The number of clusters to help find outliers. This is the outlierK field in the configuration file.

      • Outlier cutoff. A cluster is designated an outlier group if it contains less than this fraction of the total documents. Values that are <1.0 indicate a percentage, 1.0 is 100 percent, and >1.0 indicates the exact number. This is the outlierThreshold field in the configuration file.

      • Vector normalization. The p-norm value used to normalize vectors. A value of -1 turns off normalization. This is the norm field in the configuration file.

    • Miscellaneous Parameters. This section includes this parameter:

      • Model ID. The unique identifier for the model to be trained. If no value is entered, the Spark Job ID is used. This is the modelId field in the configuration file.
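
    The advanced options extend the same configuration file. The snippet below is an illustrative sketch only: the sourceFields value and the readOptions/writeOptions keys (rows and commit_within, assumed spark-solr option names) are assumptions rather than recommendations, while the remaining values are the documented defaults.

    {
      "sparkSQL": "SELECT * from spark_input",
      "trainingDataFilterQuery": "*:*",
      "trainingDataSamplingFraction": 1,
      "randomSeed": 1234,
      "dataOutputFormat": "solr",
      "sourceFields": "id,description_t",
      "readOptions": [ { "key": "rows", "value": "10000" } ],
      "writeOptions": [ { "key": "commit_within", "value": "5000" } ],
      "outlierK": 10,
      "outlierThreshold": 0.01,
      "norm": 2,
      "modelId": "outlier-detection-products"
    }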

    Use this job when you want to find outliers in a set of documents and attach labels to each outlier group.

    id - string (required)

    The ID for this Spark job. Used in the API to reference this job. Allowed characters: a-z, A-Z, dash (-) and underscore (_). Maximum length: 63 characters.

    <= 63 characters

    Match pattern: [a-zA-Z][_\-a-zA-Z0-9]*[a-zA-Z0-9]?
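
    For example, a hypothetical job ID such as outlier-detection-products matches this pattern; an ID that starts with a digit or contains spaces does not.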

    sparkConfig - array[object]

    Spark configuration settings.

    Object attributes:

    • key (required): display name "Parameter Name", type string

    • value: display name "Parameter Value", type string
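
    As a sketch, a sparkConfig entry in the configuration file takes key/value pairs; the property names below are standard Spark settings used here only as assumed examples.

    "sparkConfig": [
      { "key": "spark.executor.memory", "value": "4g" },
      { "key": "spark.executor.cores", "value": "2" }
    ]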

    trainingCollection - string (required)

    Solr Collection containing documents to be clustered

    >= 1 characters

    fieldToVectorize - string (required)

    Solr field containing text training data. Data from multiple fields with different weights can be combined by specifying them as field1:weight1,field2:weight2 etc.

    >= 1 characters
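
    For example, assuming hypothetical fields title_t and body_t, a weighted combination could be written as title_t:2.0,body_t:1.0.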

    dataFormat - string (required)

    Spark-compatible format that contains training data (like 'solr', 'parquet', 'orc' etc)

    >= 1 characters

    Default: solr

    trainingDataFrameConfigOptions - object

    Additional spark dataframe loading configuration options

    trainingDataFilterQuery - string

    Solr query to use when loading training data if using Solr

    Default: *:*

    sparkSQL - string

    Use this field to create a Spark SQL query for filtering your input data. The input data will be registered as spark_input

    Default: SELECT * from spark_input
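
    For example, a sketch of a filter query against a hypothetical field language_s:

    SELECT * from spark_input WHERE language_s = 'en'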

    trainingDataSamplingFraction - number

    Fraction of the training data to use

    <= 1

    exclusiveMaximum: false

    Default: 1

    randomSeed - integer

    For any deterministic pseudorandom number generation

    Default: 1234

    outputCollection - string (required)

    Solr collection in which to store model-labeled data

    >= 1 characters

    dataOutputFormat - string

    Spark-compatible output format (like 'solr', 'parquet', etc)

    >= 1 characters

    Default: solr

    sourceFields - string

    Solr fields to load (comma-delimited). Leave empty to allow the job to select the required fields to load at runtime.

    partitionCols - string

    If writing to non-Solr sources, this field will accept a comma-delimited list of column names for partitioning the dataframe before writing to the external output

    writeOptions - array[object]

    Options used when writing output to Solr or other sources

    Object attributes:

    • key (required): display name "Parameter Name", type string

    • value: display name "Parameter Value", type string

    readOptions - array[object]

    Options used when reading input from Solr or other sources.

    Object attributes:

    • key (required): display name "Parameter Name", type string

    • value: display name "Parameter Value", type string

    modelId - string

    Identifier for the model to be trained; uses the supplied Spark Job ID if not provided.

    >= 1 characters

    outlierGroupIdField - string

    Output field name for unique outlier group id.

    Default: outlier_group_id

    outlierGroupLabelField - string

    Output field name for top frequent terms that are (mostly) unique for each outlier group as computed based on TF-IDF and group Id.

    Default: outlier_group_label

    outputOutliersOnly - boolean

    If true, only outliers are saved in the output collection, otherwise, the whole dataset is saved.

    Default: false

    uidField - string (required)

    Field containing the unique ID for each document.

    >= 1 characters

    Default: id

    analyzerConfig - string

    LuceneTextAnalyzer schema for tokenization (JSON-encoded)

    >= 1 characters

    Default: { "analyzers": [{ "name": "StdTokLowerStop","charFilters": [ { "type": "htmlstrip" } ],"tokenizer": { "type": "standard" },"filters": [{ "type": "lowercase" },{ "type": "KStem" },{ "type": "length", "min": "2", "max": "32767" },{ "type": "fusionstop", "ignoreCase": "true", "format": "snowball", "words": "org/apache/lucene/analysis/snowball/english_stop.txt" }] }],"fields": [{ "regex": ".+", "analyzer": "StdTokLowerStop" } ]}

    freqTermField - string

    Output field name for top frequent terms in each cluster. These may overlap with other clusters.

    Default: freq_terms

    distToCenterField - string

    Output field name for doc distance to its corresponding cluster center (measures how representative the doc is).

    Default: dist_to_center

    norm - integer

    p-norm to normalize vectors with (choose -1 to turn normalization off)

    Default: 2

    Allowed values: -1, 0, 1, 2

    minDF - number

    Minimum number of documents the term must appear in. Values <1.0 denote a percentage, 1.0 denotes 100 percent, and >1.0 denotes an exact number.

    Default: 5

    maxDF - number

    Maximum number of documents the term can appear in. Values <1.0 denote a percentage, 1.0 denotes 100 percent, and >1.0 denotes an exact number.

    Default: 0.75

    numKeywordsPerLabel - integer

    Number of Keywords needed for labeling each cluster.

    Default: 5

    outlierK - integer

    Number of clusters to help find outliers.

    Default: 10

    outlierThreshold - number

    A cluster is identified as an outlier group if it contains less than this fraction of the total documents. Values <1.0 denote a percentage, 1.0 denotes 100 percent, and >1.0 denotes an exact number.

    Default: 0.01

    type - string (required)

    Default: outlier_detection

    Allowed values: outlier_detection
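
    For reference, here is a complete example configuration assembled from the schema above. The required fields use hypothetical collection and field names (products, products_outliers, description_t); every other value shown is the documented default, so omitting those fields would produce the same behavior.

    {
      "type": "outlier_detection",
      "id": "outlier-detection-products",
      "trainingCollection": "products",
      "outputCollection": "products_outliers",
      "dataFormat": "solr",
      "fieldToVectorize": "description_t",
      "uidField": "id",
      "trainingDataFilterQuery": "*:*",
      "sparkSQL": "SELECT * from spark_input",
      "trainingDataSamplingFraction": 1,
      "randomSeed": 1234,
      "dataOutputFormat": "solr",
      "outputOutliersOnly": false,
      "outlierGroupIdField": "outlier_group_id",
      "outlierGroupLabelField": "outlier_group_label",
      "freqTermField": "freq_terms",
      "distToCenterField": "dist_to_center",
      "norm": 2,
      "minDF": 5,
      "maxDF": 0.75,
      "numKeywordsPerLabel": 5,
      "outlierK": 10,
      "outlierThreshold": 0.01
    }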