Fusion 5.9

    SharePoint Online V1 Optimized Connector Configuration Reference


    The SharePoint Online V1 Optimized connector retrieves data from cloud-based SharePoint repositories. Authentication requires a SharePoint user who has permissions to access SharePoint via the REST API. This user must be registered with the SharePoint Online authentication server; it is not necessarily the same as the user in Active Directory or LDAP.

    Deprecation and removal notice

    This connector is deprecated as of Fusion 4.2 and removed as of Fusion 5.0. Use the SharePoint Optimized V2 connector instead.

    For more information about deprecations and removals, including possible alternatives, see Deprecations and Removals.

    Configuration

    When entering configuration values in the UI, use unescaped characters, such as \t for the tab character. When entering configuration values in the API, use escaped characters, such as \\t for the tab character.
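    As a hypothetical illustration of the difference, the two forms of a tab delimiter can be compared in Python:

    ```python
    # The UI form is the literal character; the API form is the escaped sequence.
    ui_value = "\t"     # one character: an actual tab, as typed in the UI
    api_value = "\\t"   # two characters: backslash + "t", as sent in an API payload

    # Unescaping the API form recovers the literal character the UI form holds.
    assert len(ui_value) == 1 and len(api_value) == 2
    assert api_value.encode().decode("unicode_escape") == ui_value
    ```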

    A crawler for SharePoint Online, using the SharePoint REST API.

    description - string

    Optional description for this datasource.

    id - string (required)

    Unique name for this datasource.

    >= 1 characters

    Match pattern: ^[a-zA-Z0-9_-]+$
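    A short sketch of validating an id against the documented pattern; the function name is illustrative, not part of the API:

    ```python
    import re

    # The documented constraint: one or more characters from [a-zA-Z0-9_-].
    ID_PATTERN = re.compile(r"^[a-zA-Z0-9_-]+$")

    def is_valid_datasource_id(ds_id: str) -> bool:
        """Return True when the id satisfies the documented pattern."""
        return bool(ID_PATTERN.fullmatch(ds_id))

    assert is_valid_datasource_id("sharepoint-online_01")
    assert not is_valid_datasource_id("my datasource")  # spaces are not allowed
    assert not is_valid_datasource_id("")               # must be >= 1 character
    ```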

    parserId - string

    Parser used when parsing raw content. The retry parsing setting is available under crawl performance (advanced setting).

    pipeline - string (required)

    Name of an existing index pipeline for processing documents.

    >= 1 characters

    properties - Properties

    Datasource configuration properties

    chunkSize - integer

    The number of items to batch for each round of fetching. A higher value can make crawling faster, but memory usage is also increased. The default is 1.

    Default: 1

    commitAfterItems - integer

    Commit the crawlDB to disk after this many items have been received. A smaller number results in a slower crawl because commits to disk are more frequent; conversely, a larger number means a job resumed after a crash must recrawl more records.

    Default: 10000

    crawlDBType - string

    The type of crawl database to use, in-memory or on-disk.

    Default: on-disk

    Allowed values: on-disk, in-memory

    db - Connector DB

    Type and properties for a ConnectorDB implementation to use with this datasource.

    aliases - boolean

    Keep track of original URIs that resolved to the current URI. This negatively impacts performance and the size of the DB.

    Default: false

    inlinks - boolean

    Keep track of incoming links. This negatively impacts performance and the size of the DB.

    Default: false

    inv_aliases - boolean

    Keep track of target URIs that the current URI resolves to. This negatively impacts performance and the size of the DB.

    Default: false

    type - string

    Fully qualified class name of ConnectorDb implementation.

    >= 1 characters

    Default: com.lucidworks.connectors.db.impl.MapDbConnectorDb

    dedupe - boolean

    If true, documents will be deduplicated. Deduplication can be done based on an analysis of the content, on the content of a specific field, or by a JavaScript function. If neither a field nor a script are defined, content analysis will be used.

    Default: false

    dedupeField - string

    Field to be used for dedupe. Define either a field or a dedupe script; otherwise, the full raw content of each document will be used.

    dedupeSaveSignature - boolean

    If true, the signature used for dedupe will be stored in a 'dedupeSignature_s' field. Note that this may cause errors about 'immense terms' in that field.

    Default: false

    dedupeScript - string

    Custom JavaScript to dedupe documents. The script must define a 'genSignature(content){}' function, but can use any combination of document fields. The function must return a string.
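    Because the script is carried as a string inside the datasource properties, a configuration fragment might look like the sketch below. The document fields used inside genSignature ("title_s", "length_l") are hypothetical examples, not names mandated by the connector:

    ```python
    # Illustrative dedupe configuration fragment; the JavaScript is held as a
    # plain string value. genSignature must return a string.
    dedupe_properties = {
        "dedupe": True,
        "dedupeScript": (
            "function genSignature(content) {"
            "  return content['title_s'] + '|' + content['length_l'];"
            "}"
        ),
    }

    assert dedupe_properties["dedupe"] is True
    assert "genSignature" in dedupe_properties["dedupeScript"]
    ```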

    delete - boolean

    Set to true to remove documents from the index when they can no longer be accessed as unique documents.

    Default: true

    deleteErrorsAfter - integer

    Number of fetch failures to tolerate before removing a document from the index. The default of -1 means documents will never be removed due to fetch failures.

    Default: -1

    depth - integer

    Number of levels in a directory or site tree to descend for documents.

    Default: -1

    diagnosticMode - boolean

    Enable to print more detailed information to the logs about each request.

    Default: false

    emitThreads - integer

    The number of threads used to send documents from the connector to the index pipeline. The default is 5.

    Default: 5

    enable_security_trimming - Enable Security Trimming

    f.user_group_cache_collection_name - string

    The name of the sidecar ACLs collection used in security trimming.

    Default: acl

    excludeExtensions - array[string]

    File extensions that should not be fetched. This will limit this datasource to all extensions except those in this list.

    excludeRegexes - array[string]

    Regular expressions for URI patterns to exclude. This will limit this datasource to only URIs that do not match the regular expression.

    f.acl_commit_after - integer

    The ACL collection's auto-commit interval (-1 to never auto-commit).

    >= -1

    exclusiveMinimum: false

    Default: 1800000

    f.adfsStsIssuerURI - string

    The IssuerURI is used by the authentication platform to locate the namespace that the token is designated for.

    f.app_auth_azure_login_endpoint - string

    The azure login endpoint to use.

    Default: https://login.windows.net

    f.app_auth_client_id - string

    When you want to use app authentication, this is the client ID of your application.

    f.app_auth_client_secret - string

    Applicable to SharePoint Online OAuth App-Auth only. The Azure client secret of your application.

    f.app_auth_pfx - string

    The base64 encoded value of your X509 PFX certificate file. -- To get this in Linux (bash): base64 cert.pfx | tr -d '\n' -- To get this in Windows (powershell): [Convert]::ToBase64String([IO.File]::ReadAllBytes('cert.pfx'))
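    The same encoding can be produced in Python; this is a sketch equivalent to the bash and PowerShell commands above, not a connector requirement:

    ```python
    import base64

    def pfx_to_base64(path: str) -> str:
        """Read a certificate file and return its single-line base64 encoding."""
        with open(path, "rb") as f:
            return base64.b64encode(f.read()).decode("ascii")
    ```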

    f.app_auth_pfx_password - string

    The password of the x509 pfx certificate.

    f.app_auth_refresh_token - string

    Applicable to SharePoint Online OAuth App-Auth only. This is a refresh token, which is reusable for up to 12 hours. You must obtain a new token using the OAuth login process when the token expires.

    f.app_auth_tenant - string

    The Office 365 tenant of the app, for example exampleapp.onmicrosoft.com.

    f.avoid_ssl_hostname_verification - boolean

    Enable this in cases when the CN on the SSL certificate does not match the host name of the server.

    Default: true

    f.connect_timeout - integer

    The async http connection timeout.

    >= -1

    exclusiveMinimum: false

    Default: 5000

    f.content_commit_after - integer

    The content collection's auto commit value (-1 for never auto commit)

    >= -1

    exclusiveMinimum: false

    Default: 1800000

    f.domain - string

    The NETBIOS domain for the network. Example: LUCIDWORKS

    f.enable_http_headers_debugging - boolean

    Prints DEBUG level information to the logs.

    Default: false

    f.excludeContentsExtensions - array[string]

    File extensions of files whose contents will not be downloaded when indexing. The list item metadata will still be indexed, but the file contents will not. The comparison is not case sensitive, and you do not have to specify the '.', but it will still work if you do. For example, "zip" and ".zip" are both acceptable. Whitespace will also be trimmed.

    f.fetch_all_site_collections - boolean

    When this is selected, all site collections are obtained from the SharePoint web application and added to the start links automatically.

    Default: true

    f.fetch_taxonomies - boolean

    Expand the taxonomy path of each list item with a taxonomy field.

    Default: false

    f.includeContentsExtensions - array[string]

    File extensions of files whose contents will be downloaded when indexing. Files with other extensions will have their list item metadata indexed, but not their contents. The comparison is not case sensitive, and you do not have to specify the '.', but it will still work if you do. For example, "zip" and ".zip" are both acceptable. Whitespace will also be trimmed.

    f.list_view_threshold - integer

    Set this to your SharePoint farm's list view threshold. This threshold sets the maximum number of list item metadata elements that can be fetched at one time. The typical default is 5000, but your SharePoint administrators can set it to a different value. If you see the error "The attempted operation is prohibited because it exceeds the list view threshold enforced by the administrator", check with your SharePoint admins to get the correct value.

    >= -1

    exclusiveMinimum: false

    Default: 5000

    f.loginCookieRefreshRateMs - integer

    The amount of time, in milliseconds, to reuse a SharePoint Online cookie before fetching a new one. Ideally, set this slightly less than the cookie's maximum lifetime.

    Default: 10800000

    f.maxSizeBytes - integer

    Maximum size, in bytes, of a document to crawl.

    Default: 4194304

    f.max_connections - integer

    Number of max async http connections.

    >= -1

    exclusiveMinimum: false

    Default: -1

    f.max_connections_per_host - integer

    The number of max connections per host.

    >= -1

    exclusiveMinimum: false

    Default: -1

    f.max_list_items_per_site_collection - integer

    Setting this will cause the fetcher to fetch only this many list items per site collection when crawling. Set to -1 for unlimited.

    >= -1

    exclusiveMinimum: false

    Default: -1

    f.max_prefetch_parallel_jobs - integer

    This is the maximum number of prefetch jobs that can run in parallel. If you are crawling small site collections, use a larger number here, such as 10 or 20. If you are crawling large site collections, a smaller number is fine, because most of the work is done within each prefetch job.

    >= 1

    <= 1000

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 10

    f.max_site_collections - integer

    When "Fetch all site collections" is checked, this limits the number of site collections fetched. If set to -1, all site collections will be fetched.

    >= -1

    <= 99999999

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: -1

    f.max_sites_per_site_collection - integer

    Setting this will cause the fetcher to fetch only this many sites per site collection when crawling. Set to -1 for unlimited.

    >= -1

    exclusiveMinimum: false

    Default: -1

    f.minSizeBytes - integer

    Minimum size, in bytes, of a document to crawl.

    Default: 0

    f.password - string

    Password for the SharePoint user.

    f.pooled_connection_idle_timeout - integer

    The timeout for getting a connection from the pool.

    >= -1

    exclusiveMinimum: false

    Default: 600000

    f.pooled_connection_ttl - integer

    The time to live of a connection in the pool.

    >= -1

    exclusiveMinimum: false

    Default: 360000

    f.prefetch_file_download_timeout_secs - integer

    Timeout, in seconds, for a file download during the pre-fetch export.

    >= 1

    <= 999999

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 100

    f.prefetch_num_threads - integer

    How many threads to use when building the pre-fetch index. When crawling small site collections, use a small number of prefetch threads, such as 1 or 2, because the cost of allocating the threads outweighs their benefit. When crawling a very large site collection, use many threads, such as 25-50, because more work can be done in parallel.

    >= 1

    <= 1000

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 5

    f.proxyHost - string

    The address to use when connecting through the proxy.

    f.proxyPort - integer

    The port to use when connecting through the proxy. (HTTP or SOCKS)

    f.remove_prepended_ids - boolean

    If fields have been defined to include PrependIds, this option will remove those IDs before indexing.

    Default: true

    f.request_timeout - integer

    The async http request timeout.

    >= -1

    exclusiveMinimum: false

    Default: 300000

    f.retry_attempts - integer

    How many times to retry retryable errors before giving up and throwing an error. Set this to 1 for no retries.

    >= 1

    <= 99999

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 4

    f.retry_backoff_delay_factor - number

    The multiplicative factor in the backoff algorithm.

    >= 1

    <= 9999

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 1.2

    f.retry_backoff_delay_ms - integer

    The number of milliseconds in the backoff retry algorithm.

    <= 9999999999

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 1000

    f.retry_backoff_max_delay_ms - integer

    The maximum backoff time allowed in the backoff algorithm.

    <= 9999999999

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 15000

    f.retry_max_wait_ms - integer

    The maximum number of milliseconds to wait for retries to finish before giving up and stopping further retries.

    <= 99999999

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 600000
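    Taken together, the retry settings describe an exponential backoff. The sketch below shows how the delays might grow under the defaults; the exact semantics inside the connector are an assumption:

    ```python
    def backoff_delays(delay_ms=1000, factor=1.2, max_delay_ms=15000,
                       max_wait_ms=600000, attempts=4):
        """Yield the wait before each retry; attempt 1 is the original request."""
        total, delay = 0, float(delay_ms)
        for _ in range(attempts - 1):
            wait = min(delay, max_delay_ms)   # capped by retry_backoff_max_delay_ms
            if total + wait > max_wait_ms:
                break                          # retry_max_wait_ms exhausted
            total += wait
            yield wait
            delay *= factor                    # multiplicative backoff

    # With the defaults, the waits grow as roughly 1000, 1200, 1440 ms.
    assert [round(w) for w in backoff_delays()] == [1000, 1200, 1440]
    ```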

    f.sharepoint_services_timeout - integer

    Time in milliseconds to wait for a server response.

    Default: 600000

    f.user_agent - string

    The user agent header identifies the client in HTTP traffic. This is important for preventing hard rate limiting by SharePoint Online.

    Default: ISV|Lucidworks|Fusion/4.2

    f.username - string

    Name of a SharePoint user who has the required permissions to access SharePoint via the REST API. This user must be registered with the SharePoint Online authentication server; it is not necessarily the same as the user in Active Directory or LDAP.

    f.validate_on_save - boolean

    Validate when you save the datasource.

    Default: true

    failFastOnStartLinkFailure - boolean

    If true, when Fusion cannot connect to any of the provided start links, the crawl is stopped and an exception logged.

    Default: true

    fetchDelayMS - integer

    Number of milliseconds to wait between fetch requests. The default is 0. This property can be used to throttle a crawl if necessary.

    Default: 0

    fetchThreads - integer

    The number of threads to use during fetching. The default is 5.

    Default: 5

    forceRefresh - boolean

    Set to true to recrawl all items even if they have not changed since the last crawl.

    Default: false

    forceRefreshClearSignatures - boolean

    If true, signatures will be cleared if force recrawl is enabled.

    Default: true

    includeExtensions - array[string]

    File extensions to be fetched. This will limit this datasource to only these file extensions.

    includeRegexes - array[string]

    Regular expressions for URI patterns to include. This will limit this datasource to only URIs that match the regular expression.
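    A sketch of how include and exclude patterns might combine; whether the connector anchors the match is an assumption, so an unanchored search is used here:

    ```python
    import re

    def uri_allowed(uri, include_regexes=(), exclude_regexes=()):
        """A URI passes when it matches an include pattern (if any are set)
        and matches no exclude pattern."""
        if include_regexes and not any(re.search(p, uri) for p in include_regexes):
            return False
        return not any(re.search(p, uri) for p in exclude_regexes)

    assert uri_allowed("https://example.sharepoint.com/sites/docs/page.aspx",
                       include_regexes=[r"/sites/docs/"])
    assert not uri_allowed("https://example.sharepoint.com/sites/archive/old.aspx",
                           exclude_regexes=[r"/archive/"])
    ```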

    indexCrawlDBToSolr - boolean

    EXPERIMENTAL: Set to true to index the crawl-database into a 'crawldb_<datasource-ID>' collection in Solr.

    Default: false

    initial_mapping - Initial field mapping

    Provides mapping of fields before documents are sent to an index pipeline.

    condition - string

    Define a conditional script that must result in true or false. This can be used to determine if the stage should process or not.

    label - string

    A unique label for this stage.

    <= 255 characters

    mappings - array[object]

    List of mapping rules

    Default:
    {"operation":"move","source":"charSet","target":"charSet_s"}
    {"operation":"move","source":"fetchedDate","target":"fetchedDate_dt"}
    {"operation":"move","source":"lastModified","target":"lastModified_dt"}
    {"operation":"move","source":"signature","target":"dedupeSignature_s"}
    {"operation":"move","source":"length","target":"length_l"}
    {"operation":"move","source":"mimeType","target":"mimeType_s"}
    {"operation":"move","source":"parent","target":"parent_s"}
    {"operation":"move","source":"owner","target":"owner_s"}
    {"operation":"move","source":"group","target":"group_s"}

    object attributes:
    operation - string (display name: Operation)
    source - string, required (display name: Source Field)
    target - string (display name: Target Field)
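    A hypothetical mappings list in the shape described above; the first two rules come from the documented defaults, and the third custom rule is purely illustrative:

    ```python
    mappings = [
        {"operation": "move", "source": "charSet", "target": "charSet_s"},
        {"operation": "move", "source": "lastModified", "target": "lastModified_dt"},
        # Custom rule (illustrative): move the raw "author" field to "author_s".
        {"operation": "move", "source": "author", "target": "author_s"},
    ]

    # Every rule carries an operation and the required source field.
    assert all({"operation", "source"} <= rule.keys() for rule in mappings)
    ```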

    reservedFieldsMappingAllowed - boolean

    Default: false

    skip - boolean

    Set to true to skip this stage.

    Default: false

    unmapped - Unmapped Fields

    If fields do not match any of the field mapping rules, these rules will apply.

    operation - string

    The type of mapping to perform: move, copy, delete, add, set, or keep.

    Default: copy

    Allowed values: copy, move, delete, set, add, keep

    source - string

    The name of the field to be mapped.

    target - string

    The name of the field to be mapped to.

    maxItems - integer

    Maximum number of documents to fetch. The default (-1) means no limit.

    Default: -1

    parserRetryCount - integer

    The maximum number of times the configured parser will try getting content before giving up.

    <= 99

    exclusiveMinimum: false

    exclusiveMaximum: true

    Default: 0

    refreshAll - boolean

    Set to true to always recrawl all items found in the crawldb.

    Default: true

    refreshErrors - boolean

    Set to true to recrawl items that failed during the last crawl.

    Default: false

    refreshIDPrefixes - array[string]

    Recrawl all items whose IDs begin with one of these prefixes.

    refreshIDRegexes - array[string]

    Recrawl all items whose IDs match one of these regular expressions.

    refreshOlderThan - integer

    Recrawl items whose last fetched date is more than this number of seconds ago.

    Default: -1

    refreshScript - string

    A JavaScript function ('shouldRefresh()') to customize the items recrawled.

    refreshStartLinks - boolean

    Set to true to recrawl items specified in the list of start links.

    Default: false

    retryEmit - boolean

    Set to true to retry emit batch failures on a document-by-document basis.

    Default: true

    rewriteLinkScript - string

    A JavaScript function 'rewriteLink(link) { }' to modify links to documents before they are fetched.

    startLinks - array[string]

    SharePoint site collections and sites are allowed as start links.

    trackEmbeddedIDs - boolean

    Track IDs produced by splitters, to enable dedupe and deletion of embedded content.

    Default: true