Fusion 5.12

    Edit a web data source

    This topic details how to edit a data source associated with a Springboard application.

    1. Sign in to Springboard.

    2. In the Applications Manager screen, select the application to modify.

    3. In the application’s Hub screen, click the Data Sources icon in the left panel, or scroll to the Data Sources section of the screen and click Manage in the top right corner of the section.

    4. In the Data Sources screen, point to the data source to edit and click View/Edit to the right of the entry.

    5. On the Details tab of the Edit Data Source screen, enter a short name in the Data source name field.

      The Region and Start URL fields cannot be edited. To change the region or start URL, delete the data source and then add a new web data source with the correct information.

    6. In the Labels field, optionally enter values that identify the data source, such as FAQ.

    7. In the Include pages field, select one of the following options to specify which site pages to crawl:

      • Pages under the start URL

      • Pages on this site and its subdomains
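      To make the two scope options concrete, here is a minimal Python sketch of how each choice could translate to a URL filter. This is an illustration only, and the `in_scope` helper is hypothetical, not part of Springboard:

```python
from urllib.parse import urlparse

def in_scope(url, start_url, include_subdomains=False):
    """Illustrative scope check, not Springboard's implementation.

    include_subdomains=False -> 'Pages under the start URL'
    include_subdomains=True  -> 'Pages on this site and its subdomains'
    """
    start = urlparse(start_url)
    page = urlparse(url)
    if include_subdomains:
        # Match the start host itself or any host ending in ".<start host>".
        return page.netloc == start.netloc or page.netloc.endswith("." + start.netloc)
    # Same host, and the page path must begin with the start URL's path.
    return page.netloc == start.netloc and page.path.startswith(start.path)
```

      With a start URL of https://example.com/docs/, the first option accepts https://example.com/docs/page but rejects https://blog.example.com/x; the second accepts pages on any example.com subdomain.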

    8. In the Include file types field, select all applicable file types.

    9. In the Include external domains for selected file types field, enter the external domains that contain the selected file types you want to include in the crawl, and press Enter. Enter each domain without the scheme: example.com, not https://example.com. Subdomains are included automatically unless they are added to the list of exclude links.

      The domain you enter displays under the field, and the field clears so you can add another. To remove a domain, click the X to the right of the entry.

      Other fields affect whether files from URLs entered are included in the crawl. For more information, see Requirements for files to be included in the crawl.
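      If you are pasting many entries, a small normalization step avoids the scheme-format error. The helper below is a hypothetical sketch of converting a pasted URL to the bare-domain form the field expects; it is not part of Springboard:

```python
from urllib.parse import urlparse

def to_bare_domain(entry):
    """Reduce a pasted URL to the bare-domain form the field expects
    (example.com, not https://example.com). Illustrative only."""
    entry = entry.strip()
    # urlparse only populates netloc when a scheme separator is present,
    # so prepend "//" for scheme-less input like "example.com/path".
    parsed = urlparse(entry if "//" in entry else "//" + entry)
    return parsed.netloc or parsed.path
```

      For example, to_bare_domain("https://example.com/path") and to_bare_domain("example.com") both yield example.com.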
    10. In the Include meta tags field, enter a metadata tag to include during ingestion and press Enter. The tag displays under the field, and the field clears so you can add another. To remove a tag, click the X to the right of the entry. If a tag you enter exists on a page and contains a value, it is ingested during the crawl.
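      The sketch below illustrates what "include meta tags" means in practice: collecting the content of matching <meta> elements from a page. It uses Python's standard html.parser and is an illustration, not Springboard's ingestion code:

```python
from html.parser import HTMLParser

class MetaCollector(HTMLParser):
    """Collect <meta name="..." content="..."> values for an included
    set of tag names. Illustrative sketch of meta-tag ingestion."""
    def __init__(self, include):
        super().__init__()
        self.include = set(include)
        self.values = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name = attrs.get("name")
        # Only tags that exist and contain a value are collected.
        if name in self.include and attrs.get("content"):
            self.values[name] = attrs["content"]

collector = MetaCollector(["description"])
collector.feed('<html><head><meta name="description" content="FAQ page"></head></html>')
```

      After feeding the page above, collector.values holds the description tag's content; tags not in the include list are ignored.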

    11. In the Include query parameters field, enter a query parameter, and then click the + sign or press Enter. The parameter displays under the field, and the field clears so you can add another. All of the parameters entered combine to identify a unique web page. For more information, see Include query parameters functionality details.
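      One way to picture this behavior: only the included parameters count toward a page's identity, so URLs that differ only in other parameters (for example, tracking parameters) resolve to the same page. The canonicalization sketch below is an assumption about the mechanism, not the product's code:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def canonical_url(url, include_params):
    """Keep only the query parameters that identify a unique page and
    drop the rest. Illustrative sketch, not Springboard's implementation."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k in include_params]
    return urlunparse(parts._replace(query=urlencode(kept)))
```

      With id as the included parameter, https://example.com/item?id=7&utm_source=mail canonicalizes to https://example.com/item?id=7.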

    12. In the Include links field, add full or partial URLs to include in the crawl, and then click the + sign.

    13. In the Exclude links field, add full or partial URLs to exclude from the web crawl, and then click the + sign.
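      A rough model of how the two link filters could combine is sketched below. Both the substring matching and the exclude-wins precedence are assumptions for illustration; the topic does not document the exact matching rules:

```python
def should_crawl(url, include_links, exclude_links):
    """Decide whether a URL passes the include/exclude link filters.
    Assumes substring matching on full or partial URLs, and that an
    exclude match takes precedence. Illustrative sketch only."""
    if any(part in url for part in exclude_links):
        return False  # assumed: exclusions win over inclusions
    if not include_links:
        return True   # assumed: no include entries means no restriction
    return any(part in url for part in include_links)
```

      Under these assumptions, an include entry of /blog/ admits https://example.com/blog/post, while adding /draft to the exclude list filters out https://example.com/blog/draft.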

    14. In the Data ingest run scheduling field, edit the schedule to specify when data ingestion runs automatically. By default, the schedule is set to Monthly, with the date and time based on your browser's time when you added the data source.

    15. In the Limit crawl levels field, drag the slider to the number of levels to crawl.
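      "Levels" here means link hops from the start URL: level 0 is the start page, level 1 is pages it links to, and so on. The breadth-first sketch below illustrates the idea; get_links is a hypothetical stand-in for fetching a page and extracting its links, and this is not Springboard's crawler:

```python
from collections import deque

def crawl(start_url, get_links, max_levels):
    """Breadth-first crawl that stops following links after max_levels
    hops from the start URL. Illustrative sketch of crawl-level limits."""
    seen = {start_url}
    queue = deque([(start_url, 0)])
    visited = []
    while queue:
        url, level = queue.popleft()
        visited.append(url)
        if level >= max_levels:
            continue  # at the level limit: index the page, follow no links
        for link in get_links(url):
            if link not in seen:
                seen.add(link)
                queue.append((link, level + 1))
    return visited

# Toy link graph for illustration: a -> b -> c -> d
graph = {"a": ["b"], "b": ["c"], "c": ["d"]}
```

      With max_levels=2, the crawl above visits a, b, and c but never follows c's link to d.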

    16. Select one of the following options:

      • To save the data source information, click Save. The changes go into effect when the next scheduled data ingestion occurs.

      • To save the data source information and initiate an immediate, full recrawl and data ingestion of the data source, click Save & Run. Any saved updates are used in the recrawl. You can also select this option to initiate a reindex even if you have not made changes.

        Clicking Save & Run does not modify or affect the scheduled data ingestion. It only initiates an extra ad hoc data ingestion run. For example, if the scheduled ingestion runs on January 5 and you click Save & Run on January 4, the ad hoc ingestion occurs immediately and the scheduled ingestion still occurs on January 5.

        Do not click Save & Run more than once. Clicking the button multiple times prevents the jobs from initiating and causes the crawl to fail with errors. In addition, if a job for the data source is already running when you click Save & Run, the current job continues uninterrupted and the new job is not initiated.
      • To exit without saving changes, click Cancel.

    Additional information