
Create Your First Project


This document walks you through a simple workflow where you can use AI Unlimited and JupyterLab to:

  • Deploy on-demand, scalable compute
  • Connect to your external data source
  • Run the workload
  • Suspend the compute

Before you begin

Copy and retain these items:

  • The API key from AI Unlimited.

  • CSP environment variables from your Console. See Environment Variables.

    AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN

    Learn about AWS environment variables.

Run your first workload

Run %help or %help <command> for details on any magic command. See Magic commands for the full list.

  1. Connect to JupyterLab using http://localhost:8888 and enter the token when prompted.

  2. Connect to AI Unlimited using the API Key.

    Note

    Depending on the installation type, the host value may vary.
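
    The connection cell typically looks like the following sketch. The magic name and parameters here are assumptions based on the AI Unlimited JupyterLab kernel syntax; the host, port, and TLS values are placeholders, so run %help workspaces_config to confirm the exact form for your installation.

    ```sql
    -- Hypothetical values: replace the host, port, and API key with your own.
    %workspaces_config host=localhost:3282, apikey=<your_API_key>, withtls=F
    ```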

  3. Create a new project.

    Note

    Currently, Teradata AI Unlimited supports AWS and Azure.
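
    As a sketch, the project-creation cell might look like this. Project_Name is a placeholder of your choice, and the parameter names are assumptions to verify with %help project_create; per the note above, env must be aws or azure.

    ```sql
    -- env must be aws or azure; Project_Name is a placeholder.
    %project_create project=<Project_Name>, env=aws
    ```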

  4. [Optional] Create an authorization object to store the CSP credentials. Replace ACCESS_KEY_ID, SECRET_ACCESS_KEY, and REGION with your values.
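
    The authorization cell might look like the following sketch. Auth_Name is a placeholder of your choice, and the parameter names are assumptions; confirm them with %help project_auth_create.

    ```sql
    -- Replace ACCESS_KEY_ID, SECRET_ACCESS_KEY, and REGION with your values.
    %project_auth_create name=<Auth_Name>, project=<Project_Name>, key=<ACCESS_KEY_ID>, secret=<SECRET_ACCESS_KEY>, region=<REGION>
    ```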

  5. Deploy an engine for the project.

    Replace Project_Name with a name of your choice. The size parameter value can be small, medium, large, or extralarge. The default size is small.

    The deployment process takes a few minutes to complete. On successful deployment, a password is generated.
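
    The deployment cell might look like this sketch; the parameter names are assumptions, so confirm them with %help project_engine_deploy.

    ```sql
    -- size can be small, medium, large, or extralarge (default: small).
    %project_engine_deploy name=<Project_Name>, size=small
    ```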

  6. Establish a connection to your project.

    When a connection is established, the interface prompts you for a password. Enter the password generated in the previous step.
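
    The connection cell is typically a single %connect magic, as in this sketch, where <Project_Name> is the name you chose earlier:

    ```sql
    %connect <Project_Name>
    ```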

  7. Run the sample workload.

    Note

    Make sure that you do not have tables named SalesCenter or SalesDemo in the selected database.

    a. Create a table to store the sales center data.

    First, drop the table if it already exists. If the table does not exist, the DROP command fails with an error that you can ignore.
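
    The create step might look like the following sketch. The column definitions are illustrative assumptions, not the demo's exact schema; match them to the columns in salescenter.csv.

    ```sql
    DROP TABLE SalesCenter;

    -- Illustrative columns only; align with the demo CSV.
    CREATE MULTISET TABLE SalesCenter (
        Sales_Center_ID   INTEGER NOT NULL,
        Sales_Center_Name VARCHAR(255)
    ) NO PRIMARY INDEX;
    ```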

    b. Load data into the SalesCenter table using the %dataload magic command.

    Note

    Unable to locate the salescenter.csv file? Download the file from GitHub Demo: Charting and Visualization Data.

    Verify that the data was inserted.
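
    The load-and-verify cells might look like this sketch; the %dataload parameter names and the file path are assumptions to confirm with %help dataload.

    ```sql
    -- Load the CSV into SalesCenter (parameter names are assumptions).
    %dataload DATABASE=<your_database>, TABLE=SalesCenter, FILEPATH=<path_to>/salescenter.csv

    -- Then, in a separate cell, verify the load with a row count:
    SELECT COUNT(*) FROM SalesCenter;
    ```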

    c. Create a table with the sales demo data.
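
    As with SalesCenter, the create step might look like this sketch; the columns shown are illustrative assumptions, so match them to salesdemo.csv.

    ```sql
    -- Illustrative columns only; align with the demo CSV.
    CREATE MULTISET TABLE SalesDemo (
        Sales_Center_ID INTEGER NOT NULL,
        UnitsSold       INTEGER,
        SalesAmt        DECIMAL(10,2),
        SaleDate        DATE FORMAT 'YYYY-MM-DD'
    ) NO PRIMARY INDEX;
    ```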

    d. Load data into the SalesDemo table using the %dataload magic command.

    Note

    Unable to locate the salesdemo.csv file? Download the file from GitHub Demo: Charting and Visualization Data.

    Verify that the sales demo data was inserted successfully.
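
    The SalesDemo load mirrors the SalesCenter one, as in this sketch (the same caveats on %dataload parameter names apply):

    ```sql
    %dataload DATABASE=<your_database>, TABLE=SalesDemo, FILEPATH=<path_to>/salesdemo.csv

    -- Verify in a separate cell:
    SELECT COUNT(*) FROM SalesDemo;
    ```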

    Open the Navigator for your connection and verify that the tables were created. Run a row count on the tables to verify that the data was loaded.

    e. Use charting magic to visualize the result. Provide X and Y axes for your chart.
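
    A charting cell might look like the following sketch. The %chart argument syntax and the column names are assumptions; run %help chart for the exact form.

    ```sql
    -- X axis: sale date; Y axis: sales amount (column names are illustrative).
    %chart SaleDate, SalesAmt, title="Sales over time"
    ```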

    f. Drop the tables.
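
    The cleanup cells are plain SQL drops:

    ```sql
    DROP TABLE SalesCenter;
    DROP TABLE SalesDemo;
    ```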

  8. Back up your project metadata and object definitions in your GitHub repository.
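
    The backup cell might look like this sketch; the magic name and parameter are assumptions to confirm with %help project_backup.

    ```sql
    %project_backup project=<Project_Name>
    ```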

  9. Suspend the engine.
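
    The suspend cell might look like this sketch (parameter names are assumptions; confirm with %help project_engine_suspend). Suspending stops the engine so you are not billed for idle compute.

    ```sql
    %project_engine_suspend name=<Project_Name>
    ```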

Congrats! You've successfully run your first use case in JupyterLab.