
Load data with data pipelines into SQL database in Microsoft Fabric

Applies to: ✅ SQL database in Microsoft Fabric

In this tutorial, you create a new pipeline that loads sample data from an Azure SQL Database into a SQL database in Fabric.

A data pipeline is a logical grouping of activities that together perform a data ingestion task. Pipelines allow you to manage extract, transform, and load (ETL) activities instead of managing each one individually.

Prerequisites

  * A Microsoft Fabric workspace that contains a SQL database in Fabric (the destination).
  * An Azure SQL Database that contains the sample data you want to copy (the source).

Create data pipeline
  1. In your workspace, select + New, then More options.
  2. Under Data Factory, select Data pipeline.
  3. Once the data pipeline is created, under Start with guidance, choose Copy data assistant.
  4. In the Choose data source page, select Azure SQL Database.
  5. Provide authentication for the connection to the source Azure SQL Database. If you want to confirm the source is reachable and see which tables it exposes, see the sketch after these steps.
  6. For the destination, choose Fabric SQL database from the list in the OneLake catalog.
  7. Select Next.
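
Before (or after) configuring the connection in the assistant, you can optionally confirm that the source Azure SQL Database is reachable and list the tables the Copy data assistant will offer. The following is a minimal sketch using pyodbc; the server, database, login, and password are placeholders you would replace with your own values, and it assumes ODBC Driver 18 for SQL Server is installed.

    import pyodbc

    # Hypothetical connection details: replace with the values for your source
    # Azure SQL Database.
    SOURCE_CONN_STR = (
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=tcp:your-server.database.windows.net,1433;"
        "Database=your-source-database;"
        "Uid=your-sql-login;Pwd=your-password;"
        "Encrypt=yes;TrustServerCertificate=no;"
    )

    conn = pyodbc.connect(SOURCE_CONN_STR, timeout=30)
    cursor = conn.cursor()

    # List the user tables available as sources for the copy activity.
    cursor.execute(
        "SELECT TABLE_SCHEMA, TABLE_NAME "
        "FROM INFORMATION_SCHEMA.TABLES "
        "WHERE TABLE_TYPE = 'BASE TABLE' "
        "ORDER BY TABLE_SCHEMA, TABLE_NAME"
    )
    for schema, table in cursor.fetchall():
        print(f"{schema}.{table}")

    conn.close()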
Load data
  1. On the Connect to data destination page, select Load to new table for each table. Verify the mappings for each table. Select Next.
  2. Review Source and Destination details.
  3. Check the box next to Start data transfer immediately.
  4. Select Save + Run.
  5. In the Activity runs pane, you should see all green checkmarks for successful copy activities. If a copy fails, open that activity's row in the pane for troubleshooting details. To confirm the data arrived, you can run a quick row-count check like the sketch after these steps.
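
Once the copy activities succeed, you can verify the loaded tables directly in the destination SQL database in Fabric. The following is a minimal sketch, again using pyodbc; the server and database names are placeholders you would copy from your Fabric SQL database's connection strings, and it assumes ODBC Driver 18 for SQL Server with Microsoft Entra interactive sign-in.

    import pyodbc

    # Hypothetical connection details: copy the real server and database names
    # from your SQL database in Fabric (Settings > Connection strings).
    DEST_CONN_STR = (
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=your-fabric-sql-endpoint.database.fabric.microsoft.com,1433;"
        "Database=your-fabric-sql-database;"
        "Authentication=ActiveDirectoryInteractive;"
        "Encrypt=yes;"
    )

    conn = pyodbc.connect(DEST_CONN_STR)
    cursor = conn.cursor()

    # Row counts per table, so you can compare them against the source.
    cursor.execute(
        "SELECT s.name AS schema_name, t.name AS table_name, SUM(p.rows) AS row_count "
        "FROM sys.tables AS t "
        "JOIN sys.schemas AS s ON s.schema_id = t.schema_id "
        "JOIN sys.partitions AS p ON p.object_id = t.object_id AND p.index_id IN (0, 1) "
        "GROUP BY s.name, t.name "
        "ORDER BY s.name, t.name"
    )
    for schema_name, table_name, row_count in cursor.fetchall():
        print(f"{schema_name}.{table_name}: {row_count} rows")

    conn.close()

If the counts match what you expect from the source tables, the pipeline run completed as intended; otherwise, rerun the pipeline and review the Activity runs pane.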
