
Microsoft Fabric get started documentation

Microsoft Fabric is a unified platform that can meet your organization's data and analytics needs. Discover the Fabric shared and platform documentation from this page.

About Microsoft Fabric

OVERVIEW

What is Fabric?

Fabric terminology

What's New

GET STARTED

Start a Fabric trial

Fabric home navigation

End-to-end tutorials

Context sensitive Help pane

Get started with Fabric items

CONCEPT

Find items in OneLake data hub

Promote and certify items

HOW-TO GUIDE

Apply sensitivity labels

Workspaces

CONCEPT
Fabric workspace

Workspace roles

GET STARTED

Create a workspace

HOW-TO GUIDE

Workspace access control


What is Microsoft Fabric?
Article • 11/15/2023

Microsoft Fabric is an all-in-one analytics solution for enterprises that covers everything
from data movement to data science, Real-Time Analytics, and business intelligence. It
offers a comprehensive suite of services, including data lake, data engineering, and data
integration, all in one place.

With Fabric, you don't need to piece together different services from multiple vendors.
Instead, you can enjoy a highly integrated, end-to-end, and easy-to-use product that is
designed to simplify your analytics needs.

The platform is built on a foundation of Software as a Service (SaaS), which takes simplicity and integration to a whole new level.

SaaS foundation
Microsoft Fabric brings together new and existing components from Power BI, Azure
Synapse, and Azure Data Factory into a single integrated environment. These
components are then presented in various customized user experiences.

Fabric brings together experiences such as Data Engineering, Data Factory, Data Science,
Data Warehouse, Real-Time Analytics, and Power BI onto a shared SaaS foundation. This
integration provides the following advantages:

- An extensive range of deeply integrated analytics in the industry.
- Shared experiences across experiences that are familiar and easy to learn.
- Developers can easily access and reuse all assets.
- A unified data lake that allows you to retain the data where it is while using your preferred analytics tools.
- Centralized administration and governance across all experiences.

With the Microsoft Fabric SaaS experience, all the data and the services are seamlessly integrated. IT teams can centrally configure core enterprise capabilities, and permissions are automatically applied across all the underlying services. Additionally, data sensitivity labels are inherited automatically across the items in the suite.

Fabric allows creators to concentrate on producing their best work, freeing them from
the need to integrate, manage, or understand the underlying infrastructure that
supports the experience.

Components of Microsoft Fabric

Microsoft Fabric offers a comprehensive set of analytics experiences designed to work together seamlessly. Each experience is tailored to a specific persona and a specific task. Fabric includes industry-leading experiences in the following categories for an end-to-end analytical need.

Data Engineering - The Data Engineering experience provides a world-class Spark platform with great authoring experiences, enabling data engineers to perform large-scale data transformation and democratize data through the lakehouse. Microsoft Fabric Spark's integration with Data Factory enables notebooks and Spark jobs to be scheduled and orchestrated. For more information, see What is Data engineering in Microsoft Fabric?

Data Factory - Data Factory combines the simplicity of Power Query with the scale and power of Azure Data Factory. You can use more than 200 native connectors to connect to data sources on-premises and in the cloud. For more information, see What is Data Factory in Microsoft Fabric?

Data Science - The Data Science experience enables you to build, deploy, and operationalize machine learning models seamlessly within your Fabric experience. It integrates with Azure Machine Learning to provide built-in experiment tracking and a model registry. Data scientists are empowered to enrich organizational data with predictions, and business analysts can integrate those predictions into their BI reports, shifting from descriptive to predictive insights. For more information, see What is Data science in Microsoft Fabric?

Data Warehouse - The Data Warehouse experience provides industry-leading SQL performance and scale. It fully separates compute from storage, enabling independent scaling of both components. Additionally, it natively stores data in the open Delta Lake format. For more information, see What is data warehousing in Microsoft Fabric?

Real-Time Analytics - Observational data is collected from apps, IoT devices, human interactions, and many more sources. It's currently the fastest growing data category. This data is often semi-structured in formats like JSON or text, and it arrives at high volume with shifting schemas. These characteristics make it hard for traditional data warehousing platforms to work with. Real-Time Analytics is a best-in-class engine for observational data analytics. For more information, see What is Real-Time Analytics in Fabric?

Power BI - Power BI is the world's leading Business Intelligence platform. It ensures that business owners can access all the data in Fabric quickly and intuitively to make better decisions with data. For more information, see What is Power BI?

Fabric brings together all these experiences into a unified platform to offer the most
comprehensive big data analytics platform in the industry.

Microsoft Fabric enables organizations and individuals to turn large and complex data repositories into actionable workloads and analytics, and is an implementation of data mesh architecture. To learn more about data mesh, visit the article that explains data mesh architecture.

OneLake and lakehouse - the unification of lakehouses

The Microsoft Fabric platform unifies OneLake and the lakehouse architecture across the enterprise.

OneLake
The data lake is the foundation on which all the Fabric services are built. Microsoft Fabric
Lake is also known as OneLake. It's built into the Fabric service and provides a unified
location to store all organizational data where the experiences operate.

OneLake is built on top of ADLS (Azure Data Lake Storage) Gen2. It provides a single SaaS experience and a tenant-wide store for data that serves both professional and citizen developers. The OneLake SaaS experience simplifies the experience, eliminating the need for users to understand infrastructure concepts such as resource groups, RBAC (Role-Based Access Control), Azure Resource Manager, redundancy, or regions. Additionally, it doesn't require users to even have an Azure account.

OneLake eliminates today's pervasive and chaotic data silos, which individual developers create when they provision and configure their own isolated storage accounts. Instead, OneLake provides a single, unified storage system for all developers, where discovery and data sharing are trivial and compliance with policy and security settings is enforced centrally and uniformly. For more information, see What is OneLake?
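
Because OneLake is compatible with the ADLS Gen2 APIs, existing Azure storage tooling can address it directly. The following minimal sketch (an illustration, not part of the original article) lists files in a lakehouse using the azure-identity and azure-storage-file-datalake packages; the endpoint is the documented OneLake DFS endpoint, while the workspace and lakehouse names are hypothetical placeholders.

```python
# Minimal sketch: listing a lakehouse's Files area through the ADLS Gen2 API.
# "MyWorkspace" and "MyLakehouse" are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# OneLake exposes a single ADLS Gen2-compatible endpoint for the whole tenant.
service = DataLakeServiceClient(
    account_url="https://onelake.dfs.fabric.microsoft.com",
    credential=DefaultAzureCredential(),
)

# The workspace plays the role of the container (file system) ...
file_system = service.get_file_system_client("MyWorkspace")

# ... and items such as lakehouses appear as top-level folders inside it.
for path in file_system.get_paths(path="MyLakehouse.Lakehouse/Files"):
    print(path.name)
```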

Organizational structure of OneLake and lakehouse

OneLake is hierarchical in nature to simplify management across your organization. It's built into Microsoft Fabric and there's no requirement for any up-front provisioning. There's only one OneLake per tenant, and it provides a single-pane-of-glass file-system namespace that spans across users, regions, and even clouds. The data in OneLake is divided into manageable containers for easy handling.

The tenant maps to the root of OneLake and is at the top level of the hierarchy. You can
create any number of workspaces within a tenant, which can be thought of as folders.

The following image shows the various Fabric items where data is stored. It's an example of how various items within Fabric would store data inside OneLake. As displayed, you can create multiple workspaces within a tenant and create multiple lakehouses within each workspace. A lakehouse is a collection of files, folders, and tables that represents a database over a data lake. To learn more, see What is a lakehouse?

Every developer and business unit in the tenant can instantly create their own workspaces in OneLake. They can ingest data into their own lakehouses and start processing, analyzing, and collaborating on the data, just like using OneDrive in Office.

All the Microsoft Fabric compute experiences are prewired to OneLake, just like the
Office applications are prewired to use the organizational OneDrive. The experiences
such as Data Engineering, Data Warehouse, Data Factory, Power BI, and Real-Time
Analytics use OneLake as their native store. They don't need any extra configuration.

OneLake is designed to allow instant mounting of existing PaaS storage accounts into
OneLake with the Shortcut feature. There's no need to migrate or move any of the
existing data. Using shortcuts, you can access the data stored in Azure Data Lake
Storage.

Additionally, shortcuts allow you to easily share data between users and applications
without moving or duplicating information. The shortcut capability extends to other
storage systems, allowing you to compose and analyze data across clouds with
transparent, intelligent caching that reduces egress costs and brings data closer to
compute.
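
To make the shortcut behavior concrete, here's a minimal sketch assuming a Fabric notebook whose default lakehouse contains a shortcut named SalesAdls that points at an ADLS Gen2 folder; the shortcut name and file layout are hypothetical, and spark is the session Fabric notebooks provide automatically.

```python
# Minimal sketch: a shortcut under Files reads like any other folder.
# "SalesAdls" and the path are hypothetical placeholders.
df = spark.read.parquet("Files/SalesAdls/2023/*.parquet")

# The data stays in the source storage account; the shortcut is only a
# reference, so nothing is copied or migrated before the read.
df.show(5)
```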

Fabric solutions for ISVs

If you're an ISV interested in integrating your solutions with Microsoft Fabric, you can use one of the following paths depending on the level of integration you want to achieve:

- Interop - Integrate your solution with the OneLake foundation and establish basic connections and interoperability with Fabric.

- Develop on Fabric - Build your solution on top of the Fabric platform or seamlessly embed Fabric's functionalities within your existing applications. This path lets you actively leverage Fabric capabilities.

- Build a Fabric workload - Create customized workloads and experiences in Fabric. Tailor your offerings to deliver their value proposition while leveraging the Fabric ecosystem.

For more information, see the Fabric ISV partner ecosystem.

Next steps
Microsoft Fabric terminology
Create a workspace
Navigate to your items from Microsoft Fabric Home page
End-to-end tutorials in Microsoft Fabric



Microsoft Fabric (Preview) trial
Article • 08/03/2023

Microsoft Fabric has launched as a public preview and is temporarily provided free of
charge when you sign up for the Microsoft Fabric (Preview) trial. Your use of the
Microsoft Fabric (Preview) trial includes access to the Fabric product experiences and the
resources to create and host Fabric items. The Fabric (Preview) trial lasts for a period of
60 days, but may be extended by Microsoft, at our discretion. The Microsoft Fabric
(Preview) trial experience is subject to certain capacity limits as further explained below.

This document helps you understand and start a Fabric (Preview) trial.

Important

Microsoft Fabric is in preview.

Existing Power BI users

If you're an existing Power BI user, you can skip to Start the Fabric (Preview) trial.

Users who are new to Power BI

For public preview, the Fabric (Preview) trial requires a Power BI license. Navigate to https://app.fabric.microsoft.com to sign up for a Power BI free license. Once you have a Power BI license, you can start the Fabric (Preview) trial.

Start the Fabric (Preview) trial

Follow these steps to start your Fabric (Preview) trial.

1. Open the Fabric homepage and select the Account manager.

2. In the Account manager, select Start trial. If you don't see the Start trial button, trials may be disabled for your tenant. To enable trials for your tenant, see Administer user access to a Fabric (Preview) trial.

3. If prompted, agree to the terms and then select Start trial.

4. Once your trial capacity is ready, you receive a confirmation message. Select Got it to begin working in Fabric.

5. Open your Account manager again. Notice that you now have a heading for Trial status. Your Account manager keeps track of the number of days remaining in your trial. You also see the countdown in your Fabric menu bar when you work in a product experience.

Congratulations! You now have a Fabric (Preview) trial that includes a Power BI individual
trial (if you didn't already have a Power BI paid license) and a Fabric (Preview) trial
capacity.

Other ways to start a Microsoft Fabric (Preview) trial

If your Fabric administrator has enabled the preview of Microsoft Fabric for the tenant
but you do not have access to a capacity that has Fabric enabled, you have another
option for enabling a Fabric (Preview) trial. When you try to create a Fabric item in a
workspace that you own (such as My Workspace) and that workspace doesn't support
Fabric items, you're prompted to start a Fabric (Preview) trial. If you agree, your Fabric
(Preview) trial starts and your workspace is upgraded to a trial capacity workspace.

What is a trial capacity?
A trial capacity is a distinct pool of resources allocated to Microsoft Fabric. The size of
the capacity determines the amount of computation power reserved for users of that
capacity. The amount of compute resources is based on the SKU.

With a Fabric (Preview) trial, you get full access to all of the Fabric experiences and
features. You also get OneLake storage up to 1 TB. Create Fabric items and collaborate
with others in the same Fabric trial capacity. With a Fabric (Preview) trial, you can:

- Create workspaces (folders) for projects that support Fabric capabilities.
- Share Fabric items, such as datasets, warehouses, and notebooks, and collaborate on them with other Fabric users.
- Create analytics solutions using these Fabric items.

You don't have access to your capacity until you put something into it. To begin using
your Fabric (Preview) trial, add items to My workspace or create a new workspace.
Assign that workspace to your trial capacity using the "Trial" license mode, and then all
the items in that workspace are saved and executed in that capacity.

To learn more about workspaces and license mode settings, see Workspaces.

Capacity units

When you start a Fabric (Preview) trial, Microsoft provisions one trial capacity with 64 capacity units (CUs). These CUs allow users of your trial capacity to consume 64 x 60 CU-seconds every minute. Every time the Fabric trial capacity is used, it consumes CUs. The Fabric platform aggregates consumption from all experiences and applies it to your reserved capacity. Not all functions have the same consumption rate. For example, running a Data Warehouse might consume more capacity units than authoring a Power BI report. When the capacity consumption exceeds its size, Microsoft slows down the experience, similar to slowing down CPU performance.

There's no limit on the number of workspaces or items you can create within your
capacity. The only constraint is the availability of capacity units and the rate at which you
consume them.
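
As a back-of-the-envelope sketch of this budget arithmetic (the 64-CU trial size comes from this article; the workload numbers below are made-up illustrations, not measured consumption rates):

```python
# Minimal sketch of the capacity-unit (CU) budget arithmetic described above.
TRIAL_CUS = 64
BUDGET_PER_MINUTE = TRIAL_CUS * 60  # 3,840 CU-seconds available each minute

# Hypothetical CU-second demand from concurrent workloads in one minute.
demand = {"warehouse_query": 2500, "notebook_session": 900, "report_refresh": 700}

total = sum(demand.values())
print(f"demand: {total} CU-s vs budget: {BUDGET_PER_MINUTE} CU-s")
if total > BUDGET_PER_MINUTE:
    # Fabric doesn't reject the work; it slows down the experience,
    # similar to slowing down CPU performance.
    print("over budget - expect throttling")
```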

You're the capacity owner for your trial capacity. As your own capacity administrator for your Fabric trial capacity, you have access to a detailed and transparent report of how capacity units are consumed via the Capacity Metrics app. For more information about administering your trials, see Administer a Fabric trial capacity.

End a Fabric (Preview) trial

You may cancel your trial from the Account manager. When you cancel your free Fabric
(Preview) trial, the trial capacity, with all of its workspaces and their contents, is deleted.
In addition, you can't:

- Create workspaces that support Fabric capabilities.
- Share Fabric items, such as machine learning models, warehouses, and notebooks, and collaborate on them with other Fabric users.
- Create analytics solutions using these Fabric items.

Additionally, if you cancel your trial, you may not be able to start another trial. If you
want to retain your data and continue to use Microsoft Fabric (Preview), you can
purchase a capacity and migrate your workspaces to that capacity. To learn more about
workspaces and license mode settings, see Workspaces.

Administer user access to a Fabric (Preview) trial

Fabric administrators can enable and disable trials of paid features for Fabric. This setting is at the tenant level and is applied to all users or to specific security groups. This one tenant setting applies to both Power BI and Fabric trials, so Fabric administrators should carefully evaluate the impact of making a change to this setting.

Each trial user is the capacity admin for their trial capacity. Microsoft currently doesn't
support multiple capacity administrators per trial capacity. Therefore, Fabric
administrators can't view metrics for individual capacities. We do have plans to support
this capability in an upcoming admin monitoring feature.

Considerations and limitations

I am unable to start a trial

If you don't see the Start trial button in your Account manager:

- Your Fabric administrator may have disabled access, and you can't start a Fabric (Preview) trial. Contact your Fabric administrator to request access. You can also start a trial using your own tenant. For more information, see Sign up for Power BI with a new Microsoft 365 account.

- If you're an existing Power BI trial user, you don't see Start trial in your Account manager. You can start a Fabric (Preview) trial by attempting to create a Fabric item. When you attempt to create a Fabric item, you're prompted to start a Fabric (Preview) trial. If you don't see this prompt, your Fabric administrator may have disabled the Fabric (Preview) feature.

If you do see the Start trial button in your Account manager:

You might not be able to start a trial if your tenant has exhausted its limit of trial capacities. If that is the case, you have the following options:

- Purchase a Fabric capacity from Azure by performing a search for Microsoft Fabric.

- Request another trial capacity user to share their trial capacity workspace with you. See Give users access to workspaces.

- To increase tenant trial capacity limits, reach out to your Fabric administrator to create a Microsoft support ticket.

In Workspace settings, I can't assign a workspace to the trial capacity

This known bug occurs when the Fabric administrator turns off trials after you start a
trial. To add your workspace to the trial capacity, open the Admin portal by selecting it
from the gear icon in the top menu bar. Then, select Trial > Capacity settings and
choose the name of the capacity. If you don't see your workspace assigned, add it here.

What is the region for my Fabric (Preview) trial capacity?

If you start the trial using the Account manager, your trial capacity is located in the
home region for your tenant. See Find your Fabric home region for information about
how to find your home region, where your data is stored.

What impact does region have on my Fabric (Preview) trial?

Not all regions are available for the Fabric (Preview) trial. Start by looking up your home
region and then check to see if your region is supported for the Fabric (Preview) trial. If
your home region doesn't have Fabric enabled, don't use the Account manager to start
a trial. To start a trial in a region that is not your home region, follow the steps in Other
ways to start a Fabric (Preview) trial. If you've already started a trial from Account
manager, cancel that trial and follow the steps in Other ways to start a Fabric (Preview)
trial instead.

Can I move my tenant to another region?

You can't move your organization's tenant between regions by yourself. If you need to
change your organization's default data location from the current region to another
region, you must contact support to manage the migration for you. For more
information, see Move between regions.

What happens at the end of the Fabric (Preview) trial?

If you don't upgrade to a paid Fabric capacity at the end of the trial period, non-Power BI Fabric items are removed according to the retention policy.

How is the Fabric (Preview) trial different from an individual trial of Power BI paid?

A per-user trial of Power BI paid allows access to the Fabric landing page. Once you sign
up for the Fabric (Preview) trial, you can use the trial capacity for storing Fabric
workspaces and items and for running Fabric experiences. All rules guiding Power BI
licenses and what you can do in the Power BI experience remain the same. The key
difference is that a Fabric capacity is required to access non-Power BI experiences and
items.

Private links and private access

During the Fabric preview, you can't create Fabric items in the trial capacity if you or
your tenant have private links enabled and public access is disabled. This limitation is a
known bug for Fabric preview.

Autoscale

The Fabric (Preview) trial capacity doesn't support autoscale. If you need more compute
capacity, you can purchase a Fabric capacity in Azure.

For existing Synapse users

The Fabric (Preview) trial is different from a Proof of Concept (POC). A POC is a standard enterprise vetting exercise that requires financial investment and months of work customizing the platform and feeding it data. The Fabric (Preview) trial is free for users through public preview and doesn't require customization. Users can sign up for a free trial and start running product experiences immediately, within the confines of available capacity units.

You don't need an Azure subscription to start a Fabric (Preview) trial. If you have an
existing Azure subscription, you can purchase a (paid) Fabric capacity.

For existing Power BI users

You can migrate your existing workspaces into a trial capacity using workspace settings
and choosing "Trial" as the license mode. To learn how to migrate workspaces, see
create workspaces.

Next steps
Learn about licenses
Review Fabric terminology



Microsoft Fabric preview information
Article • 11/16/2023

This article describes the meaning of preview in Microsoft Fabric, and explains how
preview experiences and features can be used.

Preview experiences and features are released with limited capabilities, but are made
available on a preview basis so customers can get early access and provide feedback.

Preview experiences and features:

- Are subject to separate supplemental preview terms.

- Aren't meant for production use.

- Are not subject to SLAs. However, Microsoft Support is eager to get your feedback on the preview functionality, and might provide best-effort support in certain cases.

- May have limited or restricted functionality.

- May be available only in selected geographic areas.

Who can enable preview experiences and features

To enable a preview experience or feature, you need to have one of these admin roles:

- Global administrator
- Power Platform administrator
- Fabric administrator

Note

When a preview feature is delegated, it can be enabled by a capacity admin for that capacity.

How do I enable a preview experience or feature

To enable a preview experience or feature, follow these steps:

1. Navigate to the admin portal.

2. Select the tenant settings tab.

3. Select the preview experience or feature you want to enable.

4. Enable the experience using the tenant setting.



Microsoft Fabric terminology
Article • 05/23/2023

Learn the definitions of terms used in Microsoft Fabric, including terms specific to
Synapse Data Warehouse, Synapse Data Engineering, Synapse Data Science, Synapse
Real-Time Analytics, Data Factory, and Power BI.

Important

Microsoft Fabric is in preview.

General terms
Capacity: Capacity is a dedicated set of resources that is available at a given time
to be used. Capacity defines the ability of a resource to perform an activity or to
produce output. Different items consume different capacity at a certain time. Fabric
offers capacity through the Fabric SKU and Trials. For more information, see What
is capacity?

Experience: A collection of capabilities targeted to a specific functionality. The Fabric experiences include Synapse Data Warehouse, Synapse Data Engineering, Synapse Data Science, Synapse Real-Time Analytics, Data Factory, and Power BI.

Item: An item is a set of capabilities within an experience. Users can create, edit, and delete items. Each item type provides different capabilities. For example, the Data Engineering experience includes the lakehouse, notebook, and Spark job definition items.

Tenant: A tenant is a single instance of Fabric for an organization and is aligned with an Azure Active Directory tenant.

Workspace: A workspace is a collection of items that brings together different functionality in a single environment designed for collaboration. It acts as a container that leverages capacity for the work that is executed, and provides controls for who can access the items in it. For example, in a workspace, users create reports, notebooks, datasets, etc. For more information, see the Workspaces article.

Synapse Data Engineering


Lakehouse: A lakehouse is a collection of files, folders, and tables that represents a database over a data lake used by the Apache Spark engine and SQL engine for big data processing. A lakehouse includes enhanced capabilities for ACID transactions when using the open-source Delta formatted tables. The lakehouse item is hosted within a unique workspace folder in Microsoft OneLake. It contains files in various formats (structured and unstructured) organized in folders and subfolders. For more information, see What is a lakehouse?

Notebook: A Fabric notebook is a multi-language interactive programming tool with rich functions, which include authoring code and markdown, running and monitoring a Spark job, viewing and visualizing results, and collaborating with the team. It helps data engineers and data scientists to explore and process data, and build machine learning experiments with both code and low-code experiences. It can be easily transformed to a pipeline activity for orchestration.

Spark application: An Apache Spark application is a program written by a user using one of Spark's API languages (Scala, Python, Spark SQL, or Java) or Microsoft-added languages (.NET with C# or F#). When an application runs, it's divided into one or more Spark jobs that run in parallel to process the data faster. For more information, see Spark application monitoring.

Apache Spark job: A Spark job is part of a Spark application that is run in parallel
with other jobs in the application. A job consists of multiple tasks. For more
information, see Spark job monitoring.

Apache Spark job definition: A Spark job definition is a set of parameters, set by
the user, indicating how a Spark application should be run. It allows you to submit
batch or streaming jobs to the Spark cluster. For more information, see What is an
Apache Spark job definition?

V-Order: A write optimization to the parquet file format that enables fast reads and provides cost efficiency and better performance. All the Fabric engines write V-Ordered parquet files by default; see the sketch after this list.
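
To tie several of these terms together, here's a minimal notebook-cell sketch, assuming a Fabric notebook with a default lakehouse attached; the file path and table name are hypothetical, and the V-Order config key reflects documented Fabric behavior but is shown only for illustration.

```python
# Minimal sketch, run inside a Fabric notebook with a default lakehouse.
# `spark` is the notebook's built-in session.

# Fabric engines write V-Ordered parquet by default; surface the setting.
print(spark.conf.get("spark.sql.parquet.vorder.enabled", "not set"))

# Read raw files from the lakehouse's unstructured Files area ...
df = spark.read.option("header", True).csv("Files/raw/sales.csv")

# ... and save them as a managed Delta table, which appears under Tables
# and becomes queryable through the lakehouse's SQL endpoint.
df.write.format("delta").mode("overwrite").saveAsTable("sales_raw")
```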

Data Factory
Connector: Data Factory offers a rich set of connectors that allow you to connect
to different types of data stores. Once connected, you can transform the data. For
more information, see connectors.

Data pipeline: In Data Factory, a data pipeline is used for orchestrating data
movement and transformation. These pipelines are different from the deployment
pipelines in Fabric. For more information, see Pipelines in the Data Factory
overview.

Dataflow Gen2: Dataflows provide a low-code interface for ingesting data from
hundreds of data sources and transforming your data. Dataflows in Fabric are
referred to as Dataflow Gen2. Dataflow Gen1 exists in Power BI. Dataflow Gen2
offers extra capabilities compared to Dataflows in Azure Data Factory or Power BI.
You can't upgrade from Gen1 to Gen2. For more information, see Dataflows in the
Data Factory overview.

Synapse Data Science


Data Wrangler: Data Wrangler is a notebook-based tool that provides users with
an immersive experience to conduct exploratory data analysis. The feature
combines a grid-like data display with dynamic summary statistics and a set of
common data-cleansing operations, all available with a few selected icons. Each
operation generates code that can be saved back to the notebook as a reusable
script.

Experiment: A machine learning experiment is the primary unit of organization and control for all related machine learning runs. For more information, see Machine learning experiments in Microsoft Fabric.

Model: A machine learning model is a file trained to recognize certain types of patterns. You train a model over a set of data, and you provide it with an algorithm that it uses to reason over and learn from that data set. For more information, see Machine learning model.

Run: A run corresponds to a single execution of model code. In MLflow, tracking is based on experiments and runs; see the sketch after this list.
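
A minimal sketch of the experiment/run relationship using MLflow's standard tracking API; Fabric notebooks come with MLflow preconfigured, and the experiment name, parameter, and metric below are hypothetical.

```python
# Minimal sketch: experiments group runs; a run is one execution of model code.
import mlflow

mlflow.set_experiment("churn-baseline")  # the unit that groups related runs

with mlflow.start_run() as run:          # one execution of model code
    mlflow.log_param("max_depth", 4)     # inputs you want to compare later
    mlflow.log_metric("accuracy", 0.87)  # results tracked against this run
    print(f"logged run {run.info.run_id}")
```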

Synapse data warehousing


SQL Endpoint: Each lakehouse has a SQL Endpoint that allows a user to query Delta table data with T-SQL over TDS. For more information, see SQL Endpoint. A connection sketch follows this list.

Synapse Data Warehouse: The Synapse Data Warehouse functionality is a traditional data warehouse and supports the full transactional T-SQL capabilities you would expect from an enterprise data warehouse. For more information, see Synapse Data Warehouse.
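
For illustration, a minimal sketch of querying a SQL Endpoint over TDS with Python's pyodbc; the server name and table are hypothetical placeholders, and the real connection string should be copied from the endpoint's settings in Fabric.

```python
# Minimal sketch: Delta tables in the lakehouse surface as ordinary SQL tables.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=yourendpoint.datawarehouse.fabric.microsoft.com;"  # placeholder
    "Database=MyLakehouse;"                                    # placeholder
    "Authentication=ActiveDirectoryInteractive;"
)

for row in conn.execute("SELECT TOP 5 * FROM dbo.sales_raw;"):
    print(row)
```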
Synapse Real-Time Analytics

KQL database: The KQL database is the representation of a database that holds data in a format you can execute a KQL query against. For more information, see Query a KQL database, and see the query sketch after this list.

KQL Queryset: The KQL Queryset is the item used to run queries, view results, and manipulate query results on data from your Data Explorer database. The queryset includes the databases and tables, the queries, and the results. The KQL Queryset allows you to save queries for future use, or export and share queries with others. For more information, see Query data in the KQL Queryset.

Event stream: The Microsoft Fabric event streams feature provides a centralized
place in the Fabric platform to capture, transform, and route real-time events to
destinations with a no-code experience. An event stream consists of various
streaming data sources, ingestion destinations, and an event processor when the
transformation is needed. For more information, see Microsoft Fabric event
streams.
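
As a sketch of querying a KQL database programmatically, assuming the azure-kusto-data package and hypothetical cluster URI, database, and table names (the real query URI is shown on the KQL database's details page):

```python
# Minimal sketch: run a KQL query against a Fabric KQL database.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://yourcluster.kusto.fabric.microsoft.com"  # placeholder URI
)
client = KustoClient(kcsb)

# KQL reads left to right: take the table, then sample five rows from it.
response = client.execute("MyKqlDatabase", "MyEvents | take 5")
for row in response.primary_results[0]:
    print(row.to_dict())
```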

OneLake
Shortcut: Shortcuts are embedded references within OneLake that point to other
file store locations. They provide a way to connect to existing data without having
to directly copy it. For more information, see OneLake shortcuts.

Next steps
Navigate to your items from Microsoft Fabric Home page
Discover data items in the OneLake data hub
End-to-end tutorials in Microsoft Fabric

What's new in Microsoft Fabric?
Article • 11/30/2023

This page is continuously updated with a recent review of what's new in Microsoft
Fabric. To follow the latest in Fabric news and features, see the Microsoft Fabric Blog .
Also follow the latest in Power BI at What's new in Power BI?

For older updates, review previous updates in Microsoft Fabric.

Important

Microsoft Fabric has been announced!

New to Microsoft Fabric?


This section includes articles and announcements for users new to Microsoft Fabric.

Learning Paths for Fabric


Get started with Microsoft Fabric
End-to-end tutorials in Microsoft Fabric
Definitions of terms used in Microsoft Fabric

- November 2023 - Microsoft Fabric, explained for existing Synapse users: A focus on what customers using the current Platform-as-a-Service (PaaS) version of Synapse can expect. We'll explain what the general availability of Fabric means for your current investments (spoiler: we fully support them), but also how to think about the future.

- November 2023 - Microsoft Fabric is now generally available: Microsoft Fabric is now generally available for purchase. Microsoft Fabric can reshape how your teams work with data by bringing everyone together on a single, AI-powered platform built for the era of AI. This includes the experiences of Fabric: Power BI, Data Factory, Data Engineering, Data Science, Real-Time Analytics, Data Warehouse, and the overall Fabric platform.

- November 2023 - Fabric workloads are now generally available!: Microsoft Fabric Synapse Data Warehouse, Data Engineering & Data Science, Real-Time Analytics, Data Factory, OneLake, and the overall Fabric platform are now generally available.

- November 2023 - Implement medallion lakehouse architecture in Microsoft Fabric: An introduction to medallion lake architecture and how you can implement a lakehouse in Microsoft Fabric.

- October 2023 - Announcing the Fabric roadmap: Announcing the Fabric Roadmap. One place you can see what we are working on and when you can expect it to be available.

- October 2023 - Get started with semantic link: Explore how semantic link seamlessly connects Power BI semantic models with Synapse Data Science within Microsoft Fabric. Learn more at Semantic link in Microsoft Fabric: Bridging BI and Data Science. You can also check out the semantic link sample notebooks that are now available in the fabric-samples GitHub repository. These notebooks showcase the use of semantic link's Python library, SemPy, in Microsoft Fabric.

- September 2023 - Fabric Capacities – Everything you need to know about what's new and what's coming: Read more about the improvements we're making to the Fabric capacity management platform for Fabric and Power BI users.

- August 2023 - Accessing Microsoft Fabric for developers, startups and enterprises!: Enabling Microsoft Fabric as a developer, as a startup, or as an enterprise involves different steps. Learn more at Enabling Microsoft Fabric for developers, startups and enterprises.

- August 2023 - Strong, useful, beautiful: Designing a new way of getting data: From the Data Integration Design Team, learn about the strong, creative, and functional design of Microsoft Fabric, as Microsoft designs for the future of data integration.

- August 2023 - Learn Live: Get started with Microsoft Fabric: Calling all professionals, enthusiasts, and learners! On August 29, we'll be kicking off the "Learn Live: Get started with Microsoft Fabric" series in partnership with Microsoft's Data Advocacy teams and Microsoft WorldWide Learning teams to deliver 9x live-streamed lessons covering topics related to Microsoft Fabric!

- July 2023 - Step-by-Step Tutorial: Building ETLs with Microsoft Fabric: In this comprehensive guide, we walk you through the process of creating Extract, Transform, Load (ETL) pipelines using Microsoft Fabric.

- July 2023 - Free preview usage of Microsoft Fabric experiences extended to October 1, 2023: We're extending the free preview usage of Fabric experiences (other than Power BI). These experiences won't count against purchased capacity until October 1, 2023.

- June 2023 - Get skilled on Microsoft Fabric - the AI-powered analytics platform: Who is Fabric for? How can I get skilled? This blog post answers these questions about Microsoft Fabric, a comprehensive data analytics solution that unifies many experiences on a single platform.

- June 2023 - Introducing the end-to-end scenarios in Microsoft Fabric: In this blog, we explore four end-to-end scenarios that are typical paths our customers take to extract value and insights from their data using Microsoft Fabric.

- May 2023 - Get Started with Microsoft Fabric - All in one place for all your analytical needs: A technical overview and introduction to everything from data movement to data science, real-time analytics, and business intelligence in Microsoft Fabric.

- May 2023 - Microsoft OneLake in Fabric, the OneDrive for data: Microsoft OneLake brings the first multicloud SaaS data lake for the entire organization.

Features currently in preview

The following table lists the features of Microsoft Fabric that are currently in preview. Preview features are sorted alphabetically.

Note

Features currently in preview are available under supplemental terms of use. Review them for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. Microsoft Fabric provides previews to give you a chance to evaluate and share feedback with the product group on features before they become generally available (GA).

- Copilot in notebooks preview: The Copilot in Fabric Data Science and Data Engineering notebooks is designed to accelerate productivity, provide helpful answers and guidance, and generate code for common tasks like data exploration, data preparation, and machine learning. You can interact and engage with the AI from either the chat panel or even from within notebook cells using magic commands to get insights from data faster. For more information, see Copilot in notebooks.

- Data Activator preview: We are thrilled to announce that Data Activator is now in preview and is enabled for all existing Microsoft Fabric users.

- Data Engineering: Environment preview: We are thrilled to announce the preview of the Environment in Fabric. The Environment is a centralized item that allows you to configure all the required settings for running a Spark job in one place.

- Data Wrangler for Spark DataFrames preview: Data Wrangler now supports Spark DataFrames in preview. Until now, users have been able to explore and transform pandas DataFrames using common operations that can be converted to Python code in real time. The new release allows users to edit Spark DataFrames in addition to pandas DataFrames with Data Wrangler.

- Lakehouse support for git integration and deployment pipelines (preview): The Lakehouse now integrates with the lifecycle management capabilities in Microsoft Fabric, providing standardized collaboration between all development team members throughout the product's life. Lifecycle management facilitates an effective product versioning and release process by continuously delivering features and bug fixes into multiple environments.

- Microsoft 365 connector now supports ingesting data into Lakehouse (preview): The Microsoft 365 connector now supports ingesting data into Lakehouse tables.

- Microsoft Fabric User APIs: We're happy to announce the preview of Microsoft Fabric User APIs. The Fabric user APIs are a major enabler for both enterprises and partners to use Microsoft Fabric, as they enable end-to-end fully automated interaction with the service, enable integration of Microsoft Fabric into external web applications, and generally enable customers and partners to scale their solutions more easily.

- Notebook Git integration preview: Fabric notebooks now offer Git integration for source control using Azure DevOps. It allows users to easily control the notebook code versions and manage the git branches by leveraging the Fabric Git functions and Azure DevOps.

- Notebook in Deployment Pipeline preview: Now you can also use notebooks to deploy your code across different environments, such as development, test, and production. You can also use deployment rules to customize the behavior of your notebooks when they are deployed, such as changing the default Lakehouse of a Notebook. Get started with deployment pipelines to set up your deployment pipeline; Notebook will show up in the deployment content automatically.

- Splunk add-on preview: The Microsoft Fabric add-on for Splunk allows users to ingest logs from the Splunk platform into a Fabric KQL DB using the Kusto Python SDK.

- VNet Gateways in Dataflow Gen2 preview: VNet Data Gateway support for Dataflows Gen2 in Fabric is now in preview. The VNet data gateway helps to connect from Fabric Dataflows Gen2 to Azure data services within a VNet, without the need of an on-premises data gateway.

Generally available features

The following table lists the features of Microsoft Fabric that have transitioned from preview to general availability (GA) within the last 12 months.

- November 2023 - Microsoft Fabric is now generally available: Microsoft Fabric is now generally available for purchase. Microsoft Fabric can reshape how your teams work with data by bringing everyone together on a single, AI-powered platform built for the era of AI. This includes the experiences of Fabric: Power BI, Data Factory, Data Engineering, Data Science, Real-Time Analytics, Data Warehouse, and the overall Fabric platform.

Community

This section summarizes new Microsoft Fabric community opportunities for prospective and current influencers and MVPs.

- Join a local Fabric User Group or join a local event.
- To learn about the Microsoft MVP Award and to find MVPs, see mvp.microsoft.com.
- Are you a student? Learn more about the Microsoft Learn Student Ambassadors program.

- November 2023 - Microsoft Fabric MVP Corner – Special Edition (Ignite): A special edition of the "Microsoft Fabric MVP Corner" blog series highlights selected content related to Fabric and created by MVPs around the Microsoft Ignite 2023 conference, when we announced Microsoft Fabric generally available.

- November 2023 - Skill up on Fabric with the Microsoft Learn Cloud Skills Challenge: We are excited to announce the Microsoft Ignite: Microsoft Fabric Challenge as part of the Microsoft Learn Cloud Skills Challenge. Skill up for in-demand tech scenarios, and enter to win a VIP pass to the next Microsoft Ignite. The challenge is on until January 15, 2024. The challenge helps you prepare for the Microsoft Certified: Fabric Analytics Engineer Associate certification and new Microsoft Applied Skills credentials covering the lakehouse and data warehouse scenarios, which are coming in the next months.

- October 2023 - Microsoft Fabric MVP Corner – October 2023: Highlights of selected content related to Fabric and created by MVPs from October 2023.

- September 2023 - Microsoft Fabric MVP Corner – September 2023: Highlights of selected content related to Fabric and created by MVPs from September 2023.

- August 2023 - Microsoft Fabric MVP Corner – August 2023: Highlights of selected content related to Fabric and created by MVPs from August 2023.

- July 2023 - Microsoft Fabric MVP Corner – July 2023: Highlights of selected content related to Fabric and created by MVPs in July 2023.

- June 2023 - Microsoft Fabric MVP Corner – June 2023: The Fabric MVP Corner blog series highlights selected content related to Fabric and created by MVPs in June 2023.

- May 2023 - Fabric User Groups: Power BI User Groups are now Fabric User Groups!

Power BI
Updates to Power BI Desktop and the Power BI service are summarized at What's new in
Power BI?

Fabric samples and guidance

This section summarizes new guidance and sample project resources for Microsoft Fabric.

- November 2023 - Semantic Link: OneLake integrated semantic models: Semantic Link adds support for the recently released OneLake integrated semantic models! You can now directly access data using your semantic model's name via OneLake, using the read_table function and the new mode parameter set to onelake.

- November 2023 - Integrate your SAP data into Microsoft Fabric: Using the built-in connectivity of Microsoft Fabric is, of course, the easiest and least-effort way of adding SAP data to your Fabric data estate.

- November 2023 - Fabric Changing the game: Validate dependencies with Semantic Link – Data Quality: Follow this step-by-step example of how to explore the functional dependencies between columns in a table using semantic link. Semantic link is a feature that allows you to establish a connection between Power BI datasets and Synapse Data Science in Microsoft Fabric.

- November 2023 - Implement medallion lakehouse architecture in Microsoft Fabric: An introduction to medallion lake architecture and how you can implement a lakehouse in Microsoft Fabric.

- October 2023 - Fabric Change the Game: Exploring the data: Follow this realistic example of reading data from Azure Data Lake Storage using shortcuts, organizing raw data into structured tables, and basic data exploration. Our data exploration uses as a source the diverse and captivating city of London, with information extracted from data.london.gov.uk/.

- September 2023 - Announcing an end-to-end workshop: Analyzing Wildlife Data with Microsoft Fabric: A new workshop guides you in building a hands-on, end-to-end data analytics solution for the Snapshot Serengeti dataset using Microsoft Fabric. The dataset consists of approximately 1.68M wildlife images and image annotations provided in .json files.

- September 2023 - New learning path: Implement a Lakehouse with Microsoft Fabric: The new Implement a Lakehouse with Microsoft Fabric learning path introduces the foundational components of implementing a data lakehouse with Microsoft Fabric in seven in-depth modules.

- September 2023 - Fabric Readiness repository: The Fabric Readiness repository is a treasure trove of resources for anyone interested in exploring the exciting world of Microsoft Fabric.

- July 2023 - Connecting to OneLake: How do I connect to OneLake? This blog covers how to connect and interact with OneLake, including how OneLake achieves its compatibility with any tool used over ADLS Gen2!

- June 2023 - Using Azure Databricks with Microsoft Fabric and OneLake: How does Azure Databricks work with Microsoft Fabric? This blog post answers that question and gives more details on how the two systems can work together.

Data Factory in Microsoft Fabric

This section summarizes recent new features and capabilities of Data Factory in Microsoft Fabric. Follow issues and feedback through the Data Factory Community Forum.

- November 2023 - Implement medallion lakehouse architecture in Microsoft Fabric: An introduction to medallion lake architecture and how you can implement a lakehouse in Microsoft Fabric.

- November 2023 - Dataflow Gen2: General availability of Fabric connectors: The connectors for Lakehouse, Warehouse, and KQL Database are now generally available. We encourage you to use these connectors when trying to connect to data from any of these Fabric experiences.

- November 2023 - Dataflow Gen2: Automatic refresh cancellation: To prevent unnecessary resources from being consumed, we've implemented a new mechanism that stops the refresh of a Dataflow as soon as the results of the refresh are known to have no impact. This is to reduce consumption more proactively.

- November 2023 - Dataflow Gen2: Error message propagation through gateway: We made diagnostics improvements to provide meaningful error messages when Dataflow refresh fails for those Dataflows running through the Enterprise Data Gateway.

- November 2023 - Dataflow Gen2: Support for column binding for SAP HANA connector: Column binding support is enabled for SAP HANA. This optional parameter results in significantly improved performance. For more information, see Support for column binding for SAP HANA connector.

- November 2023 - Dataflow Gen2: staging artifacts hidden: When using a Dataflow Gen2 in Fabric, the system automatically creates a set of staging artifacts. Now, these staging artifacts are abstracted from the Dataflow Gen2 experience and hidden from the workspace list. No action is required by the user, and this change has no impact on existing Dataflows.

- November 2023 - Dataflow Gen2: Support for VNet Gateways preview: VNet Data Gateway support for Dataflows Gen2 in Fabric is now in preview. The VNet data gateway helps to connect from Fabric Dataflows Gen2 to Azure data services within a VNet, without the need of an on-premises data gateway.

- November 2023 - Cross workspace "Save as": You can now clone your data pipelines across workspaces by using the "Save as" button.

- November 2023 - Dynamic content flyout integration with Email and Teams activity: In the Email and Teams activities, you can now add dynamic content with ease. With this new pipeline expression integration, you will now see a flyout menu to help you select and build your message content quickly, without needing to learn the pipeline expression language.

- November 2023 - Copy activity now supports fault tolerance for Fabric Data Warehouse connector: The Copy activity in data pipelines now supports fault tolerance for Fabric Warehouse. Fault tolerance allows you to handle certain errors without interrupting data movement. By enabling fault tolerance, you can continue to copy data while skipping incompatible data like duplicated rows.

- November 2023 - MongoDB and MongoDB Atlas connectors: MongoDB and MongoDB Atlas connectors are now available to use in your Data Factory data pipelines as sources and destinations.

- November 2023 - Microsoft 365 connector now supports ingesting data into Lakehouse (preview): The Microsoft 365 connector now supports ingesting data into Lakehouse tables.

- November 2023 - Multi-task support for editing pipelines in the designer: You can now open and edit data pipelines from different workspaces and navigate between them using the multi-tasking capabilities in Fabric.

- November 2023 - String interpolation added to pipeline return value: You can now edit your data connections within your data pipelines. Previously, a new tab would open when connections needed editing. Now, you can remain within your pipeline and seamlessly update your connections.

- October 2023 - Category redesign of activities: We've redesigned the way activities are categorized to make it easier for you to find the activities you're looking for, with new categories like Control flow, Notifications, and more.

- October 2023 - Copy runtime performance improvement: We've made improvements to the Copy runtime performance. According to our test results, with the improvements users can expect the duration of copying from parquet/csv files into Lakehouse tables to improve by ~25%-35%.

- October 2023 - Integer data type available for variables: We now support variables as integers! When creating a new variable, you can now choose to set the variable type to Integer, making it easier to use arithmetic functions with your variables.

- October 2023 - Pipeline name now supported in System variables: We've added a new system variable called Pipeline Name so that you can inspect and pass the name of your pipeline inside of the pipeline expression editor, enabling a more powerful workflow in Fabric Data Factory.

- October 2023 - Support for Type editing in Copy activity Mappings: You can now edit column types when you land data into your Lakehouse table(s). This makes it easier to customize the schema of your data in your destination. Simply navigate to the Mapping tab, import your schemas if you don't see any mappings, and use the drop-down list to make changes.

- October 2023 - New certified connector: Emplifi Metrics: Announcing the release of the new Emplifi Metrics connector. The Power BI Connector is a layer between the Emplifi Public API and Power BI itself. For more information, see the Emplifi Public API documentation.

- October 2023 - SAP HANA (Connector Update): The update enhances the SAP HANA connector with the capability to consume HANA Calculation Views deployed in SAP Datasphere by taking into account SAP Datasphere's additional security concepts.

- October 2023 - Set Activity State to "Comment Out" Part of Pipeline: Activity State is now available in Fabric Data Factory data pipelines, giving you the ability to comment out part of your pipeline without deleting the definition.

- August 2023 - Staging labels: The concept of staging data was introduced in Dataflows Gen2 for Microsoft Fabric, and now you have the ability to define which queries within your Dataflow should use the staging mechanisms or not.

- August 2023 - Secure input/output for logs: We've added advanced settings for the Set Variable activity called Secure input and Secure output. When you enable secure input or output, you can hide sensitive information from being captured in logs.

- August 2023 - Pipeline run status added to Output panel: We've recently added Pipeline status so that developers can easily see the status of the pipeline run. You can now view your Pipeline run status from the Output panel.

- August 2023 - Data pipelines FTP connector: The FTP connector is now available to use in your Data Factory data pipelines in Microsoft Fabric. Look for it in the New connection menu.

- August 2023 - Maximum number of entities in a Dataflow: The new maximum number of entities that can be part of a Dataflow has been raised to 50.

- August 2023 - Manage connections feature: The Manage Connections option now allows you to view the linked connections to your dataflow, unlink a connection, or edit connection credentials and gateway.

- August 2023 - Power BI Lakehouses connector: An update to the Lakehouses connector in the August version of Power BI Desktop and Gateway includes significant performance improvements.

- July 2023 - New modern data connectivity and discovery experience in Dataflows: An improved experience aims to expedite the process of discovering data in Dataflow, Dataflow Gen2, and Datamart.

Data Factory in Microsoft Fabric samples and guidance

Month Feature Learn more

October Microsoft Fabric Data You are invited to join our October webinar
2023 Factory Webinar Series – series , where we will show you how to use Data
October 2023 Factory to transform and orchestrate your data in
various scenarios.

September Notify Outlook and Teams Learn how to send notifications to both Teams
2023 channel/group from a channels/groups and Outlook emails .
Microsoft Fabric pipeline

September Microsoft Fabric Data Join our Data Factory webinar series where we
2023 Factory Webinar Series – will show you how to use Data Factory to transform
September 2023 and orchestrate your data in various scenarios.

August Metadata Driven Pipelines An overview of a metadata-driven pipeline in


2023 for Microsoft Fabric – Part 2, Microsoft Fabric that follows the medallion
Data Warehouse Style architecture with Data Warehouse serving as the
Gold layer .

August Metadata Driven Pipelines An overview of a Metadata driven pipeline in


2023 for Microsoft Fabric Microsoft Fabric that follows the medallion
architecture (Bronze, Silver, Gold).

August Using Data pipelines for Real-Time Analytics' KQL DB is supported as both a
2023 copying data to/from KQL destination and a source with data pipelines ,
Databases and crafting allowing you to build and manage various extract,
workflows with the Lookup transform, and load (ETL) activities, leveraging the
activity power and capabilities of KQL DBs.

August 2023 Incrementally amass data With Dataflows Gen2 support for data destinations, you can set up your own pattern to load new data incrementally, replace some old data, and keep your reports up to date with your source data.

August 2023 Data Pipeline Performance Improvement Part 3: Gaining more than 50% improvement for Historical Loads Learn how to account for pagination given the current state of Fabric Data Pipelines in preview. This pipeline is performant when the number of paginated pages isn't too large. Read more at Gaining more than 50% improvement for Historical Loads.

August Data Pipeline Performance Examples from this blog series include how to
2023 Improvements Part 2: merge two arrays into an array of JSON objects,
Creating an Array of JSONs and how to take a date range and create multiple
subranges then store these as an array of JSONs.
Read more at Creating an Array of JSONs .

July 2023 Data Pipeline Performance Part one of a series of blogs on moving data with
Improvements Part 1: How multiple Copy Activities moving smaller volumes in
to convert a time interval parallel: How to convert a time interval
(dd.hh:mm:ss) into seconds (dd.hh:mm:ss) into seconds .

July 2023 Construct a data analytics A blog covering data pipelines in Data Factory and
workflow with a Fabric Data the advantages you find by using pipelines to
Factory data pipeline orchestrate your Fabric data analytics projects and
activities .

July 2023 Data Pipelines Tutorial: In this blog, we will act in the persona of an AVEVA
Ingest files into a Lakehouse customer who needs to retrieve operations data
from a REST API with from AVEVA Data Hub into a Microsoft Fabric
pagination ft. AVEVA Data Lakehouse .
Hub

July 2023 Data Factory Spotlight: This blog spotlight covers the two primary high-
Dataflow Gen2 level features Data Factory implements: dataflows
and pipelines .

Synapse Data Engineering in Microsoft Fabric


This section summarizes recent new features and capabilities of the Data Engineering
experience in Microsoft Fabric.

Month Feature Learn more

November 2023 Accessibility support for Lakehouse To provide a more inclusive and user-friendly interaction, we have implemented improvements to support accessibility in the Lakehouse experience, including screen reader compatibility, responsive design text reflow, keyboard navigation, alternative text for images, and form fields and labels.

November 2023 Enhanced multitasking experience in Lakehouse We've introduced new capabilities to enhance the multitasking experience in Lakehouse, including multitasking during running operations, non-blocking reloading, and clearer notifications.

November Upgraded DataGrid An upgraded DataGrid for the Lakehouse table preview
2023 capabilities in experience now features sorting, filtering, and resizing of
Lakehouse columns.

November SQL analytics You can now retry the SQL analytics endpoint provisioning
2023 endpoint re- directly within the Lakehouse experience . This means that
provisioning if your initial provisioning attempt fails, you have the option
to try again without the need to create an entirely new
Lakehouse.

November 2023 Microsoft Fabric Runtime 1.2 The Microsoft Fabric Runtime 1.2 is a significant advancement in our data processing capabilities. Microsoft Fabric Runtime 1.2 includes Apache Spark 3.4.1, Mariner 2.0 as the operating system, Java 11, Scala 2.12.17, Python 3.10, Delta Lake 2.4, and R 4.2.2, ensuring you have the most cutting-edge tools at your disposal. In addition, this release comes bundled with default packages, encompassing a complete Anaconda installation and essential libraries for Java/Scala, Python, and R, simplifying your workflow.

November 2023 Multiple Runtimes Support With the introduction of Runtime 1.2, Fabric supports multiple runtimes, offering users the flexibility to seamlessly switch between them and minimizing the risk of incompatibilities or disruptions. When changing runtimes, all system-created items within the workspace, including Lakehouses, Spark Job Definitions (SJDs), and Notebooks, will operate using the newly selected workspace-level runtime version starting from the next Spark session.

November 2023 Delta as the default table format in the new Runtime 1.2 The default Spark session parameter spark.sql.sources.default is now delta. All tables created using Spark SQL, PySpark, Scala Spark, and Spark R will be created as Delta tables by default whenever the table type is omitted.
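For illustration, a minimal PySpark sketch of this behavior, assuming a Fabric notebook on Runtime 1.2 where a spark session is pre-created; the table and column names are hypothetical.

# With spark.sql.sources.default set to "delta", omitting the table
# format creates a Delta table.
df = spark.createDataFrame([(1, "laptop"), (2, "monitor")], ["id", "product"])

# No format is specified, so the table is created as Delta by default.
df.write.saveAsTable("sales_products")

# The same holds in Spark SQL when the USING clause is omitted.
spark.sql("CREATE TABLE IF NOT EXISTS sales_summary (id INT, total DOUBLE)")

# Verify the format chosen for the new table.
spark.sql("DESCRIBE DETAIL sales_products").select("format").show()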

November Intelligent Cache By default, the newly revamped and optimized Intelligent
2023 Cache feature is enabled in Fabric Spark. The intelligent
cache works seamlessly behind the scenes and caches data
to help speed up the execution of Spark jobs in Microsoft
Fabric as it reads from your OneLake or ADLS Gen2 storage
via shortcuts.

November 2023 Monitoring Hub for Spark enhancements The latest enhancements in the monitoring hub are designed to provide a comprehensive and detailed view of Spark and Lakehouse activities, including executor allocations, the runtime version for a Spark application, and a related items link in the detail page.

November Monitoring for Users can now view the progress and status of Lakehouse
2023 Lakehouse maintenance jobs and table load activities.
operations

November 2023 Spark application resource usage analysis Responding to customers' requests for monitoring Spark resource usage metrics for performance tuning and optimization, we are excited to introduce the Spark resource usage analysis feature, now available in preview. This newly released feature enables users to monitor allocated executors, running executors, and idle executors, alongside Spark executions.

November REST API support REST Public APIs for Spark Job Definition are now available,
2023 for Spark Job making it easy for users to manage and manipulate SJD
Definition preview items .

November 2023 REST API support for Lakehouse artifact, Load to tables and table maintenance As a key requirement for workload integration, REST public APIs for Lakehouse are now available. The Lakehouse REST public APIs make it easy for users to manage and manipulate Lakehouse items programmatically.

November Lakehouse support The Lakehouse now integrates with the lifecycle
2023 for git integration management capabilities in Microsoft Fabric , providing a
and deployment standardized collaboration between all development team
pipelines (preview) members throughout the product's life. Lifecycle
management facilitates an effective product versioning and
release process by continuously delivering features and bug
fixes into multiple environments.

November 2023 Embed a Power BI report in Notebook We are thrilled to announce that the powerbiclient Python package is now natively supported in Fabric notebooks. This means you can easily embed and interact with Power BI reports in your notebooks with just a few lines of code, as in the sketch that follows.
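A minimal sketch, assuming a Fabric notebook where powerbiclient authentication is handled natively; the workspace and report IDs below are hypothetical placeholders.

from powerbiclient import Report

# Reference an existing report by its workspace (group) ID and report ID.
report = Report(
    group_id="11111111-2222-3333-4444-555555555555",
    report_id="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
)

# The last expression in the cell renders the interactive report widget.
report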

November 2023 Mssparkutils new API – reference run multiple notebooks in parallel A new runMultiple API in mssparkutils called mssparkutils.notebook.runMultiple() allows you to run multiple notebooks in parallel, or with a predefined topological structure. For more information, see Notebook utilities.
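A short sketch of the API, based on its documented shape; the notebook names are hypothetical.

from notebookutils import mssparkutils  # pre-imported in Fabric notebooks

# Simplest form: run a list of notebooks in parallel.
mssparkutils.notebook.runMultiple(["IngestOrders", "IngestCustomers"])

# DAG form: BuildReport starts only after both ingestion notebooks finish.
dag = {
    "activities": [
        {"name": "IngestOrders", "path": "IngestOrders"},
        {"name": "IngestCustomers", "path": "IngestCustomers"},
        {
            "name": "BuildReport",
            "path": "BuildReport",
            "dependencies": ["IngestOrders", "IngestCustomers"],
        },
    ]
}
mssparkutils.notebook.runMultiple(dag)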

November 2023 Notebook resources .JAR file support We now support uploading .jar files in the Notebook Resources explorer. You can add your own compiled libraries, use drag & drop to generate a code snippet to install them in the session, and load the libraries in code conveniently.

November Notebook Git Fabric notebooks now offer Git integration for source control
2023 integration preview using Azure DevOps . It allows users to easily control the
notebook code versions and manage the git branches by
leveraging the Fabric Git functions and Azure DevOps.

November 2023 Notebook in Deployment Pipeline Preview Now you can also use notebooks to deploy your code across different environments, such as development, test, and production. You can also use deployment rules to customize the behavior of your notebooks when they are deployed, such as changing the default Lakehouse of a notebook. Get started with deployment pipelines to set up your deployment pipeline, and notebooks will show up in the deployment content automatically.

November 2023 Notebook REST APIs Preview With REST public APIs for the Notebook items, data engineers and data scientists can automate their pipelines and establish CI/CD conveniently and efficiently. The notebook REST public APIs make it easy for users to manage and manipulate Fabric notebook items and integrate notebooks with other tools and systems.

November 2023 Environment preview We are thrilled to announce the preview of the Environment in Fabric. The Environment is a centralized item that allows you to configure all the required settings for running a Spark job in one place.

November 2023 Synapse VS Code extension in vscode.dev preview With support for the Synapse VS Code extension on vscode.dev, users can now seamlessly edit and execute Fabric notebooks without ever leaving their browser window. Additionally, all the native pro-developer features of VS Code are now accessible to end users in this environment.

October Create multiple Creating multiple OneLake shortcuts just got easier. Rather
2023 OneLake shortcuts than creating shortcuts one at a time, you can now browse to
at once your desired location and select multiple targets at once. All
your selected targets then get created as new shortcuts in a
single operation .

October Delta-RS The OneLake team worked with the Delta-RS community to
2023 introduces native help introduce support for recognizing OneLake URLs in
support for both Delta-RS and the Rust Object Store .
OneLake

September 2023 Import notebook to your Workspace The new "Import Notebook" entry on the Workspace -> New menu lets you easily import new Fabric Notebook items in the target workspace. You can upload one or more files, including .ipynb, .py, .sql, .scala, and .r file formats.

September Notebook file The Synapse VS Code extension now supports notebook File
2023 system support in System for Data Engineering and Data Science in Microsoft
Synapse VS Code Fabric. The Synapse VS Code extension empowers users to
extension develop their notebook artifacts directly within the Visual
Studio Code environment.

September 2023 Notebook sharing execute-only mode We now support checking the "Run" operation separately when sharing a notebook. If you select only the "Run" operation, the recipient sees an "Execution-only" notebook.

September Notebook save We now support viewing and comparing the differences
2023 conflict resolution between two versions of the same notebook when there
are saving conflicts.

September 2023 Mssparkutils new API for fast data copy We now support a new method in mssparkutils, mssparkutils.fs.fastcp(), that makes moving or copying large volumes of data much faster. You can use mssparkutils.fs.help("fastcp") to check the detailed usage.
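A sketch of the new method; the ABFS paths below are hypothetical placeholders for your own OneLake or ADLS locations.

from notebookutils import mssparkutils  # pre-imported in Fabric notebooks

src = "abfss://ws@onelake.dfs.fabric.microsoft.com/Lake.Lakehouse/Files/input"
dst = "abfss://ws@onelake.dfs.fabric.microsoft.com/Lake.Lakehouse/Files/backup"

# Fast, parallelized copy; the final argument enables recursive copy.
mssparkutils.fs.fastcp(src, dst, True)

# Built-in help for the detailed usage, as mentioned above.
mssparkutils.fs.help("fastcp")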

September 2023 Notebook resources .whl file support We now support uploading .whl files in the Notebook Resources explorer.

August Introducing High High concurrency mode allows you to run notebooks
2023 Concurrency Mode simultaneously on the same cluster without compromising
in Notebooks for performance or security when paying for a single session.
Data Engineering High concurrency mode offers several benefits for Fabric
and Data Science Spark users.
workloads in
Microsoft Fabric

August Service principal Azure service principal has been added as an authentication
2023 support to connect type for a set of data sources that can be used in Dataset,
to data in Dataflow, Dataflow, Dataflow Gen2 and Datamart.
Datamart, Dataset
and Dataflow Gen
2

August Announcing XMLA Direct Lake datasets now support XMLA-Write operations.
2023 Write support for Now you can use your favorite BI Pro tools and scripts to
Direct Lake create and manage Direct Lake datasets using XMLA
datasets endpoints .

Synapse Data Engineering samples and guidance


Month Feature Learn more

November Fabric Changing the game: A step-by-step guide to use your own Python library
2023 Using your own library with in the Lakehouse . It is quite simple to create your
Microsoft Fabric own library with Python and even simpler to reuse it
on Fabric.

August Fabric changing the game: Learn more about logging your workload into
2023 Logging your workload OneLake using notebooks , using the OneLake API
using Notebooks Path inside the notebook.

July 2023 Lakehouse Sharing and Share a lakehouse and manage permissions so
Access Permission that users can access lakehouse data through the
Management Data Hub, the SQL analytics endpoint, and the
default semantic model.

June 2023 Virtualize your existing Connect data silos without moving or copying data
data into OneLake with with OneLake, which allows you to create special
shortcuts folders called shortcuts that point to other storage
locations .

Synapse Data Science in Microsoft Fabric


This section summarizes recent improvements and features for the Data Science
experience in Microsoft Fabric.

Month Feature Learn more

November 2023 Copilot in notebooks preview The Copilot in Fabric Data Science and Data Engineering notebooks is designed to accelerate productivity, provide helpful answers and guidance, and generate code for common tasks like data exploration, data preparation, and machine learning. You can interact and engage with the AI from either the chat panel or even from within notebook cells, using magic commands to get insights from data faster. For more information, see Copilot in notebooks.

November 2023 Custom Python Operations in Data Wrangler Data Wrangler, a notebook-based tool for exploratory data analysis, has always allowed users to browse and apply common data-cleaning operations, generating the corresponding code in real time. Now, in addition to generating code from the UI, users can also write their own code with custom operations in Data Wrangler.

November 2023 Data Wrangler for Spark DataFrames preview Data Wrangler now supports Spark DataFrames in preview. Until now, users have been able to explore and transform pandas DataFrames using common operations that can be converted to Python code in real time. The new release allows users to edit Spark DataFrames in addition to pandas DataFrames with Data Wrangler.

November MLFlow Notebook The MLflow inline authoring widget enables users to
2023 Widget effortlessly track their experiment runs along with metrics
and parameters, all directly from within their notebook .

November New Model & New enhancements to our model and experiment tracking
2023 Experiment Item features are based on valuable user feedback. The new
Usability tree-control in the run details view makes tracking easier
Improvements by showing which run is selected. We've enhanced the
comparison feature, allowing you to easily adjust the
comparison pane for a more user-friendly experience. Now
you can select the run name to see the Run Details view.

November Recent Experiment It is now simpler for users to check out recent runs for an
2023 Runs experiment directly from the workspace list view . This
update makes it easier to keep track of recent activity,
quickly jump to the related Spark application, and apply
filters based on the run status.

November 2023 Models renamed to ML Models Microsoft has renamed "Models" to "ML Models" to ensure clarity and avoid any confusion with other Fabric elements. For more information, see Machine learning experiments in Microsoft Fabric.

November SynapseML v1.0 SynapseML v1.0 is now released. SynapseML v1.0 makes it
2023 easy to build production ready machine learning systems
on Fabric and has been in use at Microsoft for over six
years.

November 2023 Train Interpretable Explainable Boosting Machines with SynapseML We've introduced a scalable implementation of Explainable Boosting Machines (EBM) powered by Apache Spark in SynapseML. EBMs are a powerful machine learning technique that combines the accuracy of gradient boosting with a strong focus on model interpretability.

November Prebuilt AI models in We are excited to announce the preview for prebuilt AI
2023 Microsoft Fabric models in Fabric . Azure OpenAI Service , Text
preview Analytics , and Azure AI Translator are prebuilt models
available in Fabric, with support for both RESTful API and
SynapseML. You can also use the OpenAI Python Library
to access Azure OpenAI service in Fabric.

November 2023 Reusing existing Spark Session in sparklyr We have added support for a new connection method called "synapse" in sparklyr, which enables users to connect to an existing Spark session. Additionally, we have contributed this connection method to the OSS sparklyr project. Users can now use both sparklyr and SparkR in the same session and easily share data between them.

November REST API Support for REST APIs for ML Experiment and ML Model are now
2023 ML Experiments and available. These REST APIs for ML Experiments and ML
ML Models Models begin to empower users to create and manage
machine-learning artifacts programmatically, a key
requirement for pipeline automation and workload
integration.

October 2023 Semantic link (preview) Announcing the preview of semantic link, an innovative feature that seamlessly connects Power BI semantic models with Synapse Data Science within Microsoft Fabric. As the gold layer in a medallion architecture, Power BI semantic models contain the most refined and valuable data in your organization.

October 2023 Semantic link in Microsoft Fabric: Bridging BI and Data Science We are pleased to introduce the preview of semantic link, an innovative feature that seamlessly connects Power BI semantic models with Synapse Data Science within Microsoft Fabric.

October 2023 Get started with semantic link (preview) Explore how semantic link seamlessly connects Power BI semantic models with Synapse Data Science within Microsoft Fabric. Learn more at Semantic link in Microsoft Fabric: Bridging BI and Data Science.

You can also check out the semantic link sample notebooks that are now available in the fabric-samples GitHub repository. These notebooks showcase the use of semantic link's Python library, SemPy, in Microsoft Fabric.
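A short SemPy sketch of the kind shown in those notebooks, to be run inside a Fabric notebook; the semantic model, table, measure, and column names are hypothetical.

import sempy.fabric as fabric

# Discover the semantic models available in the workspace.
print(fabric.list_datasets())

# Read a table from a semantic model into a FabricDataFrame.
customers = fabric.read_table("Sales Model", "Customers")

# Evaluate a Power BI measure, grouped by a model column.
totals = fabric.evaluate_measure(
    "Sales Model",
    measure="Total Sales",
    groupby_columns=["Geography[Region]"],
)
print(totals.head())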

August Harness the Power of Harness the potential of Microsoft Fabric and SynapseML
2023 LangChain in LLM capabilities to effectively summarize and organize
Microsoft Fabric for your own documents.
Advanced Document
Summarization

July 2023 Unleashing the Power In this blog post, we delve into the exciting functionalities
of SynapseML and and features of Microsoft Fabric and SynapseML to
Microsoft Fabric: A demonstrate how to leverage Generative AI models or
Guide to Q&A on PDF Large Language Models (LLMs) to perform question and
Documents answer (Q&A) tasks on any PDF document .

Synapse Data Science samples and guidance


Month Feature Learn more

November New data We've updated the Data Science Happy Path tutorial for
2023 science happy Microsoft Fabric . This new comprehensive tutorial
path tutorial in demonstrates the entire data science workflow , using a bank
Microsoft Fabric customer churn problem as the context.

November 2023 New data science samples We've expanded our collection of data science samples to include new end-to-end R samples and new quick tutorial samples for "Explaining Model Outputs" and "Visualizing Model Behavior".

November New data The new Data Science sample on sales forecasting was
2023 science developed in collaboration with Sonata Software . This new
forecasting sample encompasses the entire data science workflow,
sample spanning from data cleaning to Power BI visualization. The
notebook covers the steps to develop, evaluate, and score a
forecasting model for superstore sales, harnessing the power of
the SARIMAX algorithm.

August New Machine More samples have been added to the Microsoft Fabric Synapse
2023 failure and Data Science Use a sample menu. To check these Data Science
Customer churn samples, select Synapse Data Science, then Use a sample.
samples

August Use Semantic Learn how Fabric allows data scientists to use Semantic Kernel
2023 Kernel with with Lakehouse in Microsoft Fabric .
Lakehouse in
Microsoft Fabric

Synapse Data Warehouse in Microsoft Fabric


This section summarizes recent improvements and features for Synapse Data Warehouse
in Microsoft Fabric.

Month Feature Learn more

November Mirroring in Microsoft Any database can be accessed and managed centrally
2023 Fabric from within Fabric without having to switch database
clients. By just providing connection details, your
database is instantly available in Fabric as a Mirrored
database . Azure Cosmos DB, Azure SQL Database,
and Snowflake customers will be able to use Mirroring.
SQL Server, Azure PostgreSQL, Azure MySQL,
MongoDB, and other databases and data warehouses
will be coming in CY24.

November TRIM T-SQL support You can now use the TRIM command to remove spaces
2023 or specific characters from strings by using the
keywords LEADING, TRAILING or BOTH in TRIM
(Transact-SQL).

November 2023 GENERATE_SERIES T-SQL support Generates a series of numbers within a given interval with GENERATE_SERIES (Transact-SQL). The interval and the step between series values are defined by the user.
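To illustrate this row and the TRIM row above it, a hedged sketch that runs the new T-SQL against a Warehouse over its SQL connection string with pyodbc; it assumes ODBC Driver 18 is installed, and the server and database names are hypothetical placeholders copied from the warehouse settings.

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<your-endpoint>.datawarehouse.fabric.microsoft.com;"
    "DATABASE=MyWarehouse;"
    "Authentication=ActiveDirectoryInteractive;"
)
cur = conn.cursor()

# TRIM with the new LEADING/TRAILING/BOTH keywords.
cur.execute("SELECT TRIM(LEADING '0' FROM '000123400');")
print(cur.fetchone()[0])  # -> 123400

# GENERATE_SERIES: numbers from 1 to 10 with a step of 3.
cur.execute("SELECT value FROM GENERATE_SERIES(1, 10, 3);")
print([row.value for row in cur.fetchall()])  # -> [1, 4, 7, 10]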

November SSD metadata caching File and rowgroup metadata are now also cached with
2023 in-memory and SSD cache, further improving
performance.

November 2023 PARSER 2.0 improvements for CSV ingestion CSV file parser version 2.0 for COPY INTO builds on an innovation from Microsoft Research's Data Platform and Analytics group to make CSV file ingestion blazing fast on Fabric Warehouse. For more information, see COPY INTO (Transact-SQL).

November Fast compute resource All query executions in Fabric Warehouse are now
2023 assignment enabled powered by the new technology recently deployed as
part of the Global Resource Governance component
that assigns compute resources in milliseconds.

November 2023 REST API support for Warehouse With the Warehouse public APIs, SQL developers can now automate their pipelines and establish CI/CD conveniently and efficiently. The Warehouse REST public APIs make it easy for users to manage and manipulate Fabric Warehouse items.

November 2023 SQLPackage support for Fabric Warehouse SqlPackage now supports Fabric Warehouse. SqlPackage is a command-line utility that automates database development tasks by exposing some of the public Data-Tier Application Framework (DacFx) APIs, and it allows you to specify these actions along with action-specific parameters and properties.

November Power BI semantic Microsoft has renamed the Power BI dataset content
2023 models type to semantic model. This applies to Microsoft Fabric
semantic models as well. For more information, see
New name for Power BI datasets.

November SQL analytics endpoint Microsoft has renamed the SQL endpoint of a
2023 Lakehouse to the SQL analytics endpoint of a
Lakehouse.

November 2023 Dynamic data masking Dynamic data masking (DDM) is now available for Fabric Warehouse and the SQL analytics endpoint in the Lakehouse. For more information and samples, see Dynamic data masking in Fabric data warehousing and How to implement dynamic data masking in Synapse Data Warehouse.
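A sketch of the corresponding T-SQL, reusing a pyodbc cursor like the one in the TRIM/GENERATE_SERIES sketch above; dbo.Customers and its Email column are hypothetical.

# Apply the built-in email() masking function to a column.
cur.execute(
    "ALTER TABLE dbo.Customers "
    "ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');"
)
conn.commit()
# Users without UNMASK permission now see values like aXX@XXXX.com.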

November Clone tables with time You can now use table clones to create a clone of a
2023 travel table based on data up to seven calendar days in the
past .

November User experience updates Several user experiences in Warehouse have landed. For
2023 more information, see Fabric Warehouse user
experience updates .

November 2023 Automatic data compaction Automatic data compaction rewrites many smaller parquet files into a few larger parquet files, which improves the performance of reading the table. Data compaction is one of the ways the Warehouse provides great performance with no effort on your part.

October Support for sp_rename Support for the T-SQL sp_rename syntax is now
2023 available for both Warehouse and SQL analytics
endpoint. For more information, see Fabric Warehouse
support for sp_rename .

October Query insights The query insights feature is a scalable, sustainable, and
2023 extendable solution to enhance the SQL analytics
experience. With historic query data, aggregated
insights, and access to actual query text, you can
analyze and tune your query performance.

October Full DML to Delta Lake Fabric Warehouse now publishes all Inserts, Updates
2023 Logs and Deletes for each table to their Delta Lake Log in
OneLake.

October 2023 V-Order write optimization V-Order optimizes parquet files to enable lightning-fast reads under Microsoft Fabric compute engines such as Power BI, SQL, Spark, and others. Warehouse queries in general benefit from faster read times with this optimization, while still ensuring the parquet files are 100% compliant with the open-source specification. Starting this month, all data ingested into Fabric Warehouses uses V-Order optimization.

October 2023 Burstable capacity Burstable capacity allows workloads to use more resources to achieve better performance. Burstable capacity is finite, with a limit applied to the backend compute resources to greatly reduce the risk of throttling. For more information, see Warehouse SKU Guardrails for Burstable Capacity.

October Throttling and A new article details the throttling and smoothing
2023 smoothing in Synapse behavior in Synapse Data Warehouse, where almost all
Data Warehouse activity is classified as background to take advantage of
the 24-hr smoothing window before throttling takes
effect. Learn more about how to observe utilization in
Synapse Data Warehouse.

September 2023 Default semantic model improvements The default semantic model no longer automatically adds new objects. Automatic addition of new objects can be re-enabled in the Warehouse item settings.

September 2023 Deployment pipelines now support warehouses Deployment pipelines enable creators to develop and test content in the service before it reaches the users. Supported content types include reports, paginated reports, dashboards, semantic models, dataflows, and now warehouses. Learn how to deploy content programmatically using REST APIs and DevOps.

September SQL Projects support for Microsoft Fabric Data Warehouse is now supported in
2023 Warehouse in Microsoft the SQL Database Projects extension available inside of
Fabric Azure Data Studio and Visual Studio Code .

September 2023 Announcing: Column-level & Row-level security for Fabric Warehouse & SQL analytics endpoint Column-level and row-level security in Fabric Warehouse and SQL analytics endpoint are now in preview, behaving similarly to the same features in SQL Server.

September Usage reporting Utilization and billing reporting is available for Fabric
2023 data warehousing in the Microsoft Fabric Capacity
Metrics app. For more information, read about
Utilization and billing reporting Fabric data
warehousing .

August SSD Caching enabled Local SSD caching stores frequently accessed data on
2023 local disks in highly optimized format, significantly
reducing I/O latency. This benefits you immediately,
with no action required or configuration necessary.

July 2023 Sharing Any Admin or Member within a workspace can share a
Warehouse with another recipient within your
organization. You can also grant these permissions
using the "Manage permissions" experience.

July 2023 Table clone A zero-copy clone creates a replica of the table by copying the metadata, while referencing the same data files in OneLake. This avoids the need to store multiple copies of data, thereby saving on storage costs when you clone a table in Microsoft Fabric. For more information, see the tutorials to Clone a table with T-SQL or Clone tables in the Fabric portal.
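A sketch of the zero-copy clone T-SQL, again assuming a pyodbc cursor like the one in the TRIM/GENERATE_SERIES sketch earlier in this article; dbo.Orders is a hypothetical table.

# Create a clone that copies metadata only and references the same
# OneLake data files, so it completes quickly and adds no storage cost.
cur.execute("CREATE TABLE dbo.Orders_clone AS CLONE OF dbo.Orders;")
conn.commit()

# The clone is immediately queryable as an independent table.
cur.execute("SELECT COUNT(*) FROM dbo.Orders_clone;")
print(cur.fetchone()[0])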

May 2023 Introducing Synapse Synapse Data Warehouse is the next generation of data
Data Warehouse in warehousing in Microsoft Fabric that is the first
Microsoft Fabric transactional data warehouse to natively support an
open data format, Delta-Parquet.

Synapse Data Warehouse samples and guidance

Month Feature Learn more

November Migrate from Azure Synapse A detailed guide with a migration runbook is
2023 dedicated SQL pools available for migrations from Azure Synapse Data
Warehouse dedicated SQL pools into Microsoft
Fabric.

August Efficient Data Partitioning A proposed method for data partitioning using
2023 with Microsoft Fabric: Best Fabric notebooks . Data partitioning is a data
Practices and management technique used to divide a large
Implementation Guide dataset into smaller, more manageable subsets
called partitions or shards.

May 2023 Microsoft Fabric - How can a This blog reviews how to connect to a SQL analytics
SQL user or DBA connect endpoint of the Lakehouse or the Warehouse
through the Tabular Data Stream, or TDS
endpoint , familiar to all modern web applications
that interact with a SQL Server endpoint.

Synapse Real-Time Analytics in Microsoft Fabric


This section summarizes recent improvements and features for real-time analytics in
Microsoft Fabric.

Month Feature Learn more

November Announcing Delta You can now enable availability of KQL Database in Delta
2023 Lake support in Lake format . Delta Lake is the unified data lake table
Real-Time Analytics format chosen to achieve seamless data access across all
KQL Database compute engines in Microsoft Fabric.

November 2023 Real-Time Analytics in Microsoft Fabric general availability (GA) Announcing the general availability of Real-Time Analytics in Microsoft Fabric! Real-Time Analytics offers countless features all aimed at making your data analysis more efficient and effective.

November Delta Parquet As part of the one logical copy promise, we are excited to
2023 support in KQL announce that data in KQL Database can now be made
Database available in OneLake in delta parquet format . You can
now access this Delta table by creating a OneLake shortcut
from Lakehouse, Warehouse, or directly via Power BI Direct
Lake mode.

November Open Source Several open-source connectors for real-time analytics are
2023 Connectors for KQL now supported to enable users to ingest data from
Database various sources and process it using KQL DB.

November 2023 REST API Support for KQL Database We're excited to announce the launch of REST public APIs for KQL DB. The public REST APIs of KQL DB enable users to manage and automate their flows programmatically.

November 2023 Eventstream now Generally Available Eventstream is now generally available, adding enhancements aimed at taking your data processing experience to the next level.

November 2023 Eventstream Data Transformation for KQL Database Now you can transform your data streams in real time within Eventstream before they are sent to your KQL Database. When you create a KQL Database destination in the eventstream, you can set the ingestion mode to "Event processing before ingestion" and add event processing logic, such as filtering and aggregation, to transform your data streams.

November Splunk add-on Microsoft Fabric add-on for Splunk allows users to ingest
2023 preview logs from Splunk platform into a Fabric KQL DB using the
Kusto python SDK.
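As a sketch of the Kusto Python SDK mentioned here (the azure-kusto-data package), here is a minimal query against a Fabric KQL DB; the query URI, database, and table names are hypothetical placeholders.

from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster = "https://<your-query-uri>.kusto.fabric.microsoft.com"
kcsb = KustoConnectionStringBuilder.with_interactive_login(cluster)
client = KustoClient(kcsb)

# Query the KQL database that receives the ingested Splunk logs.
response = client.execute("MyEventhouseDb", "SplunkLogs | take 10")
for row in response.primary_results[0]:
    print(row)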

November 2023 Get Data from Eventstream anywhere in Fabric If you're working on other Fabric items and are looking to ingest data from Eventstream, our new "Get Data from Eventstream" feature simplifies the process: you can get data from Eventstream while you are working with a KQL database or a Lakehouse.

November Two ingestion We've introduced two distinct ingestion modes for your
2023 modes for Lakehouse Destination: Rows per file and Duration .
Lakehouse
Destination

November 2023 Optimize Tables Before Ingesting Data to Lakehouse The table optimization shortcut is now available inside the Eventstream Lakehouse destination to compact numerous small streaming files generated on a Lakehouse table. The table optimization shortcut works by opening a notebook with a Spark job, which compacts the small streaming files in the destination Lakehouse table.

November 2023 Create a Cloud Connection within Eventstream We've simplified the process of establishing a cloud connection to your Azure services within Eventstream. When adding an Azure resource, such as Azure IoT Hub or Azure Event Hubs, to your Eventstream, you can now create the cloud connection and enter your Azure resource credentials right within Eventstream. This enhancement significantly improves the process of adding new data sources to your Eventstream, saving you time and effort.

November Get Data in Real- A new Get Data experience simplifies the data ingestion
2023 Time Analytics: A process in your KQL database.
New and Improved
Experience

October 2023 Expanded Custom App Connections New custom app connections provide more flexibility when it comes to bringing your data streams into Eventstream.

October 2023 Enhanced UX on Event Processor New UX improvements on the no-code Event Processor provide an intuitive experience, allowing you to effortlessly add or delete operations on the canvas.

October Eventstream Kafka The Custom App feature has new endpoints in sources and
2023 Endpoints and destinations , including sample Java code for your
Sample Code convenience. Simply add it to your application, and you'll
be all set to stream your real-time event to Eventstream.

October 2023 Event processing editor UX improvements Recent UX improvements introduce a full-screen mode, providing a more spacious workspace for designing your data processing workflows. The insertion and deletion of data stream operations have been made more intuitive, making it easier to drag, drop, and connect your data transformations.

October KQL Database Auto Users do not need to worry about how many resources are
2023 scale algorithm needed to support their workloads in a KQL database. KQL
improvements Database has a sophisticated in-built, multi-dimensional,
auto scaling algorithm. We recently implemented some
optimizations that will make some time series analysis more
efficient .

October Understanding Read more about how a KQL database is billed in the
2023 Fabric KQL DB SaaS world of Microsoft Fabric.
Capacity

September 2023 OneLake shortcut to delta tables from KQL DB Now you can create a shortcut from KQL DB to delta tables in OneLake, allowing in-place data queries. You can query delta tables in your Lakehouse or Warehouse directly from KQL DB.

September 2023 Model and Query data as graphs using KQL Kusto Query Language (KQL) now allows you to model and query data as graphs. This feature is currently in preview. Learn more at Introduction to graph semantics in KQL and Graph operators and functions.

September Easily connect to Power BI desktop has released two new ways to easily
2023 KQL Database from connect to a KQL database, in the Get Data dialogue and in
Power BI desktop the OneLake data hub menus.

September Eventstream now AMQP stands for Advanced Message Queuing Protocol, a
2023 supports AMQP protocol that supports a wide range of messaging patterns.
format connection In Eventstream, you can now create a Custom App source
string for data or destination and select AMQP format connection string
ingestion for ingesting data into Fabric or consuming data from
Fabric.

September 2023 Eventstream supports data ingestion from Azure IoT Hub Azure IoT Hub is a cloud-hosted solution that provides secure communication channels for sending and receiving data from IoT devices. In Eventstream, you can now stream your Azure IoT Hub data into Fabric and perform real-time processing before storing it in a Kusto Database or Lakehouse.

September 2023 Real-Time Data Sharing in Microsoft Fabric A database shortcut in Real-Time Analytics is an embedded reference within a KQL database to a source database in Azure Data Explorer (ADX), allowing in-place data sharing. The behavior exhibited by the database shortcut is similar to that of an Azure Data Explorer follower database.

August 2023 Provisioning optimization The KQL Database provisioning process has been optimized. Now you can provision a KQL Database within a few seconds.

August KQL Database Fabric KQL Database supports running Python code
2023 support for inline embedded in Kusto Query Language (KQL) using the
Python python() plugin. The plugin is disabled by default. Before
you start, enable the Python plugin in your KQL database.

July 2023 Microsoft Fabric eventstreams: Generating Real-time Insights with Python, KQL, and Power BI Microsoft Fabric eventstreams are a high-throughput, low-latency data ingestion and transformation service.

July 2023 Stream Real-time Eventstreams under Real-Time Analytics are a centralized
Events to Microsoft platform within Fabric, allowing you to capture, transform,
Fabric with and route real-time events to multiple destinations
eventstreams from a effortlessly, all through a user-friendly, no-code experience.
custom application

June 2023 Unveiling the Epic As part of the Kusto Detective Agency Season 2 , we're
Opportunity: A Fun excited to introduce an epic opportunity for all investigators
Game to Explore the and data enthusiasts to learn about the new portfolio in a
Synapse Real-Time fun and engaging way. Recruiting now at
Analytics https://detective.kusto.io/ !

Synapse Real-Time Analytics samples and guidance

Month Feature Learn more

November Semantic Link: Data Great Expectations Open Source (GX OSS) is a popular
2023 validation using Great Python library that provides a framework for
Expectations describing and validating the acceptable state of data.
With the recent integration of Microsoft Fabric
semantic link, GX can now access semantic models ,
further enabling seamless collaboration between data
scientists and business analysts.

November 2023 Explore Data Transformation in Eventstream for KQL Database Integration Dive into a practical scenario using real-world bike-sharing data and learn to compute the number of bikes rented every minute on each street, using Eventstream's powerful event processor, mastering real-time data transformations, and effortlessly directing the processed data to your KQL Database.

October 2023 From RabbitMQ to PowerBI reports with Microsoft Fabric Real-Time Analytics A walkthrough of an end-to-end scenario sending data from RabbitMQ to a KQL Database in Microsoft Fabric.

October Stream Azure IoT Hub A demo of using Fabric Eventstream to seamlessly
2023 Data into Fabric ingest and transform real-time data streams before
Eventstream for Email they reach various Fabric destinations such as
Alerting Lakehouse, KQL Database, and Reflex. Then, configure
email alerts in Reflex with Data Activator triggers.

September 2023 Real-Time Analytics sample gallery Real-Time Analytics now offers a comprehensive sample gallery with multiple datasets, allowing you to explore, learn, and get started quickly. Access the samples by selecting Use a sample from the Real-Time Analytics experience home.

September Quick start: Sending data Learn how to send data from Kafka to Synapse Real-
2023 to Synapse Real-Time time Analytics in Fabric .
Analytics in Fabric from
Apache Kafka Ecosystems
using Java

June 2023 From raw data to insights: Learn about the integration between Azure Event
How to ingest data from Hubs and your KQL database .
Azure Event Hubs into a
KQL database

June 2023 From raw data to insights: Learn about the integration between eventstreams
How to ingest data from and a KQL database , both of which are a part of the
eventstreams into a KQL Real-Time Analytics experience.
database

June 2023 Discovering the best ways This blog covers different options for bringing data
to get data into a KQL into a KQL database .
database

June 2023 Get started with In this blog, we focus on the different ways of
exploring your data with querying data in Synapse Real-Time Analytics .
KQL – a purpose-built
tool for petabyte scale
data analytics

Microsoft Copilot in Microsoft Fabric


Month Feature Learn more

November 2023 Empower Power BI users with Microsoft Fabric and Copilot We are thrilled to announce the general availability of Microsoft Fabric and the preview of Copilot in Microsoft Fabric, including the experience for Power BI.

November 2023 Copilot for Power BI in Microsoft Fabric preview We are thrilled to announce the preview of Copilot in Microsoft Fabric, including the experience for Power BI, which helps users get started quickly by creating reports in the Power BI web experience. For more information, see Copilot for Power BI.

October 2023 Chat your data in Microsoft Fabric with Semantic Kernel Learn how to construct Copilot tools based on business data in Microsoft Fabric.

Microsoft Fabric core features


News and feature announcements core to the Microsoft Fabric experience.

Month Feature Learn more

November Fabric workloads are Microsoft Fabric is now generally available! Microsoft
2023 now generally Fabric Synapse Data Warehouse, Data Engineering & Data
available! Science, Real-Time Analytics, Data Factory, OneLake, and
the overall Fabric platform are now generally available.

November Microsoft Fabric User We're happy to announce the preview of Microsoft Fabric
2023 APIs User APIs. The Fabric user APIs are a major enabler for
both enterprises and partners to use Microsoft Fabric as
they enable end-to-end fully automated interaction with
the service, enable integration of Microsoft Fabric into
external web applications, and generally enable customers
and partners to scale their solutions more easily.

October Item type icons Our design team has completed a rework of the item type
2023 icons across the platform to improve visual parsing.

October 2023 Keyword-Based Filtering of Tenant Settings Microsoft Fabric has recently introduced keyword-based filtering for the tenant settings page in the admin portal.

September Monitoring hub – Column options inside the monitoring hub give users a
2023 column options better customization experience and more room to
operate.

September 2023 OneLake File Explorer v1.0.10 The OneLake file explorer automatically syncs all Microsoft OneLake items that you have access to in Windows File Explorer. With the latest version, you can seamlessly transition between using the OneLake file explorer app and the Fabric web portal. You can also right-click the OneLake icon in the Windows notification area and select Diagnostic Operations to view client-side logs. Learn more about easy access to open workspaces and items online.

August Multitasking Now, all Fabric items are opened in a single browser tab on
2023 navigation the left navigation bar, even in the event of a page refresh.
improvement This ensures you can refresh the page without the concern
of losing context.

August 2023 Monitoring Hub support for personalized column options We have updated Monitoring Hub to allow users to personalize activity-specific columns. You now have the flexibility to display columns that are relevant to the activities you're focused on.

July 2023 New OneLake file With OneLake file explorer v1.0.9.0, it's simple to choose
explorer update with and switch between different Microsoft Entra ID (formerly
support for switching Azure Active Directory) accounts .
organizational
accounts

July 2023 Help pane The Help pane is feature-aware and displays articles about
the actions and features available on the current Fabric
screen. For more information, see Help pane in the
monthly Fabric update.

Continuous Integration/Continuous Delivery (CI/CD) in Microsoft Fabric

This section includes guidance and documentation updates on development process, tools, and versioning in the Microsoft Fabric workspace.

Month Feature Learn more

November Microsoft Fabric Microsoft Fabric User APIs are now available for Fabric
2023 User APIs experiences. The Fabric user APIs are a major enabler for
both enterprises and partners to use Microsoft Fabric as
they enable end-to-end fully automated interaction with
the service, enable integration of Microsoft Fabric into
external web applications, and generally enable customers
and partners to scale their solutions more easily.

November 2023 Notebook in Deployment Pipeline Preview Now you can also use notebooks to deploy your code across different environments, such as development, test, and production. You can also use deployment rules to customize the behavior of your notebooks when they are deployed, such as changing the default Lakehouse of a notebook. Get started with deployment pipelines to set up your deployment pipeline, and notebooks will show up in the deployment content automatically.

November Notebook Git Fabric notebooks now offer Git integration for source
2023 integration preview control using Azure DevOps. It allows users to easily control
the notebook code versions and manage the Git branches
by leveraging the Fabric Git functions and Azure DevOps.

November 2023 Notebook REST APIs Preview With REST public APIs for the Notebook items, data engineers and data scientists can automate their pipelines and establish CI/CD conveniently and efficiently. The notebook REST public APIs make it easy for users to manage and manipulate Fabric notebook items and integrate notebooks with other tools and systems.

November Lakehouse support The Lakehouse artifact now integrates with the lifecycle
2023 for git integration management capabilities in Microsoft Fabric, providing a
and deployment standardized collaboration between all development team
pipelines (preview) members throughout the product's life. Lifecycle
management facilitates an effective product versioning and
release process by continuously delivering features and bug
fixes into multiple environments.

November 2023 SQLPackage support for Fabric Warehouse SqlPackage now supports Fabric Warehouse. SqlPackage is a command-line utility that automates database development tasks by exposing some of the public Data-Tier Application Framework (DacFx) APIs, and it allows you to specify these actions along with action-specific parameters and properties.

September SQL Projects Microsoft Fabric Data Warehouse is now supported in the
2023 support for SQL Database Projects extension available inside of Azure
Warehouse in Data Studio and Visual Studio Code .
Microsoft Fabric

September Notebook file The Synapse VS Code extension now supports notebook
2023 system support in File System for Data Engineering and Data Science in
Synapse VS Code Microsoft Fabric. The Synapse VS Code extension empowers
extension users to develop their notebook artifacts directly within the
Visual Studio Code environment.

September 2023 Deployment pipelines now support warehouses Deployment pipelines enable creators to develop and test content in the service before it reaches the users. Supported content types include reports, paginated reports, dashboards, semantic models, dataflows, and now warehouses. Learn how to deploy content programmatically using REST APIs and DevOps.

September Git integration with You can now publish a Power BI paginated report and keep
2023 paginated reports in it in sync with your git workspace. Developers can apply
Power BI their development processes, tools, and best practices.

August 2023 Introducing the dbt adapter for Synapse Data Warehouse in Microsoft Fabric The dbt adapter allows you to connect and transform data into Synapse Data Warehouse. The Data Build Tool (dbt) is an open-source framework that simplifies data transformation and analytics engineering.

May 2023 Introducing git While developing in Fabric, developers can back up and
integration in version their work, roll back as needed, collaborate or work
Microsoft Fabric for in isolation using git branches . Read more about
seamless source connecting the workspace to an Azure repo.
control
management

Data Activator in Microsoft Fabric


Data Activator is a no-code experience in Microsoft Fabric for automatically taking
actions when patterns or conditions are detected in changing data. This section
summarizes recent new features and capabilities of Data Activator in Microsoft Fabric.

Month Feature Learn more

October Announcing the Data We are thrilled to announce that Data Activator is now in
2023 Activator preview preview and is enabled for all existing Microsoft Fabric
users.

August Updated preview We have been working on a new experience for designing
2023 experience for triggers and it's now available in our preview! You now see
trigger design three cards in every trigger: Select, Detect, and Act.

May Driving actions from Data Activator is a new no-code Microsoft Fabric
2023 your data with Data experience that empowers the business analyst to drive
Activator actions automatically from your data. To learn more, sign up
for the Data Activator limited preview.

Fabric and Microsoft 365


This section includes articles and announcements about Microsoft Fabric integration
with Microsoft Graph and Microsoft 365.

Month Feature Learn more

November Fabric + Microsoft 365 Microsoft Graph is the gateway to data and intelligence
2023 Data: Better Together in Microsoft 365. Microsoft 365 Data Integration for
Microsoft Fabric enables you to manage your Microsoft
365 alongside your other data sources in one place with a
suite of analytical experiences.

November Microsoft 365 The Microsoft 365 connector now supports ingesting
2023 connector now data into Lakehouse tables .
supports ingesting
data into Lakehouse
(preview)

October Microsoft OneLake You can now create shortcuts directly to your Dynamics
2023 adds shortcut support 365 and Power Platform data in Dataverse and analyze
to Power Platform and it with Microsoft Fabric alongside the rest of your
Dynamics 365 OneLake data. There is no need to export data, build ETL
pipelines or use third-party integration tools.

Migration
This section includes guidance and documentation updates on migration to Microsoft
Fabric.

Month Feature Learn more

November 2023 Migrate from Azure Synapse dedicated SQL pools A detailed guide with a migration runbook is available for migrations from Azure Synapse Data Warehouse dedicated SQL pools into Microsoft Fabric.

November 2023 Migrating from Azure Synapse Spark to Fabric A detailed set of articles on migration of Azure Synapse Spark to Microsoft Fabric, including a migration process that can involve multiple scenarios and phases.

July 2023 Fabric Changing the game – This blog post covers OneLake integrations and
OneLake integration multiple scenarios to ingest the data inside of Fabric
OneLake , including ADLS, ADF, OneLake Explorer,
Databricks.

June 2023 Microsoft Fabric changing This blog post covers the scenario to export data
the game: Exporting data from Azure SQL Database into OneLake .
and building the Lakehouse

June 2023 Copy data to Azure SQL at Did you know that you can use Microsoft Fabric to
scale with Microsoft Fabric copy data at scale from supported data sources to
Azure SQL Database or Azure SQL Managed
Instance within minutes?

June 2023 Bring your Mainframe DB2 In this blog, we review the convenience and ease of
z/OS data to Microsoft opening DB2 for z/OS data in Microsoft Fabric .
Fabric
Monitoring
This section includes guidance and documentation updates on monitoring your
Microsoft Fabric capacity and utilization, including the Monitoring hub.

Month Feature Learn more

October Throttling and smoothing in A new article helps you understand Fabric capacity
2023 Synapse Data Warehouse throttling. Throttling occurs when a tenant's
capacity consumes more capacity resources than it
has purchased over a period of time.

September Monitoring hub - column Users can select and reorder the columns
2023 options according to their customized needs in the
Monitoring hub .

September Fabric Capacities – Read more about the improvements we're making
2023 Everything you need to to the Fabric capacity management platform for
know about what's new and Fabric and Power BI users .
what's coming

September Microsoft Fabric Capacity The Microsoft Fabric Capacity Metrics app is
2023 Metrics available in App Source for a variety of billing and
utilization reporting.

August Monitoring Hub support for We have updated Monitoring Hub to allow users to
2023 personalized column personalize activity-specific columns. You now have
options the flexibility to display columns that are relevant
to the activities you're focused on.

May 2023 Capacity metrics in Learn more about the universal compute capacities
Microsoft Fabric and Fabric's capacity metrics governance
features that admins can use to monitor usage
and make data-driven scale-up decisions.

Microsoft Purview
This section summarizes recent announcements about governance and compliance
capabilities with Microsoft Purview in Microsoft Fabric. Learn more about Information
protection in Microsoft Fabric.

Month Feature Learn more

May Administration, Security and Microsoft Fabric provides built-in enterprise grade
2023 Governance in Microsoft Fabric governance and compliance capabilities , powered
by Microsoft Purview.
Related content
For older updates, review previous updates in Microsoft Fabric.

Modernization Best Practices and Reusable Assets Blog


Azure Data Explorer Blog
Fabric Known Issues
Microsoft Fabric Blog
Microsoft Fabric terminology
What's new in Power BI?

Next step
Microsoft Fabric community



End-to-end tutorials in Microsoft Fabric
Article • 11/15/2023

In this article, you find a comprehensive list of end-to-end tutorials available in Microsoft Fabric. These tutorials guide you through a scenario that covers the entire process, from data acquisition to data consumption. They're designed to help you develop a foundational understanding of the Fabric UI, the various experiences supported by Fabric and their integration points, and the professional and citizen developer experiences that are available.

Multi-experience tutorials
The following table lists tutorials that span multiple Fabric experiences.

Tutorial name Scenario

Lakehouse In this tutorial, you ingest, transform, and load the data of a fictional retail
company, Wide World Importers, into the lakehouse and analyze sales data across
various dimensions.

Data Science In this tutorial, you explore, clean, and transform a taxicab trip semantic model,
and build a machine learning model to predict trip duration at scale on a large
semantic model.

Real-Time In this tutorial, you use the streaming and query capabilities of Real-Time
Analytics Analytics to analyze the New York Yellow Taxi trip semantic model. You uncover
essential insights into trip statistics, taxi demand across the boroughs of New
York, and other related insights.

Data In this tutorial, you build an end-to-end data warehouse for the fictional Wide
warehouse World Importers company. You ingest data into data warehouse, transform it
using T-SQL and pipelines, run queries, and build reports.

Experience-specific tutorials
The following tutorials walk you through scenarios within specific Fabric experiences.

Tutorial name Scenario

Power BI In this tutorial, you build a dataflow and pipeline to bring data into a
lakehouse, create a dimensional model, and generate a compelling
report.

Data Factory In this tutorial, you ingest data with data pipelines and transform data
with dataflows, then use the automation and notification to create a
complete data integration scenario.

Data Science end-to- In this set of tutorials, learn about the different Data Science experience
end AI samples capabilities and examples of how ML models can address your common
business problems.

Data Science - Price In this tutorial, you build a machine learning model to analyze and
prediction with R visualize the avocado prices in the US and predict future prices.

Application lifecycle In this tutorial, you learn how to use deployment pipelines together with
management git integration to collaborate with others in the development, testing and
publication of your data and reports.

Next steps
Create a workspace
Discover data items in the OneLake data hub

Feedback
Was this page helpful?  Yes  No

Provide product feedback | Ask the community


Microsoft Fabric decision guide: copy
activity, dataflow, or Spark
Article • 05/23/2023

Use this reference guide and the example scenarios to help you in deciding whether you
need a copy activity, a dataflow, or Spark for your workloads using Microsoft Fabric.

Important

Microsoft Fabric is in preview.

Copy activity, dataflow, and Spark properties


| | Pipeline copy activity | Dataflow Gen2 | Spark |
| --- | --- | --- | --- |
| Use case | Data lake and data warehouse migration, data ingestion, lightweight transformation | Data ingestion, data transformation, data wrangling, data profiling | Data ingestion, data transformation, data processing, data profiling |
| Primary developer persona | Data engineer, data integrator | Data engineer, data integrator, business analyst | Data engineer, data scientist, data developer |
| Primary developer skill set | ETL, SQL, JSON | ETL, M, SQL | Spark (Scala, Python, Spark SQL, R) |
| Code written | No code, low code | No code, low code | Code |
| Data volume | Low to high | Low to high | Low to high |
| Development interface | Wizard, canvas | Power Query | Notebook, Spark job definition |
| Sources | 30+ connectors | 150+ connectors | Hundreds of Spark libraries |
| Destinations | 18+ connectors | Lakehouse, Azure SQL database, Azure Data Explorer, Azure Synapse Analytics | Hundreds of Spark libraries |
| Transformation complexity | Low: lightweight - type conversion, column mapping, merge/split files, flatten hierarchy | Low to high: 300+ transformation functions | Low to high: support for native Spark and open-source libraries |

Review the following three scenarios for help with choosing how to work with your data
in Fabric.

Scenario 1
Leo, a data engineer, needs to ingest a large volume of data from external systems, both
on-premises and cloud. These external systems include databases, file systems, and APIs.
Leo doesn’t want to write and maintain code for each connector or data movement
operation. He wants to follow the medallion layers best practices, with bronze, silver,
and gold. Leo doesn't have any experience with Spark, so he prefers the drag and drop
UI as much as possible, with minimal coding. And he also wants to process the data on a
schedule.

The first step is to get the raw data into the bronze layer lakehouse from Azure data
resources and various third-party sources (like Snowflake, Web, REST, AWS S3, GCS, and so on).
He wants a consolidated lakehouse, so that all the data from various LOB, on-premises,
and cloud sources resides in a single place. Leo reviews the options and selects pipeline
copy activity as the appropriate choice for his raw binary copy. This pattern applies to
both historical and incremental data refresh. With copy activity, Leo can load Gold data
to a data warehouse with no code if the need arises, and pipelines provide high-scale
data ingestion that can move petabyte-scale data. Copy activity is the best low-code
and no-code choice to move petabytes of data to lakehouses and warehouses from
a variety of sources, either ad hoc or via a schedule.

Scenario 2
Mary is a data engineer with deep knowledge of the multiple LOB analytic reporting
requirements. An upstream team has successfully implemented a solution to migrate
multiple LOBs' historical and incremental data into a common lakehouse. Mary has been
tasked with cleaning the data, applying business logic, and loading it into multiple
destinations (such as Azure SQL DB, ADX, and a lakehouse) in preparation for their
respective reporting teams.

Mary is an experienced Power Query user, and the data volume is in the low-to-medium
range where dataflows achieve the desired performance. Dataflows provide no-code or
low-code interfaces for ingesting data from hundreds of data sources. With dataflows,
you can transform data using 300+ data transformation options and write the results into
multiple destinations with an easy-to-use, highly visual user interface. Mary reviews the
options and decides that it makes sense to use Dataflow Gen2 as her preferred
transformation option.

Scenario 3
Adam is a data engineer working for a large retail company that uses a lakehouse to
store and analyze its customer data. As part of his job, Adam is responsible for building
and maintaining the data pipelines that extract, transform, and load data into the
lakehouse. One of the company's business requirements is to perform customer review
analytics to gain insights into their customers' experiences and improve their services.

Adam decides the best option is to use Spark to build the extract and transformation
logic. Spark provides a distributed computing platform that can process large amounts
of data in parallel. He writes a Spark application using Python or Scala, which reads
structured, semi-structured, and unstructured data from OneLake for customer reviews
and feedback. The application cleanses, transforms, and writes data to Delta tables in
the lakehouse. The data is then ready to be used for downstream analytics.
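To make the pattern concrete, here's a minimal PySpark sketch of the kind of job Adam
might write. The paths, table name, and column names are hypothetical placeholders,
not details from the scenario.

```python
# Hypothetical sketch: read raw customer reviews from the lakehouse Files
# area, cleanse them, and write a managed Delta table for analytics.
from pyspark.sql import SparkSession, functions as F

# In a Fabric notebook or Spark job definition a session is provided;
# getOrCreate() reuses it.
spark = SparkSession.builder.appName("customer-review-etl").getOrCreate()

# Read semi-structured review data (path is a placeholder).
reviews = spark.read.json("Files/raw/customer_reviews/")

# Cleanse: drop duplicates, remove empty reviews, normalize the date.
cleansed = (
    reviews.dropDuplicates(["review_id"])
           .filter(F.col("review_text").isNotNull())
           .withColumn("review_date", F.to_date("review_ts"))
           .select("review_id", "customer_id", "review_date",
                   "review_text", "rating")
)

# Write to a Delta table in the lakehouse, ready for downstream analytics.
cleansed.write.format("delta").mode("overwrite").saveAsTable("customer_reviews")
```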

Next steps
How to copy data using copy activity
Quickstart: Create your first dataflow to get and transform data
How to create an Apache Spark job definition in Fabric
Microsoft Fabric decision guide: choose
a data store
Article • 09/18/2023

Use this reference guide and the example scenarios to help you choose a data store for
your Microsoft Fabric workloads.

Important

Microsoft Fabric is in preview.

Data warehouse and lakehouse properties


| | Data warehouse | Lakehouse | Power BI Datamart | KQL Database |
| --- | --- | --- | --- | --- |
| Data volume | Unlimited | Unlimited | Up to 100 GB | Unlimited |
| Type of data | Structured | Unstructured, semi-structured, structured | Structured | Unstructured, semi-structured, structured |
| Primary developer persona | Data warehouse developer, SQL engineer | Data engineer, data scientist | Citizen developer | Citizen data scientist, data engineer, data scientist, SQL engineer |
| Primary developer skill set | SQL | Spark (Scala, PySpark, Spark SQL, R) | No code, SQL | No code, KQL, SQL |
| Data organized by | Databases, schemas, and tables | Folders and files, databases, and tables | Database, tables, queries | Databases, schemas, and tables |
| Read operations | Spark, T-SQL | Spark, T-SQL | Spark, T-SQL, Power BI | KQL, T-SQL, Spark, Power BI |
| Write operations | T-SQL | Spark (Scala, PySpark, Spark SQL, R) | Dataflows, T-SQL | KQL, Spark, connector ecosystem |
| Multi-table transactions | Yes | No | No | Yes, for multi-table ingestion. See update policy. |
| Primary development interface | SQL scripts | Spark notebooks, Spark job definitions | Power BI | KQL Queryset, KQL Database |
| Security | Object level (table, view, function, stored procedure, etc.), column level, row level, DDL/DML | Row level, table level (when using T-SQL), none for Spark | Built-in RLS editor | Row-level security |
| Access data via shortcuts | Yes (indirectly through the lakehouse) | Yes | No | Yes |
| Can be a source for shortcuts | Yes (tables) | Yes (files and tables) | No | Yes |
| Query across items | Yes, query across lakehouse and warehouse tables | Yes, query across lakehouse and warehouse tables; query across lakehouses (including shortcuts using Spark) | No | Yes, query across KQL databases, lakehouses, and warehouses with shortcuts |
| Advanced analytics | | | | Time series native elements, full geospatial storing and query capabilities |
| Advanced formatting support | | | | Full indexing for free text and semi-structured data like JSON |
| Ingestion latency | | | | Queued ingestion; streaming ingestion has a couple of seconds latency |

Scenarios
Review these scenarios for help with choosing a data store in Fabric.

Scenario 1
Susan, a professional developer, is new to Microsoft Fabric. They're ready to get started
cleaning, modeling, and analyzing data, but need to decide whether to build a data warehouse
or a lakehouse. After reviewing the details in the previous table, the primary decision points
are the available skill set and the need for multi-table transactions.

Susan has spent many years building data warehouses on relational database engines,
and is familiar with SQL syntax and functionality. Thinking about the larger team, the
primary consumers of this data are also skilled with SQL and SQL analytical tools. Susan
decides to use a data warehouse, which allows the team to interact primarily with T-
SQL, while also allowing any Spark users in the organization to access the data.

Scenario 2
Rob, a data engineer, needs to store and model several terabytes of data in Fabric. The
team has a mix of PySpark and T-SQL skills. Most of the team running T-SQL queries are
consumers, and therefore don't need to write INSERT, UPDATE, or DELETE statements.
The remaining developers are comfortable working in notebooks, and because the data
is stored in Delta, they're able to interact with a similar SQL syntax.

Rob decides to use a lakehouse, which allows the data engineering team to use their
diverse skills against the data, while allowing the team members who are highly skilled
in T-SQL to consume the data.
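As an illustration of that mixed-skills workflow, the following sketch shows how, in a
Fabric notebook, a PySpark developer might curate a Delta table that SQL-oriented
teammates then query with Spark SQL. The table, path, and column names are made-up
examples.

```python
# Illustrative sketch of one Delta table serving two skill sets.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided in a Fabric notebook

# PySpark side: curate raw data into a Delta table (path is a placeholder).
raw = spark.read.format("delta").load("Tables/sales_raw")
curated = (
    raw.filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)
curated.write.format("delta").mode("overwrite").saveAsTable("sales")

# SQL side: consumers query the same table with familiar SQL syntax.
spark.sql("""
    SELECT order_date, SUM(amount) AS revenue
    FROM sales
    GROUP BY order_date
    ORDER BY order_date
""").show()
```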

Scenario 3
Ash, a citizen developer, is a Power BI developer. They're familiar with Excel, Power BI,
and Office. They need to build a data product for a business unit. They know they don't
quite have the skills to build a data warehouse or a lakehouse, and those seem like too
much for their needs and data volumes. They review the details in the previous table
and see that the primary decision points are their own skills and their need for a
self-service, no-code capability, and a data volume under 100 GB.

Ash works with business analysts familiar with Power BI and Microsoft Office, and knows
that they already have a Premium capacity subscription. As they think about their larger
team, they realize the primary consumers of this data may be analysts familiar with
no-code and SQL analytical tools. Ash decides to use a Power BI datamart, which lets the
team build the capability quickly with a no-code experience. Queries can be executed via
Power BI and T-SQL, while also allowing any Spark users in the organization to access
the data.

Scenario 4
Daisy is a business analyst experienced with using Power BI to analyze supply chain
bottlenecks for a large global retail chain. They need to build a scalable data solution
that can handle billions of rows of data and can be used to build dashboards and
reports for making business decisions. The data comes from plants, suppliers, shippers,
and other sources in various structured, semi-structured, and unstructured formats.

Daisy decides to use a KQL Database because of its scalability, quick response times,
advanced analytics capabilities including time series analysis, geospatial functions, and
fast direct query mode in Power BI. Queries can be executed using Power BI and KQL to
compare between current and previous periods, quickly identify emerging problems, or
provide geo-spatial analytics of land and maritime routes.
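For a sense of what Daisy's queries might look like from code, here's a hedged sketch
that runs a KQL query with the azure-kusto-data Python package (pip install
azure-kusto-data). The cluster URI, database name, and table schema are placeholders;
in Fabric, you'd copy the real query URI from the KQL database details page.

```python
# Hypothetical example: daily shipment counts for the last 14 days.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster_uri = "https://<your-cluster>.kusto.windows.net"  # placeholder URI
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster_uri)
client = KustoClient(kcsb)

query = """
Shipments
| where ShipDate > ago(14d)
| summarize shipments = count() by ShipDate = bin(ShipDate, 1d)
| order by ShipDate asc
"""
response = client.execute("SupplyChainDB", query)  # placeholder database
for row in response.primary_results[0]:
    print(row["ShipDate"], row["shipments"])
```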

Next steps
What is data warehousing in Microsoft Fabric?
Create a warehouse in Microsoft Fabric
Create a lakehouse in Microsoft Fabric
Introduction to Power BI datamarts
Create a KQL database



Navigate to your items from Microsoft
Fabric Home
Article • 05/25/2023

This article gives a high level view of navigating to your items and actions from
Microsoft Fabric Home. Each product experience has its own Home, and there are
similarities that they all share. Those similarities are described in this article. For detailed
information about Home for a particular product experience, such as Data Factory
Home, visit the relevant page for that product experience.

Important

Microsoft Fabric is in preview.

Overview of Home
On Home, you see items that you create and that you have permission to use. These
items are from all the workspaces that you access. That means that the items available
on everyone's Home are different. At first, you might not have much content, but that
changes as you start to create and share Microsoft Fabric items.

Note

Home is not workspace-specific. For example, the Recent area on Home might
include items from many different workspaces.

In Microsoft Fabric, the term item refers to: apps, lakehouses, warehouses, reports, and
more. Your items are accessible and viewable in Microsoft Fabric, and often the best
place to start working in Microsoft Fabric is from Home. However, once you've created
at least one new workspace, been granted access to a workspace, or you've added an
item to My workspace, you might find it more convenient to navigate directly to a
workspace. One way to navigate to a workspace is by using the nav pane and workspace
selector.
To open Home, select it from the top of your left navigation pane.

Most important content at your fingertips


The items that you can access appear on Home. If your Home canvas gets crowded, use
global search to find what you need quickly. The layout and content on Home are
different for every user and every product experience, but there are numerous
similarities as well. These similarities are listed here and discussed in more detail later in
this article.

Note

Power BI Home is different from the other product experiences. To learn more, visit
Power BI Home.
1. The left navigation pane (nav pane) for your product experience links you to
different views of your items and to creator resources.
2. The selector for switching product experiences.
3. The top menu bar for orienting yourself in Microsoft Fabric, finding items, help,
and sending Microsoft feedback. The Account manager control is a critical icon for
looking up your account information and managing your Fabric trial.
4. Options for creating new items.
5. Links to recommended content. This content helps you get started using the
product experience and links to items and workspaces that you visit often.
6. Your items organized by recent, favorites, and items shared with you by your
colleagues. The items that appear here are the same across product experiences,
except for the Power BI experience.

Important

Only the content that you can access appears on your Home. For example, if you
don't have permissions to a report, that report doesn't appear on Home. The
exception is if your subscription or license changes to one with less access;
in that case, you receive a prompt asking you to start a trial or upgrade your license.
Locate items from Home
Microsoft Fabric offers many ways of locating and viewing your content. All approaches
access the same pool of content in different ways. Searching is sometimes the easiest
and quickest way to find something. While other times, using the nav pane to open a
workspace or selecting a card on the Home canvas is your best option.

Use the navigation pane


Along the left side is a narrow vertical bar, referred to as the nav pane. This example
uses the Data factory nav pane. Notice that My workspace is the active workspace. The
options in your nav pane depend on the product experience you've selected. The nav
pane organizes actions you can take with your items in ways that help you get to where
you want to be quickly. Occasionally, using the nav pane is the quickest way to get to
your items.

In the bottom section of the nav pane is where you find and open your workspaces. Use
the workspace selector to view a list of your workspaces and select one to open. Below
the workspace selector is the name of the currently open workspace.
- By default, you see the Workspaces selector and My workspace.
- When you open a workspace, its name replaces My workspace.
- Whenever you create a new item, it's added to the open workspace.

The nav pane is there when you open Home and remains there as you open other areas
of Microsoft Fabric. Every Microsoft Fabric product experience nav pane includes Home,
Browse, OneLake data hub, Create, and Workspaces.

Find and open workspaces


Workspaces are places to collaborate with colleagues to create collections of items such
as lakehouses, warehouses, and reports.

There are different ways to find and open your workspaces. If you know the name or
owner, you can search. Or you can select the Workspaces icon in the nav pane and
choose which workspace to open.
The workspace opens on your canvas, and the name of the workspace is listed on your
nav pane. When you open a workspace, you can view its content. It includes items such
as notebooks, pipelines, reports, and lakehouses.

For more information, see Workspaces.

Find and open other product experiences


In the bottom left corner is your experience selector. Select the icon to see all of the
available Microsoft Fabric product experiences. Select an experience to open it and
make it active.

Find your content using search, sort, and filter


To learn about the many ways to search from Microsoft Fabric, see Searching and
sorting. Global searching is available by item, name, keyword, workspace, and more.

Find answers in the context sensitive Help pane


Select the Help icon (?) to open and use the contextual Help pane and to search for
answers to questions.

Microsoft Fabric provides context sensitive help in the right rail of your browser. In this
example, we've selected Browse from the nav pane and the Help pane automatically
updates to show us articles about the features of the Browse screen. For example, we're
shown articles on View recent content and See content that others have shared with you.
If there are community posts related to the current view, they're displayed under Forum
topics.

Leave the Help pane open as you work, and use the suggested topics to learn how to
use Microsoft Fabric features and terminology. Or, select the X to close the Help pane
and save screen space.
The Help pane is also a great place to search for answers to your questions. Type your
question or keywords in the Search field.
To return to the default Help pane, select the left arrow.

For more information about searching, see Searching and sorting.

For more information about the Help pane, see Get in-product help.

Find help and support


If the self-help answers don't resolve your issue, scroll to the bottom of the Help pane
for more resources. Use the links to ask the community for help or to connect with
Microsoft Fabric Support. For more information about contacting Support, see Support
options.

Find your account and license information


Information about your account and license is available from the Account manager.
Select the tiny photo from the upper right corner of Microsoft Fabric to open your
Account manager.
For more information about licenses and trials, see Licenses.

Find notifications, settings, and feedback


In the upper right corner of Home are several helpful icons. Take time to explore your
Notifications center, Settings, and Feedback options. The ? icon displays your Help and
search options and the Account manager icon displays information about your account
and license. Both of these features are described in detail earlier in this article.

Find what you need on your Home canvas


The final section of Home is the center area, called the canvas. The content of your
canvas updates as you select different items. By default, the Home canvas displays
options for creating new items, recommended items, recents, favorites, and content that
has been shared with you. If you've selected the Show less view, the New section of the
canvas is collapsed.
When you create a new item, it's saved in your My workspace unless you've selected a
workspace from Workspaces. To learn more about creating items in workspaces, see
create workspaces.

Note

Power BI Home is different from the other product experiences. To learn more, visit
Power BI Home.

The Recommended area might include getting started content as well as items and
workspaces that you use frequently.

Next steps
Power BI Home
Start a Fabric trial
Self-help with the Fabric contextual
Help pane
Article • 05/23/2023

This article explains how to use the Fabric Help pane. The Help pane is feature-aware
and displays articles about the actions and features available on the current Fabric
screen. The Help pane is also a search engine that quickly finds answers to questions in
the Fabric documentation and Fabric community forums.

Important

Microsoft Fabric is in preview.

The Help pane is feature-aware


The feature-aware state is the default view of the Help pane when you open it without
entering any search terms. The Help pane shows a list of recommended topics,
resources that are relevant to your current context and location in Fabric, and a list of
links for other resources. It has three sections:
Feature-aware documents: This section groups the documents by the features
that are available on the current screen. Select a feature in the Fabric screen and
the Help pane updates with documents related to that feature. Select a document
to open it in a separate browser tab.

Forum topics: This section shows topics from the community forums that are
related to the features on the current screen. Select a topic to open it in a separate
browser tab.

Other resources: This section has links for feedback and Support.

The Help pane is a search engine


The Help pane is also a search engine. Enter a keyword to find relevant information and
resources from Microsoft documentation and community forum topics. Use the
dropdown to filter the results.
The Help pane is perfect for learning and
getting started
As you explore Fabric, the feature-aware documents update based on what you've
selected and where you are in Fabric. This is a great way to learn how to use Fabric. Give
yourself a guided tour by making selections in Fabric and reading the feature-aware
documents. For example, in the Data Science experience, select OneLake data hub. The
Help pane updates with articles that you can use to learn about the data hub.
Open the Help pane
Follow the instructions to practice using the Help pane.

1. From the upper-right corner of Fabric, select the ? icon to open the Help pane.

2. Open Browse and select the Recent feature. The Fabric Help pane displays
documents about the Recent feature. Select a document to learn more. The
document opens in a separate browser tab.

3. Forum posts often provide interesting context. Select one that looks helpful or
interesting.

4. Search the Microsoft documentation and community forums by entering a


keyword in the search pane.
5. Return to the default display of the Help pane by selecting the arrow.

6. Close the Help pane by selecting the X icon in the upper-right corner of the pane.
Still need help?
If you still need help, select Ask the community and submit a question. If you have an
idea for a new feature, let us know by selecting Submit an idea. To open the Support
site, select Get help in Other Resources.
Global search
Article • 06/21/2023

When you're new to Microsoft Fabric, you have only a few items (workspaces, reports,
apps, lakehouses). But as you begin creating and sharing items, you can end up with
long lists of content. That's when searching, filtering, and sorting become helpful.

Important

Microsoft Fabric is in preview.

Search for content


At the top of Home, the global search box finds items by title, name, or keyword.
Sometimes, the fastest way to find an item is to search for it: for example, when a
dashboard you haven't used in a while isn't showing up on your Home canvas, or when
a colleague shared something with you but you don't remember what it's named or
what type of content they shared. Sometimes, you might have so much content that it's
easier to search for it rather than scrolling or sorting.

Note

Global search is currently unavailable in sovereign clouds.

Search is available from Home and also most other areas of Microsoft Fabric. Just look
for the search box or search icon.

In the Search field, type all or part of the name of an item, creator, keyword, or
workspace. You can even enter your colleague's name to search for content that they've
shared with you. The search finds matches in all the items that you own or have access
to.
In addition to the Search field, most experiences on the Microsoft Fabric canvas also
include a Filter by keyword field. Similar to search, use Filter by keyword to narrow
down the content on your canvas to find what you need. The keywords you enter in the
Filter by keyword pane apply to the current view only. For example, if you open Browse
and enter a keyword in the Filter by keyword pane, Microsoft Fabric searches only the
content that appears on the Browse canvas.

Sort content lists


If you have only a few items, sorting isn't necessary. But when you have long lists of
items, sorting helps you find what you need. For example, this Shared with me content
list has many items.
Right now, this content list is sorted alphabetically by name, from Z to A. To change the
sort criteria, select the arrow to the right of Name.

Sorting is also available in other areas of Microsoft Fabric. In this example, the
workspaces are sorted by the Refreshed date. To set sorting criteria for workspaces,
select a column header, and then select again to change the sorting direction.
Not all columns can be sorted. Hover over the column headings to discover which can
be sorted.

Filter content lists


Another way to locate content quickly is to use the content list Filter. Display the filters
by selecting Filter from the upper right corner. The filters available depend on your
location in Microsoft Fabric. This example is from a Recent content list. It allows you to
filter the list by content Type, Time, or Owner.
Next steps
Find Fabric items from Home
Start a Fabric trial
Fabric settings
Article • 06/15/2023

Important

Microsoft Fabric is in preview.

The Fabric settings pane provides links to various kinds of settings you can configure.
This article shows how to open the Fabric settings pane and describes the kinds of
settings you can access from there.

Open the Fabric settings pane


To open the Fabric settings pane, select the gear icon in the Fabric portal header.

Preferences
In the preferences section, individual users can set their user preferences, specify the
language of the Fabric user interface, manage their account and notifications, and
configure settings for their personal use throughout the system.

| Link | Description |
| --- | --- |
| General | Opens the general settings page, where you can set the display language for the Fabric interface and parts of visuals. |
| Notifications | Opens the notifications settings page, where you can view your subscriptions and alerts. |
| Item settings | Opens the item settings page, where you can configure per-item-type settings. |
| Developer settings | Opens the developer settings page, where you can configure developer mode settings. |

Resources and extensions


The resources and extensions section provides links to pages where users can use the
following capabilities.

| Link | Description |
| --- | --- |
| Manage personal/group storage | Opens the personal/group storage management page, where you can see and manage data items that you own or that have been shared with you. |
| Power BI settings | Opens the Power BI settings page, where you can get to the settings pages for the Power BI items (dashboards, datasets, workbooks, reports, datamarts, and dataflows) that are in the current workspace. |
| Manage connections and gateways | Opens a page where you can manage connections, on-premises data gateways, and virtual network data gateways. |
| Manage embed codes | Opens a page where you can manage embed codes you have created. |
| Azure Analysis Services migrations | Opens a page where you can migrate your Azure Analysis Services datasets to Power BI Premium. |

Governance and insights settings


The governance and insights section provides links to help admins and users with their
admin, governance, and compliance tasks.

| Link | Description |
| --- | --- |
| Admin portal | Opens the Fabric admin portal, where admins perform various management tasks and configure Fabric tenant settings. For more information, see What is the admin portal? |
| Microsoft Purview hub (preview) | Currently available to Fabric admins only. Opens the Microsoft Purview hub, where you can view Purview insights about your organization's sensitive data. The Microsoft Purview hub also provides links to Purview governance and compliance capabilities and has links to documentation to help you get started with Microsoft Purview governance and compliance in Fabric. |
Next steps
What is Fabric
What is Microsoft Fabric admin?
Workspaces
Article • 06/14/2023

Workspaces are places to collaborate with colleagues to create collections of items such
as lakehouses, warehouses, and reports. This article describes workspaces, how to
manage access to them, and what settings are available.

Ready to get started? Read Create a workspace.

Work with workspaces


Here are some useful tips about working with workspaces.

Pin workspaces to the top of the workspace flyout list to quickly access your
favorite workspaces. Read more about pin workspaces.
Use granular workspace roles for flexible permissions management in the
workspaces: Admin, Member, Contributor, and Viewer. Read more about
workspace roles.
Navigate to current workspace from anywhere by selecting the icon on left nav
pane. Read more about current workspace in this article.
Workspace settings: As workspace admin, you can update and manage your
workspace configurations in workspace settings.
Manage a workspace in Git: Git integration in Microsoft Fabric enables Pro
developers to integrate their development processes, tools, and best practices
straight into the Fabric platform. Learn how to manage a workspace with Git.
Contact list: Specify who receives notification about workspace activity. Read more
about workspace contact lists in this article.

Current workspace
After you select and open a workspace, it becomes your current workspace. You can
quickly navigate to it from anywhere by selecting the workspace icon from the left nav
pane.

Workspace settings
Workspace admins can use workspace settings to manage and update the workspace.
The settings include general settings of the workspace, like the basic information of the
workspace, contact list, OneDrive, license, Azure connections, storage, and other
experiences' specific settings.

To open the workspace settings, you can select the workspace in the nav pane, then
select More options (...) > Workspace settings next to the workspace name.
You can also open it from the workspace page.

Workspace contact list


The Contact list feature allows you to specify which users receive notifications about
issues occurring in the workspace. By default, the user who created the workspace is in
the contact list. You can add others to that list when you create the workspace, or in
workspace settings after creation. Users or groups in the contact list are also listed in
the user interface (UI) of the workspace settings, so workspace users know whom to contact.

Microsoft 365 and OneDrive


The Workspace OneDrive feature allows you to configure a Microsoft 365 Group whose
SharePoint document library is available to workspace users. You create the Group
outside of Microsoft Fabric first, with one available method being from OneDrive. Read
about creating a OneDrive shared library .

Note

Creating Microsoft 365 Groups may be restricted in your environment, or the ability
to create them from your OneDrive site may be disabled. If this is the case, speak
with your IT department.

Microsoft Fabric doesn't synchronize permissions between users or groups with


workspace access, and users or groups with Microsoft 365 Group membership. A best
practice is to give access to the workspace to the same Microsoft 365 Group whose file
storage you configured. Then manage workspace access by managing membership of
the Microsoft 365 Group.

You can configure OneDrive in workspace settings by typing in the name of the
Microsoft 365 group that you created earlier. Type just the name, not the URL. Microsoft
Fabric automatically picks up the OneDrive for the group.

License mode
By default, workspaces are created in your organization's shared capacity. When your
organization has other capacities, workspaces including My Workspaces can be assigned
to any capacity in your organization. You can configure it while creating a workspace or
in Workspace settings -> Premium. Read more about licenses.
Azure connections configuration
Workspace admins can configure dataflow storage to use Azure Data Lake Gen 2
storage and Azure Log Analytics (LA) connection to collect usage and performance logs
for the workspace in workspace settings.


With the integration of Azure Data Lake Gen 2 storage, you can bring your own storage
to dataflows, and establish a connection at the workspace level. Read Configure
dataflow storage to use Azure Data Lake Gen 2 for more detail.

After the connection with Azure Log Analytics (LA), activity log data is sent continuously
and is available in Log Analytics in approximately 5 minutes. Read Using Azure Log
Analytics for more detail.

System storage
System storage is the place to manage your dataset storage in your individual or
workspace account so you can keep publishing reports and datasets. Your own datasets,
Excel reports, and those items that someone has shared with you, are included in your
system storage.

In the system storage, you can view how much storage you have used and free up the
storage by deleting the items in it.

Keep in mind that you or someone else may have reports and dashboards based on a
dataset. If you delete the dataset, those reports and dashboards don't work anymore.


Remove the workspace
As an admin for a workspace, you can delete it. When you delete the workspace,
everything contained within the workspace is deleted for all group members, and the
associated app is also removed from AppSource.

In the Workspace settings pane, select Other > Remove this workspace.

Administering and auditing workspaces


Administration for workspaces is in the Microsoft Fabric admin portal. Microsoft Fabric
admins decide who in an organization can create workspaces and distribute apps. Read
about managing users' ability to create workspaces in the "Workspace settings" article.

Admins can also see the state of all the workspaces in their organization. They can
manage, recover, and even delete workspaces. Read about managing the workspaces
themselves in the "Admin portal" article.

Auditing
Microsoft Fabric audits the following activities for workspaces.

| Friendly name | Operation name |
| --- | --- |
| Created Microsoft Fabric folder | CreateFolder |
| Deleted Microsoft Fabric folder | DeleteFolder |
| Updated Microsoft Fabric folder | UpdateFolder |
| Updated Microsoft Fabric folder access | UpdateFolderAccess |
Read more about Microsoft Fabric auditing.

Considerations and limitations


Limitations to be aware of:

- Workspaces can contain a maximum of 1,000 datasets, or 1,000 reports per dataset.
- Certain special characters aren't supported in workspace names when using an
XMLA endpoint. As a workaround, use URL encoding of special characters; for
example, for a forward slash /, use %2F (see the sketch after this list).
- A user or a service principal can be a member of up to 1,000 workspaces.
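As a quick illustration of the URL-encoding workaround above, this Python snippet
percent-encodes every special character in a workspace name; the name itself is a
made-up example.

```python
# Percent-encode a workspace name for use with an XMLA endpoint.
# "Finance/Quarterly Reports" is a hypothetical workspace name.
from urllib.parse import quote

workspace_name = "Finance/Quarterly Reports"
encoded = quote(workspace_name, safe="")
print(encoded)  # Finance%2FQuarterly%20Reports
```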

Next steps
Create workspaces
Give users access to workspaces
Create a workspace
Article • 11/15/2023

This article explains how to create workspaces in Microsoft Fabric. In workspaces, you
create collections of items such as lakehouses, warehouses, and reports. For more
background, see the Workspaces article.

To create a workspace:

1. Select Workspaces > New workspace. The Create a workspace pane opens.

2. In the Create a workspace pane:

   - Give the workspace a unique name (mandatory).
   - Provide a description of the workspace (optional).
   - Assign the workspace to a domain (optional).

   If you are a domain contributor for the workspace, you can associate the
   workspace to a domain, or you can change an existing association. For
   information about domains, see Domains in Fabric.

3. When done, either continue to the advanced settings, or select Apply.

Advanced settings
Expand Advanced and you see advanced setting options:

Contact list
Contact list is a place where you can put the names of people as contacts for
information about the workspace. Accordingly, people in this contact list receive system
email notifications for workspace level changes.

By default, the first workspace admin who created the workspace is the contact. You can
add other users or groups according to your needs. Enter a name directly in the input
box; matching users or groups in your org are suggested automatically.

License mode
Different license modes provide different sets of features for your workspace. After
creation, you can still change the workspace license type in workspace settings, but
some migration effort is needed.

Note

Currently, if you want to downgrade the workspace license type from Premium
capacity to Pro (Shared capacity), you must first remove any non-Power BI Fabric
items that the workspace contains. Only after you remove such items will you be
allowed to downgrade the capacity. For more information, see Moving data
around.

Default storage format


Power BI semantic models can store data in a highly compressed in-memory cache for
optimized query performance, enabling fast user interactivity. With Premium capacities,
large semantic models beyond the default limit can be enabled with the Large semantic
model storage format setting. When enabled, semantic model size is limited by the
Premium capacity size or the maximum size set by the administrator. Learn more about
large semantic model storage format.

Template apps
Power BI template apps are developed for sharing outside your organization. If you
check this option, a special type of workspace (template app workspace) is created. It's
not possible to revert it back to a normal workspace after creation.
Dataflow storage (preview)
Data used with Power BI is stored in internal storage provided by Power BI by default.
With the integration of dataflows and Azure Data Lake Storage Gen 2 (ADLS Gen2), you
can store your dataflows in your organization's Azure Data Lake Storage Gen2 account.
Learn more about dataflows in Azure Data Lake Storage Gen2 accounts.

Give users access to your workspace


Now that you've created the workspace, you'll want to add other users to roles in the
workspace, so you can collaborate with them. See these articles for more information:

Give users access to a workspace


Roles in workspaces

Pin workspaces
Quickly access your favorite workspaces by pinning them to the top of the workspace
flyout list.

1. Open the workspace flyout from the nav pane and hover over the workspace you
want to pin. Select the Pin to top icon.
2. The workspace is added in the Pinned list.
3. To unpin a workspace, select the unpin button. The workspace is unpinned.

Next steps
Read about workspaces



Roles in workspaces in Microsoft Fabric
Article • 11/15/2023

Workspace roles let you manage who can do what in a Microsoft Fabric workspace.
Microsoft Fabric workspaces sit on top of OneLake and divide the data lake into
separate containers that can be secured independently. Workspace roles in Microsoft
Fabric extend the Power BI workspace roles by associating new Microsoft Fabric
capabilities such as data integration and data exploration with existing workspace roles.
For more information on Power BI roles, see Roles in workspaces in Power BI.

You can either assign roles to individuals or to security groups, Microsoft 365 groups,
and distribution lists. To grant access to a workspace, assign those user groups or
individuals to one of the workspace roles: Admin, Member, Contributor, or Viewer.
Here's how to give users access to workspaces.

To create a new workspace, see Create a workspace.

Everyone in a user group gets the role that you've assigned. If someone is in several user
groups, they get the highest level of permission that's provided by the roles that they're
assigned. If you nest user groups and assign a role to a group, all the contained users
have the permissions of that role.

Users in workspace roles have the following Microsoft Fabric capabilities, in addition to
the existing Power BI capabilities associated with these roles.

Microsoft Fabric workspace roles


| Capability | Admin | Member | Contributor | Viewer |
| --- | --- | --- | --- | --- |
| Update and delete the workspace. | ✓ | | | |
| Add or remove people, including other admins. | ✓ | | | |
| Add members or others with lower permissions. | ✓ | ✓ | | |
| Allow others to reshare items.¹ | ✓ | ✓ | | |
| View and read content of data pipelines, notebooks, Spark job definitions, ML models and experiments, and Event streams. | ✓ | ✓ | ✓ | ✓ |
| View and read content of KQL databases, KQL query-sets, and real-time dashboards. | ✓ | ✓ | ✓ | ✓ |
| Connect to SQL analytics endpoint of Lakehouse or the Warehouse. | ✓ | ✓ | ✓ | ✓ |
| Read Lakehouse and Data warehouse data and shortcuts² with T-SQL through TDS endpoint. | ✓ | ✓ | ✓ | -³ |
| Read Lakehouse and Data warehouse data and shortcuts² through OneLake APIs and Spark. | ✓ | ✓ | ✓ | - |
| Read Lakehouse data through Lakehouse explorer. | ✓ | ✓ | ✓ | - |
| Write or delete data pipelines, notebooks, Spark job definitions, ML models and experiments, and Event streams. | ✓ | ✓ | ✓ | - |
| Write or delete KQL query-sets, real-time dashboards, and schema and data of KQL databases, Lakehouses, data warehouses, and shortcuts. | ✓ | ✓ | ✓ | - |
| Execute or cancel execution of notebooks, Spark job definitions, ML models and experiments. | ✓ | ✓ | ✓ | - |
| Execute or cancel execution of data pipelines. | ✓ | ✓ | ✓ | - |
| View execution output of data pipelines, notebooks, ML models and experiments. | ✓ | ✓ | ✓ | ✓ |
| Schedule data refreshes via the on-premises gateway.⁴ | ✓ | ✓ | ✓ | - |
| Modify gateway connection settings.⁴ | ✓ | ✓ | ✓ | - |

¹ Contributors and Viewers can also share items in a workspace, if they have Reshare
permissions.

² Additional permissions are needed to read data from the shortcut destination. Learn
more about the shortcut security model.

³ Admins, Members, and Contributors can grant Viewers granular SQL permissions to
query the SQL analytics endpoint of the Lakehouse and the Warehouse via TDS
endpoints for SQL connections.

⁴ Keep in mind that you also need permissions on the gateway. Those permissions are
managed elsewhere, independent of workspace roles and permissions.
Next steps
Roles in workspaces in Power BI
Create workspaces
Give users access to workspaces
OneLake security
OneLake shortcuts
Data warehouse security
Data engineering security
Data science roles and permissions



Give users access to workspaces
Article • 05/23/2023

After you create a workspace in Microsoft Fabric, or if you have an admin or member
role in a workspace, you can give others access to it by adding them to the different
roles. Workspace creators are automatically admins. For an explanation of the different
roles, see Roles in workspaces.

Note

To enforce row-level security (RLS) on Power BI items for Microsoft Fabric Pro users
who browse content in a workspace, assign them the Viewer Role.

After you add or remove workspace access for a user or a group, the permission
change only takes effect the next time the user logs into Microsoft Fabric.

Give access to your workspace


1. Because you have the Admin or Member role in the workspace, on the command
bar of the workspace page, you see Manage access. Sometimes this entry is on the
More options (...) menu.

Manage access on the More options menu.

2. Select Add people or groups.


3. Enter a name or email address, select a role, and select Add. You can add security
groups, distribution lists, Microsoft 365 groups, or individuals to these workspaces as
admins, members, contributors, or viewers. If you have the member role, you can
only add others to the member, contributor, or viewer roles.
4. You can view and modify access later if needed. Use the Search box to search for
people or groups who already have access to this workspace. To modify access,
select the drop-down arrow, and then select a role.
Next steps
Read about the workspace experience.
Create workspaces.
Roles in workspaces
Manage a workspace with Git
Article • 09/05/2023

This article walks you through the following basic tasks in Microsoft Fabric’s Git
integration tool:

Connect to a Git repo


Commit changes
Update from Git
Disconnect from Git

It’s recommended to read the overview of Git integration before you begin.

Important

Microsoft Fabric is in preview.

Prerequisites
To integrate Git with your Microsoft Fabric workspace, you need to set up the following
prerequisites in both Azure DevOps and Fabric.

Azure DevOps prerequisites


An active Azure account registered to the same user that is using the Fabric
workspace. Create a free account .
Access to an existing repository.

Fabric prerequisites
To access the Git integration feature, you need one of the following:

Power BI Premium license. Your Power BI premium license still works for all Power
BI features.
Fabric capacity. A Fabric capacity is required to use all supported Fabric items.

In addition, your organization’s administrator has to enable the Fabric switch. If this
switch is disabled, contact your administrator.
Connect a workspace to an Azure repo
Only a workspace admin can connect a workspace to an Azure Repo, but once
connected, anyone with permission can work in the workspace. If you're not an admin,
ask your admin for help with connecting. To connect a workspace to an Azure Repo,
follow these steps:

1. Sign into Power BI and navigate to the workspace you want to connect with.

2. Go to Workspace settings

Note

If you don't see the Workspace settings icon, select the ellipsis (three dots),
then Workspace settings.

3. Select Git integration. You’re automatically signed into the Azure Repos account
registered to the Azure AD user signed into Fabric.
4. From the dropdown menu, specify the following details about the branch you want
to connect to:

Note

You can only connect a workspace to one branch and one folder at a time.

Organization
Project
Git repository
Branch (Select an existing branch using the drop-down menu, or select +
New Branch to create a new branch. You can only connect to one branch at a
time.)
Folder (Select an existing folder in the branch or enter a name to create a
new folder. If you don’t select a folder, content will be created in the root
folder. You can only connect to one folder at a time.)
5. Select Connect and sync.

During the initial sync, if either the workspace or Git branch is empty, content is copied
from the nonempty location to the empty one. If both the workspace and Git branch
have content, you’re asked which direction the sync should go. For more information on
this initial sync, see Connect and sync.

After you connect, the Workspace displays information about source control that allows
the user to view the connected branch, the status of each item in the branch and the
time of the last sync.

To keep your workspace synced with the Git branch, commit any changes you make in
the workspace to the Git branch, and update your workspace whenever anyone creates
new commits to the Git branch.

Commit changes to git


Once you successfully connect to a Git folder, edit your workspace as usual. Any
changes you save are saved in the workspace only. When you’re ready, you can commit
your changes to the Git branch, or you can undo the changes and revert to the previous
status. Read more about commits.

Commit to Git
To commit your changes to the Git branch, follow these steps:

1. Go to the workspace.

2. Select the Source control icon. This icon shows the number of uncommitted

changes.

3. Select the Changes tab of the Source control pane. A list appears with all the
items you changed, and an icon indicating if the item is new , modified ,
conflict , or deleted .

4. Select the items you want to commit. To select all items, check the top box.

5. Add a comment in the box. If you don't add a comment, a default message is
added automatically.

6. Select Commit.
After the changes are committed, the items that were committed are removed from
the list, and the workspace will point to the new commit that it's synced to.
After the commit is completed successfully, the status of the selected items changes
from Uncommitted to Synced.

Update workspace from Git


Whenever anyone commits a new change to the connected Git branch, a notification
appears in the relevant workspace. Use the Source control pane to pull the latest
changes, merges, or reverts into the workspace and update live items. Read more about
updating.

To update a workspace, follow these steps:

1. Go to the workspace.
2. Select the Source control icon.
3. Select the Updates tab of the Source control pane. A list appears with all the items
that were changed in the branch since the last update.
4. Select Update all.
After it updates successfully, the list of items is removed, and the workspace will point to
the new commit that it's synced to.
After the update is completed successfully, the status of the items changes to Synced.

Disconnect a workspace from Git


Only a workspace admin can disconnect a workspace from an Azure Repo. If you’re not
an admin, ask your admin for help with disconnecting. If you’re an admin and want to
disconnect your repo, follow these steps:

1. Go to Workspace settings

2. Select Git integration

3. Select Disconnect workspace


4. Select Disconnect again to confirm.

Permissions
The actions you can take on a workspace depend on the permissions you have in both
the workspace and Azure DevOps. For a more detailed discussion of permissions, see
Permissions.

Considerations and limitations


During the Commit to Git process, the Fabric service deletes any files inside the
item folder that aren't part of the item definition. Unrelated files not in an item
folder are not deleted.
After you commit changes, you might notice some unexpected changes to the
item that you didn't make. These changes are semantically insignificant and can
happen for several reasons. For example:

Manually changing the item definition file. These changes are valid, but might
be different than if done through the editors. For example, if you rename a
dataset column in Git and import this change to the workspace, the next time
you commit changes to the dataset, the bim file will register as changed and the
modified column pushed to the back of the columns array. This is because the
AS engine that generates the bim files pushes renamed columns to the end of
the array. This change doesn't affect the way the item operates.

Committing a file that uses CRLF line breaks. The service uses LF (line feed) line
breaks. If you had item files in the Git repo with CRLF line breaks, when you
commit from the service these files are changed to LF. For example, this can happen
if you open a report in Power BI Desktop, save the .pbip project, and upload it to Git
with CRLF line breaks.

If you're having trouble with these actions, make sure you understand the
limitations of the Git integration feature.

Next steps
Understand the Git integration process
Manage Git branches
Git integration best practices



Overview of Copilot for Fabric (preview)
Article • 11/15/2023

With Copilot and other generative AI features in preview, Microsoft Fabric brings a new
way to transform and analyze data, generate insights, and create visualizations and
reports.

Before your business starts using Copilot capabilities in Fabric, you may have questions
about how it works, how it keeps your business data secure and adheres to privacy
requirements, and how to use generative AI responsibly. Read on for answers to these
and other questions.

Copilot for Data Science and Data Engineering


Copilot for Data Engineering and Data Science is an AI-enhanced toolset tailored to
support data professionals in their workflow. It provides intelligent code completion,
automates routine tasks, and supplies industry-standard code templates to facilitate
building robust data pipelines and crafting complex analytical models. Utilizing
advanced machine learning algorithms, Copilot offers contextual code suggestions that
adapt to the specific task at hand, helping you code more effectively and with greater
ease. From data preparation to insight generation, Microsoft Fabric Copilot acts as an
interactive aide, lightening the load on engineers and scientists and expediting the
journey from raw data to meaningful conclusions.

Copilot for Data Factory


Copilot for Data Factory is an AI-enhanced toolset that supports both citizen and
professional data wranglers in streamlining their workflow. It provides intelligent code
generation to transform data with ease and generates code explanations to help you
better understand complex tasks.

Copilot for Power BI


Power BI has introduced generative AI that allows you to create reports automatically by
selecting the topic for a report or by prompting Copilot for Power BI on a particular
topic. You can use Copilot for Power BI to generate a summary for the report page that
you just created, and generate synonyms for better Q&A capabilities. See the article
Overview of Copilot for Power BI (/power-bi/create-reports/copilot-introduction) for
details of the features and how to use Copilot for Power BI.
How do I use Copilot responsibly?
Microsoft is committed to ensuring that our AI systems are guided by our AI
principles and Responsible AI Standard . These principles include empowering our
customers to use these systems effectively and in line with their intended uses. Our
approach to responsible AI is continually evolving to proactively address emerging
issues. To generate summaries, Copilot sends your customer data to Azure OpenAI,
where it's stored for 30 days. See the Supplemental Terms of Use for Microsoft Azure
Previews for details.

The article Privacy, security, and responsible use for Copilot (preview) offers guidance on
responsible use.

Copilot features in Fabric are built to meet the Responsible AI Standard, which means
that they're reviewed by multidisciplinary teams for potential harms, and then refined to
include mitigations for those harms.

Before you use Copilot, your admin needs to enable Copilot in Fabric. See the article
Enable Copilot in Fabric (preview) for details. Also, keep in mind the limitations of
Copilot:

Copilot responses can include inaccurate or low-quality content, so make sure to


review outputs before using them in your work.
Reviews of outputs should be done by people who are able to meaningfully
evaluate the content's accuracy and appropriateness.
Today, Copilot features work best in the English language. Other languages may
not perform as well.

Next steps
What is Microsoft Fabric?
Copilot for Fabric: FAQ



Enable Copilot for Microsoft Fabric
(preview)
Article • 12/01/2023

Before your business can start using Copilot capabilities in Microsoft Fabric, you need to
enable Copilot. Copilot doesn't work for trial SKUs. You need an F64 or P1 capacity to use
Copilot. With Copilot and other generative AI features in preview, Microsoft Fabric
brings a new way to transform and analyze data, generate insights, and create
visualizations and reports.

The preview of Copilot in Microsoft Fabric is rolling out in stages with the goal that all
customers with a paid Fabric capacity (F64 or higher) or Power BI Premium capacity (P1
or higher) have access to the Copilot preview. It becomes available to you automatically
as a new setting in the Fabric admin portal when it's rolled out to your tenant. When
charging begins for the Copilot in Fabric experiences, you can count Copilot usage
against your existing Fabric or Power BI Premium capacity.

Note

Copilot in Microsoft Fabric isn't supported on trial SKUs. Only paid SKUs (F64 or
higher, or P1 or higher) are supported.

We're enabling Copilot in stages. Everyone will have access by March 2024.

See the article Copilot tenant settings (preview) for details.

Next steps
What is Microsoft Fabric?
Copilot in Fabric and Power BI: FAQ



Copilot for Microsoft Fabric: FAQ
FAQ

This article answers frequently asked questions about Copilot for Microsoft Fabric and
Power BI.

Power BI
I loaded my semantic model, but it doesn't meet
all the criteria listed in the data evaluation. What
should I do?
The criteria listed in Update your data model to work well with Copilot for Power BI are
important because they help you get a better quality report. As long as you meet seven
of the eight points, including Consistency, the quality of the generated reports should
be good.

If your data doesn't meet that criteria, we recommend spending the time to bring it into
compliance.

I was given a Copilot URL, but I can't see the Copilot button. Why is that?
First, check with your admin to see if they have enabled Copilot.

Next, when you open a Copilot-enabled URL, you first have to load the semantic model. After the semantic model finishes loading, the Copilot button appears. See the Create a report with Copilot for the Power BI service article.

If you load a semantic model and still can't see the Copilot button, file a bug here:
Copilot Bug Template.

I selected the Copilot button, and it's stuck on Analyzing your semantic model
Depending upon the size of the semantic model, Copilot might take a while to analyze
it. If you've waited longer than 15 minutes and you haven't received any errors, chances
are that there is an internal server error.
Try restarting Copilot by closing the pane and selecting the Copilot button again.

I loaded the semantic model and Copilot generated a summary, but I don't think that it's accurate
This inaccuracy could be because your semantic model has missing values. Because AI generates the summary, it might try to fill the gaps and fabricate data. Removing the rows with missing values can avoid this situation.

I generated the report visuals, but the quality of the visuals concerns me. I wouldn't choose them myself
We're continuously working to improve the quality of the Copilot-generated visuals. For now, we recommend that you make changes by using the Power BI visualization tool.

The accuracy of the narrative visual concerns me.


We're continuously working to improve the accuracy of the narrative visual results. The
Power BI team already has plans to improve the accuracy over the coming months. As a
public preview, there might be mistakes. Know that we're working toward making the
tool as accurate as possible. In the meantime, we recommend using the custom prompts
as an additional tool to try to tweak the summary to meet your needs.

I want to disable Copilot immediately as I'm concerned about the data storage mentioned previously
Contact your help desk to get support from your IT admin.

I want to suggest new features. How can I do that?
First, thank you for the feedback. It's great that you found it useful. As part of the
feedback sessions, we send you a form and you can add your suggestions there.
Data Factory
I'm a Fabric user. I opened the semantic model in
Data Factory. Why can't I see Copilot?
We have only enabled Copilot for Power BI. In the future, we'll enable it for other areas
of Fabric as well. We don't have a timeline yet for those areas.

Next steps
What is Microsoft Fabric?
Privacy, security, and responsible use for Copilot



Privacy, security, and responsible use for Copilot in Microsoft Fabric (preview)
Article • 11/15/2023

With Copilot and other generative AI features in preview, Microsoft Fabric brings a new
way to transform and analyze data, generate insights, and create visualizations and
reports.

Before your business starts using Copilot in Fabric, you may have questions about how it
works, how it keeps your business data secure and adheres to privacy requirements, and
how to use generative AI responsibly.

This article provides answers to common questions related to business data security and
privacy to help your organization get started with Copilot in Fabric.

Overview

Your business data is secure


Copilot features use Azure OpenAI Service, which is fully controlled by Microsoft.
Your data isn't used to train models and isn't available to other customers.
You retain control over where your data is processed. Data processed by Copilot in
Fabric stays within your tenant's geographic region, unless you explicitly allow data
to be processed outside your region—for example, to let your users use Copilot
when Azure OpenAI isn't available in your region or availability is limited due to
high demand. Learn more about admin settings for Copilot.
Data is stored for up to 30 days and may be reviewed by Microsoft employees for
abuse monitoring.

Check Copilot outputs before you use them


Copilot responses can include inaccurate or low-quality content, so make sure to
review outputs before you use them in your work.
Reviews of outputs should be done by people who can meaningfully evaluate the
content's accuracy and appropriateness.
Today, Copilot features work best in the English language. Other languages may
not perform as well.
Important

Review the supplemental preview terms for Fabric, which include terms of use for Microsoft Generative AI Service Previews.

How Copilot works


In this article, Copilot refers to a range of generative AI features and capabilities in Fabric
that are powered by Azure OpenAI Service.

In general, these features are designed to generate natural language, code, or other
content based on:

(a) inputs you provide, and (b) grounding data that the feature has access to.

For example, Power BI, Data Factory, and Data Science offer Copilot chats where you can
ask questions and get responses that are contextualized on your data. Copilot for Power
BI can also create reports and other visualizations. Copilot for Data Factory can
transform your data and explain what steps it has applied. Data Science offers Copilot
features outside of the chat pane, such as custom IPython magic commands in
notebooks. Copilot chats may be added to other experiences in Fabric, along with other
features that are powered by Azure OpenAI under the hood.


The Copilot process


These features follow the same general process:

1. Copilot receives a prompt from a user. This prompt could be in the form of a
question that a user types into a chat pane, or in the form of an action such as
selecting a button that says "Create a report."
2. Copilot preprocesses the prompt through an approach called grounding.
Depending on the scenario, this might include retrieving relevant data such as
dataset schema or chat history from the user's current session with Copilot.
Grounding improves the specificity of the prompt, so the user gets responses that
are relevant and actionable to their specific task. Data retrieval is scoped to data
that is accessible to the authenticated user based on their permissions. See the
section What data does Copilot use and how is it processed? in this article for
more information. Copilot then sends the prompt and grounding data to Azure OpenAI Service for processing.
3. Copilot takes the response from Azure OpenAI and postprocesses it. Depending
on the scenario, this postprocessing might include responsible AI checks, filtering
with Azure content moderation, or additional business-specific constraints.
4. Copilot returns a response to the user in the form of natural language, code, or
other content. For example, a response might be in the form of a chat message or
generated code, or it might be a contextually appropriate form such as a Power BI
report or a Synapse notebook cell.
5. The user reviews the response before using it. Copilot responses can include
inaccurate or low-quality content, so it's important for subject matter experts to
check outputs before using or sharing them.

Just as each experience in Fabric is built for certain scenarios and personas—from data
engineers to data analysts—each Copilot feature in Fabric has also been built with
unique scenarios and users in mind. For capabilities, intended uses, and limitations of
each feature, review the section for the experience you're working in.

Definitions

Prompt or input
The text or action submitted to Copilot by a user. This could be in the form of a question
that a user types into a chat pane, or in the form of an action such as selecting a button
that says "Create a report."

Grounding
A preprocessing technique where Copilot retrieves additional data that's contextual to
the user's prompt, and then sends that data along with the user's prompt to Azure
OpenAI in order to generate a more relevant and actionable response.

Response or output
The content that Copilot returns to a user. For example, a response might be in the form
of a chat message or generated code, or it might be contextually appropriate content
such as a Power BI report or a Synapse notebook cell.

What data does Copilot use and how is it processed?
To generate a response, Copilot uses:

The user's prompt or input and, when appropriate,
Additional data that is retrieved through the grounding process.

This information is sent to Azure OpenAI Service, where it's processed and an output is
generated. Therefore, data processed by Azure OpenAI can include:

The user's prompt or input.
Grounding data.
The AI response or output.

Grounding data may include a combination of dataset schema, specific data points, and
other information relevant to the user's current task. Review each experience section for
details on what data is accessible to Copilot features in that scenario.

Interactions with Copilot are specific to each user. This means that Copilot can only
access data that the current user has permission to access, and its outputs are only
visible to that user unless that user shares the output with others, such as sharing a
generated Power BI report or generated code. Copilot doesn't use data from other users
in the same tenant or other tenants.

Copilot uses Azure OpenAI—not OpenAI's publicly available services—to process all
data, including user inputs, grounding data, and Copilot outputs. Copilot currently uses
a combination of GPT models, including GPT 3.5. Microsoft hosts the OpenAI models in
Microsoft's Azure environment and the Service doesn't interact with any services by
OpenAI (for example, ChatGPT or the OpenAI API). Your data isn't used to train models
and isn't available to other customers. Learn more about Azure OpenAI.

Data from Copilot in Fabric is stored by Microsoft for up to 30 days (as outlined in the
Preview Terms of Use ) to help monitor and prevent abusive or harmful uses or outputs
of the service. Authorized Microsoft employees may review data that has triggered our
automated systems to investigate and verify potential abuse.

Data residency and compliance


You retain control over where your data is processed. Data processed by Copilot in Fabric
stays within your tenant's geographic region, unless you explicitly allow data to be
processed outside your region—for example, to let your users use Copilot when Azure
OpenAI isn't available in your region or availability is limited due to high demand. (See
where Azure OpenAI is currently available.)

To allow data to be processed elsewhere, your admin can turn on the setting Data sent
to Azure OpenAI can be processed outside your tenant's geographic region,
compliance boundary, or national cloud instance. Learn more about admin settings for
Copilot.

What should I know to use Copilot responsibly?


Microsoft is committed to ensuring that our AI systems are guided by our AI
principles and Responsible AI Standard . These principles include empowering our
customers to use these systems effectively and in line with their intended uses. Our
approach to responsible AI is continually evolving to proactively address emerging
issues.
Copilot features in Fabric are built to meet the Responsible AI Standard, which means
that they're reviewed by multidisciplinary teams for potential harms, and then refined to
include mitigations for those harms.

Before you use Copilot, keep in mind the limitations of Copilot:

Copilot responses can include inaccurate or low-quality content, so make sure to review outputs before using them in your work.
People who are able to meaningfully evaluate the content's accuracy and
appropriateness should review the outputs.
Currently, Copilot features work best in the English language. Other languages may
not perform as well.

Copilot for Fabric workloads


Privacy, security, and responsible use for:

Copilot for Power BI (preview)
Copilot for Data Factory (preview)
Copilot for Data Science (preview)

Notes by release
Additional information for future releases or feature updates will appear here.

Next steps
What is Microsoft Fabric?
Copilot in Fabric and Power BI: FAQ



Privacy, security, and responsible use of
Copilot for Data Factory (preview)
Article • 11/15/2023

With Copilot for Data Factory in Microsoft Fabric and other generative AI features in preview, Microsoft Fabric brings a new way to transform and analyze data, generate insights, and create visualizations and reports in Data Factory and the other workloads.

Before your business starts using Copilot in Fabric, you may have questions about how it
works, how it keeps your business data secure and adheres to privacy requirements, and
how to use generative AI responsibly.

The article Privacy, security, and responsible use for Copilot (preview) provides an
overview of Copilot in Fabric. Read on for details about Copilot for Data Factory.

Capabilities and intended uses of Copilot for Data Factory
The Copilot features in Data Factory currently support use in Dataflow Gen2. These
features include the Copilot chat pane and suggested transformations.

Copilot has the following intended uses:


Provide a summary of the query and the applied steps.
Generate new transformation steps for an existing query.
Generate a new query that may include sample data or a connection to a data
source that requires configuring authentication.

Limitations of Copilot for Data Factory


Here are the current limitations of Copilot for Data Factory:

Copilot can't perform transformations or explanations across multiple queries in a single input. For instance, you can't ask Copilot to "Capitalize all the column headers for each query in my dataflow."
Copilot doesn't understand previous inputs and can't undo changes after a user
commits a change when authoring, either via user interface or the chat pane. For
example, you can't ask Copilot to "Undo my last 5 inputs." However, users can still
use the existing user interface options to delete unwanted steps or queries.
Copilot can't make layout changes to queries in your session. For example, if you
tell Copilot to create a new group for queries in the editor, it doesn't work.
Copilot may produce inaccurate results when the intent is to evaluate data that isn't present within the sampled results imported into the session's data preview.
Copilot doesn't produce a message for the skills that it doesn't support. For
example, if you ask Copilot to "Perform statistical analysis and write a summary
over the contents of this query", it doesn't complete the instruction successfully as
mentioned previously. Unfortunately, it doesn't give an error message either.

Data use of Copilot for Data Factory


Copilot can only access data that is accessible to the user's current Gen2 dataflow
session, and that is configured and imported into the data preview grid. Learn
more about getting data in Power Query.

Evaluation of Copilot for Data Factory


The product team has tested Copilot to see how well the system performs within
the context of Gen2 dataflows, and whether AI responses are insightful and useful.
The team also invested in other harms mitigations, including technological
approaches to focusing Copilot's output on topics related to data integration.

Tips for working with Copilot for Data Factory


Copilot is best equipped to handle data integration topics, so it's best to limit your
questions to this area.
If you include descriptions such as query names, column names, and values in the
input, Copilot is more likely to generate useful outputs.
Try breaking complex inputs into more granular tasks. This helps Copilot better
understand your requirements and generate a more accurate output.

Notes by release
Additional information for future releases or feature updates will appear here.

Next steps
What is Microsoft Fabric?
Copilot in Fabric: FAQ


Privacy, security, and responsible use of
Copilot for Data Science (preview)
Article • 11/15/2023

With Copilot for Data Science in Microsoft Fabric and other generative AI features in
preview, Microsoft Fabric brings a new way to transform and analyze data, generate
insights, and create visualizations and reports in Data Science and the other workloads.

Before your business starts using Copilot in Fabric, you may have questions about how it
works, how it keeps your business data secure and adheres to privacy requirements, and
how to use generative AI responsibly.

The article Privacy, security, and responsible use for Copilot (preview) provides an
overview of Copilot in Fabric. Read on for details about Copilot for Data Science.

Capabilities, intended uses, and limitations of Copilot for Data Science
Copilot features in the Data Science experience are currently scoped to notebooks.
These features include the Copilot chat pane, IPython magic commands that can
be used within a code cell, and automatic code suggestions as you type in a code
cell. Copilot can also read Power BI semantic models using an integration of
semantic link.

Copilot has two key intended uses.


One, you can ask Copilot to examine and analyze data in your notebook (for
example, by first loading a DataFrame and then asking Copilot about data inside
the DataFrame).
Two, you can ask Copilot to generate a range of suggestions about your data
analysis process, such as what predictive models might be relevant, code to
perform different types of data analysis, and documentation for a completed
notebook.

Keep in mind that code generation with fast-moving or recently released libraries
may include inaccuracies or fabrications.

Data use of Copilot for Data Science


In notebooks, Copilot can only access data that is accessible to the user's current notebook, either in an attached lakehouse or directly loaded or imported into that notebook by the user. It can't access any data that's not accessible to the notebook.

By default, Copilot has access to the following data types:


Previous messages sent to and replies from Copilot for that user in that session.
Contents of cells that the user has executed.
Outputs of cells that the user has executed.
Schemas of data sources in the notebook.
Sample data from data sources in the notebook.
Schemas from external data sources in an attached lakehouse.

Evaluation of Copilot for Data Science


The product team has tested Copilot to see how well the system performs within
the context of notebooks, and whether AI responses are insightful and useful.
The team also invested in additional harms mitigations, including technological
approaches to focusing Copilot’s output on topics related to data science.

Tips for working with Copilot for Data Science


Copilot is best equipped to handle data science topics, so limit your questions to
this area.
Be explicit about the data you want Copilot to examine. If you describe the data
asset, such as naming files, tables, or columns, Copilot is more likely to retrieve
relevant data and generate useful outputs.
If you want more granular responses, try loading data into the notebook as DataFrames or pinning the data in your lakehouse. This gives Copilot more context with which to perform analysis. If an asset is too large to load, pinning it is a helpful alternative.

Notes by release
Additional information for future releases or feature updates will appear here.

Next steps
What is Microsoft Fabric?
Copilot in Fabric: FAQ



Privacy, security, and responsible use for
Copilot for Power BI (preview)
Article • 11/15/2023

With Copilot and other generative AI features in preview, Microsoft Fabric brings a new
way to transform and analyze data, generate insights, and create visualizations and
reports in Power BI and the other workloads.

Before your business starts using Copilot in Fabric, you may have questions about how it
works, how it keeps your business data secure and adheres to privacy requirements, and
how to use generative AI responsibly.

The article Privacy, security, and responsible use for Copilot (preview) provides an
overview of Copilot in Fabric. Read on for details about Copilot for Power BI.

Capabilities and intended uses of Copilot for Power BI
By using Copilot for Power BI, you can quickly create reports with just a few clicks.
Copilot can save you hours of effort building your report pages.

Copilot provides a summary of your dataset and an outline of suggested pages for
your report. Then it generates those pages for the report. After you open a blank
report with a semantic model, Copilot can generate:
Suggested topics.
A report outline: for example, what each page in the report will be about, and
how many pages it will create.
The visuals for the individual pages.

Limitations of Copilot for Power BI


Here are the current limitations of Copilot for Power BI:

Copilot can't modify the visuals after it has generated them.


Copilot can't add filters or set slicers, even if you specify them in the prompts. For example, if you say, "Create a sales report for the last 30 days," Copilot can't interpret 30 days as a date filter.
Copilot can't make layout changes. For example, if you tell Copilot to resize the
visuals, or to align all the visuals perfectly, it won't work.
Copilot can't understand complex intent. For example, suppose you frame a
prompt like this: "Generate a report to show incidents by team, incident type,
owner of the incident, and do this for only 30 days." This prompt is complex, and
Copilot will probably generate irrelevant visuals.
Copilot doesn't produce a message for the skills that it doesn't support. For
example, if you ask Copilot to edit or add a slicer, it won't complete the instruction
successfully as mentioned above. Unfortunately, it won't give an error message
either.

Data use in Copilot for Power BI


Copilot uses the data in a semantic model that you provide, combined with the
prompts you enter, to create visuals. Learn more about semantic models.

Tips for working with Copilot for Power BI


Review FAQ for Copilot for Power BI for tips and suggestions to help you work with
Copilot in this experience.

Notes by release
Additional information for future releases or feature updates will appear here.

Next steps
What is Microsoft Fabric?
Copilot in Fabric and Power BI: FAQ



Copilot for Data Factory overview
Article • 11/15/2023

Copilot in Fabric enhances productivity, unlocks profound insights, and facilitates the
creation of custom AI experiences tailored to your data. As a component of the Copilot
in Fabric experience, Copilot in Data Factory empowers customers to use natural
language to articulate their requirements for creating data integration solutions using
Dataflow Gen2. Essentially, Copilot in Data Factory operates like a subject-matter expert
(SME) collaborating with you to design your dataflows.

Copilot for Data Factory is an AI-enhanced toolset that supports both citizen and
professional data wranglers in streamlining their workflow. It provides intelligent
Mashup code generation to transform data using natural language input and generates
code explanations to help you better understand earlier generated complex queries and
tasks.

Supported capabilities
With Copilot in Dataflow Gen2, you can:

Generate a new query that may include sample data or a connection to a data
source that requires configuring authentication.
Provide a summary of the query and the applied steps.
Generate new transformation steps for an existing query.

Get started
1. Create a new Dataflow Gen2.

2. On the Home tab in Dataflows Gen2, select the Copilot button.


3. In the bottom left of the Copilot pane, select the starter prompt icon, then the Get
data from option.

4. In the Get data window, search for OData and select the OData connector.

5. In the Connect to data source for the OData connector, input the following text
into the URL field:

https://services.odata.org/V4/Northwind/Northwind.svc/
6. From the navigator, select the Orders table and then Select related tables. Then
select Create to bring multiple tables into the Power Query editor.

7. Select the Customers query, and in the Copilot pane type this text: Only keep
European customers , then press Enter or select the Send message icon.

Your input is now visible in the Copilot pane along with a returned response card.
You can validate the step with the corresponding step title in the Applied steps list
and review the formula bar or the data preview window for accuracy of your
results.

8. Select the Employees query, and in the Copilot pane type this text: Count the
total number of employees by City , then press Enter or select the Send message

icon. Your input is now visible in the Copilot pane along with a returned response
card and an Undo button.

9. Select the column header for the Total Employees column and choose the option
Sort descending. The Undo button disappears because you modified the query.

10. Select the Order_Details query, and in the Copilot pane type this text: Only keep
orders whose quantities are above the median value , then press Enter or select

the Send message icon. Your input is now visible in the Copilot pane along with a
returned response card.

11. Either select the Undo button or type the text Undo (any text case) and press Enter
in the Copilot pane to remove the step.

12. To leverage the power of Azure OpenAI when creating or transforming your data, ask Copilot to create sample data by typing this text:

Create a new query with sample data that lists all the Microsoft OS versions and the year they were released

Copilot adds a new query to the Queries pane list, containing the results of your
input. At this point, you can either transform data in the user interface, continue to
edit with Copilot text input, or delete the query with an input such as Delete my
current query .



Overview of Copilot for Data Science
and Data Engineering (preview)
Article • 11/15/2023

Important

This feature is in preview.

Copilot for Data Science and Data Engineering is an AI assistant that helps analyze and
visualize data. It works with Lakehouse tables and files, Power BI Datasets, and
pandas/spark/fabric dataframes, providing answers and code snippets directly in the
notebook. The most effective way of using Copilot is to add your data as a dataframe. You can ask your questions in the chat panel, and the AI provides responses or code to copy into your notebook. It understands your data's schema and metadata, and if data is loaded into a dataframe, it has awareness of the data inside the dataframe as well.
You can ask Copilot to provide insights on data, create code for visualizations, or
provide code for data transformations, and it recognizes file names for easy reference.
Copilot streamlines data analysis by eliminating complex coding.
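
For example, a minimal sketch of giving Copilot visibility into your data is to load a lakehouse table into a dataframe before you start chatting; the table name used here is illustrative, so substitute one from your own lakehouse.

# Load a lakehouse table into a Spark dataframe so Copilot can see
# its schema and sample rows (the table name "customers" is illustrative).
df = spark.read.table("customers")
display(df)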

Note

Copilot in Fabric is currently rolling out in public preview and is expected to be available for all customers by end of March 2024.

Introduction to Copilot for Data Science and Data Engineering for Fabric Data Science
With Copilot for Data Science and Data Engineering, you can chat with an AI assistant
that can help you handle your data analysis and visualization tasks. You can ask the
Copilot questions about lakehouse tables, Power BI Datasets, or Pandas/Spark
dataframes inside notebooks. Copilot answers in natural language or code snippets.
Copilot can also generate data-specific code for you, depending on the task. For example, Copilot for Data Science and Data Engineering can generate code for the following (see the sketch after this list):

Chart creation
Filtering data
Applying transformations
Machine learning models
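
As a minimal sketch of the chart case: if you ask "Show me a bar chart of total sales by region" in the chat panel, Copilot could return code shaped roughly like the block below. The dataframe name sales_df and the Region and Sales columns are assumptions for illustration; the actual generated code varies with your data.

# Hypothetical shape of a Copilot chart response (names are assumed).
import matplotlib.pyplot as plt

sales_by_region = sales_df.groupby("Region")["Sales"].sum()
sales_by_region.plot(kind="bar", title="Total sales by region")
plt.show()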

First select the Copilot icon in the notebook ribbon. The Copilot chat panel opens, and a new cell appears at the top of your notebook. This cell must run each time a Spark session loads in a Fabric notebook; otherwise, the Copilot experience won't operate properly. We are evaluating other mechanisms for handling this required initialization in future releases.

Run the cell at the top of the notebook. After the cell successfully executes, you can use
Copilot. You must rerun the cell at the top of the notebook each time your session in the
notebook closes.

To maximize Copilot effectiveness, load a table or dataset as a dataframe in your notebook. This way, the AI can access the data and understand its structure and content.
Then, start chatting with the AI. Select the chat icon in the notebook toolbar, and type
your question or request in the chat panel. For example, you can ask:

"What is the average age of customers in this dataset?"


"Show me a bar chart of sales by region"

And more. Copilot responds with the answer or the code, which you can copy and paste into your notebook. Copilot for Data Science and Data Engineering is a convenient,
interactive way to explore and analyze your data.

As you use Copilot, you can also invoke the magic commands inside a notebook cell to obtain output directly in the notebook. For example, for natural language answers, you can ask questions using the %%chat command, such as:

%%chat
What are some machine learning models that may fit this dataset?

or

%%code
Can you generate code for a logistic regression that fits this data?

Copilot for Data Science and Data Engineering also has schema and metadata
awareness of tables in the lakehouse. Copilot can provide relevant information in
context of your data in an attached lakehouse. For example, you can ask:
"How many tables are in the lakehouse?"
"What are the columns of the table customers?"

Copilot responds with the relevant information if you added the lakehouse to the
notebook. Copilot also has awareness of the names of files added to any lakehouse
attached to the notebook. You can refer to those files by name in your chat. For
example, if you have a file named sales.csv in your lakehouse, you can ask "Create a
dataframe from sales.csv". Copilot generates the code and displays it in the chat panel.
With Copilot for notebooks, you can easily access and query your data from different
sources. You don't need the exact command syntax to do it.
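
The generated snippet might look something like this minimal sketch; the default lakehouse Files path is an assumption and can differ in your workspace.

# Hypothetical shape of the code Copilot returns for
# "Create a dataframe from sales.csv" (the path is an assumption).
import pandas as pd

sales_df = pd.read_csv("/lakehouse/default/Files/sales.csv")
sales_df.head()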

Tips
"Clear" your conversation in the Copilot chat panel with the broom located at the
top of the chat panel. Copilot retains knowledge of any inputs or outputs during
the session, but this helps if you find the current content distracting.
Use the Chat-magics library to configure settings for Copilot, including privacy settings. The default sharing mode is designed to maximize the context Copilot has access to, so limiting the information provided to Copilot can directly and significantly reduce the relevance of its responses.
When Copilot first launches, it offers a set of helpful prompts that can help you get
started. They can help kickstart your conversation with Copilot. To refer to prompts
later, you can use the sparkle button at the bottom of the chat panel.
You can "drag" the sidebar of the copilot chat to expand the chat panel, to view
code more clearly or for readability of the outputs on your screen.

Next steps
How to use Chat-magics
How to use the Copilot Chat Pane



Overview of Chat-magics in Microsoft
Fabric Notebooks
Article • 11/15/2023

Important

This feature is in preview.

The Chat-magics Python library enhances your data science and engineering workflow in Microsoft Fabric notebooks. It integrates seamlessly with the Fabric environment and allows execution of specialized IPython magic commands in a notebook cell to provide real-time outputs. For more background on IPython magic commands, see https://ipython.readthedocs.io/en/stable/interactive/magics.html .

Note

Copilot in Fabric is currently rolling out in public preview and is expected to be available for all customers by end of March 2024.

Capabilities of Chat-magics

Instant query and code generation


The %%chat command allows you to ask questions about the state of your notebook. The %%code command enables code generation for data manipulation or visualization.

Dataframe descriptions
The %%describe command provides summaries and descriptions of loaded dataframes.
This simplifies the data exploration phase.

Commenting and debugging


The %%add_comments and %%fix_errors commands help add comments to your code and
fix errors respectively. This helps make your notebook more readable and error-free.
Privacy controls
Chat-magics also offers granular privacy settings, which allows you to control what data
is shared with the Azure OpenAI Service. The %set_sharing_level and
%configure_privacy_settings commands, for example, provide this functionality.

How can Chat-magics help you?


Chat-magics enhances your productivity and workflow in Microsoft Fabric notebooks. It accelerates data exploration, simplifies notebook navigation, and improves code quality.
It adapts to multilingual code environments, and it prioritizes data privacy and security.
Through cognitive load reductions, it allows you to more closely focus on problem-
solving. Whether you're a data scientist, data engineer, or business analyst, Chat-magics
seamlessly integrates robust, enterprise-level Azure OpenAI capabilities directly into
your notebooks. This makes it an indispensable tool for efficient and streamlined data
science and engineering tasks.

Get started with Chat-magics


1. Open a new or existing Microsoft Fabric notebook.
2. Select the Copilot button on the notebook ribbon to output the Chat-magics
initialization code into a new notebook cell.
3. Run the cell that is added at the top of your notebook.

Verify the Chat-magics installation


1. Create a new cell in the notebook, and run the %chat_magics command to display
the help message. This step verifies proper Chat-magics installation.

Introduction to basic commands: %%chat and %%code

Using %%chat (Cell Magic)


1. Create a new cell in your notebook.
2. Type %%chat at the top of the cell.
3. Enter your question or instruction below the %%chat command - for example,
What variables are currently defined?
4. Execute the cell to see the Chat-magics response.
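
Putting those steps together, a complete %%chat cell might look like this (the question is just an example):

%%chat
What variables are currently defined, and which of them are dataframes?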

Using %%code (Cell Magic)


1. Create a new cell in your notebook.
2. Type %%code at the top of the cell.
3. Below this, specify the code action you'd like - for example, Load my_data.csv into
a pandas dataframe.
4. Execute the cell, and review the generated code snippet.
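
For example, a complete %%code cell based on the step above could look like this (the file name is illustrative):

%%code
Load my_data.csv into a pandas dataframe and show the first five rows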

Customizing output and language settings


1. Use the %set_output command to change the default for how magic commands
provide output. The options can be viewed by running %set_output?
2. Choose where to place the generated code, from options like

current cell
new cell
cell output
into a variable
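
For example, run the built-in help in a cell to see the exact option syntax, and then call %set_output again with one of the listed placements:

# Display the accepted output placements and syntax for %set_output.
%set_output?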

Advanced commands for data operations

%%describe, %%add_comments, and %%fix_errors


1. Use %%describe DataFrameName in a new cell to obtain an overview of a specific
dataframe.
2. To add comments to a code cell for better readability, type %%add_comments at the top of the cell you want to annotate and then execute it. Be sure to validate that the resulting code is correct.
3. For code error fixing, type %%fix_errors at the top of the cell that contained an error and execute it.
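
For instance, assuming a dataframe named sales_df is already loaded, the first two commands could be used like this, each magic in its own cell (the dataframe, columns, and code are illustrative; %%fix_errors works the same way, placed at the top of the cell that failed):

%%describe sales_df

%%add_comments
sales_df["Total"] = sales_df["Price"] * sales_df["Quantity"]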

Privacy and security settings


1. By default, your privacy configuration shares previous messages sent to and from the large language model (LLM). However, it doesn't share cell contents, outputs, or any schemas or sample data from data sources.
2. Use %set_sharing_level in a new cell to adjust the data shared with the AI
processor.
3. For more detailed privacy settings, use %configure_privacy_settings .
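
Because the accepted values aren't listed here, a reasonable starting point is to read each command's built-in help before changing anything:

# Show the sharing levels you can choose from:
%set_sharing_level?
# Show the granular privacy options:
%configure_privacy_settings?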

Context and focus commands

Using %pin, %new_task, and other context commands


1. Use %pin DataFrameName to help the AI focus on specific dataframes.
2. To refocus the AI on a new task in your notebook, type %new_task followed by a description of the task you are about to undertake. This clears the execution history that Copilot knows about to this point and can make future responses more relevant.
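
A quick sketch combining both commands (the dataframe name and task description are illustrative):

# Keep the AI focused on one dataframe.
%pin sales_df
# Reset the tracked history when moving on to something new.
%new_task explore seasonality in the pinned sales data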

Next steps
How to use Copilot Pane



Use the Copilot for Data Science and
Data Engineering chat panel
Article • 11/15/2023

Important

This feature is in preview.

Copilot for Data Science and Data Engineering notebooks is an AI assistant that helps
you analyze and visualize data. It works with lakehouse tables, Power BI Datasets, and
pandas/spark dataframes, providing answers and code snippets directly in the
notebook. The most effective way of using Copilot is to load your data as a dataframe.
You can use the chat panel to ask your questions, and the AI provides responses or code
to copy into your notebook. It understands your data's schema and metadata, and if data is loaded into a dataframe, it has awareness of the data inside the dataframe as well. You can ask Copilot to provide insights on data, create code for visualizations, or
provide code for data transformations, and it recognizes file names for easy reference.
Copilot streamlines data analysis by eliminating complex coding.

Prerequisites

Note

Copilot in Fabric is currently rolling out in public preview and is expected to be available for all customers by end of March 2024.

To use Copilot in Data Science:

Your Fabric admin must enable it in the administration portal.
The workspace you use must have an F64 or higher license SKU.

Azure OpenAI enablement


Azure OpenAI must be enabled within Fabric at the tenant level.

Note
If your workspace is provisioned in a region without GPU capacity, and your data is
not enabled to flow cross-geo, Copilot will not function properly and you will see
errors.

Successful execution of the Chat-magics installation cell
1. To use the Copilot pane, the installation cell for Chat-magics must successfully execute within your Spark session.

Important

If your Spark session terminates, the context for Chat-magics also terminates, wiping the context for the Copilot pane.

2. Verify that all these conditions are met before proceeding with the Copilot Chat
Pane.

Open the Copilot chat panel inside the notebook

1. To open Copilot, select the Copilot button on the notebook ribbon.

2. The Copilot chat panel opens on the right side of your notebook.

3. The panel provides overview information and helpful links.

Key capabilities
AI assistance: Generate code, query data, and get suggestions to accelerate your
workflow.
Data insights: Quick data analysis and visualization capabilities.
Explanations: Copilot can provide natural language explanations of notebook cells,
and can provide an overview for notebook activity as it runs.
Fixing errors: Copilot can also fix notebook run errors as they arise. Copilot shares
context with the notebook cells (executed output) and can provide helpful
suggestions.

Important notices
Inaccuracies: Potential for inaccuracies exists. Review AI-generated content
carefully.
Data storage: Customer data is temporarily stored, to identify harmful use of AI.

Getting started with Copilot chat in notebooks


1. Copilot for Data Science and Data Engineering offers starter prompts to help you get started. For example, "Load data from my lakehouse into a dataframe", or "Generate insights from data".

2. Each of these selections outputs chat text in the text panel. As the user, you must
fill out the specific details of the data you'd like to use.

3. You can then input any type of request you have in the chat box.

Regular usage of the Copilot chat panel


The more specifically you describe your goals in your chat panel entries, the more
accurate the Copilot responses.
You can "copy" or "insert" code from the chat panel. At the top of each code block,
two buttons allow input of items directly into the notebook.
To clear your conversation, select the Broom icon at the top to remove your
conversation from the pane. It clears the pane of any input or output, but the
context remains in the session until it ends.
Configure the Copilot privacy settings with the %configure_privacy_settings
command, or the %set_sharing_level command in the chat magics library.
Transparency: Read our Transparency Note for details on data and algorithm use.

Next steps
How to use Chat-magics



Microsoft Fabric adoption roadmap
Article • 11/17/2023

The goal of this series of articles is to provide a roadmap. The roadmap presents a series
of strategic and tactical considerations and action items that lead to the successful
adoption of Microsoft Fabric, and help build a data culture in your organization.

Advancing adoption and cultivating a data culture is about more than implementing
technology features. Technology can assist an organization in making the greatest
impact, but a healthy data culture involves many considerations across the spectrum of
people, processes, and technology.

Note

While reading this series of articles, we recommend that you also take into consideration Power BI implementation planning guidance. After you're familiar
with the concepts in the Microsoft Fabric adoption roadmap, consider reviewing
the usage scenarios. Understanding the diverse ways Power BI is used can
influence your implementation strategies and decisions for all of Microsoft Fabric.

The diagram depicts the following areas of the Microsoft Fabric adoption roadmap.
The areas in the above diagram include:


Data culture: Data culture refers to a set of behaviors and norms in the organization that
encourages a data-driven culture. Building a data culture is closely related to adopting
Fabric, and it's often a key aspect of an organization's digital transformation.

Executive sponsor: An executive sponsor is someone with credibility, influence, and authority throughout the organization. They advocate for building a data culture and adopting Fabric.

Business Alignment: How well the data culture and data strategy enable business users to
achieve business objectives. An effective BI data strategy aligns with the business strategy.

Content ownership and management: There are three primary strategies for how business
intelligence (BI) and analytics content is owned and managed: business-led self-service BI,
managed self-service BI, and enterprise BI. These strategies have a significant influence on
adoption, governance, and the Center of Excellence (COE) operating model.

Content delivery scope: There are four primary strategies for content and data delivery:
personal, team, departmental, and enterprise. These strategies have a significant influence
on adoption, governance, and the COE operating model.

Center of Excellence: A Fabric COE is an internal team of technical and business experts.
These experts actively assist others who are working with data within the organization. The
COE forms the nucleus of the broader community to advance adoption goals that are
aligned with the data culture vision.

Governance: Data governance is a set of policies and procedures that define the ways in
which an organization wants data to be used. When adopting Fabric, the goal of
governance is to empower the internal user community to the greatest extent possible,
while adhering to industry, governmental, and contractual requirements and regulations.

Mentoring and user enablement: A critical objective for adoption efforts is to enable users
to accomplish as much as they can within the guardrails established by governance
guidelines and policies. The act of mentoring users is one of the most important
responsibilities of the COE. It has a direct influence on adoption efforts.

Community of practice: A community of practice comprises a group of people with a common interest, who interact with and help each other on a voluntary basis. An active community is an indicator of a healthy data culture. It can significantly advance adoption efforts.

User support: User support includes both informally organized and formally organized
methods of resolving issues and answering questions. Both formal and informal support
methods are critical for adoption.

System oversight: System oversight includes the day-to-day administration responsibilities to support the internal processes, tools, and people.

Change management: Change management involves procedures to address the impact of change for people in an organization. These procedures safeguard against disruption and productivity loss due to changes in solutions or processes. An effective data strategy describes who is responsible for managing this change and the practices and resources needed to realize it.

The relationships in the above diagram can be summarized as follows.

Your organizational data culture vision will strongly influence the strategies that
you follow for self-service and enterprise content ownership and management
and content delivery scope.
These strategies will, in turn, have a big impact on the operating model for your
Center of Excellence and governance decisions.
The established governance guidelines, policies, and processes affect the
implementation methods used for mentoring and enablement, the community of
practice, and user support.
Governance decisions will dictate the day-to-day system oversight (administration)
activities.
Adoption and governance decisions are implemented alongside change
management to mitigate the impact and disruption of change on existing business
processes.
All data culture and adoption-related decisions and actions are accomplished more
easily with guidance and leadership from an executive sponsor, who facilitates
business alignment between the business strategy and data strategy. This
alignment in turn informs data culture and governance decisions.

Each individual article in this series discusses key topics associated with the items in the
diagram. Considerations and potential action items are provided. Each article concludes
with a set of maturity levels to help you assess your current state so you can decide
what action to take next.

Microsoft Fabric adoption


Successful adoption of analytical tools like Fabric involves making effective processes,
support, tools, and data available and integrated into regular ongoing patterns of usage
for content creators, consumers, and stakeholders in the organization.

Important

This series of adoption articles is focused on organizational adoption. See Microsoft Fabric adoption maturity levels for an introduction to the three types of adoption: organizational, user, and solution.

A common misconception is that adoption relates primarily to usage or the number of users. There's no question that usage statistics are an important factor. However, usage isn't the only factor. Adoption isn't just about using the technology regularly; it's about using it effectively. Effectiveness is much more difficult to define and measure.

Whenever possible, adoption efforts should be aligned across analytics platforms and BI
services.
Note

Individuals—and the organization itself—are continually learning, changing, and improving. That means there's no formal end to adoption-related efforts.

The remaining articles in this Microsoft Fabric adoption series discuss the following aspects of adoption.

Adoption maturity levels
Data culture
Executive sponsorship
Business alignment
Content ownership and management
Content delivery scope
Center of Excellence
Governance
Mentoring and enablement
Community of practice
User support
System oversight
Change management
Conclusion and additional resources

Important

You might be wondering how this Fabric adoption roadmap is different from the
Power BI adoption framework . The adoption framework was created primarily to
support Microsoft partners. It's a lightweight set of resources to help partners
deploy Power BI solutions for their customers.

This adoption series is more current. It's intended to guide any person or organization that is using—or considering using—Fabric. If you're seeking to improve your existing Power BI or Fabric implementation, or planning a new Power BI or Fabric implementation, this adoption roadmap is a great place to start.

Target audience
The intended audience of this series of articles is interested in one or more of the
following outcomes.
Improving their organization's ability to effectively use analytics.
Increasing their organization's maturity level related to the delivery of analytics.
Understanding and overcoming adoption-related challenges faced when scaling
and growing.
Increasing their organization's return on investment (ROI) in data and analytics.

This series of articles will be most helpful to those who work in an organization with one
or more of the following characteristics.

Power BI or other Fabric workloads are deployed with some success.
There are pockets of viral adoption, but analytics isn't being purposefully governed
across the entire organization.
Analytics solutions are deployed with some meaningful scale, but there remains a
need to determine:
What is effective and what should be maintained.
What should be improved.
How future deployments could be more strategic.
An expanded implementation of analytics is under consideration or is planned.

This series of articles will also be helpful for:

Organizations that are in the early stages of an analytics implementation.
Organizations that have had success with adoption and now want to evaluate their current maturity level.

Assumptions and scope


The primary focus of this series of articles is on the Microsoft Fabric platform.

To fully benefit from the information provided in these articles, you should have an
understanding of Power BI foundational concepts and Fabric foundational concepts.

Next steps
In the next article in this series, learn about the Fabric adoption maturity levels. The
maturity levels are referenced throughout the entire series of articles. Also, see the
conclusion article for additional adoption-related resources.

Other helpful resources include:

Power BI implementation planning
Questions? Try asking the Microsoft Fabric community.
Suggestions? Contribute ideas to improve Microsoft Fabric.

Experienced partners are available to help your organization succeed with adoption
initiatives. To engage with a partner, visit the Power BI partner portal .

Acknowledgments
The Microsoft Fabric adoption roadmap articles are written by Melissa Coates, Kurt
Buhler, and Peter Myers. Matthew Roche, from the Fabric Customer Advisory Team,
provides strategic guidance and feedback to the subject matter experts. Reviewers
include Cory Moore, James Ward, Timothy Bindas, Greg Moir, and Chuy Varela.
Microsoft Fabric adoption roadmap
maturity levels
Article • 11/14/2023

Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

There are three inter-related perspectives to consider when adopting an analytics technology like Microsoft Fabric.

The three types of adoption depicted in the above diagram include:


Organizational adoption: Organizational adoption refers to the effectiveness of your analytics governance processes. It also refers to data management practices that support and enable analytics and business intelligence (BI) efforts.

User adoption: User adoption is the extent to which consumers and creators continually
increase their knowledge. It's concerned with whether they're actively using analytics tools,
and whether they're using them in the most effective way.

Solution adoption: Solution adoption refers to the impact and business value achieved for
individual requirements and analytical solutions.
As the four arrows in the previous diagram indicate, the three types of adoption are all
strongly inter-related:

Solution adoption affects user adoption. A well-designed and well-managed solution—which could be many things, such as a set of reports, a Power BI app, a semantic model (previously known as a dataset) or a Fabric lakehouse—impacts and guides users on how to use analytics in an optimal way.
User adoption impacts organizational adoption. The patterns and practices used
by individual users influence organizational adoption decisions, policies, and
practices.
Organizational adoption influences user adoption. Effective organizational
practices—including mentoring, training, support, and community—encourage
users to do the right thing in their day-to-day workflow.
User adoption affects solution adoption. Stronger user adoption, because of the
effective use of analytics by educated and informed users, contributes to stronger
and more successful individual solutions.

The remainder of this article introduces the three types of adoption in more detail.

Organizational adoption maturity levels


Organizational adoption measures the state of analytics governance and data
management practices. There are several organizational adoption goals:

Effectively support the community of creators, consumers, and stakeholders
Enable and empower users
Right-sized governance of analytics, BI, and data management activities
Oversee information delivery via enterprise BI and self-service BI with continuous
improvement cycles

It's helpful to think about organizational adoption from the perspective of a maturity
model. For consistency with the Power CAT adoption maturity model and the maturity
model for Microsoft 365, this Microsoft Fabric adoption roadmap aligns with the five
levels from the Capability Maturity Model , which were later enhanced by the Data
Management Maturity (DMM) model from ISACA (note that the DMM was a paid
resource that has since been retired).

Every organization has limited time, funding, and people, so it must be selective about where it prioritizes its efforts. To get the most from your investment in analytics, seek to attain at least maturity level 300 or 400, as discussed below. It's
common that different business units in the organization evolve and mature at different
rates, so be conscious of the organizational state as well as progress for key business
units.

Note

Organizational adoption maturity is a long journey. It takes time, effort, and planning to progress to the higher levels.

Maturity level 100 – Initial


Level 100 is referred to as initial or performed. It's the starting point for data-related investments that are new, undocumented, and without any process discipline.

Common characteristics of maturity level 100 include:

Pockets of success and experimentation with Fabric exist in one or more areas of
the organization.
Achieving quick wins has been a priority, and solutions have been delivered with
some success.
Organic growth has led to the lack of a coordinated strategy or governance
approach.
Practices are undocumented, with significant reliance on tribal knowledge.
There are few formal processes in place for effective data management.
Risk exists due to a lack of awareness of how data is used throughout the
organization.
The potential for a strategic investment with analytics is acknowledged. However,
there's no clear path forward for purposeful, organization-wide execution.

Maturity level 200 – Repeatable


Level 200 is referred to as repeatable or managed. At this point on the maturity curve,
data management is planned and executed. Defined processes exist, though these
processes might not apply uniformly throughout the organization.

Common characteristics of maturity level 200 include:

Certain analytics content is now critical in importance and/or it's broadly used by
the organization.
There are attempts to document and define repeatable practices. These efforts are
siloed, reactive, and deliver varying levels of success.
There's an over-reliance on individuals having good judgment and adopting
healthy habits that they learned on their own.
Analytics adoption continues to grow organically and produces value. However, it
takes place in an uncontrolled way.
Resources for an internal community are established, such as a Teams channel or
Yammer group.
Initial planning for a consistent analytics governance strategy is underway.
There's recognition that a Center of Excellence (COE) can deliver value.

Maturity level 300 – Defined


Level 300 is referred to as defined. At this point on the maturity curve, a set of standardized data management processes is established and consistently applied across organizational boundaries.

Common characteristics of maturity level 300 include:

Measurable success is achieved for the effective use of analytics.
Progress is made on the standardization of repeatable practices. However, less-than-optimal aspects could still exist due to early uncontrolled growth.
The COE is established. It has clear goals and scope of responsibilities.
The internal community of practice gains traction with the participation of a
growing number of users.
Champions emerge in the internal user community.
Initial investments in training, documentation, and resources (such as template
files) are made.
An initial governance model is in place.
There's an active and engaged executive sponsor.
Roles and responsibilities for all analytics stakeholders are well understood.

Maturity level 400 – Capable


Level 400 is known as capable or measured. At this point on the maturity curve, data is
well-managed across its entire lifecycle.

Common characteristics of maturity level 400 include:

Analytics and business intelligence efforts deliver significant value.
Approved tools are commonly used for delivering critical content throughout the organization.
There's an established and accepted governance model with cooperation from all
key business units.
Training, documentation, and resources are readily available for, and actively used
by, the internal community of users.
Standardized processes are in place for the oversight and monitoring of analytics
usage and practices.
The COE includes representation from all key business units.
A champions network supports the internal community. The champions actively
work with their colleagues as well as the COE.

Maturity level 500 – Efficient


Level 500 is known as efficient or optimizing because at this point on the maturity curve,
the emphasis is now on automation and continuous improvement.

Common characteristics of maturity level 500 include:

The value of analytics solutions is prevalent in the organization. Fabric is widely accepted throughout the organization.
Analytics skillsets are highly valued in the organization, and they're recognized by
leadership.
The internal user community is self-sustaining, with support from the COE. The
community isn't over-reliant on key individuals.
The COE reviews key performance indicators regularly to measure success of
implementation and adoption goals.
Continuous improvement is an ongoing priority.
Use of automation adds value, improves productivity, or reduces risk of error.

Note

The characteristics above are generalized. When considering maturity levels and designing a plan, you'll want to consider each topic or goal independently. In reality, it's probably not possible to reach level 500 maturity for every aspect of Fabric adoption for the entire organization. So, assess maturity levels independently per goal. That way, you can prioritize your efforts where they will deliver the most value. The remainder of the articles in this Fabric adoption series present maturity levels on a per-topic basis.

Individuals—and the organization itself—continually learn, change, and improve. That means there's no formal end to adoption-related efforts. However, it's common that effort is reduced as higher maturity levels are reached.
The remainder of this article introduces the second and third types of adoption: user
adoption and solution adoption.

Note

The remaining articles in this series focus primarily on organizational adoption.

User adoption stages


User adoption measures the extent to which content consumers and self-service content
creators are actively and effectively using analytics tools such as Fabric. Usage statistics
alone don't indicate successful user adoption. User adoption is also concerned with
individual user behaviors and practices. The aim is to ensure users engage with
solutions, tools, and processes in the correct way and to their fullest extent.

User adoption encompasses how consumers view content, as well as how self-service
creators generate content for others to consume.

User adoption occurs on an individual user basis, but it's measured and analyzed in the
aggregate. Individual users progress through the four stages of user adoption at their
own pace. An individual who adopts a new technology will take some time to achieve
proficiency. Some users will be eager; others will be reluctant to learn yet another tool,
regardless of the promised productivity improvements. Advancing through the user
adoption stages involves time and effort, and it involves behavioral changes to become
aligned with organizational adoption objectives. The extent to which the organization
supports users advancing through the user adoption stages has a direct correlation to
the organizational-level adoption maturity.

User adoption stage 1 – Awareness


Common characteristics of stage 1 user adoption include:

An individual has heard of, or been initially exposed to, analytics in some way.
An individual might have access to a tool, such as Fabric, but isn't yet actively using
it.

User adoption stage 2 – Understanding


Common characteristics of stage 2 user adoption include:
An individual develops understanding of the benefits of analytics and how it can
support decision-making.
An individual shows interest and starts to use analytics tools.

User adoption stage 3 – Momentum


Common characteristics of stage 3 user adoption include:

An individual actively gains analytics skills through formal training, self-directed learning, or experimentation.
An individual gains basic competency by using or creating analytics relevant to their role.

User adoption stage 4 – Proficiency


Common characteristics of stage 4 user adoption include:

An individual uses analytics regularly.
An individual understands how to use analytics tools in the way they were intended, as relevant for their role.
An individual modifies their behavior and activities to align with organizational
governance processes.
An individual's willingness to support organizational processes and change efforts
is growing over time, and they become an advocate for analytics in the
organization.
An individual makes the effort to continually improve their skills and stay current
with new product capabilities and features.

It's easy to underestimate the effort it takes to progress from stage 2 (understanding) to
stage 4 (proficiency). Typically, it takes the longest time to progress from stage 3
(momentum) to stage 4 (proficiency).

Important

By the time a user reaches the momentum and proficiency stages, the organization
needs to be ready to support them in their efforts. You can consider some proactive
efforts to encourage users to progress through stages. For more information, see
the community of practice and the user support articles.

Solution adoption phases


Solution adoption is concerned with measuring the impact of content that's been deployed. It's also concerned with the level of value solutions provide. Solution adoption is evaluated at the scope of one set of requirements, like a set of reports, a lakehouse, or a single Power BI app.

Note

In this series of articles, content is synonymous with solution.

As a solution progresses to phase 3 or 4, expectations to operationalize the solution are higher.

Tip

The importance of scope on expectations for governance is described in the content delivery scope article. That concept is closely related to this topic, but this article approaches it from a different angle. It considers a solution that's already operationalized and distributed to many users. That doesn't immediately equate to phase 4 solution adoption, because the concept of solution adoption focuses on how much value the content delivers.

Solution phase 1 – Exploration


Common characteristics of phase 1 solution adoption include:

Exploration and experimentation are the main approaches to testing out new
ideas. Exploration of new ideas can occur through informal self-service efforts, or
through a formal proof of concept (POC), which is purposely narrow in scope. The
goal is to confirm requirements, validate assumptions, address unknowns, and
mitigate risks.
A small group of users tests the proof of concept solution and provides useful feedback.
For simplicity, all exploration—and initial feedback—could occur within local user
tools (such as Power BI Desktop or Excel) or within a single Fabric workspace.

Solution phase 2 – Functional


Common characteristics of phase 2 solution adoption include:
The solution is functional and meets the basic set of user requirements. There are
likely plans to iterate on improvements and enhancements.
The solution is deployed to the Fabric portal.
All necessary supporting components are in place (for example, a gateway to
support scheduled data refresh).
Target users are aware of the solution and show interest in using it. Potentially, it
could be a limited preview release, and might not yet be ready to promote to a
production workspace.

Solution phase 3 – Valuable


Common characteristics of phase 3 solution adoption include:

Target users find the solution to be valuable and experience tangible benefits.
The solution is promoted to a production workspace that's managed, secured, and
audited.
Validations and testing occur to ensure data quality, accurate presentation,
accessibility, and acceptable performance.
Content is endorsed, when appropriate.
Usage metrics for the solution are actively monitored (one way to gather such metrics is sketched after this list).
User feedback loops are in place to facilitate suggestions and improvements that
can contribute to future releases.
Solution documentation is generated to support the needs of information consumers (such as the data sources used or how metrics are calculated). The documentation also helps future content creators (for example, by recording planned enhancements or future maintenance needs).
Ownership and subject matter experts for the content are clear.
Report branding and theming are in place and in line with governance guidelines.
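
One hedged way to act on the usage-metrics characteristic above is to pull activity events from the Power BI admin REST API and count report views. The sketch below is a minimal illustration under stated assumptions, not an official monitoring solution: it assumes you already hold an Azure AD access token with admin API permissions (acquisition not shown), and it counts only the ViewReport activity.

import datetime as dt
import requests

TOKEN = "<access-token>"  # assumption: a token with Power BI admin API permissions

# The activity events API returns events for one UTC day per request.
day = dt.date.today() - dt.timedelta(days=1)
url = (
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
    f"?startDateTime='{day}T00:00:00'&endDateTime='{day}T23:59:59'"
)
headers = {"Authorization": f"Bearer {TOKEN}"}

view_counts: dict[str, int] = {}
while url:
    page = requests.get(url, headers=headers).json()
    for event in page.get("activityEventEntities", []):
        if event.get("Activity") == "ViewReport":
            name = event.get("ReportName", "unknown")
            view_counts[name] = view_counts.get(name, 0) + 1
    url = page.get("continuationUri")  # None once the last page is reached

for report, views in sorted(view_counts.items(), key=lambda kv: -kv[1]):
    print(f"{report}: {views} views")

A production monitoring process would loop over days and persist the counts so that trends, including sudden drops in use, become visible over time.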

Solution phase 4 – Essential


Common characteristics of phase 4 solution adoption include:

Target users actively and routinely use the solution, and it's considered essential
for decision-making purposes.
The solution resides in a production workspace well separated from development
and test content. Change management and release management are carefully
controlled due to the impact of changes.
A subset of users regularly provides feedback to ensure the solution continues to
meet evolving requirements.
Expectations for the success of the solution are clear and are measured.
Expectations for support of the solution are clear, especially if there are service
level agreements.
The solution aligns with organizational governance guidelines and practices.
Most content is certified due to its critical nature.
Formal user acceptance testing for new changes might occur, particularly for IT-
managed content.

Next steps
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
organizational data culture and its impact on adoption efforts.
Microsoft Fabric adoption roadmap:
Data culture
Article • 11/14/2023

Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

Building a data culture is closely related to adopting analytics, and it's often a key aspect of an organization's digital transformation. The term data culture can be defined in different ways by different organizations. In this series of articles, data culture means a set of behaviors and norms in an organization that encourage regular, informed, data-driven decision-making:

By more stakeholders throughout more areas of the organization.
Based on analytics, not opinion.
In an effective, efficient way that's based on best practices approved by the Center
of Excellence (COE).
Based on trusted data.
That reduces reliance on undocumented tribal knowledge.
That reduces reliance on hunches and gut decisions.

Important

Think of data culture as what you do, not what you say. Your data culture is not a
set of rules (that's governance). So, data culture is a somewhat abstract concept. It's
the behaviors and norms that are allowed, rewarded, and encouraged—or those
that are disallowed and discouraged. Bear in mind that a healthy data culture
motivates employees at all levels of the organization to generate and distribute
actionable knowledge.

Within an organization, certain business units or teams are likely to have their own
behaviors and norms for getting things done. The specific ways to achieve data culture
objectives can vary across organizational boundaries. What's important is that they
should all align with the organizational data culture objectives. You can think of this
structure as aligned autonomy.
The following circular diagram conveys the interrelated aspects that influence your data
culture:

The diagram depicts the somewhat ambiguous relationships among the following items:

Data culture is the outer circle. All topics within it contribute to the state of the
data culture.
Organizational adoption (including the implementation aspects of mentoring and
user enablement, user support, community of practice, governance, and system
oversight) is the inner circle. All topics are major contributors to the data culture.
Executive support and the Center of Excellence are drivers for the success of
organizational adoption.
Data literacy, data democratization, and data discovery are data culture aspects
that are heavily influenced by organizational adoption.
Content ownership and management, and content delivery scope, are closely
related to data democratization.

The elements of the diagram are discussed throughout this series of articles.

Data culture vision


The concept of data culture can be difficult to define and measure. Even though it's
challenging to articulate data culture in a way that's meaningful, actionable, and
measurable, you need to have a well-understood definition of what a healthy data
culture means to your organization. This vision of a healthy data culture should:

Originate from the executive level.
Align with organizational objectives.
Directly influence your adoption strategy.
Serve as the high-level guiding principles for enacting governance policies and
guidelines.

Data culture outcomes aren't specifically mandated. Rather, the state of the data culture is the result of the governance rules as they're enforced, or of the lack of such rules. Leaders at all levels need to actively demonstrate through their
actions what's important to them, including how they praise, recognize, and reward staff
members who take initiative.

 Tip

If you can take for granted that your efforts to develop a data solution (such as a
semantic model—previously known as a dataset, a lakehouse, or a report) will be
valued and appreciated, that's an excellent indicator of a healthy data culture.
Sometimes, however, it depends on what your immediate manager values most.

The initial motivation for establishing a data culture often comes from a specific
strategic business problem or initiative. It might be:

A reactive change, such as responding to new agile competition.
A proactive change, such as starting a new line of business or expanding into new markets to seize a "green field" opportunity. Being data driven from the beginning can be relatively easier when there are fewer constraints and complications, compared with an established organization.
Driven by external changes, such as pressure to eliminate inefficiencies and
redundancies during an economic downturn.

In each of these situations, there's often a specific area where the data culture takes
root. The specific area could be a scope of effort that's smaller than the entire
organization, even if it's still significant. After necessary changes are made at this smaller
scope, they can be incrementally replicated and adapted for the rest of the organization.

Although technology can help advance the goals of a data culture, implementing
specific tools or features isn't the objective. This series of articles covers a lot of topics
that contribute to adoption of a healthy data culture. The remainder of this article
addresses three essential aspects of data culture: data discovery, data democratization,
and data literacy.

Data discovery
A successful data culture depends on users working with the right data in their day-to-
day activities. To achieve this goal, users need to find and access data sources, reports,
and other items.

Data discovery is the ability to effectively locate relevant data assets across the
organization. Primarily, data discovery is concerned with improving awareness that data
exists, which can be particularly challenging when data is siloed in departmental
systems.

Data discovery is a slightly different concept from search, because:

Data discovery allows users to see metadata for an item, like the name of a
semantic model, even if they don't currently have access to it. After a user is aware
of its existence, that user can go through the standard process to request access to
the item.
Search allows users to locate an existing item when they already have security
access to the item.

 Tip

It's important to have a clear and simple process so users can request access to
data. Knowing that data exists—but being unable to access it within the guidelines
and processes that the domain owner has established—can be a source of
frustration for users. It can force them to use inefficient workarounds instead of
requesting access through the proper channels.

Data discovery contributes to adoption efforts and the implementation of governance practices by:
Encouraging the use of trusted high-quality data sources.
Encouraging users to take advantage of existing investments in available data
assets.
Promoting the use and enrichment of existing data items (such as a lakehouse,
data warehouse, data pipeline, dataflow, or semantic model) or reporting items
(such as reports, dashboards, or metrics).
Helping people understand who owns and manages data assets.
Establishing connections between consumers, creators, and owners.

The OneLake data hub and the use of endorsements are key ways to promote data
discovery in your organization.

Furthermore, data catalog solutions are extremely valuable tools for data discovery.
They can record metadata tags and descriptions to provide deeper context and
meaning. For example, Microsoft Purview can scan and catalog items from a Fabric
tenant (as well as many other sources).
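
To make endorsement-driven discovery concrete, here's a minimal, hedged sketch that inventories endorsed items in one workspace by using the Power BI admin scanner API (the same metadata-scanning mechanism that catalog tools build on). The access token and workspace ID are placeholders; the sketch assumes the token carries the required tenant-level admin permissions.

import time
import requests

TOKEN = "<access-token>"           # assumption: token with tenant-level admin read permissions
WORKSPACE_ID = "<workspace-guid>"  # hypothetical workspace to inventory

BASE = "https://api.powerbi.com/v1.0/myorg/admin/workspaces"
headers = {"Authorization": f"Bearer {TOKEN}"}

# 1. Trigger an asynchronous metadata scan of the workspace.
scan = requests.post(
    f"{BASE}/getInfo?lineage=True&datasourceDetails=True",
    headers=headers,
    json={"workspaces": [WORKSPACE_ID]},
).json()

# 2. Poll until the scan completes.
while requests.get(f"{BASE}/scanStatus/{scan['id']}", headers=headers).json()["status"] != "Succeeded":
    time.sleep(5)

# 3. Read the result and print every item that carries an endorsement.
result = requests.get(f"{BASE}/scanResult/{scan['id']}", headers=headers).json()
for workspace in result.get("workspaces", []):
    for item_type in ("reports", "datasets", "dataflows"):
        for item in workspace.get(item_type, []):
            endorsement = item.get("endorsementDetails", {}).get("endorsement")
            if endorsement:  # "Promoted" or "Certified"
                print(f"{item_type[:-1]} '{item.get('name')}': {endorsement}")

Catalog tools such as Microsoft Purview automate this kind of scan across the entire tenant; the point here is only that endorsement metadata is programmatically discoverable, which makes it possible to report on how widely endorsement is actually used.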

Questions to ask about data discovery

Use questions like those found below to assess data discovery.

Is there a data hub where business users can search for data?
Is there a metadata catalog that describes definitions and data locations?
Are high-quality data sources endorsed by certifying or promoting them?
To what extent do redundant data sources exist because people can't find the data they need?
What roles are expected to create data items? What roles are expected to create reports or perform ad hoc analysis?
Can end users find and use existing reports, or do they insist on data exports to
create their own?
Do end users know which reports to use to address specific business questions or
find specific data?
Are people using the appropriate data sources and tools, or resisting them in favor
of legacy ones?
Do analysts understand how to enrich existing certified semantic models with new
data—for example, by using a Power BI composite model?
How consistent are data items in their quality, completeness, and naming
conventions?
Can data item owners follow data lineage to perform impact analysis of data
items?

Maturity levels of data discovery

The following maturity levels can help you assess your current state of data discovery.

100: Initial
• Data is fragmented and disorganized, with no clear structures or processes to find it.
• Users struggle to find and use data they need for their tasks.

200: Repeatable
• Scattered or organic efforts to organize and document data are underway, but only in certain teams or departments.
• Content is occasionally endorsed, but these endorsements aren't defined and the process isn't managed. Data remains siloed and fragmented, and it's difficult to access.

300: Defined
• A central repository, like the OneLake data hub, is used to make data easier to find for people who need it.
• An explicit process is in place to endorse quality data and content.
• Basic documentation includes catalog data, definitions, and calculations, as well as where to find them.

400: Capable
• Structured, consistent processes guide users on how to endorse, document, and find data from a central hub. Data silos are the exception instead of the rule.
• Quality data assets are consistently endorsed and easily identified.
• Comprehensive data dictionaries are maintained and improve data discovery.

500: Efficient
• Data and metadata are systematically organized and documented with a full view of the data lineage.
• Quality assets are endorsed and easily identified.
• Cataloging tools, like Microsoft Purview, are used to make data discoverable for both use and governance.

Data democratization
Data democratization refers to putting data into the hands of more users who are
responsible for solving business problems. It's about enabling more users to make
better data-driven decisions.

Note

The concept of data democratization doesn't imply a lack of security or a lack of justification based on job role. As part of a healthy data culture, data democratization helps reduce shadow IT by providing semantic models that:

Are secured, governed, and well managed.
Meet business needs in cost-effective and timely ways.

Your organization's position on data democratization will have a wide-reaching impact on adoption and governance-related efforts.

Warning

If access to data or the ability to perform analytics is limited to a select number of individuals in the organization, that's typically a warning sign, because the ability to work with data is a key characteristic of a healthy data culture.

Questions to ask about data democratization

Use questions like those found below to assess data democratization.

Are data and analytics readily accessible, or are they restricted to limited roles and individuals?
Is an effective process in place for people to request access to new data and tools?
Is data readily shared between teams and business units, or is it siloed and closely
guarded?
Who is permitted to have Power BI Desktop installed?
Who is permitted to have Power BI Pro or Power BI Premium Per User (PPU)
licenses?
Who is permitted to create assets in Fabric workspaces?
What's the desired level of self-service analytics and business intelligence (BI) user
enablement? How does this level vary depending on business unit or job role?
What's the desired balance between enterprise and self-service analytics, and BI?
What data sources are strongly preferred for what topics and business domains?
What's the allowed use of unsanctioned data sources?
Who can manage content? Is this decision different for data versus reports? Is the
decision different for enterprise BI users versus decentralized users? Who can own
and manage self-service BI content?
Who can consume content? Is this decision different for external partners,
customers, or suppliers?

Maturity levels of data democratization

The following maturity levels can help you assess your current state of data
democratization.

100: Initial
• Data and analytics are limited to a small number of roles, who gatekeep access to others.
• Business users must request access to data or tools to complete tasks. They struggle with delays or bottlenecks.
• Self-service initiatives are taking place with some success in various areas of the organization. These activities are occurring in a somewhat chaotic manner, with few formal processes and no strategic plan. There's a lack of oversight and visibility into these self-service activities. The success or failure of each solution isn't well understood.
• The enterprise data team can't keep up with the needs of the business. A significant backlog of requests exists for this team.

200: Repeatable
• There are limited efforts underway to expand access to data and tools.
• Multiple teams have had measurable success with self-service solutions. People in the organization are starting to pay attention.
• Investments are being made to identify the ideal balance of enterprise and self-service solutions.

300: Defined
• Many people have access to the data and tools they need, although not all users are equally enabled or held accountable for the content they create.
• Effective self-service data practices are incrementally and purposely replicated throughout more areas of the organization.

400: Capable
• Healthy partnerships exist among enterprise and self-service solution creators. Clear, realistic user accountability and policies mitigate the risk of self-service analytics and BI.
• Clear and consistent processes are in place for users to request access to data and tools.
• Individuals who take initiative in building valuable solutions are recognized and rewarded.

500: Efficient
• User accountability and effective governance give central teams confidence in what users do with data.
• Automated, monitored processes enable people to easily request access to data and tools. Anyone with the need or interest to use data can follow these processes to perform analytics.

Data literacy
Data literacy refers to the ability to interpret, create, and communicate with data and
analytics accurately and effectively.

Training efforts, as described in the mentoring and user enablement article, often focus
on how to use the technology itself. Technology skills are important to producing high-
quality solutions, but it's also important to consider how to purposely advance data
literacy throughout the organization. Put another way, successful adoption takes a lot
more than merely providing software and licenses to users.

How you go about improving data literacy in your organization depends on many
factors, such as current user skillsets, complexity of the data, and the types of analytics
that are required. You might choose to focus on these types of activities related to data
literacy:

Interpreting charts and graphs
Assessing the validity of data
Performing root cause analysis
Discerning correlation from causation
Understanding how context and outliers affect how results are presented
Using storytelling to help consumers quickly understand and act

Tip

If you're struggling to get data culture or governance efforts approved, focusing on tangible benefits that you can achieve with data discovery ("find the data"), data democratization ("use the data"), or data literacy ("understand the data") can help. It can also be helpful to focus on specific problems that you can solve or mitigate through data culture advancements.

Getting the right stakeholders to agree on the problem is usually the first step. Then, it's a matter of getting the stakeholders to agree on the strategic approach to a solution, along with the solution details.

Questions to ask about data literacy

Use questions like those found below to assess data literacy.

Does a common analytical vocabulary exist in the organization to talk about data
and BI solutions? Alternatively, are definitions fragmented and different across
silos?
How comfortable are people with making decisions based on data and evidence
compared to intuition and subjective experience?
When people who hold an opinion are confronted with conflicting evidence, how
do they react? Do they critically appraise the data, or do they dismiss it? Can they
alter their opinion, or do they become entrenched and resistant?
Do training programs exist to support people in learning about data and analytical
tools?
Is there significant resistance to visual analytics and interactive reporting in favor of
static spreadsheets?
Are people open to new analytical methods and tools to potentially address their
business questions more effectively? Alternatively, do they prefer to keep using
existing methods and tools to save time and energy?
Are there methods or programs to assess or improve data literacy in the
organization? Does leadership have an accurate understanding of the data literacy
levels?
Are there roles, teams, or departments where data literacy is particularly strong or
weak?

Maturity levels of data literacy

The following maturity levels can help you assess your current state of data literacy.

100: Initial
• Decisions are frequently made based on intuition and subjective experience. When confronted with data that challenges existing opinions, data is often dismissed.
• Individuals have low confidence to use and understand data in decision-making processes or discussions.
• Report consumers have a strong preference for static tables. These consumers dismiss interactive visualizations or sophisticated analytical methods as "fancy" or unnecessary.

200: Repeatable
• Some teams and individuals inconsistently incorporate data into their decision making. There are clear cases where misinterpretation of data has led to flawed decisions or wrong conclusions.
• There's some resistance when data challenges pre-existing beliefs.
• Some people are skeptical of interactive visualizations and sophisticated analytical methods, though their use is increasing.

300: Defined
• The majority of teams and individuals understand data relevant to their business area and use it implicitly to inform decisions.
• When data challenges pre-existing beliefs, it produces critical discussions and sometimes motivates change.
• Visualizations and advanced analytics are more widely accepted, though not always used effectively.

400: Capable
• Data literacy is recognized explicitly as a necessary skill in the organization. Some training programs address data literacy. Specific efforts are taken to help departments, teams, or individuals that have particularly weak data literacy.
• Most individuals can effectively use and apply data to make objectively better decisions and take actions.
• Visual and analytical best practices are documented and followed in strategically important data solutions.

500: Efficient
• Data literacy, critical thinking, and continuous learning are strategic skills and values in the organization. Effective programs monitor progress to improve data literacy in the organization.
• Decision making is driven by data across the organization. Decision intelligence or prescriptive analytics are used to recommend key decisions and actions.
• Visual and analytical best practices are seen as essential to generate business value with data.

Considerations and key actions

Checklist - Here are some considerations and key actions that you can take to strengthen your data culture.

• Align your data culture goals and strategy: Give serious consideration to the type of data culture that you want to cultivate. Ideally, it's more from a position of user empowerment than a position of command and control.
• Understand your current state: Talk to stakeholders in different business units to understand which analytics practices are currently working well and which practices aren't working well for data-driven decision-making. Conduct a series of workshops to understand the current state and to formulate the desired future state.
• Speak with stakeholders: Talk to stakeholders in IT, BI, and the COE to understand which governance constraints need consideration. These conversations can present an opportunity to educate teams on topics like security and infrastructure. You can also use the opportunity to educate stakeholders on the features and capabilities included in Fabric.
• Verify executive sponsorship: Verify the level of executive sponsorship and support that you have in place to advance data culture goals.
• Make purposeful decisions about your data strategy: Decide what the ideal balance of business-led self-service, managed self-service, and enterprise data, analytics, and BI use cases should be for the key business units in the organization (covered in the content ownership and management article). Also consider how the data strategy relates to the extent of published content for personal, team, departmental, and enterprise analytics and BI (described in the content delivery scope article). Define your high-level goals and priorities for this strategic planning. Determine how these decisions affect your tactical planning.
• Create a tactical plan: Begin creating a tactical plan for immediate, short-term, and long-term action items. Identify business groups and problems that represent "quick wins" and can make a visible difference.
• Create goals and metrics: Determine how you'll measure effectiveness for your data culture initiatives. Create key performance indicators (KPIs) or objectives and key results (OKRs) to validate the results of your efforts. One way to compute such a KPI from usage data is sketched after this list.
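
As a hedged illustration of the last checklist item, the following sketch computes one simple adoption KPI, the share of licensed users who were active in each month, from an exported activity log. The CSV file, its column names, and the licensed-user count are all hypothetical; substitute whatever your own usage export and license inventory provide.

import pandas as pd

# Hypothetical export of user activity: one row per event,
# with "user_id" and "timestamp" columns.
events = pd.read_csv("activity_log.csv", parse_dates=["timestamp"])

LICENSED_USERS = 1200  # assumption: total licensed users, from your license inventory

# Count distinct active users per calendar month.
events["month"] = events["timestamp"].dt.to_period("M")
monthly_active = events.groupby("month")["user_id"].nunique()

# KPI: share of licensed users active in each month.
kpi = (monthly_active / LICENSED_USERS).round(3)
print(kpi.tail(6))  # the last six months of the adoption trend

The trend matters more than any single month's value: a flat or declining share can flag stalled user adoption long before it surfaces in feedback.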

Questions to ask about data culture

Use questions like those found below to assess data culture.

Is data regarded as a strategic asset in the organization?
Is there a vision of a healthy data culture that originates from executive leadership and aligns with organizational objectives?
Does the data culture guide creation of governance policies and guidelines?
Are organizational data sources trusted by content creators and consumers?
When justifying an opinion, decision, or choice, do people use data as evidence?
Is knowledge about analytics and data use documented or is there a reliance on
undocumented tribal knowledge?
Are efforts to develop a data solution valued and appreciated by the user
community?

Maturity levels of data culture

The following maturity levels will help you assess the current state of your data culture.

100: Initial
• Enterprise data teams can't keep up with the needs of the business. A significant backlog of requests exists.
• Self-service data and BI initiatives are taking place with some success in various areas of the organization. These activities occur in a somewhat chaotic manner, with few formal processes and no strategic plan.
• There's a lack of oversight and visibility into self-service BI activities. The successes or failures of data and BI solutions aren't well understood.

200: Repeatable
• Multiple teams have had measurable successes with self-service solutions. People in the organization are starting to pay attention.
• Investments are being made to identify the ideal balance of enterprise and self-service data, analytics, and BI.

300: Defined
• Specific goals are established for advancing the data culture. These goals are implemented incrementally.
• Learnings from what works in individual business units are shared.
• Effective self-service practices are incrementally and purposely replicated throughout more areas of the organization.

400: Capable
• The data culture goals to employ informed decision-making are aligned with organizational objectives. They're actively supported by the executive sponsor and the COE, and they have a direct impact on adoption strategies.
• A healthy and productive partnership exists between the executive sponsor, COE, business units, and IT. The teams are working towards shared goals.
• Individuals who take initiative in building valuable data solutions are recognized and rewarded.

500: Efficient
• The business value of data, analytics, and BI solutions is regularly evaluated and measured. KPIs or OKRs are used to track data culture goals and the results of these efforts.
• Feedback loops are in place, and they encourage ongoing data culture improvements.
• Continual improvement of organizational adoption, user adoption, and solution adoption is a top priority.

Next steps
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
importance of an executive sponsor.
Microsoft Fabric adoption roadmap:
Executive sponsorship
Article • 11/14/2023

Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

When planning to advance the data culture and the state of organizational adoption for
data and analytics, it's crucial to have executive support. An executive sponsor is
imperative because analytics adoption is far more than just a technology project.

Although some successes can be achieved by a few determined individual contributors, the organization is in a significantly better position when a senior leader is engaged, supportive, informed, and available to assist with the following activities:

Formulating a strategic vision, goals, and priorities for data, analytics, and business
intelligence (BI).
Providing top-down guidance and reinforcement for the data strategy by regularly
promoting, motivating, and investing in strategic and tactical planning.
Leading by example by actively using data and analytics in a way that's consistent
with data culture and adoption goals.
Allocating staffing and prioritizing resources.
Approving funding (for example, Fabric licenses).
Removing barriers to enable action.
Communicating announcements that are of critical importance, to help them gain
traction.
Decision-making, particularly for strategic-level governance decisions.
Dispute resolution (for escalated issues that can't be resolved by operational or
tactical personnel).
Supporting organizational change initiatives (for example, creating or expanding
the Center of Excellence).

Important

The ideal executive sponsor has sufficient credibility, influence, and authority
throughout the organization. They also have an invested stake in data efforts and
the data strategy. When the BI strategy is successful, the ideal executive sponsor
also experiences success in their role.

Identifying an executive sponsor


There are multiple ways to identify an executive sponsor.

Top-down pattern
An executive sponsor might be selected by a more senior executive. For example, the
Chief Executive Officer (CEO) could hire a Chief Data Officer (CDO) or Chief Analytics
Officer (CAO) to explicitly advance the organization's data culture objectives or lead
digital transformation efforts. The CDO or CAO then becomes the ideal candidate to
serve as the executive sponsor for Fabric (or for data and analytics in general).

Here's another example: The CEO might empower an existing executive, such as the
Chief Financial Officer (CFO), because they have a good track record leading data and
analytics in their organization. As the new executive sponsor, the CFO could then lead
efforts to replicate the finance team's success to other areas of the organization.

Note

Having an executive sponsor at the C-level is an excellent leading indicator. It indicates that the organization recognizes the importance of data as a strategic asset and is advancing its data culture in a positive direction.

Bottom-up pattern
Alternatively, a candidate for the executive sponsor role could emerge due to the
success they've experienced with creating data solutions. For example, a business unit
within the organization, such as Finance, has organically achieved great success with
their use of data and analytics. Essentially, they've successfully formed their own data
culture on a smaller scale. A junior-level leader who hasn't reached the executive level
(such as a director) might then grow into the executive sponsor role by sharing
successes with other business units across the organization.

The bottom-up approach is more likely to occur in smaller organizations. It might be because the return on investment and strategic imperative of a data culture (or digital transformation) isn't yet an urgent priority for C-level executives.

The success of a leader using the bottom-up pattern depends on being recognized by senior leadership.

With a bottom-up approach, the sponsor might be able to make some progress, but
they won't have formal authority over other business units. Without clear authority, it's
only a matter of time until challenges occur that are beyond their level of authority. For
this reason, the top-down approach has a higher probability of success. However, initial
successes with a bottom-up approach can convince leadership to increase their level of
sponsorship, which might start a healthy competition across other business units in the
adoption of data and BI.

Considerations and key actions

Checklist - Here's a list of considerations and key actions you can take to establish or strengthen executive support for analytics.

• Identify an executive sponsor with broad authority: Find someone in a sufficient position of influence and authority (across organizational boundaries) who understands the value and impact of BI. It's important that the individual has a vested interest in the success of analytics in the organization.
• Involve your executive sponsor: Consistently involve your executive sponsor in all strategic-level governance decisions involving data management, BI, and analytics. Also involve your sponsor in all governance and data culture initiatives to ensure alignment and consensus on goals and priorities.
• Establish responsibilities and expectations: Formalize the arrangement with documented responsibilities for the executive sponsor role. Ensure that there's no uncertainty about expectations and time commitments.
• Identify a backup for the sponsor: Consider naming a backup executive sponsor. The backup can attend meetings in the sponsor's absence and make time-sensitive decisions when necessary.
• Identify business advocates: Find influential advocates in each business unit. Determine how their cooperation and involvement can help you to accomplish your objectives. Consider involving advocates from various levels in the organization chart.

Questions to ask
Use questions like those found below to assess executive support.

Has an executive sponsor of Fabric or other analytical tools been identified?
If so, who is the executive sponsor?
If not, is there an informal executive sponsor? Who is the closest to this role? Can
you define the business impact of having no executive sponsor?
To what extent is the strategic importance of Fabric and analytics understood and
endorsed by executives?
Are executives using Fabric and the results of data and BI initiatives? What's the
sentiment among executives for the effectiveness of data solutions?
Is the executive sponsor leading by example in the effective use of data and BI
tools?
Does the executive sponsor provide the appropriate resources for data initiatives?
Is the executive sponsor involved in dispute resolution and change management?
Does the executive sponsor engage with the user community?
Does the executive sponsor have sufficient credibility and healthy relationships
across organizational boundaries (particularly the business and IT)?

Maturity levels

The following maturity levels will help you assess your current state of executive
support.

100: Initial
• There might be awareness from at least one executive about the strategic importance of how analytics can advance the organization's data culture goals. However, neither a sponsor nor an executive-level decision-maker is identified.

200: Repeatable
• Informal executive support exists for analytics through informal channels and relationships.

300: Defined
• An executive sponsor is identified. Expectations are clear for the role.

400: Capable
• An executive sponsor is well established, with sufficient authority across organizational boundaries.
• A healthy and productive partnership exists between the executive sponsor, COE, business units, and IT. The teams are working towards shared data culture goals.

500: Efficient
• The executive sponsor is highly engaged. They're a key driver for advancing the organization's data culture vision.
• The executive sponsor is involved with ongoing organizational adoption improvements. KPIs (key performance indicators) or OKRs (objectives and key results) are used to track data culture goals and the results of data, analytics, and BI efforts.

Next steps
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
importance of business alignment with organizational goals.
Microsoft Fabric adoption roadmap:
Business alignment
Article • 11/14/2023

Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

Business intelligence (BI) activities and solutions have the best potential to deliver value
when they're well aligned to organizational business goals. In general, effective business
alignment helps to improve adoption. With effective business alignment, the data
culture and data strategy enable business users to achieve their business objectives.

You can achieve effective business alignment with data activities and solutions by
having:

An understanding of the strategic importance of data and analytics in achieving measurable progress toward business goals.
A shared awareness of the business strategy and key business objectives among
content owners, creators, consumers, and administrators. A common
understanding should be integral to the data culture and decision-making across
the organization.
A clear and unified understanding of the business data needs, and how meeting
these needs helps content creators and content consumers achieve their
objectives.
A governance strategy that effectively balances user enablement with risk
mitigation.
An engaged executive sponsor who provides top-down guidance to regularly
promote, motivate, and support the data strategy and related activities and
solutions.
Productive and solution-oriented discussions between business teams and
technical teams that address business data needs and problems.
Effective and flexible requirements gathering processes to design and plan
solutions.
Structured and consistent processes to validate, deploy, and support solutions.
Structured and sustainable processes to regularly update existing solutions so that
they remain relevant and valuable, despite changes in technology or business
objectives.
Effective business alignment brings significant benefits to an organization, including the following.

Improved adoption, because content consumers are more likely to use solutions
that enable them to achieve their objectives.
Increased business return on investment (ROI) for analytics initiatives and
solutions, because these initiatives and solutions will be more likely to directly
advance progress toward business goals.
Less effort and fewer resources spent on change management and changing
business requirements, due to an improved understanding of business data needs.

Achieve business alignment

There are multiple ways to achieve business alignment of data activities and initiatives.

Communication alignment
Effective and consistent communication is critical to aligning processes. Consider the
following actions and activities when you want to improve communication for successful
business alignment.

Make a communication plan for central teams and the user community to follow.
Plan regular alignment meetings between different teams and groups. For
example, central teams can plan regular planning and priority alignments with
business units. Another example is when central teams schedule regular meetings
to mentor and enable self-service users.
Set up a centralized portal to consolidate communication and documentation for
user communities. For strategic solutions and initiatives, consider using a
communication hub.
Limit complex business and technical terminology in cross-functional
communications.
Strive for concise communication and documentation that's formatted and well
organized. That way, people can easily find the information that they need.
Consider maintaining a visible roadmap that shows the planned solutions and
activities relevant to the user community in the next quarter.
Be transparent when communicating policies, decisions, and changes.
Create a process for people to provide feedback, and review that feedback
regularly as part of regular planning activities.

Important

To achieve effective business alignment, you should make it a priority to identify and dismantle any communication barriers between business teams and technical teams.

Strategic alignment
Your business strategy should be well aligned with your data and BI strategy. To
incrementally achieve this alignment, we recommend that you commit to following
structured, iterative planning processes.

Strategic planning: Define data, analytics, and BI goals and priorities based on the
business strategy and current state of adoption and implementation. Typically,
strategic planning occurs every 12-18 months to iteratively define high-level
desired outcomes. You should synchronize strategic planning with key business
planning processes.
Tactical planning: Define objectives, action plans, and a backlog of solutions that
help you to achieve your data and BI goals. Typically, tactical planning occurs
quarterly to iteratively re-evaluate and align the data strategy and activities to the
business strategy. This alignment is informed by business feedback and changes to
business objectives or technology. You should synchronize tactical planning with
key project planning processes.
Solution planning: Design, develop, test, and deploy solutions that support
content creators and consumers in achieving their business objectives. Both
centralized content creators and self-service content creators conduct solution
planning to ensure that the solutions they create are well aligned with business
objectives. You should synchronize solution planning with key adoption and
governance planning processes.

Important

Effective business alignment is a key prerequisite for a successful data strategy.

Governance and compliance alignment

A key aspect of effective business alignment is balancing user enablement and risk mitigation. This balance is an important aspect of your governance strategy, together with other activities related to compliance, security, and privacy. These activities can include:

Transparently document and justify compliance criteria, key governance decisions, and policies so that content creators and consumers know what's expected of them.
Regularly audit and assess activities to identify risk areas or strong deviations from
the desired behaviors.
Provide mechanisms for content owners, content creators, and content consumers
to request clarification or provide feedback about existing policies.

Caution

A governance strategy that's poorly aligned with business objectives can result in more conflicts and compliance risk, because users will often pursue workarounds to complete their tasks.

Executive alignment
Executive leadership plays a key role in defining the business strategy and business
goals. To this end, executive engagement is an important part of achieving top-down
business alignment.

To achieve executive alignment, consider the following key activities.

Work with your executive sponsor to organize short, quarterly executive feedback
sessions about the use of data in the organization. Use this feedback to identify
changes in business objectives, re-assess the data strategy, and inform future
actions to improve business alignment.
Schedule regular alignment meetings with the executive sponsor to promptly
identify any potential changes in the business strategy or data needs.
Deliver monthly executive summaries that highlight relevant information,
including:
Key performance indicators (KPIs) that measure progress toward data, analytics,
and BI goals.
Fabric adoption and implementation milestones.
Technology changes that might impact organizational business goals.

Important

Don't underestimate the importance of the role your executive sponsor has in achieving and maintaining effective business alignment.

Maintain business alignment

Business alignment is a continual process. To maintain business alignment, consider the following factors.

Assign a responsible team: A working team reviews feedback and organizes re-
alignment sessions. This team is responsible for the alignment of planning and
priorities between the business and data strategy.
Create and support a feedback process: Your user community requires the means
to provide feedback. Examples of feedback can include requests to change existing
solutions, or to create new solutions and initiatives. This feedback is essential for
bottom-up business user alignment, and it drives iterative and continuous
improvement cycles.
Measure the success of business alignment: Consider using surveys, sentiment
analysis, and usage metrics to assess the success of business alignment. When
combined with other concise feedback mechanisms, this can provide valuable
input to help define future actions and activities to improve business alignment
and Fabric adoption.
Schedule regular re-alignment sessions: Ensure that data strategic planning and
tactical planning occur alongside relevant business strategy planning (when
business leadership review business goals and objectives).

Note

Because business objectives continually evolve, you should understand that solutions and initiatives will change over time. Don't assume that requirements for data and BI projects are rigid and can't be altered. If you struggle with changing requirements, it might be an indication that your requirements-gathering process is ineffective or inflexible, or that your development workflows don't sufficiently incorporate regular feedback.

Important

To effectively maintain business alignment, it's essential that user feedback be promptly and directly addressed. Regularly review and analyze feedback, and consider how you can integrate it into iterative strategic planning, tactical planning, and solution planning processes.

Questions to ask
Use questions like those found below to assess business alignment.

Can people articulate the goals of the organization and the business objectives of
their team?
To what extent do descriptions of organizational goals align across the
organization? How do they align between the business user community and
leadership community? How do they align between business teams and technical
teams?
Does executive leadership understand the strategic importance of data in
achieving business objectives? Does the user community understand the strategic
importance of data in helping them succeed in their jobs?
Are changes in the business strategy reflected promptly in changes to the data
strategy?
Are changes in business user data needs addressed promptly in data and BI
solutions?
To what extent do data policies support or conflict with existing business processes
and the way that users work?
Do solution requirements focus more on technical features than addressing
business questions? Is there a structured requirements gathering process? Do
content owners and creators interact effectively with stakeholders and content
consumers during requirements gathering?
How are decisions about data or BI investments made? Who makes these
decisions?
How well do people trust existing data and BI solutions? Is there a single version of
truth, or are there regular debates about who has the correct version?
How are data and BI initiatives and strategy communicated across the
organization?

Maturity levels
A business alignment assessment evaluates integration between the business strategy
and data strategy. Specifically, this assessment focuses on whether or not data and BI
initiatives and solutions help business users achieve business strategic objectives.

The following maturity levels will help you assess your current state of business
alignment.

Level State of data and business alignment

100: Initial • Business and data strategies lack formal alignment, which leads to reactive
implementation and misalignment between data teams and business users.
• Misalignment in priorities and planning hinders productive discussions and effectiveness.
• Executive leadership doesn't recognize data as a strategic asset.

200: Repeatable • There are efforts to align data and BI initiatives with specific data needs without
a consistent approach or understanding of their success.
• Alignment discussions focus on immediate or urgent needs and center on features,
solutions, tools, or data, rather than strategic alignment.
• People have a limited understanding of the strategic importance of data in
achieving business objectives.

300: Defined • Data and BI initiatives are prioritized based on their alignment with strategic
business objectives. However, alignment is siloed and typically focuses on local needs.
• Strategic initiatives and changes have a clear, structured involvement of both the
business and data strategic decision makers. Business teams and technical teams
can have productive discussions to meet business and governance needs.

400: Capable • There's a consistent, organization-wide view of how data initiatives and solutions
support business objectives.
• Regular and iterative strategic alignments occur between the business and
technical teams. Changes to the business strategy result in clear actions that are
reflected by changes to the data strategy to better support business needs.
• Business and technical teams have healthy, productive relationships.

500: Efficient • The data strategy and the business strategy are fully integrated. Continuous
improvement processes drive consistent alignment, and they are themselves data driven.
• Business and technical teams have healthy, productive relationships.


Next steps
In the next article in the Microsoft Fabric adoption roadmap series, learn more about
content ownership and management, and its effect on business-led self-service BI,
managed self-service BI, and enterprise BI.
Microsoft Fabric adoption roadmap:
Content ownership and management
Article • 11/14/2023

Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

Note

The Power BI implementation planning usage scenarios explore many concepts
discussed in this article, focusing on the Power BI workload in Microsoft Fabric. The
usage scenario articles include detailed diagrams that you might find helpful to
support your planning and decision making.

There are three primary strategies for how data, analytics, and business intelligence (BI)
content is owned and managed: business-led self-service, managed self-service, and
enterprise. For the purposes of this series of articles, the term content refers to any type
of data item, like a notebook, semantic model (previously known as a dataset), report,
or dashboard.

The organization's data culture is the driver for why, how, and by whom each of these
three content ownership strategies is implemented.

The areas in the above diagram include:

Area Description

Business-led self-service: All content is owned and managed by the creators and subject
matter experts within a business unit. This ownership strategy is also known as a
decentralized or bottom-up strategy.

Managed self-service: The data is owned and managed by a centralized team, whereas
business users take responsibility for reports and dashboards. This ownership strategy is
also known as discipline at the core and flexibility at the edge.

Enterprise: All content is owned and managed by a centralized team such as IT, enterprise
BI, or the Center of Excellence (COE).

It's unlikely that an organization operates exclusively with one content ownership and
management strategy. Depending on your data culture, one strategy might be far more
dominant than the others. The choice of strategy could differ from solution to solution,
or from team to team. In fact, a single team can actively use multiple strategies if it's
both a consumer of enterprise content and a producer of its own self-service content.
The strategy to pursue depends on factors such as:

Requirements for a solution (such as a collection of reports, a Power BI app, or a lakehouse).
User skills.
Ongoing commitment for training and skills growth.
Flexibility required.
Complexity level.
Priorities and leadership commitment level.

The organization's data culture—particularly its position on data democratization—has
considerable influence on the extent to which the three content ownership strategies
are used. While there are common patterns for success, there's no one-size-fits-all
approach. Each organization's governance model and approach to content ownership
and management should reflect the differences in data sources, applications, and
business context.

How content is owned and managed has a significant effect on governance, the extent
of mentoring and user enablement, needs for user support, and the COE operating
model.

As discussed in the governance article, the level of governance and oversight depends
on:

Who owns and manages the content.
The scope of content delivery.
The data subject area and sensitivity level.
The importance of the data, and whether it's used for critical decision making.
In general:

Business-led self-service content is subject to the least stringent governance and
oversight controls. It often includes personal BI and team BI solutions.
Managed self-service content is subject to moderately stringent governance and
oversight controls. It frequently includes team BI and departmental BI solutions.
Enterprise solutions are subject to more rigorous governance controls and
oversight.

As stated in the adoption maturity levels article, organizational adoption measures the
state of data management processes and governance. The choices made for content
ownership and management significantly affect how organizational adoption is
achieved.

Ownership and stewardship


There are many roles related to data management. Roles can be defined in many ways
and can be easily misunderstood. The following table presents possible ways you might
conceptually define these roles:

Role Description

Data steward: Responsible for defining and/or managing acceptable data quality levels as
well as master data management (MDM).

Subject matter expert (SME): Responsible for defining what the data means, what it's used
for, who might access it, and how the data is presented to others. Collaborates with the
domain owner as needed and supports colleagues in their use of data.

Technical owner: Responsible for creating, maintaining, publishing, and securing access to
data and reporting items.

Domain owner: Higher-level decision-maker who collaborates with governance teams on
data management policies, processes, and requirements. Decision-maker for defining
appropriate and inappropriate uses of the data. Participates on the data governance
board, as described in the governance article.

Assigning ownership for a data domain tends to be more straightforward when
managing transactional source systems. In analytics and BI solutions, data is integrated
from multiple domain areas, then transformed and enriched. For downstream analytical
solutions, the topic of ownership becomes more complex.

Note

Be clear about who is responsible for managing data items. It's crucial to ensure a
good experience for content consumers. Specifically, clarity on ownership is helpful
for:

Who to contact with questions.
Feedback.
Enhancement requests.
Support requests.

In the Fabric portal, content owners can set the contact list property for many
types of items. The contact list is also used in security workflows. For example,
when a user is sent a URL to open a Power BI app but they don't have permission,
they will be presented with an option to make a request for access.

Guidelines for being successful with ownership:

Define how ownership and stewardship terminology is used in your organization,
including expectations for these roles.
Set contacts for each workspace and for individual items to communicate
ownership and/or support responsibilities.
Specify between two and four workspace administrators and conduct an audit of
workspace admins regularly, perhaps twice a year (a minimal sketch of such an
audit follows this list). Workspace admins might be directly responsible for
managing workspace content, or it could be that those tasks are assigned to
colleagues who do the hands-on work. In all cases, workspace admins should be
able to easily contact owners of specific content.
Include consistent branding on reports to indicate who produced the content and
who to contact for help. A small image or text label located in the report footer is
valuable, especially when the report is exported from the Fabric portal. A standard
template file can encourage and simplify the consistent use of branding.
Make use of best practices reviews and co-development projects with the COE.
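
To make the periodic workspace admin audit concrete, the following minimal sketch uses the Power BI admin REST API to flag workspaces that fall outside the two-to-four admin guideline. It's illustrative only: the endpoint and paging parameters are standard admin APIs, but token acquisition is omitted, and the AUDIT_MIN and AUDIT_MAX thresholds are assumptions you should adapt to your own policy.

```python
import requests

# Assumption: ACCESS_TOKEN already holds an Azure AD token with admin
# API permissions; acquiring it (for example, with MSAL) is omitted.
# AUDIT_MIN/AUDIT_MAX reflect the "two to four workspace administrators"
# guideline and are hypothetical thresholds.
ACCESS_TOKEN = "<token acquired via MSAL or azure-identity>"
AUDIT_MIN, AUDIT_MAX = 2, 4

BASE = "https://api.powerbi.com/v1.0/myorg/admin"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def audit_workspace_admins():
    """Print workspaces whose admin count is outside the guideline."""
    skip = 0
    while True:
        # The admin groups endpoint pages with $top/$skip and can expand
        # the user list, which includes each user's workspace role.
        resp = requests.get(
            f"{BASE}/groups",
            headers=headers,
            params={"$top": 100, "$skip": skip, "$expand": "users"},
        )
        resp.raise_for_status()
        workspaces = resp.json().get("value", [])
        if not workspaces:
            break
        for ws in workspaces:
            admins = [
                u.get("emailAddress") or u.get("identifier")
                for u in ws.get("users", [])
                if u.get("groupUserAccessRight") == "Admin"
            ]
            if not (AUDIT_MIN <= len(admins) <= AUDIT_MAX):
                print(f"{ws.get('name')}: {len(admins)} admin(s) -> {admins}")
        skip += 100

audit_workspace_admins()
```

A report like this can feed the twice-yearly review without requiring manual inspection of each workspace in the Fabric portal.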

The remainder of this article covers considerations related to the three content
ownership and management strategies.

Business-led self-service
With a business-led self-service approach to data and BI, all content is owned and
managed by creators and subject matter experts. Because responsibility is retained
within a business unit, this strategy is often described as the bottom-up, or decentralized,
approach. Business-led self-service is often a good strategy for personal BI and team BI
solutions.
Important

The concept of business-led self-service isn't the same as shadow IT. In both
scenarios, data and BI content is created, owned, and managed by business users.
However, shadow IT implies that the business unit is circumventing IT and so the
solution is not sanctioned. With business-led self-service BI solutions, the business
unit has full authority to create and manage content. Resources and support from
the COE are available to self-service content creators. It's also expected that the
business unit will comply with all established data governance guidelines and
policies.

Business-led self-service is most suitable when:

Decentralized data management aligns with the organization's data culture, and
the organization is prepared to support these efforts.
Data exploration and freedom to innovate is a high priority.
The business unit wants to have the most involvement and retain the highest level
of control.
The business unit has skilled users capable of—and fully committed to—
supporting solutions through the entire lifecycle. It covers all types of items,
including the data (such as a lakehouse, data warehouse, data pipeline, dataflow,
or semantic model), the visuals (such as reports and dashboards), and Power BI
apps.
The flexibility to respond to changing business conditions and react quickly
outweighs the need for stricter governance and oversight.

Here are some guidelines to help you become successful with business-led self-service
data and BI.

Teach your creators to use the same techniques that IT would use, like shared
semantic models and dataflows. Make use of a well-organized OneLake. Centralize
data to reduce maintenance, improve consistency, and reduce risk.
Focus on providing mentoring, training, resources, and documentation (described
in the Mentoring and user enablement article). The importance of these efforts
can't be overstated. Be prepared for skill levels of self-service content creators to
vary significantly. It's also common for a solution to deliver excellent business value
yet be built in such a way that it won't scale or perform well over time (as historic
data volumes increase). Having the COE available to help when these situations
arise is very valuable.
Provide guidance on the best way to use endorsements. The promoted
endorsement is for content produced by self-service creators. Consider reserving
use of the certified endorsement for enterprise BI content and managed
self-service BI content (described next).
Analyze the activity log to discover situations where the COE could proactively
contact self-service owners to offer helpful information; a sketch of this kind of
analysis follows this list. It's especially useful when a suboptimal usage pattern is
detected. For example, log activity could reveal overuse of individual item sharing
when Power BI app audiences or workspace roles might be a better choice. The
data from the activity log allows the COE to offer support and advice to the
business units. In turn, this information can help increase the quality of solutions,
while allowing the business to retain full ownership and control of their content.
For more information, see Auditing and monitoring.
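
As a hedged illustration of this kind of analysis, the sketch below pulls one day of audit events from the Power BI admin activity events API and counts per-item sharing actions by user. The endpoint and its paging behavior are standard; the "ShareReport" activity name and the alert threshold are assumptions to adapt, since the activities worth counting and the right threshold depend on your tenant and policy.

```python
from collections import Counter
import requests

# Assumption: ACCESS_TOKEN holds an admin-scoped Azure AD token; token
# acquisition is omitted. The activity events API accepts a single UTC
# day per call and pages via a continuation URI.
ACCESS_TOKEN = "<admin-scoped token>"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

url = (
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
    "?startDateTime='2024-01-15T00:00:00'"
    "&endDateTime='2024-01-15T23:59:59'"
)

share_counts = Counter()
while url:
    resp = requests.get(url, headers=headers)
    resp.raise_for_status()
    payload = resp.json()
    for event in payload.get("activityEventEntities", []):
        # "ShareReport" is one sharing-related activity name; your
        # tenant's logs might surface others worth counting too.
        if event.get("Activity") == "ShareReport":
            share_counts[event.get("UserId")] += 1
    url = payload.get("continuationUri")  # None when the day is done

# Hypothetical threshold: flag users who shared items many times in one
# day, since app audiences or workspace roles might fit better.
for user, count in share_counts.most_common():
    if count >= 10:
        print(f"{user} shared items {count} times today")
```

The COE could run a job like this daily and use the output as a conversation starter with content owners, rather than as an enforcement mechanism.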

Managed self-service
Managed self-service BI is a blended approach to data and BI. The data is owned and
managed by a centralized team (such as IT, enterprise BI, or the COE), while
responsibility for reports and dashboards belongs to creators and subject matter experts
within the business units. Managed self-service BI is frequently a good strategy for team
BI and departmental BI solutions.

This approach is often called discipline at the core and flexibility at the edge. That's
because the data architecture is maintained by a single team with an appropriate level
of discipline and rigor. Business units have the flexibility to create reports and
dashboards based on centralized data. This approach allows report creators to be far
more efficient because they can remain focused on delivering value from their data
analysis and visuals.

Managed self-service BI is most suitable when:

Centralized data management aligns with the organization's data culture.
The organization has a team of BI experts who manage the data architecture.
There's value in the reuse of data by many self-service report creators across
organizational boundaries.
Self-service report creators need to produce analytical content at a pace faster
than the centralized team can accommodate.
Different users are responsible for handling data preparation, data modeling, and
report creation.

Here are some guidelines to help you become successful with managed self-service BI.

Teach users to separate model and report development. They can use live
connections to create reports based on existing semantic models. When the
semantic model is decoupled from the report, it promotes data reuse by many
reports and many authors. It also facilitates the separation of duties.
Use dataflows to centralize data preparation logic and to share commonly used
data tables—like date, customer, product, or sales—with many semantic model
creators. Refine the dataflow as much as possible, using friendly column names
and correct data types to reduce the downstream effort required by semantic
model authors, who consume the dataflow as a source. Dataflows are an effective
way to reduce the time involved with data preparation and improve data
consistency across semantic models. The use of dataflows also reduces the number
of data refreshes on source systems, and means that fewer users require direct
access to source systems.
When self-service creators need to augment an existing semantic model with
departmental data, educate them to create composite models. This feature allows
for an ideal balance of self-service enablement while taking advantage of the
investment in data assets that are centrally managed.
Use the certified endorsement for semantic models and dataflows to help content
creators identify trustworthy sources of data.
Include consistent branding on all reports to indicate who produced the content
and who to contact for help. Branding is particularly helpful to distinguish content
that is produced by self-service creators. A small image or text label in the report
footer is valuable when the report is exported from the Fabric portal.
Consider implementing separate workspaces for storing data and reports. This
approach allows for better clarity on who is responsible for content. It also allows
for more restrictive workspace roles assignments. That way, report creators can
only publish content to their reporting workspace; and, read and build semantic
model permissions allow creators to create new reports with row-level security
(RLS) in effect, when applicable. For more information, see Workspace-level
planning. For more information about RLS, see Content creator security planning.
Use the Power BI REST APIs to compile an inventory of Power BI items. Analyze the
ratio of semantic models to reports to evaluate the extent of semantic model
reuse (see the sketch after this list).
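
The following minimal sketch shows one way to compute that reuse ratio with the admin REST APIs. The two endpoints are standard admin APIs; token acquisition is omitted, and treating "reports per semantic model" as the reuse signal follows this article's guidance rather than any prescribed formula.

```python
from collections import Counter
import requests

# Assumption: ACCESS_TOKEN holds an admin-scoped Azure AD token.
ACCESS_TOKEN = "<admin-scoped token>"
BASE = "https://api.powerbi.com/v1.0/myorg/admin"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# Inventory all semantic models (datasets) and reports in the tenant.
resp = requests.get(f"{BASE}/datasets", headers=headers)
resp.raise_for_status()
datasets = resp.json()["value"]

resp = requests.get(f"{BASE}/reports", headers=headers)
resp.raise_for_status()
reports = resp.json()["value"]

# Each report record carries the id of the semantic model it uses.
reports_per_model = Counter(r.get("datasetId") for r in reports)

total_models = len(datasets)
single_use = sum(
    1 for d in datasets if reports_per_model.get(d["id"], 0) <= 1
)

print(f"Semantic models: {total_models}, reports: {len(reports)}")
print(f"Average reports per model: {len(reports) / max(total_models, 1):.2f}")
print(f"Models feeding at most one report: {single_use} "
      f"({100 * single_use / max(total_models, 1):.0f}%)")
```

A high share of single-use models suggests opportunities to consolidate onto shared semantic models, which is the same signal the 100 (Initial) maturity level later in this article describes.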

Enterprise
Enterprise is a centralized approach to delivering data and BI solutions in which all
solution content is owned and managed by a centralized team. This team is usually IT,
enterprise BI, or the COE.

Enterprise is most suitable when:

Centralizing content management with a single team aligns with the organization's
data culture.
The organization has data and BI expertise to manage all items end-to-end.
The content needs of consumers are well-defined, and there's little need to
customize or explore data beyond the reporting solution that's delivered.
Content ownership and direct access to data needs to be limited to a small
number of experts and owners.
The data is highly sensitive or subject to regulatory requirements.

Here are some guidelines to help you become successful with enterprise data and BI.

Implement a rigorous process for use of the certified endorsement for content.
Not all enterprise content needs to be certified, but much of it probably should be.
Certified content should indicate that data quality has been validated. Certified
content should also follow change management rules, have formal support, and be
fully documented. Because certified content has passed rigorous standards, the
expectations for trustworthiness are higher.
Include consistent branding on enterprise BI reports to indicate who produced the
content, and who to contact for help. A small image or text label in the report
footer is valuable when the report is exported by a user.
If you use specific report branding to indicate enterprise BI content, be careful with
the save a copy functionality that would allow a user to download a copy of a
report and personalize it. Although this functionality is an excellent way to bridge
enterprise BI with managed self-service BI, it dilutes the value of the branding. A
more seamless solution is to provide a separate Power BI Desktop template file for
self-service authors. The template defines a starting point for report creation with a
live connection to an existing semantic model, and it doesn't include branding. The
template file can be shared as a link within a Power BI app, or from the community
portal.

Ownership transfers
Occasionally, the ownership of a particular solution might need to be transferred to
another team. An ownership transfer from a business unit to a centralized team can
happen when:

A business-led solution is used by a significant number of users, or it now supports
critical business decisions. In these cases, the solution should be managed by a
team with processes in place to implement higher levels of governance and
support.
A business-led solution is a candidate to be used far more broadly throughout the
organization, so it needs to be managed by a team who can set security and
deploy content widely throughout the organization.
A business unit no longer has the expertise, budget, or time available to continue
managing the content, but the business need for the content remains.
The size or complexity of a solution has grown to a point where a different data
architecture or redesign is required.
A proof of concept is ready to be operationalized.

The COE should have well-documented procedures for identifying when a solution is a
candidate for ownership transfer. It's very helpful if help desk personnel know what to
look for as well. Having a customary pattern for self-service creators to build and grow a
solution, and hand it off in certain circumstances, is an indicator of a productive and
healthy data culture. A simple ownership transfer could be addressed during COE office
hours; a more complex transfer could warrant a small project managed by the COE.

Note

There's potential that the new owner will need to do some refactoring and data
validations before they're willing to take full ownership. Refactoring is most likely to
occur with the less visible aspects of data preparation, data modeling, and
calculations. If there are any manual steps or flat file sources, now is an ideal time
to apply those enhancements. The branding of reports and dashboards might also
need to change (for example, if there's a footer indicating report contact or a text
label indicating that the content is certified).

It's also possible for a centralized team to transfer ownership to a business unit. It could
happen when:

The team with domain knowledge is better equipped to own and manage the
content going forward.
The centralized team has created the solution for a business unit that doesn't have
the skills to create it from scratch, but it can maintain and extend the solution
going forward.

Tip

Don't forget to recognize and reward the work of the original creator, particularly if
ownership transfers are a common occurrence.
Considerations and key actions

Checklist - Here's a list of considerations and key actions you can take to strengthen
your approach to content ownership and management.

" Gain a full understanding of what's currently happening: Ensure you deeply


understand how content ownership and management is happening throughout the
organization. Recognize that there likely won't be a one-size-fits-all approach to
apply uniformly across the entire organization. Review the implementation planning
usage scenarios to understand how Power BI and Fabric can be used in diverse
ways.
" Conduct discussions: Determine what is currently working well, what isn't working
well, and what the desired balance is between the three ownership strategies. If
necessary, schedule discussions with specific people on various teams. Develop a
plan for moving from the current state to the desired state.
" Perform an assessment: If your enterprise data team currently has challenges
related to scheduling and priorities, do an assessment to determine if a managed
self-service strategy can be put in place to empower more content creators
throughout the organization. Managed self-service data and BI can be extremely
effective on a global scale.
" Clarify terminology: Clarify terms used in your organization for owner, data
steward, and subject matter expert.
" Assign clear roles and responsibilities: Make sure roles and responsibilities for
owners, stewards, and subject matter experts are documented and well understood
by everyone involved. Include backup personnel.
" Ensure community involvement: Ensure that all your content owners—from both
the business and IT—are part of your community of practice.
" Create user guidance for owners and contacts in Fabric: Determine how you will
use the contacts feature in Fabric. Communicate with content creators about how it
should be used, and why it's important.
" Create a process for handling ownership transfers: If ownership transfers occur
regularly, create a process for how it will work.
" Support your advanced content creators: Determine your strategy for using
external tools for advanced authoring capabilities and increased productivity.

Questions to ask
Use questions like those found below to assess content ownership and management.

Do central teams that are responsible for Fabric have a clear understanding of who
owns what BI content? Is there a distinction between report and data items, or
different item types (like Power BI semantic models, data science notebooks, or
lakehouses)?
Which usage scenarios are in place, such as personal BI, team BI, departmental BI,
or enterprise BI? How prevalent are they in the organization, and how do they
differ between key business units?
What activities do business analytical teams perform (for example, data
integration, data modeling, or reporting)?
What kinds of roles in the organizations are expected to create and own content?
Is it limited to central teams, analysts, or also functional roles, like sales?
Where does the organization sit on the spectrum of business-led self-service,
managed self-service, or enterprise? Does it differ between key business units?
Do strategic data and BI solutions have ownership roles and stewardship roles that
are clearly defined? Which are missing?
Are content creators and owners also responsible for supporting and updating
content once it's released? How effective is the ownership of content support and
updates?
Is a clear process in place to transfer ownership of solutions (where necessary)? An
example is when an external consultant creates or updates a solution.
Do data sources have data stewards or subject matter experts (SMEs) who serve as
a special point of contact?
If your organization is already using Fabric or Power BI, does the current workspace
setup comply with the content ownership and delivery strategies that are in place?

Maturity levels

The following maturity levels will help you assess the current state of your content
ownership and management.
Level State of content ownership and management

100: Initial • Self-service content creators own and manage content in an uncontrolled way,
without a specific strategy.
• A high ratio of semantic models to reports exists. When many semantic models
exist that each support only one report, it indicates opportunities to improve data
reusability, improve trustworthiness, reduce maintenance, and reduce the number
of duplicate semantic models.
• Discrepancies between different reports are common, causing distrust of content
produced by others.

200: Repeatable • A plan is in place for which content ownership and management strategy to use
and in which circumstances.
• Initial steps are taken to improve the consistency and trustworthiness levels for
self-service efforts.
• Guidance for the user community is available that includes expectations for
self-service versus enterprise content.
• Roles and responsibilities are clear and well understood by everyone involved.

300: Defined • Managed self-service is a priority and an area of investment to further advance
the data culture. The priority is to allow report creators the flexibility they need
while using well-managed, secure, and trustworthy data sources.
• Report branding is consistently used to indicate who produced the content.
• A mentoring program exists to educate self-service content creators on how to
apply best practices and make good decisions.

400: Capable • Criteria are defined to align governance requirements for self-service versus
enterprise content.
• There's a plan in place for how to request and handle ownership transfers.
• Managed self-service—and techniques for the reuse of data—are commonly
used and well-understood.

500: Efficient • Proactive steps are taken to communicate with users when any concerning
activities are detected in the activity log. Education and information are provided
to make gradual improvements or reduce risk.
• Third-party tools are used by highly proficient content creators to improve
productivity and efficiency.
Next steps
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
scope of content delivery.
Microsoft Fabric adoption roadmap:
Content delivery scope
Article • 11/14/2023

Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

The four delivery scopes described in this article include personal, team, departmental,
and enterprise. To be clear, focusing on the scope of a delivered data and business
intelligence (BI) solution does refer to the number of people who might view the
solution, though the impact is much more than that. The scope strongly influences best
practices for not only content distribution, but also content management, security, and
information protection. The scope has a direct correlation to the level of governance
(such as requirements for change management, support, or documentation), the extent
of mentoring and user enablement, and needs for user support. It also influences user
licensing decisions.

The related content ownership and management article makes similar points. Whereas
the focus of that article was on the content creator, the focus of this article is on the
target content usage. Both inter-related aspects need to be considered to arrive at
governance decisions and the Center of Excellence (COE) operating model.

Important

Not all data and solutions are equal. Be prepared to apply different levels of data
management and governance to different teams and various types of content.
Standardized rules are easier to maintain. However, flexibility or customization is
often necessary to apply the appropriate level of oversight for particular
circumstances. Your executive sponsor can prove invaluable by reaching consensus
across stakeholder groups when difficult situations arise.

Scope of content delivery


The following diagram focuses on the number of target consumers who will consume
the content.
The four scopes of content delivery shown in the above diagram include:

Personal: Personal solutions are, as the name implies, intended for use by the
creator. Sharing content with others isn't an objective. Therefore, a personal data
and BI solution has the fewest number of target consumers.
Team: Collaborates and shares content with a relatively small number of colleagues
who work closely together.
Departmental: Delivers content to a large number of consumers, who can belong
to a department or business unit.
Enterprise: Delivers content broadly across organizational boundaries to the
largest number of target consumers. Enterprise content is most often managed by
a centralized team and is subject to additional governance requirements.

Contrast the above four scopes of content delivery with the following diagram, which
has an inverse relationship with respect to the number of content creators.
The four scopes of content creators shown in the above diagram include:

Personal: Represents the largest number of creators because the data culture
encourages any user to work with data using business-led self-service data and BI
methods. Although managed self-service BI methods can be used, it's less
common with personal data and BI efforts.
Team: Colleagues within a team collaborate and share with each other by using
business-led self-service patterns. It has the next largest number of creators in the
organization. Managed self-service patterns could also begin to emerge as skill
levels advance.
Departmental: Involves a smaller population of creators. They're likely to be
considered power users who are using sophisticated tools to create sophisticated
solutions. Managed self-service practices are very common and highly encouraged.
Enterprise: Involves the smallest number of content creators because it typically
includes only professional data and BI developers who work in the BI team, the
COE, or in IT.

The content ownership and management article introduced the concepts of business-
led self-service, managed self-service, and enterprise. The most common alignment
between ownership and delivery scope is:
Business-led self-service ownership: Commonly deployed as personal and team
solutions.
Managed self-service ownership: Can be deployed as personal, team, or
departmental solutions.
Enterprise ownership: Typically deployed as enterprise-scoped solutions.

Some organizations also equate self-service content with community-based support.
That's the case when self-service content creators and owners are responsible for
supporting the content they publish. The user support article describes multiple informal
and formal levels of support.

Note

The term sharing can be interpreted in two ways. It's often used in a general way
related to sharing content with colleagues, which could be implemented in multiple
ways. It can also reference a specific feature in Fabric, where a user or group is
granted access to a single item. In this article, the term sharing is meant in a
general way to describe sharing content with colleagues. When the per-item
permissions are intended, this article will make a clear reference to that feature.
For more information, see Report consumer security planning.

Personal
The Personal delivery scope is about enabling an individual to gain analytical value. It's
also about allowing them to more efficiently perform business tasks through the
effective personal use of data, information, and analytics. It could apply to any type of
information worker in the organization, not just data analysts and developers.

Sharing content with others isn't the objective. Personal content can reside in Power BI
Desktop or in a personal workspace in the Fabric portal.

Here are the characteristics of creating content for a personal delivery scope.

The creator's primary intention is data exploration and analysis, rather than report
delivery.
The content is intended to be analyzed and consumed by one person: the creator.
The content might be an exploratory proof of concept that may, or may not, evolve
into a project.
Here are a few guidelines to help you become successful with content developed for
personal use.

Consider personal data and BI solutions to be like an analytical sandbox that has
little formal governance and oversight from the governance team or COE.
However, it's still appropriate to educate content creators that some general
governance guidelines could still apply to personal content. Valid questions to ask
include: Can the creator export the personal report and email it to others? Can the
creator store a personal report on a non-organizational laptop or device? What
limitations or requirements exist for content that contains sensitive data?
See the techniques described for business-led self-service and managed
self-service in the content ownership and management article. They're highly relevant
techniques that help content creators create efficient and effective personal data
and BI solutions.
Analyze data from the activity log to discover situations where personal solutions
appear to have expanded beyond the original intended usage. It's usually
discovered by detecting a significant amount of content sharing from a personal
workspace (see the sketch after this list).
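
Building on the activity log approach shown earlier, this sketch groups sharing events by workspace so that heavy sharing out of personal workspaces stands out. It assumes the same admin-scoped token as before, and note that the workspace-name field and the personal-workspace naming convention used here are assumptions: verify both against real event payloads in your own tenant.

```python
from collections import Counter
import requests

ACCESS_TOKEN = "<admin-scoped token>"  # acquisition omitted, as before
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

url = (
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
    "?startDateTime='2024-01-15T00:00:00'"
    "&endDateTime='2024-01-15T23:59:59'"
)

shares_by_workspace = Counter()
while url:
    resp = requests.get(url, headers=headers)
    resp.raise_for_status()
    payload = resp.json()
    for event in payload.get("activityEventEntities", []):
        if "Share" in str(event.get("Activity", "")):
            # Field name is an assumption; inspect a real event to
            # confirm how workspace names appear in your tenant.
            shares_by_workspace[event.get("WorkSpaceName", "unknown")] += 1
    url = payload.get("continuationUri")

for ws, count in shares_by_workspace.most_common():
    # Heuristic: treat "PersonalWorkspace ..." names as personal
    # workspaces; adjust to whatever convention your logs show.
    if ws and ws.startswith("PersonalWorkspace"):
        print(f"Personal workspace '{ws}': {count} sharing events")
```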

Tip

For information about how users progress through the stages of user adoption, see
the Microsoft Fabric adoption roadmap maturity levels. For more information
about using the activity log, see Tenant-level auditing.

Team
The Team delivery scope is focused on a team of people who work closely together, and
who are tasked with solving closely related problems using the same data. Collaborating
and sharing content with each other in a workspace is usually the primary objective.

Content is often shared among the team more informally as compared to departmental
or enterprise content. For instance, the workspace is often sufficient for consuming
content within a small team. It doesn't require the formality of publishing the workspace
to distribute it as an app. There isn't a specific number of users when team-based
delivery is considered too informal; each team can find the right number that works for
them.

Here are the characteristics of creating content for a team delivery scope.
Content is created, managed, and viewed among a group of colleagues who work
closely together.
Collaboration and co-management of content is the highest priority.
Formal delivery of content might occur for report viewers (especially for managers
of the team), but it's usually a secondary priority.
Reports aren't always highly sophisticated or attractive; functionality and accessing
the information is what matters most.

Here are some guidelines to help you become successful with content developed for
team use.

Ensure the COE is prepared to support the efforts of self-service creators
publishing content for their team.
Make purposeful decisions about how workspace management will be handled.
The workspace is a place to organize related content, a permissions boundary, and
the scope for a Power BI app. It's tempting to start with one workspace per team,
but that might not be flexible enough to satisfy all needs.
See the techniques described for business-led self-service and managed
self-service in the content ownership and management article. They're highly relevant
techniques that help content creators create efficient and effective team data and
BI solutions.

Tip

For more information, see Workspace-level planning.

Departmental
Content is delivered to members of a department or business unit. Content distribution
to a larger number of consumers is a priority for departmental delivery scopes.

Here are the characteristics of departmental content delivery.

A few content creators typically publish content for colleagues to consume.
Formal delivery of reports by using Power BI apps is a high priority to ensure
consumers have the best experience.
Additional effort is made to deliver more sophisticated and polished reports.
Following best practices for data preparation and higher quality data modeling is
also expected.
Needs for change management and lifecycle management begin to emerge to
ensure release stability and a consistent experience for consumers.
Here are a few guidelines to help you become successful with departmental BI delivery.

Ensure that the COE is prepared to support the efforts of self-service creators.
Creators who publish content used throughout their department or business unit
might emerge as candidates to become champions. Or, they might become
candidates to join the COE as a satellite member.
Make purposeful decisions about how workspace management will be handled.
The workspace is a place to organize related content, a permissions boundary, and
the scope for an app. Several workspaces will likely be required to meet all the
needs of a large department or business unit.
Plan how Power BI apps will distribute content to the enterprise. An app can
provide a significantly better user experience for consuming content. In many
cases, content consumers can be granted permissions to view content via the app
only, reserving workspace permissions management for content creators and
reviewers only. The use of app audience groups allows you to mix and match
content and target audience in a flexible way.
Be clear about what data quality validations have occurred. As the importance and
criticality level grows, expectations for trustworthiness grow too.
Ensure that adequate training, mentoring, and documentation is available to
support content creators. Best practices for data preparation, data modeling, and
data presentation will result in better quality solutions.
Provide guidance on the best way to use the promoted endorsement, and when
the certified endorsement could be permitted for departmental solutions.
Ensure that the owner is identified for all departmental content. Clarity on
ownership is helpful, including who to contact with questions, feedback,
enhancement requests, or support requests. In the Fabric portal, content owners
can set the contact list property for many types of items (like reports and
dashboards). The contact list is also used in security workflows. For example, when
a user is sent a URL to open an app but they don't have permission, they'll be
presented with an option to make a request for access.
Consider using deployment pipelines in conjunction with separate workspaces.
Deployment pipelines can support development, test, and production
environments, which provide more stability for consumers.
Consider enforcing the use of sensitivity labels to implement information
protection on all content.
Include consistent branding on reports by:
Using departmental colors and styling to indicate who produced the content.
For more information, see Content ownership and management.
Adding a small image or text label to the report footer, which is valuable when
the report is exported from the Fabric portal.
Using a standard Power BI Desktop template file. For more information, see
Mentoring and user enablement.
Apply the techniques described for business-led self-service and managed
self-service content delivery in the Content ownership and management article. They're
highly relevant techniques that can help content creators to create efficient and
effective departmental solutions.

Enterprise
Enterprise content is typically managed by a centralized team and is subject to
additional governance requirements. Content is delivered broadly across organizational
boundaries.

Here are the characteristics of enterprise content delivery.

A centralized team of experts manages the content end-to-end and publishes it for
others to consume.
Formal delivery of data solutions like reports, lakehouses, and Power BI apps is a
high priority to ensure consumers have the best experience.
The content is highly sensitive, subject to regulatory requirements, or is considered
extremely critical.
Published enterprise-level semantic models (previously known as datasets) and
dataflows might be used as a source for self-service creators, thus creating a chain
of dependencies to the source data.
Stability and a consistent experience for consumers are highly important.
Application lifecycle management, such as deployment pipelines and DevOps
techniques, is commonly used. Change management processes to review and
approve changes before they're deployed are commonly used for enterprise
content, for example, by a change review board or similar group.
Processes exist to gather requirements, prioritize efforts, and plan for new projects
or enhancements to existing content.
Integration with other enterprise-level data architecture and management services
could exist, possibly with other Azure services and Power Platform products.

Here are some guidelines to help you become successful with enterprise content
delivery.

Governance and oversight techniques described in the governance article are
relevant for managing an enterprise solution. Techniques primarily include change
management and lifecycle management.
Plan for how to effectively use Premium Per User or Fabric capacity licensing per
workspace. Align your workspace management strategy, like how workspaces will
be organized and secured, to the planned licensing strategy.
Plan how Power BI apps will distribute enterprise content to consumers. An app
can provide a significantly better user experience for consuming content. Align the
app distribution strategy with your workspace management strategy.
Consider enforcing the use of sensitivity labels to implement information
protection on all content.
Implement a rigorous process for use of the certified endorsement for enterprise
reports and apps. Data assets can be certified, too, when there's the expectation
that self-service creators will build solutions based on them. Not all enterprise
content needs to be certified, but much of it probably will be.
Make it a common practice to announce when changes will occur. For more
information, see the community of practice article for a description of
communication types.
Include consistent branding on reports, by:
Using specific colors and styling, which can also indicate who produced the
content. For more information, see Content ownership and management.
Adding a small image or text label to the report footer, which can be valuable
when the report is exported from the Fabric portal.
Using a standard Power BI Desktop template file. For more information, see
Mentoring and user enablement.
Actively use the lineage view to understand dependencies, perform impact
analysis, and communicate to downstream content owners when changes will
occur.
See the techniques described for enterprise content delivery in the content
ownership and management article. They're highly relevant techniques that help
content creators create efficient and effective enterprise solutions.
See the techniques described in the system oversight article for auditing,
governing, and the oversight of enterprise content.

Considerations and key actions

Checklist - Considerations and key actions you can take to strengthen your approach to
content delivery.

" Align goals for content delivery: Ensure that guidelines, documentation, and other
resources align with the strategic goals defined for Fabric adoption.
" Clarify the scopes for content delivery in your organization: Determine who each
scope applies to, and how each scope aligns with governance decisions. Ensure that
decisions and guidelines are consistent with how content ownership and
management is handled.
" Consider exceptions: Be prepared for how to handle situations when a smaller
team wants to publish content for an enterprise-wide audience.
Will it require the content be owned and managed by a centralized team? For
more information, see the Content ownership and management article, which
describes an inter-related concept with content delivery scope.
Will there be an approval process? Governance can become more complicated
when the content delivery scope is broader than the owner of the content. For
example, when an app that's owned by a divisional sales team is distributed to
the entire organization.
" Create helpful documentation: Ensure that you have sufficient training
documentation and support so that your content creators understand when it's
appropriate to use workspaces, apps, or per-item sharing (direct access or link) .
" Create a licensing strategy: Ensure that you have a specific strategy in place to
handle Fabric licensing considerations. Create a process for how workspaces could
be assigned each license type, and the prerequisites required for the type of
content that could be assigned to Premium.

Questions to ask

Use questions like those found below to assess content delivery scope.

Do central teams that are responsible for Fabric have a clear understanding of who
creates and delivers content? Does it differ by business area, or for different
content item types?
Which usage scenarios are in place, such as personal BI, team BI, departmental BI,
or enterprise BI? How prevalent are they in the organization? Are there advanced
scenarios, like advanced data preparation or advanced data model management,
or niche scenarios, like self-service real-time analytics?
For the identified content delivery scopes in place, to what extent are guidelines
being followed?
Are there trajectories for helpful self-service content to be "promoted" from
personal to team content delivery scopes and beyond? What systems and
processes enable sustainable, bottom-up scaling and distribution of useful
self-service content?
What are the guidelines for publishing content to, and using, personal
workspaces?
Are personal workspaces assigned to dedicated Fabric capacity? In what
circumstances are personal workspaces intended to be used?
On average, how many reports does someone have access to? How many reports
does an executive have access to? How many reports does the CEO have access
to?
If your organization is using Fabric or Power BI today, does the current workspace
setup comply with the content ownership and delivery strategies that are in place?
Is there a clear licensing strategy? How many licenses are used today? How many
tenants and capacities exist, who uses them, and why?
How do central teams decide what gets published to Premium (or Fabric)
dedicated capacity, and what uses shared capacity? Do development workloads
use separate Premium Per User (PPU) licensing to avoid affecting production
workloads?

Maturity levels

The following maturity levels will help you assess the current state of your content
delivery.

Level State of content delivery

100: Initial • Content is published for consumers by self-service creators in an uncontrolled
way, without a specific strategy.

200: Repeatable • Pockets of good practices exist. However, good practices are overly dependent
on the knowledge, skills, and habits of the content creator.

300: Defined • Clear guidelines are defined and communicated to describe what can and can't
occur within each delivery scope. These guidelines are followed by some—but not
all—groups across the organization.

400: Capable • Criteria are defined to align governance requirements for self-service versus
enterprise content.
• Guidelines for content delivery scope are followed by most, or all, groups across
the organization.
• Change management requirements are in place to approve critical changes for
content that's distributed to a larger-sized audience.
• Changes are announced and follow a communication plan. Content creators are
aware of the downstream effects on their content. Consumers are aware of when
reports and apps are changed.

500: Efficient • Proactive steps are taken to communicate with users when any concerning
activities are detected in the activity log. Education and information are provided
to make gradual improvements or reduce risk.
• The business value that's achieved for deployed solutions is regularly evaluated.

Next steps
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
Center of Excellence (COE).
Microsoft Fabric adoption roadmap:
Center of Excellence
Article • 11/14/2023

Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

A data or analytics Center of Excellence (COE) is an internal team of technical and
business experts. The team actively assists others within the organization who are
working with data. The COE forms the nucleus of the broader community to advance
adoption goals, which align with the data culture vision.

A COE might also be known as a competency center, capability center, or center of
expertise. Some organizations use the term squad. Many organizations perform the COE
responsibilities within their data, analytics, or business intelligence (BI) team.

Note

Having a COE team formally recognized in your organizational chart is
recommended, but not required. What's most important is that the COE roles and
responsibilities are identified, prioritized, and assigned. It's common for a
centralized data or analytics team to take on many of the COE responsibilities;
some responsibilities might also reside within IT. For simplicity, in this series of
articles, COE means a specific group of people, although you might implement it
differently. It's also very common to implement the COE with a scope broader than
Fabric or Power BI alone: for instance, a Power Platform COE, a data COE, or an
analytics COE.

Goals for a COE


Goals for a COE include:

Evangelizing a data-driven culture.
Promoting the adoption of analytics.
Nurturing, mentoring, guiding, and educating internal users to increase their skills
and level of self-reliance.
Coordinating efforts and disseminating knowledge across organizational
boundaries.
Creating consistency and transparency for the user community, which reduces
friction and pain points related to finding relevant data and analytics content.
Maximizing the benefits of self-service BI, while reducing the risks.
Reducing technical debt by helping users make good decisions that increase
consistency and result in fewer inefficiencies.

Important

One of the most powerful aspects of a COE is the cross-departmental insight into
how analytics tools like Fabric are used by the organization. This insight can reveal
which practices work well and which don't, which can facilitate a bottom-up
approach to governance. A primary goal of the COE is to learn which practices work
well, share that knowledge more broadly, and replicate best practices across the
organization.

Scope of COE responsibilities


The scope of COE responsibilities can vary significantly between organizations. In a way,
a COE can be thought of as a consultancy service because its members routinely provide
expert advice to the internal community of users. To varying degrees, most COEs handle
hands-on work too.

Common COE responsibilities include:

Mentoring and facilitating knowledge sharing within the internal Fabric community.
Holding office hours to engage with the internal Fabric community.
Conducting co-development projects and best practices reviews in order to
actively help business units deliver solutions.
Managing the centralized portal.
Producing, curating, and promoting training materials.
Creating documentation and other resources, such as template files, to encourage
consistent use of standards and best practices.
Applying, communicating, and assisting with governance guidelines.
Handling and assisting with system oversight and Fabric administration.
Responding to user support issues escalated from the help desk.
Developing solutions and/or proofs of concept.
Establishing and maintaining the BI platform and data architecture.
Communicating regularly with the internal community of users.

Staffing a COE
People who are good candidates as COE members tend to be those who:

Understand the analytics vision for the organization.
Have a desire to continually improve analytics practices for the organization.
Have a deep interest in, and expertise with, analytics tools such as Fabric.
Are interested in seeing Fabric used effectively and adopted successfully
throughout the organization.
Take the initiative to continually learn, adapt, and grow.
Readily share their knowledge with others.
Are interested in repeatable processes, standardization, and governance with a
focus on user enablement.
Are hyper-focused on collaboration with others.
Are comfortable working in an agile fashion.
Have an inherent interest in being involved and helping others.
Can effectively translate business needs into solutions.
Communicate well with both technical and business colleagues.

Tip

If you have self-service content creators in your organization who constantly push
the boundaries of what can be done, they might be a great candidate to become a
recognized champion, or perhaps even a satellite member of the COE.

When recruiting for the COE, it's important to have a mix of complementary analytical
skills, technical skills, and business skills.

Roles and responsibilities


Very generalized roles within a COE are listed below. It's common for multiple people to
overlap roles, which is useful from a backup and cross-training perspective. It's also
common for the same person to serve multiple roles. For instance, most COE members
also serve as a coach or mentor.

Role Description

COE leader: Manages the day-to-day operations of the COE. Interacts with the executive
sponsor and other organizational teams, such as the data governance board, as
necessary. For an overview of additional roles and responsibilities, see the Governance
article.

Coach: Coaches and educates others on data and BI skills via office hours (community
engagement), best practices reviews, or co-development projects. Oversees and
participates in the discussion channel of the internal community. Interacts with, and
supports, the champions network.

Trainer: Develops, curates, and delivers internal training materials, documentation, and
resources.

Data analyst: Domain-specific subject matter expert. Acts as a liaison between the COE
and the business unit. Content creator for the business unit. Assists with content
certification. Works on co-development projects and proofs of concept.

Data modeler: Creates and manages data assets (such as shared semantic models—
previously known as datasets—and dataflows) to support other self-service content
creators.

Report creator: Creates and publishes reports, dashboards, and metrics.

Data engineer: Plans for deployment and architecture, including integration with other
services and data platforms. Publishes data assets that are utilized broadly across the
organization (such as a lakehouse, data warehouse, data pipeline, dataflow, or
semantic model).

User support: Assists with the resolution of data discrepancies and escalated help desk
support issues.

As mentioned previously, the scope of responsibilities for a COE can vary significantly
between organizations. Therefore, the roles found for COE members can vary too.

Structuring a COE
The selected COE structure can vary among organizations. It's also possible for multiple
structures to exist inside of a single large organization. That's particularly true when
there are subsidiaries or when acquisitions have occurred.

Note

The following terms might differ from those defined for your organization, particularly
the meaning of federated, which tends to have many different IT-related meanings.

Centralized COE
A centralized COE comprises a single shared services team.

Pros:

There's a single point of accountability for a single team that manages standards,
best practices, and delivery end-to-end.
The COE is one group from an organizational chart perspective.
It's easy to start with this approach and then evolve to the unified or federated
model over time.

Cons:

A centralized team might have an authoritarian tendency to favor one-size-fits-all
decisions that don't always work well for all business units.
There can be a tendency to prefer IT skills over business skills.
Due to the centralized nature, it might be more difficult for the COE members to
sufficiently understand the needs of all business units.

Unified COE
A unified COE is a single, centralized, shared services team that has been expanded to
include embedded team members. The embedded team members are dedicated to
supporting a specific functional area or business unit.

Pros:

There's a single point of accountability for a single team that includes cross-
functional involvement from the embedded COE team members. The embedded
COE team members are assigned to various areas of the business.
The COE is one group from an organizational chart perspective.
The COE understands the needs of business units more deeply due to dedicated
members with domain expertise.

Cons:

The embedded COE team members, who are dedicated to a specific business unit,
have a different organizational chart responsibility than the people they serve
directly within the business unit. The organizational structure could potentially lead
to complications or differences in priorities, or necessitate the involvement of the
executive sponsor. Preferably, the executive sponsor has a scope of authority that
includes the COE and all involved business units to help resolve conflicts.

Federated COE
A federated COE comprises a shared services team (the core COE members) plus
satellite members from each functional area or major business unit. A federated team
works in coordination, even though its members reside in different business units.
Typically, satellite members are primarily focused on development activities to support
their business unit while the shared services personnel support the entire community.

Pros:

There's cross-functional involvement from satellite COE members who represent
their specific functional area and have domain expertise.
There's a balance of centralized and decentralized representation across the core
and satellite COE members.
When distributed data ownership situations exist—as could be the case when
business units take direct responsibility for data management activities—this
model is effective.

Cons:

Since core and satellite members span organizational boundaries, the federated
COE approach requires strong leadership, excellent communication, robust project
management, and ultra-clear expectations.
There's a higher risk of encountering competing priorities due to the federated
structure.
This approach typically involves part-time people and/or dotted line organizational
chart accountability that can introduce competing time pressures.

 Tip

Some organizations have success by using a rotational program. It involves
federated members joining the core COE for a period of time, such as six months.
This type of program allows federated members to learn best practices and
understand more deeply how and why things are done. Although each federated
member remains focused on their specific business unit, they gain a deeper
understanding of the organization's challenges. This deeper understanding leads to
a more productive partnership over time.

Decentralized COE
Decentralized COEs are independently managed by business units.

Pros:
A specialized data culture exists that's focused on the business unit, making it
easier to learn quickly and adapt.
Policies and practices are tailored to each business unit.
Agility, flexibility, and priorities are focused on the individual business unit.

Cons:

There's a risk that decentralized COEs operate in isolation. As a result, they might
not share best practices and lessons learned outside of their business unit.
Collaboration with a centralized team might be informal and/or inconsistent.
Inconsistent policies are created and applied across business units.
It's difficult to scale a decentralized model.
There's potential rework to bring one or more decentralized COEs into alignment
with organization-wide policies.
Larger business units with significant funding might have more resources available
to them, which might not serve cost optimization goals from an organization-wide
perspective.

Important

A highly centralized COE tends to be more authoritarian, while highly decentralized
COEs tend to be more siloed. Each organization will need to weigh the pros and
cons that apply to them to determine the best choice. For most organizations, the
most effective approach tends to be the unified or federated COE, which bridges
organizational boundaries.

Funding the COE


The COE might obtain its operating budget in multiple ways:

Cost center.
Profit center with project budget(s).
A combination of cost center and profit center.

When the COE operates as a cost center, it absorbs the operating costs. Generally, it
involves an approved annual budget. Sometimes this is called a push engagement
model.

When the COE operates as a profit center (for at least part of its budget), it could accept
projects throughout the year based on funding from other business units. Sometimes
this is called a pull engagement model.
Funding is important because it impacts the way the COE communicates and engages
with the internal community. As the COE experiences more and more successes, it
might receive more requests from business units for help. That's especially the case as
awareness grows throughout the organization.

 Tip

The choice of funding model can determine how the COE actively grows its
influence and ability to help. The funding model can also have a big impact on
where authority resides and how decision-making works. Further, it impacts the
types of services a COE can offer, such as co-development projects and/or best
practices reviews. For more information, see the Mentoring and user enablement
article.

Some organizations cover the COE operating costs with chargebacks to business units
based on their usage of Fabric. For a shared capacity, this could be based on the
number of active users. For Premium capacity, chargebacks could be allocated based on
which business units are using the capacity. Ideally, chargebacks are directly correlated
to the business value gained.
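
For illustration, the following minimal Python sketch shows one way a proportional
chargeback allocation could be computed. The business-unit names, user counts, and
monthly cost figure are hypothetical assumptions, not values from this article.

```python
# Illustrative sketch: allocate a monthly Fabric operating cost to business
# units in proportion to their active users. All figures are hypothetical.

monthly_operating_cost = 12_000.00  # assumed monthly COE/capacity cost

active_users = {  # assumed active-user counts per business unit
    "Sales": 120,
    "Finance": 45,
    "Operations": 85,
}

total_users = sum(active_users.values())

chargebacks = {
    unit: round(monthly_operating_cost * count / total_users, 2)
    for unit, count in active_users.items()
}

for unit, amount in chargebacks.items():
    print(f"{unit}: ${amount:,.2f}")
```

A usage-proportional split like this is only one option; weighting by capacity
consumption or by delivered projects could align chargebacks more closely with
business value.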

Considerations and key actions

Checklist - Considerations and key actions you can take to establish or improve your
COE.

" Define the scope of responsibilities for the COE: Ensure that you're clear on what
activities the COE can support. Once the scope of responsibilities is known, identify
the skills and competencies required to fulfill those responsibilities.
" Identify gaps in the ability to execute: Analyze whether the COE has the required
systems and infrastructure in place to meet its goals and scope of responsibilities.
" Determine the best COE structure: Identify which COE structure is most
appropriate (centralized, unified, federated, or decentralized). Verify that staffing,
roles and responsibilities, and appropriate organizational chart relationships (HR
reporting) are in place.
" Plan for future growth: If you're starting out with a centralized or decentralized
COE, consider how you will scale the COE over time by using the unified or
federated approach. Plan for any actions that you can take now that'll facilitate
future growth.
" Identify customers: Identify the internal community members, and any external
customers, to be served by the COE. Decide how the COE will generally engage with
those customers, whether it's a push model, pull model, or both models.
" Verify the funding model for the COE: Decide whether the COE is purely a cost
center with an operating budget, whether it will operate partially as a profit center,
and/or whether chargebacks to other business units will be required.
" Create a communication plan: Create you communications strategy to educate the
internal community of users about the services the COE offers, and how to engage
with the COE.
" Create goals and metrics: Determine how you'll measure effectiveness for the COE.
Create KPIs (key performance indicators) or OKRs (objectives and key results) to
validate that the COE consistently provides value to the user community.

Questions to ask

Use questions like those found below to assess the effectiveness of a COE.

Is there a COE? If so, who is in the COE and what's the structure?
If there isn't a COE, is there a central team that performs a similar function? Do
data decision makers in the organization understand what a COE does?
If there isn't a COE, does the organization aspire to create one? Why or why not?
Are there opportunities for federated or decentralized COE models due to a mix of
enterprise and departmental solutions?
Are there any missing roles and responsibilities from the COE?
To what extent does the COE engage with the user community? Do they mentor
users? Do they curate a centralized portal? Do they maintain centralized resources?
Is the COE recognized in the organization? Does the user community consider
them to be credible and helpful?
Do business users see central teams as enabling or restricting their work with data?
What's the COE funding model? Do COE customers financially contribute in some
way to the COE?
How consistent and transparent is the COE with their communication?
Maturity levels

The following maturity levels will help you assess the current state of your COE.

Level State of the Center of Excellence

100: Initial • One or more COEs exist, or the activities are performed within the data
team, BI team, or IT. There's no clarity on the specific goals nor expectations for
responsibilities.

• Requests for assistance from the COE are handled in an unplanned manner.

200: Repeatable • The COE is in place with a specific charter to mentor, guide, and
educate self-service users. The COE seeks to maximize benefits of self-service
approaches to data and BI while reducing the risks.

• The goals, scope of responsibilities, staffing, structure, and funding model are
established for the COE.

300: Defined • The COE operates with active involvement from all business units in a
unified or federated mode.

400: Capable • The goals of the COE align with organizational goals, and they are
reassessed regularly.

• The COE is well-known throughout the organization, and consistently proves its
value to the internal user community.

500: Efficient • Regular reviews of KPIs or OKRs evaluate COE effectiveness in a
measurable way.

• Agility and implementing continual improvements from lessons learned (including
scaling out methods that work) are top priorities for the COE.

Next steps
In the next article in the Microsoft Fabric adoption roadmap series, learn about
implementing governance guidelines, policies, and processes.
Microsoft Fabric adoption roadmap:
Governance
Article • 11/14/2023

Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

Data governance is a broad and complex topic. This article introduces key concepts and
considerations. It identifies important actions to take when adopting Microsoft Fabric,
but it's not a comprehensive reference for data governance.

As defined by the Data Governance Institute, data governance is "a system of decision
rights and accountabilities for information-related processes, executed according to
agreed-upon models which describe who can take what actions, with what information,
and when, under what circumstances, using what methods."

The term data governance is a misnomer. The primary focus for governance isn't on the
data itself. The focus is on governing what users do with the data. Put another way: the
true focus is on governing users' behavior to ensure organizational data is well
managed.

When focused on self-service data and business intelligence (BI), the primary goals of
governance are to achieve the proper balance of:

User empowerment: Empower the internal user community to be productive and
efficient, within requisite guardrails.
Regulatory compliance: Comply with the organization's industry, governmental,
and contractual regulations.
Internal requirements: Adhere to the organization's internal requirements.

The optimal balance between control and empowerment will differ between
organizations. It's also likely to differ among different business units within an
organization. You'll be most successful with a platform like Fabric when you put as much
emphasis on user empowerment as on clarifying its practical usage within established
guardrails.

 Tip
Think of governance as a set of established guidelines and formalized policies. All
governance guidelines and policies should align with your organizational data
culture and adoption objectives. Governance is enacted on a day-to-day basis by
your system oversight (administration) activities.

Governance strategy
When considering data governance in any organization, the best place to start is by
defining a governance strategy. By focusing first on the strategic goals for data
governance, all detailed decisions when implementing governance policies and
processes can be informed by the strategy. In turn, the governance strategy will be
defined by the organization's data culture.

Governance decisions are implemented with documented guidance, policies, and
processes. Objectives for governance of a self-service data and BI platform, such as
Fabric, include:

Empowering users throughout the organization to use data and make decisions,
within the defined boundaries.
Improving the user experience by providing clear and transparent guidance (with
minimal friction) on what actions are permitted, why, and how.
Ensuring that the data usage is appropriate for the needs of the business.
Ensuring that content ownership and stewardship responsibilities are clear. For
more information, see the Content ownership and management article.
Enhancing the consistency and standardization of working with data across
organizational boundaries.
Reducing risk of data leakage and misuse of data. For more information, see the
information protection and data loss prevention series of articles.
Meeting regulatory, industry, and internal requirements for the proper use of data.

 Tip

A well-executed data governance strategy makes it easier for more users to work
with data. When governance is approached from the perspective of user
empowerment, users are more likely to follow the documented processes.
Accordingly, the users become trusted partners too.

Governance success factors


Governance isn't well-received when it's enacted with top-down mandates that are
focused more on control than empowerment. Governing Fabric is most successful when:

The most lightweight governance model that accomplishes required objectives is
used.
Governance is approached on an iterative basis and doesn't significantly impede
productivity.
A bottom-up approach to formulating governance guidelines is used whenever
practical. The Center of Excellence (COE) and/or the data governance team
observes successful behaviors that are occurring within a business unit. The COE
then takes action to scale out to other areas of the organization.
Governance decisions are co-defined with input from different business units
before they're enacted. Although there are times when a specific directive is
necessary (particularly in heavily regulated industries), mandates should be the
exception rather than the rule.
Governance needs are balanced with flexibility and the ability to be productive.
Governance requirements can be satisfied as part of users' regular workflow,
making it easier for users to do the right thing in the right way with little friction.
The answer to new requests for data isn't "no" by default, but rather "yes and" with
clear, simple, transparent rules for what governance requirements are for data
access, usage, and sharing.
Users that need access to data have an incentive to obtain it through normal
channels, complying with governance requirements, rather than circumventing them.
Governance decisions, policies, and requirements for users to follow are in
alignment with organizational data culture goals as well as other existing data
governance initiatives.
Decisions that affect what users can—and can't—do aren't made solely by a
system administrator.

Introduce governance to your organization


There are three primary timing methods that organizations take when introducing
governance for Fabric.
The methods in the above diagram include:

Method Strategy followed

Roll out Fabric first, then introduce governance: Fabric is made widely available to
users in the organization as a new self-service data and BI tool. Then, at some time in
the future, a governance effort begins. This method prioritizes agility.

Full governance planning first, then roll out Fabric: Extensive governance planning
occurs prior to permitting users to begin using Fabric. This method prioritizes control
and stability.

Iterative governance planning with rollouts of Fabric in stages: Just enough
governance planning occurs initially. Then Fabric is iteratively rolled out in stages to
individual teams while iterative governance enhancements occur. This method equally
prioritizes agility and governance.

Choose method 1 when Fabric is already used for self-service scenarios, and you're
ready to start working in a more efficient manner.

Choose method 2 when your organization already has a well-established approach to
governance that can be readily expanded to include Fabric.

Choose method 3 when you want to have a balance of control and agility. This balanced
approach is the best choice for most organizations and most scenarios.

Each method is described in the following sections.

Method 1: Roll out Fabric first


Method 1 prioritizes agility and speed. It allows users to quickly get started creating
solutions. This method occurs when Fabric has been made widely available to users in
the organization as a new self-service data and BI tool. Quick wins and some successes
are achieved. At some point in the future, a governance effort begins, usually to bring
order to an unacceptable level of chaos since the self-service user population didn't
receive sufficient guidance.

Pros:

Fastest to get started


Highly capable users can get things done quickly
Quick wins are achieved

Cons:

Higher effort to establish governance once Fabric is used prevalently throughout
the organization
Resistance from self-service users who are asked to change what they've been
doing
Self-service users need to figure out things on their own, which is inefficient and
results in inconsistencies
Self-service users need to use their best judgment, which produces technical debt
to be resolved

See other possible cons in the Governance challenges section below.

Method 2: In-depth governance planning first


Method 2 prioritizes control and stability. It lies at the opposite end of the spectrum
from method 1. Method 2 involves doing extensive governance planning before rolling
out Fabric. This situation is most likely to occur when the implementation of Fabric is led
by IT. It's also likely to occur when the organization operates in a highly regulated
industry, or when an existing data governance board imposes significant prerequisites
and up-front requirements.

Pros:

More fully prepared to meet regulatory requirements


More fully prepared to support the user community

Cons:

Favors enterprise content development more than self-service


Slower to allow the user population to begin to get value and improve decision-
making
Encourages poor habits and workarounds when there's a significant delay in
allowing the use of data for decision-making
Method 3: Iterative governance with rollouts
Method 3 seeks a balance between agility and governance. It's an ideal scenario that
does just enough governance planning upfront. Frequent and continual governance
improvements iteratively occur over time alongside Fabric development projects that
deliver value.

Pros:

Puts equal priority on governance and user productivity


Emphasizes a learning as you go mentality
Encourages iterative releases to groups of users in stages

Cons:

Requires a high level of communication to be successful with agile governance
practices
Requires additional discipline to keep documentation and training current
Introducing new governance guidelines and policies too often causes a certain
level of user disruption

For more information about up-front planning, see the Preparing to migrate to Power BI
article.

Governance challenges
If your organization has implemented Fabric without a governance approach or strategic
direction (as described above by method 1), there could be numerous challenges
requiring attention. Depending on the approach that you've taken and your current
state, some of the following challenges could be applicable to your organization.

Strategy challenges
Lack of a cohesive data governance strategy that aligns with the business strategy
Lack of executive support for governing data as a strategic asset
Insufficient adoption planning for advancing adoption and the maturity level of BI
and analytics

People challenges
Lack of aligned priorities between centralized teams and business units
Lack of identified champions with sufficient expertise and enthusiasm throughout
the business units to advance organizational adoption objectives
Lack of awareness of self-service best practices
Resistance to following newly introduced governance guidelines and policies
Duplicate effort spent across business units
Lack of clear accountability, roles, and responsibilities

Process challenges
Lack of clearly defined processes resulting in chaos and inconsistencies
Lack of standardization or repeatability
Insufficient ability to communicate and share lessons learned
Lack of documentation and over-reliance on tribal knowledge
Inability to comply with security and privacy requirements

Data quality and data management challenges


Sprawl of data and reports
Inaccurate, incomplete, or outdated data
Lack of trust in the data, especially for content produced by self-service content
creators
Inconsistent reports produced without sufficient data validation
Valuable data not used or difficult to access
Fragmented, siloed, and duplicated data
Lack of data catalog, inventory, glossary, or lineage
Unclear data ownership and stewardship

Skills and data literacy challenges


Varying levels of ability to interpret, create, and communicate with data effectively
Varying levels of technical skillsets and skill gaps
Lack of ability to confidently manage data diversity and volume
Underestimating the level of complexity for BI solution development and
management throughout its entire lifecycle
Short tenure with continual staff transfers and turnover
Coping with the speed of change for cloud services

 Tip
Identifying your current challenges—as well as your strengths—is essential for
proper governance planning. There's no single straightforward solution to the
challenges listed above. Each organization needs to find the right balance and
approach that solves the challenges that are most important to them. Reviewing the
challenges presented above will help you identify how they might affect your
organization, so you can start thinking about what the right solution is for your
circumstances.

Governance planning
Some organizations have implemented Fabric without a governance approach or clear
strategic direction (as described above by method 1). In this case, the effort to begin
governance planning can be daunting.

If a formal governance body doesn't currently exist in your organization, then the focus
of your governance planning and implementation efforts will be broader. If, however,
there's an existing data governance board in the organization, then your focus is
primarily to integrate with existing practices and customize them to accommodate the
objectives for self-service and enterprise data and BI scenarios.

Important

Governance is a big undertaking, and it's never completely done. Relentlessly
prioritizing and iterating on improvements will make the scope more manageable.
If you track your progress and accomplishments each week and each month, you'll
be amazed at the impact over time. The maturity levels at the end of each article in
this series can help you to assess where you are currently.

Some potential governance planning activities and outputs that you might find valuable
are described next.

Strategy
Key activities:

Conduct a series of workshops to gather information and assess the current state
of data culture, adoption, and data and BI practices. For guidance about how to
gather information and define the current state of BI adoption, including
governance, see BI strategic planning.
Use the current state assessment and information gathered to define the desired
future state, including governance objectives. For guidance about how to use this
current state definition to decide on your desired future state, see BI tactical
planning.
Validate the focus and scope of the governance program.
Identify existing bottom-up initiatives in progress.
Identify immediate pain points, issues, and risks.
Educate senior leadership about governance, and ensure executive sponsorship is
sufficient to sustain and grow the program.
Clarify where Power BI fits into the overall BI and analytics strategy for the
organization.
Assess internal factors such as organizational readiness, maturity levels, and key
challenges.
Assess external factors such as risk, exposure, regulatory, and legal requirements—
including regional differences.

Key output:

Business case with cost/benefit analysis


Approved governance objectives, focus, and priorities that are in alignment with
high-level business objectives
Plan for short-term goals and priorities (quick wins)
Plan for long-term and deferred goals and priorities
Success criteria and measurable key performance indicators (KPIs)
Known risks documented with a mitigation plan
Plan for meeting industry, governmental, contractual, and regulatory requirements
that impact BI and analytics in the organization
Funding plan

People
Key activities:

Establish a governance board and identify key stakeholders.


Determine focus, scope, and a set of responsibilities for the governance board.
Establish a COE.
Determine focus, scope, and a set of responsibilities for COE.
Define roles and responsibilities.
Confirm who has decision-making, approval, and veto authority.

Key output:
Charter for the governance board
Charter and priorities for the COE
Staffing plan
Roles and responsibilities
Accountability and decision-making matrix
Communication plan
Issue management plan

Policies and processes


Key activities:

Analyze immediate pain points, issues, risks, and areas to improve the user
experience.
Prioritize data policies to be addressed by order of importance.
Identify existing processes in place that work well and can be formalized.
Determine how new data policies will be socialized.
Decide to what extent data policies might differ or be customized for different
groups.

Key output:

Process for how data policies and documentation will be defined, approved,
communicated, and maintained
Plan for requesting valid exceptions and departures from documented policies

Project management
The implementation of the governance program should be planned and managed as a
series of projects.

Key activities:

Establish a timeline with priorities and milestones.


Identify related initiatives and dependencies.
Identify and coordinate with existing bottom-up initiatives.
Create an iterative project plan that's aligned with high-level prioritization.
Obtain budget approval and funding.
Establish a tangible way to track progress.

Key output:

Project plan with iterations, dependencies, and sequencing


Cadence for retrospectives with a focus on continual improvements

Important

The scope of activities listed above that are useful to take on will vary
considerably between organizations. If your organization doesn't have existing
processes and workflows for creating these types of outputs, refer to the guidance
found in the adoption roadmap conclusion for some helpful resources, as well as
the implementation planning BI strategy articles.

Governance policies

Decision criteria
All governance decisions should be in alignment with the established goals for
organizational adoption. Once the strategy is clear, more tactical governance decisions
will need to be made which affect the day-to-day activities of the self-service user
community. These types of tactical decisions correlate directly to the data policies that
get created.

How you go about making governance decisions depends on:

Who owns and manages the data and BI content? The Content ownership and
management article introduced three types of strategies: business-led self-service,
managed self-service, and enterprise. Who owns and manages the content has a
significant impact on governance requirements.
What is the scope for delivery of the data and BI content? The Content delivery
scope article introduced four scopes for delivery of content: personal, team,
departmental, and enterprise. The scope of delivery has a considerable impact on
governance requirements.
What is the data subject area? The data itself, including its sensitivity level, is an
important factor. Some data domains inherently require tighter controls. For
instance, personally identifiable information (PII), or data subject to regulations,
should be subject to stricter governance requirements than less sensitive data.
Is the data, and/or the BI solution, considered critical? If you can't make an
informed decision easily without this data, you're dealing with critical data
elements. Certain reports and apps could be deemed critical because they meet a
set of predefined criteria. For instance, the content is delivered to executives.
Predefined criteria for what's considered critical helps everyone have clear
expectations. Critical data is usually subject to stricter governance requirements.
 Tip

Different combinations of the above four criteria will result in different governance
requirements for Fabric content.

Key Fabric governance decisions


As you explore your goals and objectives and pursue more tactical data governance
decisions as described above, it will be important to determine what the highest
priorities are. Deciding where to focus your efforts can be challenging.

The following list includes items that you might choose to prioritize when introducing
governance for Fabric.

Recommendations and requirements for content ownership and management


Recommendations and requirements for content delivery scope
Recommendations and requirements for content distribution and sharing with
colleagues, as well as for external users, such as customers, partners, or vendors
How users are permitted to work with regulated data and highly sensitive data
Allowed use of unverified data sources that are unknown to IT
When manually maintained data sources, such as Excel or flat files, are permitted
Who is permitted to create a workspace
How to manage workspaces effectively
How personal workspaces are effectively used
Which workspaces are assigned to Fabric capacity
Who is allowed to be a Fabric administrator
Security, privacy, and data protection requirements, and allowed actions for
content assigned to each sensitivity label
Allowed or encouraged use of personal gateways
Allowed or encouraged use of self-service purchasing of user licenses
Requirements for who can certify content, as well as requirements that must be
met
Application lifecycle management for managing content through its entire
lifecycle, including development, test, and production stages
Additional requirements applicable to critical content, such as data quality
verifications and documentation
Requirements to use standardized master data and common data definitions to
improve consistency across data assets
Recommendations and requirements for use of external tools by advanced content
creators
If you don't make governance decisions and communicate them well, users will use their
own judgment for how things should work—and that often results in inconsistent
approaches to common tasks.

Although not every governance decision needs to be made upfront, it's important that
you identify the areas of greatest risk in your organization. Then, incrementally
implement governance policies and processes that will deliver the most impact.

Data policies
A data policy is a document that defines what users can and can't do. You might call it
something different, but the goal remains the same: when decisions—such as those
discussed in the previous section—are made, they're documented for use and reference
by the community of users.

A data policy should be as short as possible. That way, it's easy for people to understand
what is being asked of them.

A data policy should include:

Policy name, purpose, description, and details


Specific responsibilities
Scope of the policy (organization-wide versus departmental-specific)
Audience for the policy
Policy owner, approver, and contact
How to request an exception
How the policy will be audited and enforced
Regulatory or legal requirements met by the policy
Reference to terminology definitions
Reference to any related guidelines or policies
Effective date, last revision date, and change log
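
As a minimal illustration, the record below shows how the elements listed above could
be captured in a structured form, for example, to catalog policies for a centralized
portal. All field values here are hypothetical and for illustration only.

```python
# Hypothetical example of a data policy captured as a structured record.
# Field names mirror the elements listed above; all values are illustrative.
data_policy = {
    "name": "Data classification and protection policy",
    "purpose": "Define allowed actions per sensitivity level",
    "scope": "Organization-wide",
    "audience": "All Fabric content creators and consumers",
    "owner": "Data governance team",
    "approver": "Data governance board",
    "contact": "governance@contoso.com",  # placeholder address
    "exception_process": "Submit a request via the centralized portal",
    "audit_and_enforcement": "Quarterly review of activity log data",
    "regulatory_requirements": ["GDPR"],
    "related_policies": ["Data ownership policy"],
    "effective_date": "2023-11-14",
    "last_revision_date": "2023-11-14",
}

print(f"{data_policy['name']} (scope: {data_policy['scope']})")
```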

Note

Locate, or link to, data policies from your centralized portal.

Here are three common data policy examples you might choose to prioritize.

Policy Description

Data ownership policy: Specifies when an owner is required for a data asset, and what
the data owner's responsibilities include, such as: supporting colleagues who view the
content, maintaining appropriate confidentiality and security, and ensuring
compliance.

Data certification (endorsement) policy: Specifies the process that is followed to certify
content. Requirements might include activities such as: data accuracy validation, data
source and lineage review, technical review of the data model, security review, and
documentation review.

Data classification and protection policy: Specifies activities that are allowed and not
allowed per classification (sensitivity level). It should specify activities such as: allowed
sharing with external users, with or without a non-disclosure agreement (NDA),
encryption requirements, and ability to download the data. Sometimes, it's also called
a data handling policy or a data usage policy. For more information, see the
Information protection for Power BI article.

Caution

Having a lot of documentation can lead to a false sense that everything is under
control, which can lead to complacency. The level of engagement that the COE has
with the user community is one way to improve the chances that governance
guidelines and policies are consistently followed. Auditing and monitoring activities
are also important.
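
As one hedged illustration of such auditing, the sketch below pages through one day
of audit events using the Power BI admin activity events REST API, which also surfaces
Fabric activities. It assumes an access token with the required admin permissions has
already been acquired elsewhere (for example, via MSAL); the token value shown is a
placeholder, and the date range is an arbitrary example.

```python
# Illustrative sketch: retrieve one day of activity events through the
# Power BI admin REST API for auditing. Assumes an access token acquired
# elsewhere with admin permissions; the token below is a placeholder.
import requests

ACCESS_TOKEN = "<access-token>"  # placeholder; acquire via your auth flow

url = (
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
    "?startDateTime='2024-01-01T00:00:00Z'"
    "&endDateTime='2024-01-01T23:59:59Z'"
)
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

events = []
while url:
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    payload = response.json()
    events.extend(payload.get("activityEventEntities", []))
    url = payload.get("continuationUri")  # page through results

print(f"Retrieved {len(events)} audit events")
```

Events retrieved this way could then be loaded into a monitoring solution to watch
for policy violations, such as unexpected sharing or export activity.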

Scope of policies
Governance decisions will rarely be one-size-fits-all across the entire organization. When
practical, it's wise to start with standardized policies, and then implement exceptions as
needed. Having a clearly defined strategy for how policies will be handled for
centralized and decentralized teams will make it much easier to determine how to
handle exceptions.

Pros of organization-wide policies:

Much easier to manage and maintain


Greater consistency
Encompasses more use cases
Fewer policies overall

Cons of organization-wide policies:

Inflexible
Less autonomy and empowerment
Pros of departmental-scope policies:

Expectations are clearer when tailored to a specific group


Customizable and flexible

Cons of departmental-scope policies:

More work to manage


More policies that are siloed
Potential for conflicting information
Difficult to scale more broadly throughout the organization

 Tip

Finding the right balance of standardization and customization for supporting self-
service data and BI across the organization can be challenging. However, by
starting with organizational policies and mindfully watching for exceptions, you can
make meaningful progress quickly.

Staffing and accountability


The organizational structure for data governance varies substantially between
organizations. In larger organizations there might be a data governance office with
dedicated staff. Some organizations have a data governance board, council, or steering
committee with assigned members coming from different business units. Depending on
the extent of the data governance body within the organization, there could be an
executive team separate from a functional team of people.

Important

Regardless of how the governance body is structured, it's important that there's a
person or group with sufficient influence over data governance decisions. This
person should have authority to enforce those decisions across organizational
boundaries.

Checks and balances


Governance accountability is about checks and balances.
Starting at the bottom, the levels in the above diagram include:

Level Description

Operational - Business units: Level 1 is the foundation of a well-governed system, which
includes users within the business units performing their work. Self-service data and BI
creators have a lot of responsibilities related to authoring, publishing, sharing, security,
and data quality. Self-service data and BI consumers also have responsibilities for the
proper use of data.

Tactical - Supporting teams: Level 2 includes several groups that support the efforts of
the users in the business units. Supporting teams include the COE, enterprise data and BI,
the data governance office, as well as other ancillary teams. Ancillary teams can include IT,
security, HR, and legal. A change control board is included here as well.

Tactical - Audit and compliance: Level 3 includes internal audit, risk management, and
compliance teams. These teams provide guidance to levels 1 and 2. They also provide
enforcement when necessary.

Strategic - Executive sponsor and steering committee: The top level includes the
executive-level oversight of strategy and priorities. This level handles any escalated issues
that couldn't be solved at lower levels. Therefore, it's important to have a leadership team
with sufficient authority to be able to make decisions when necessary.

Important

Everyone has a responsibility to adhere to policies for ensuring that organizational
data is secure, protected, and well-managed as an organizational asset. Sometimes
this is cited as everyone is a data steward. To make this a reality, start with the users
in the business units (level 1 described above) as the foundation.
Roles and responsibilities
Once you have a sense for your governance strategy, roles and responsibilities should
be defined to establish clear expectations.

Governance team structure, roles (including terminology), and responsibilities vary
widely among organizations. Very generalized roles are described in the table below. In
some cases, the same person could serve multiple roles. For instance, the Chief Data
Officer (CDO) could also be the executive sponsor.

Role Description

Chief Data Officer or Chief Analytics Officer: Defines the strategy for use of data as an
enterprise asset. Oversees enterprise-wide governance guidelines and policies.

Data governance board: Steering committee with members from each business unit
who, as domain owners, are empowered to make enterprise governance decisions.
They make decisions on behalf of the business unit and in the best interest of the
organization. Provides approvals, decisions, priorities, and direction to the enterprise
data governance team and working committees.

Data governance team: Creates governance policies, standards, and processes.
Provides enterprise-wide oversight and optimization of data integrity, trustworthiness,
privacy, and usability. Collaborates with the COE to provide governance education,
support, and mentoring to data owners and content creators.

Data governance working committees: Temporary or permanent teams that focus on
individual governance topics, such as security or data quality.

Change management board: Coordinates the requirements, processes, approvals, and
scheduling for release management processes with the objective of reducing risk and
minimizing the impact of changes to critical applications.

Project management office: Manages individual governance projects and the ongoing
data governance program.

Fabric executive sponsor: Promotes adoption and the successful use of Fabric. Actively
ensures that Fabric decisions are consistently aligned with business objectives, guiding
principles, and policies across organizational boundaries. For more information, see
the Executive sponsorship article.

Center of Excellence: Mentors the community of creators and consumers to promote
the effective use of Fabric for decision-making. Provides cross-departmental
coordination of Fabric activities to improve practices, increase consistency, and reduce
inefficiencies. For more information, see the Center of Excellence article.

Fabric champions: A subset of content creators found within the business units who
help advance the adoption of Fabric. They contribute to data culture growth by
advocating the use of best practices and actively assisting colleagues. For more
information, see the Community of practice article.

Fabric administrators: Day-to-day system oversight responsibilities to support the
internal processes, tools, and people. Handles monitoring, auditing, and management.
For more information, see the System oversight article.

Information technology: Provides occasional assistance to Fabric administrators for
services related to Fabric, such as Microsoft Entra ID (previously known as Azure Active
Directory), Microsoft 365, Teams, SharePoint, or OneDrive.

Risk management: Reviews and assesses data sharing and security risks. Defines
ethical data policies and standards. Communicates regulatory and legal requirements.

Internal audit: Auditing of compliance with regulatory and internal requirements.

Data steward: Collaborates with the governance committee and/or COE to ensure that
organizational data has acceptable data quality levels.

All BI creators and consumers: Adhere to policies for ensuring that data is secure,
protected, and well-managed as an organizational asset.

 Tip

Name a backup for each person in key roles, for example, members of the data
governance board. In their absence, the backup person can attend meetings and
make time-sensitive decisions when necessary.

Considerations and key actions

Checklist - Considerations and key actions you can take to establish or strengthen your
governance initiatives.

" Align goals and guiding principles: Confirm that the high-level goals and guiding
principles of the data culture goals are clearly documented and communicated.
Ensure that alignment exists for any new governance guidelines or policies.
" Understand what's currently happening: Ensure that you have a deep
understanding of how Fabric is currently used for self-service and enterprise data
and BI scenarios. Document opportunities for improvement. Also, document
strengths and good practices that would be helpful to scale out more broadly.
" Prioritize new governance guidelines and policies: For prioritizing which new
guidelines or policies to create, select an important pain point, high priority need,
or known risk for a data domain. It should have significant benefit and can be
achieved with a feasible level of effort. When you implement your first governance
guidelines, choose something users are likely to support because the change is low
impact, or because they are sufficiently motivated to make a change.
" Create a schedule to review policies: Determine the cadence for how often data
policies are reevaluated. Reassess and adjust when needs change.
" Decide how to handle exceptions: Determine how conflicts, issues, and requests
for exceptions to documented policies will be handled.
" Understand existing data assets: Confirm that you understand what critical data
assets exist. Create an inventory of ownership and lineage, if necessary. Keep in
mind that you can't govern what you don't know about.
" Verify executive sponsorship: Confirm that you have support and sufficient
attention from your executive sponsor, as well as from business unit leaders.
" Prepare an action plan: Include the following key items:
Initial priorities: Select one data domain or business unit at a time.
Timeline: Work in iterations long enough to accomplish meaningful progress, yet
short enough to periodically adjust.
Quick wins: Focus on tangible, tactical, and incremental progress.
Success metrics: Create measurable metrics to evaluate progress.

Questions to ask

Use questions like those found below to assess governance.

At a high level, what's the current governance strategy? To what extent is the
purpose and importance of this governance strategy clear to both end users and
the central data and BI teams?
In general, is the current governance strategy effective?
What are the key regulatory and compliance criteria that the organization (or
specific business units) must adhere to? Where are these criteria documented? Is this
information readily available to people who work with data and share data items as
a part of their role?
How well does the current governance strategy align to the user's way of working?
Is a specific role or team responsible for governance in the organization?
Who has the authority to create and change governance policies?
Do governance teams use Microsoft Purview or another tool to support
governance activities?
What are the prioritized governance risks, such as risks to security, information
protection, and data loss prevention?
What's the potential business impact of the identified governance risks?
How frequently is the governance strategy re-evaluated? What metrics are used to
evaluate it, and what mechanisms exist for business users to provide feedback?
What types of user behaviors create risk when users work with data? How are
those risks mitigated?
What sensitivity labels are in place, if any? Are data and BI decision makers aware
of sensitivity labels and the benefits to the business?
What data loss prevention policies are in place, if any?
How is "Export to Excel" handled? What steps are taken to prevent data loss
prevention? What's the prevalence of "Export to Excel"? What do people do with
data once they have it in Excel?
Are there practices or solutions that are out of regulatory compliance that must be
urgently addressed? Are these examples justified with an explanation of the
potential business impact, should they not be addressed?

 Tip

"Export to Excel" is typically a controversial topic. Often, business users focus on the
requirement to have "Export to Excel" possible in BI solutions. Enabling "Export to
Excel" can be counter-productive because a business objective isn't to get data into
Excel. Instead, define why end users need the data in Excel. Ask what they do with
the data once it's in Excel, which business questions they try to answer, what
decisions they make, and what actions they take with the data.

Focusing on business decisions and actions helps steer focus away from tools and
features and toward helping people achieve their business objectives.

Maturity levels
The following maturity levels will help you assess the current state of your governance
initiatives.

Level State of governance

100: Initial • Due to a lack of governance planning, the good data management and
informal governance practices that are occurring are overly reliant on judgment and
experience level of individuals.

• There's a significant reliance on undocumented tribal knowledge.

200: Repeatable • Some areas of the organization have made a purposeful effort to
standardize, improve, and document their data management and governance
practices.

• An initial governance approach exists. Incremental progress is being made.

300: Defined • A complete governance strategy with focus, objectives, and priorities is
enacted and broadly communicated.

• Specific governance guidelines and policies are implemented for the top few
priorities (pain points or opportunities). They're actively and consistently followed
by users.

• Roles and responsibilities are clearly defined and documented.

400: Capable • All Fabric governance priorities align with organizational goals and
business objectives. Goals are reassessed regularly.

• Processes exist to customize policies for decentralized business units, or to
handle valid exceptions to standard governance policies.

• It's clear where Fabric fits into the overall data and BI strategy for the
organization.

• Fabric activity log and API data is actively analyzed to monitor and audit Fabric
activities. Proactive action is taken based on the data.

500: Efficient • Regular reviews of KPIs or OKRs evaluate measurable governance goals.
Iterative, continual progress is a priority.

• Agility and implementing continual improvements from lessons learned (including
scaling out methods that work) are top priorities for the COE.

• Fabric activity log and API data is actively used to inform and improve adoption
and governance efforts.
Next steps
In the next article in the Microsoft Fabric adoption roadmap series, learn about
mentoring and user enablement.
Microsoft Fabric adoption roadmap:
Mentoring and user enablement
Article • 11/14/2023

Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

A critical objective for adoption efforts is to enable users to accomplish as much as they
can within the requisite guardrails established by governance guidelines and policies.
For this reason, the act of mentoring users is one of the most important responsibilities
of the Center of Excellence (COE), and it has a direct influence on how user adoption
occurs. For more information about user adoption, see Microsoft Fabric adoption
maturity levels.

Skills mentoring
Mentoring and helping users in the Fabric community become more effective can take
on various forms, such as:

Office hours
Co-development projects
Best practices reviews
Extended support

Office hours
Office hours are a form of ongoing community engagement managed by the COE. As
the name implies, office hours are times of regularly scheduled availability where
members of the community can engage with experts from the COE to receive assistance
with minimal process overhead. Office hours are usually group-based, so Fabric
champions and other members of the community can also help solve an issue if a topic
is in their area of expertise.

Office hours are a very popular and productive activity in many organizations. Some
organizations call them drop-in hours or even a fun name such as Power Hour or Fabric
Fridays. The primary goal is usually to get questions answered, solve problems, and
remove blockers. Office hours can also be used as a platform for the user community to
share ideas, suggestions, and even complaints.

The COE publishes the times for regular office hours when one or more COE members
are available. Ideally, office hours are held on a regular and frequent basis. For instance,
it could be every Tuesday and Thursday. Consider offering different time slots or
rotating times if you have a global workforce.

 Tip

One option is to set specific office hours each week. However, users might not
show up, so that can end up being inefficient. Alternatively, consider leveraging
Microsoft Bookings to schedule office hours. It shows the blocks of time when
each COE expert is available, with Outlook integration ensuring availability is up to
date.

Office hours are an excellent user enablement approach because:

Content creators and the COE actively collaborate to answer questions and solve
problems together.
Real work is accomplished while learning and problem solving.
Others might observe, learn, and participate.
Individual groups can head to a breakout room to solve a specific problem.

Office hours benefit the COE as well because:

They're a great way for the COE to identify champions or users with specific skills
that the COE didn't previously know about.
The COE can learn what users throughout the organization are struggling with. It
helps inform whether additional resources, documentation, or training might be
required.

 Tip

It's common for some tough issues to come up during office hours that cannot be
solved quickly, such as getting a complex DAX calculation to work, or addressing
performance challenges in a complex solution. Set clear expectations for what's in
scope for office hours, and whether there's any commitment for follow-up.

Co-development projects
One way the COE can provide mentoring services is during a co-development project. A
co-development project is a form of assistance offered by the COE where a user or
business unit takes advantage of the technical expertise of the COE to solve business
problems with data. Co-development involves stakeholders from the business unit and
the COE working in partnership to build a high-quality self-service analytics or business
intelligence (BI) solution that the business stakeholders couldn't deliver independently.

The goal of co-development is to help the business unit develop expertise over time
while also delivering value. For example, the sales team has a pressing need to develop
a new set of commission reports, but the sales team doesn't yet have the knowledge to
complete it on their own.

A co-development project forms a partnership between the business unit and the COE.
In this arrangement, the business unit is fully invested, deeply involved, and assumes
ownership of the project.

The COE's involvement reduces over time until the business unit gains expertise and
becomes self-reliant.

The active involvement shown in the above diagram changes over time, as follows:

Business unit: 50% initially, up to 75%, finally at 98%-100%.


COE: 50% initially, down to 25%, finally at 0%-2%.

Ideally, the period for the gradual reduction in involvement is identified up-front in the
project. This way, both the business unit and the COE can sufficiently plan the timeline
and staffing.

Co-development projects can deliver significant short- and long-term benefits. In the
short term, the involvement from the COE can often result in a better-designed and
better-performing solution that follows best practices and aligns with organizational
standards. In the long term, co-development helps increase the knowledge and
capabilities of the business stakeholder, making them more self-sufficient, and more
confident to deliver quality self-service data and BI solutions in the future.

Important

Essentially, a co-development project helps less experienced users learn the right
way to do things. It reduces the risk that refactoring might be needed later, and it
increases the ability for a solution to scale and grow over time.

Best practices reviews


The COE could also offer best practices reviews. A best practices review can be extremely
helpful for content creators who would like to validate their work. Such reviews might also be known as advisory services, internal consulting time, or technical reviews. Unlike a co-
development project (described previously), a best practices review occurs after the
solution has been developed.

During a review, an expert from the COE evaluates self-service Fabric content developed
by a member of the community and identifies areas of risk or opportunities for
improvement.

Here are some examples of when a best practices review could be beneficial.

The sales team has a Power BI app that they intend to distribute to thousands of
users throughout the organization. Since the app represents high priority content
distributed to a large audience, they'd like to have it certified. The standard
process to certify content includes a best practices review.
The finance team would like to assign a workspace to a capacity. A review of the
workspace content is required to ensure sound development practices are
followed. This type of review is common when the capacity is shared among
multiple business units. (A review might not be required when the capacity is
assigned to only one business unit.)
The operations team is creating a new Fabric solution they expect to be widely
used. They would like to request a best practices review before it goes into user
acceptance testing (UAT), or before a request is submitted to the change
management board.

A best practices review is most often focused on the semantic model (previously known
as a dataset) design, though the review can encompass all types of data items (such as a
lakehouse, data warehouse, data pipeline, dataflow, or semantic model). The review can
also encompass reporting items (such as reports, dashboards, or metrics).

Before content is deployed, a best practices review can be used to verify other design
decisions, like:

Code in notebooks follows organizational standards and best practices.
The appropriate data preparation approach (dataflows, pipelines, notebooks, or others) is used where needed.
Data sources are appropriate, and query folding is invoked whenever possible when Power Query or dataflows are used.
Data preparation steps are clean, orderly, and efficient.
Connectivity mode and storage mode choices (for example, Direct Lake, import,
live connection, DirectQuery, and composite model frameworks) are appropriate.
Locations for data sources, like flat files, and original Power BI Desktop files are
suitable (preferably stored in a backed-up location with versioning and appropriate
security, such as Teams files or a SharePoint shared library).
Semantic models are well-designed, clean, and understandable, and use a star
schema design.
Relationships are configured correctly.
DAX calculations use efficient coding practices, particularly if the data model is large (see the sketch after this list).
The semantic model size is within a reasonable limit and data reduction techniques
are applied.
Row-level security (RLS) appropriately enforces data permissions.
Data is accurate and has been validated against the authoritative source(s).
Approved common definitions and terminology are used.
Good data visualization practices are followed, including designing for
accessibility.
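
For example, one practice a reviewer often checks for is the use of DAX variables so that an expression is evaluated only once. The following measure is a minimal sketch, assuming a hypothetical [Total Sales] base measure:

    -- Inefficient: [Total Sales] would be evaluated twice
    -- Sales Status = IF ( [Total Sales] > 0, [Total Sales], BLANK () )

    -- Efficient: the variable is evaluated once and reused
    Sales Status =
    VAR SalesAmount = [Total Sales]
    RETURN
        IF ( SalesAmount > 0, SalesAmount, BLANK () )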

After the content has been deployed, the best practices review isn't necessarily complete. The remainder of the review could include items such as:

The target workspace is suitable for the content.
Workspace security roles are appropriate for the content.
Other permissions (such as app audience permissions, Build permission, or use of
the individual item sharing feature) are correctly and appropriately configured.
Contacts are identified, and correctly correlate to the owners of the content.
Sensitivity labels are correctly assigned.
Fabric item endorsement (certified or promoted) is appropriate.
Data refresh is configured correctly, failure notifications include the proper users, and the appropriate data gateway in standard mode is used (if applicable).
All appropriate semantic model best practices rules are followed and, preferably,
are automated via a community tool called Best Practices Analyzer for maximum
efficiency and productivity.

Extended support
From time to time, the COE might get involved with complex issues escalated from the
help desk. For more information, see the User support article.

Note

Offering mentoring services might be a culture shift for your organization. Your
reaction might be that users don't usually ask for help with a tool like Excel, so why
would they with Power BI? The answer lies in the fact that Power BI and Fabric are
extraordinarily powerful tools. They provide data preparation and data modeling
capabilities in addition to data visualization. Having the ability to aid and enable
users can significantly improve their skills and increase the quality of their solutions
—it reduces risks too.

Centralized portal
A single centralized portal, or hub, is where the user community can find:

Access to the community Q&A forum.
Announcements of interest to the community, such as new features and release plan updates.
Schedules and registration links for office hours, lunch and learns, training
sessions, and user group meetings.
Announcements of key changes to content and change log (if appropriate).
How to request help or support.
Training materials.
Documentation, onboarding materials, and frequently asked questions (FAQ).
Governance guidance and approaches recommended by the COE.
Report templates.
Examples of best practices solutions.
Recordings of knowledge sharing sessions.
Entry points for accessing managed processes, such as license acquisition, access
requests, and gateway configuration.

 Tip
In general, only 10%-20% of your community will go out of their way to actively
seek out training and educational information. These types of users might naturally
evolve to become your champions. Everyone else is usually just trying to get the
job done as quickly as possible, because their time, focus, and energy are needed
elsewhere. Therefore, it's crucial to make information easy for your community
users to find.

The goal is to consistently direct users in the community to the centralized portal to find
information. The corresponding obligation for the COE is to ensure that the information
users need is available in the centralized portal. Keeping the portal updated requires
discipline when everyone is busy.

In larger organizations, it can be difficult to implement a single centralized portal. When it's not practical to consolidate into a single portal, a centralized hub can serve as an aggregator that contains links to the other locations.

Important

Although saving time finding information is important, the goal of a centralized portal is more than that. It's about making information readily available to help your user community do the right thing. They should be able to find information during their normal course of work, with as little friction as possible. Until it's easier to complete a task within the guardrails established by the COE and data governance team, some users will continue to complete their tasks by circumventing policies that are put in place. The recommended path must become the path of least resistance. Having a centralized portal can help achieve this goal.

It takes time for community users to think of the centralized portal as their natural first
stop for finding information. It takes consistent redirection to the portal to change
habits. Sending someone a link to an original document location in the portal builds
better habits than, for instance, including the answer in an email response. It's the same
challenge described in the User support article.

Training
A key factor for successfully enabling self-service users in a Fabric community is training.
It's important that the right training resources are readily available and easily
discoverable. While some users are so enthusiastic about analytics that they'll find
information and figure things out on their own, that isn't true for most of the user community.
Making sure your self-service users (particularly content creators and owners) have
access to the training resources they need to be successful doesn't mean that you need
to develop your own training content. Developing training content is often
counterproductive due to the rapidly evolving nature of the product. Fortunately, an
abundance of training resources is available in the worldwide community. A curated set
of links goes a long way to help users organize and focus their training efforts, especially
for tool training, which focuses on the technology. All external links should be validated
by the COE for accuracy and credibility. It's a key opportunity for the COE to add value
because COE stakeholders are in an ideal position to understand the learning needs of
the community, and to identify and locate trusted sources of quality learning materials.

You'll find the greatest return on investment by creating custom training materials for organization-specific processes, while relying on content produced by others for everything else. It's also useful to have a short training class that focuses primarily on topics like how to find documentation, how to get help, and how to interact with the community.

 Tip

One of the goals of training is to help users learn new skills while helping them
avoid bad habits. It can be a balancing act. For instance, you don't want to
overwhelm new users by adding in a lot of complexity and friction to a beginner-
level class for report creators. However, it's a great investment to make newer
content creators aware of things that could otherwise take them a while to figure
out. An ideal example is teaching the ability to use a live connection to report from
an existing semantic model. By teaching this concept at the earliest logical time,
you can save a less experienced creator from thinking they always need one semantic model for every report (and encourage the good habit of reusing existing semantic models across reports).

Some larger organizations experience continual employee transfers and turnover. Such
frequent change results in an increased need for a repeatable set of training resources.

Training resources and approaches


There are many training approaches because people learn in different ways. If you can
monitor and measure usage of your training materials, you'll learn over time what works
best.

Some training might be delivered more formally, such as classroom training with hands-
on labs. Other types of training are less formal, such as:
Lunch and learn presentations
Short how-to videos targeted to a specific goal
Curated set of online resources
Internal user group presentations
One-hour, one-week, or one-month challenges
Hackathon-style events

The advantages of encouraging knowledge sharing among colleagues are described in the Community of practice article.

 Tip

Whenever practical, learning should be correlated with building something meaningful and realistic. However, simple demo data does have value during a
training course. It allows a learner to focus on how to use the technology rather
than the data itself. After completion of introductory session(s), consider offering a
bring your own data type of session. These types of sessions encourage the learner
to apply their new technical skills to an actual business problem. Try to include
multiple facilitators from the COE during this type of follow-up session so questions
can be answered quickly.

The types of users you might target for training include:

Content owners, subject matter experts (SMEs), and workspace administrators
Data creators (for example, users who create semantic models for report creators
to use, or who create dataflows, lakehouses, or warehouses for other semantic
model creators to use)
Report creators
Content consumers and viewers
Satellite COE members and the champions network
Fabric administrators

Important

Each type of user represents a different audience that has different training needs.
The COE will need to identify how best to meet the needs of each audience. For
instance, one audience might find a standard introductory Power BI Desktop class
overwhelming, whereas another will want more challenging information with depth
and detail for end-to-end solutions that include multiple Fabric workloads. If you
have a diverse population of Fabric content creators, consider creating personas
and tailoring the experience to an extent that's practical.

The completion of training can be a leading indicator for success with user adoption.
Some organizations add an element of fun by granting badges, like blue belt or black
belt, as users progress through the training programs.

Give some consideration to how you want to handle users at various stages of user
adoption. Training needs are very different for:

Onboarding new users (sometimes referred to as training day zero).
Users with minimal experience.
More experienced users.

How the COE invests its time in creating and curating training materials will change over
time as adoption and maturity grows. You might also find over time that some
community champions want to run their own tailored set of training classes within their
functional business unit.

Sources for trusted Fabric training content


A curated set of online resources is valuable to help community members focus and
direct their efforts on what's important. Some publicly available training resources you
might find helpful include:

Microsoft Learn training for Power BI
Microsoft Learn training for Fabric
Power BI courses and "in a day" training materials
LinkedIn Learning for Power BI
LinkedIn Learning for Fabric

Consider using Microsoft Viva Learning, which is integrated into Microsoft Teams. It includes content from sources such as Microsoft Learn and LinkedIn Learning. Custom content produced by your organization can be included as well.

In addition to Microsoft content and custom content produced by your organization, you might choose to provide your user community with a curated set of recommended links to trusted online sources. There's a wide array of videos, blogs, and articles produced by the worldwide community. The community comprises Fabric and Power BI experts, Microsoft Most Valuable Professionals (MVPs), and enthusiasts. Providing a curated learning path that contains specific, reputable, current, and high-quality resources will provide the most value to your user community.
If you do make the investment to create custom in-house training, consider creating
short, targeted content that focuses on solving one specific problem. It makes the
training easier to find and consume. It's also easier to maintain and update over time.

 Tip

The Help and Support menu in the Fabric portal is customizable. When your
centralized location for training documentation is operational, update the tenant
setting in the Admin portal with the link. The link can then be accessed from the menu when users select the Get Help option. Also, be sure to teach users about the Help
ribbon tab in Power BI Desktop. It includes links to guided learning, training videos,
documentation, and more.

Documentation
Concise, well-written documentation can be a significant help for users trying to get
things done. Your needs for documentation, and how it's delivered, will depend on how
Fabric is managed in your organization. For more information, see the Content
ownership and management article.

Certain aspects of Fabric tend to be managed by a centralized team, such as the COE.
The following types of documentation are helpful in these situations:

How to request a Power BI license (and whether there are requirements for
manager approval)
How to request a new capacity
How to request a new workspace
How to request a workspace be added to an existing capacity
How to request access to a gateway data source
How to request software installation

 Tip

For certain activities that are repeated over and over, consider automating them
using Power Apps and Power Automate. In this case, your documentation will also
include how to access and use the Power Platform functionality.

Different aspects of your documentation can be managed by self-service users, decentralized teams, or by a centralized team. The following types of documentation might differ based on who owns and manages the content:
How to request a new report
How to request a report enhancement
How to request access to data
How to request new data be prepared and made available for use
How to request an enhancement to existing data or visualizations

 Tip

When planning for a centralized portal, as described earlier in this article, plan how
to handle situations when guidance or governance policies need to be customized
for one or more business units.

Some governance decisions will also have been made and should be documented, such as:

How to request content be certified
What are the approved file storage locations
What are the data retention and purge requirements
What are the requirements for handling sensitive data and personally identifiable
information (PII)

Documentation should be located in your centralized portal, which is a searchable location where, preferably, users already work. Either Teams or SharePoint works well. Creating documentation in either wiki pages or documents can work equally well, provided that the content is well organized and easy to find. Shorter documents
that focus on one topic are usually easier to consume than long, comprehensive
documents.

Important

One of the most helpful pieces of documentation you can publish for the
community is a description of the tenant settings, and the group memberships
required for each tenant setting. Users read about features and functionality online,
and sometimes find that it doesn't work for them. When they are able to quickly
look up your organization's tenant settings, it can save them from becoming
frustrated and attempting workarounds. Effective documentation can reduce the
number of help desk tickets that are submitted. It can also reduce the number of
people who need to be assigned the Fabric administrator role (who might have this
role solely for the purpose of viewing settings).

Over time, you might choose to allow certain types of documentation to be maintained
by the community if you have willing volunteers. In this case, you might want to
introduce an approval process for changes.

When you see questions repeatedly arise in the Q&A forum (as described in the User
support article), during office hours, or during lunch and learns, it's a great indicator that
creating new documentation might be appropriate. When the documentation exists, it
allows colleagues to reference it when needed. Documentation contributes to user
enablement and a self-sustaining community.

 Tip

When creating custom documentation or training materials, reference existing Microsoft sites using links whenever possible. Most community bloggers don't
keep blog posts or videos up to date.

Power BI template files


A Power BI template is a .pbit file. It can be provided as a starting point for content
creators. It's the same as a .pbix file, which can contain queries, a data model, and a
report, but with one exception: the template file doesn't contain any data. Therefore, it's
a smaller file that can be shared with content creators and owners, and it doesn't
present a risk of inappropriately sharing data.

Providing Power BI template files for your community is a great way to:

Promote consistency.
Reduce learning curve.
Show good examples and best practices.
Increase efficiency.

Power BI template files can improve efficiency and help people learn during the normal
course of their work. A few ways that template files are helpful include:

Reports can use examples of good visualization practices
Reports can incorporate organizational branding and design standards
Semantic models can include the structure for commonly used tables, like a date
table
Helpful DAX calculations can be included, like a year-over-year (YoY) calculation (see the sketch after this list)
Common parameters can be included, like a data source connection string
An example of report and/or semantic model documentation can be included
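
As an illustration, a template's semantic model might ship with a basic date table and a reusable YoY measure like the following minimal sketch. The [Total Sales] base measure is a hypothetical placeholder, and time intelligence functions such as SAMEPERIODLASTYEAR assume that the 'Date' table is marked as a date table in the model.

    -- Basic date table generated from the dates present in the model
    Date = CALENDARAUTO ()

    -- Reusable year-over-year measure; DIVIDE avoids division-by-zero errors
    Sales YoY % =
    VAR CurrentSales = [Total Sales]
    VAR PriorSales =
        CALCULATE ( [Total Sales], SAMEPERIODLASTYEAR ( 'Date'[Date] ) )
    RETURN
        DIVIDE ( CurrentSales - PriorSales, PriorSales )
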
Note

Providing templates not only saves your content creators time, it also helps them
move quickly beyond a blank page in an empty solution.

Power BI project files


A Power BI project is a .pbip file. Like a template file (previously described), a project file
doesn't contain any data. It's a file format that advanced content creators can use for
advanced data model and report management scenarios. For example, you can use
project files to save time in development by sharing common model patterns, like date
tables, DAX measure expressions, or calculation groups.

You can use Power BI project files with Power BI Desktop developer mode for:

Advanced editing and authoring (for example, in a code editor such as Visual
Studio Code).
Purposeful separation of semantic model and report items (unlike the .pbix or .pbit
files).
Enabling multiple content creators and developers to work on the same project
concurrently.
Integrating with source control (such as by using Fabric Git integration).
Using continuous integration and continuous delivery (CI/CD) techniques to automate the integration, testing, and deployment of changes or versions of content.

Note

Power BI includes capabilities such as .pbit template files and .pbip project files that
make it simple to share starter resources with authors. Other Fabric workloads
provide different approaches to content development and sharing. Having a set of
starter resources is important regardless of the items being shared. For example,
your portal might include a set of SQL scripts or notebooks that present tested
approaches to solve common problems.

Considerations and key actions


Checklist - Considerations and key actions you can take to establish, or improve,
mentoring and user enablement.

" Consider what mentoring services the COE can support: Decide what types of
mentoring services the COE is capable of offering. Types can include office hours,
co-development projects, and best practices reviews.
" Communicate regularly about mentoring services: Decide how you will
communicate and advertise mentoring services, such as office hours, to the user
community.
" Establish a regular schedule for office hours: Ideally, hold office hours at least once
per week (depending on demand from users as well as staffing and scheduling
constraints).
" Decide what the expectations will be for office hours: Determine what the scope
of allowed topics or types of issues users can bring to office hours. Also, determine
how the queue of office hours requests will work, whether any information should
be submitted ahead of time, and whether any follow up afterwards can be
expected.
" Create a centralized portal: Ensure that you have a well-supported centralized hub
where users can easily find training materials, documentation, and resources. The
centralized portal should also provide links to other community resources such as
the Q&A forum and how to find help.
" Create documentation and resources: In the centralized portal, create, compile,
and publish useful documentation. Identify and promote the top 3-5 resources that
will be most useful to the user community.
" Update documentation and resources regularly: Ensure that content is reviewed
and updated on a regular basis. The objective is to ensure that the information
available in the portal is current and reliable.
" Compile a curated list of reputable training resources: Identify training resources
that target the training needs and interests of your user community. Post the list in
the centralized portal and create a schedule to review and validate the list.
" Consider whether custom in-house training will be useful: Identify whether
custom training courses, developed in-house, will be useful and worth the time
investment. Invest in creating content that's specific to the organization.
" Provide templates and projects: Determine how you'll use templates including
Power BI template files and Power BI project files. Include the resources in your
centralized portal, and in training materials.
" Create goals and metrics: Determine how you'll measure effectiveness of the
mentoring program. Create KPIs (key performance indicators) or OKRs (objectives
and key results) to validate that the COE's mentoring efforts strengthen the
community and its ability to provide self-service BI.
Questions to ask

Use questions like those found below to assess mentoring and user enablement.

Is there an effective process in place for users to request training?
Is there a process in place to evaluate user skill levels (such as beginner,
intermediate, or advanced)? Can users study for and achieve Microsoft
certifications by using company resources?
What's the onboarding process to introduce new people in the user community to
data and BI solutions, tools, and processes?
Have all users followed the appropriate Microsoft Learn learning paths for their
roles during onboarding?
What kinds of challenges do users experience due to lack of training or
mentorship?
What impact does lack of enablement have on the business?
When users exhibit behavior that creates governance risks, are they punished or do
they undergo education and mentorship?
What training materials are in place to educate people about governance
processes and policies?
Where's the central documentation maintained? Who maintains it?
Do central resources exist, like organizational design guidelines, themes, or
template files?

Maturity levels

The following maturity levels will help you assess the current state of your mentoring
and user enablement.

100: Initial
• Some documentation and resources exist. However, they're siloed and inconsistent.
• Few users are aware of, or take advantage of, available resources.

200: Repeatable
• A centralized portal exists with a library of helpful documentation and resources.
• A curated list of training links and resources is available in the centralized portal.
• Office hours are available so the user community can get assistance from the COE.

300: Defined
• The centralized portal is the primary hub for community members to locate training, documentation, and resources. The resources are commonly referenced by champions and community members when supporting and learning from each other.
• The COE's skills mentoring program is in place to assist users in the community in various ways.

400: Capable
• Office hours have regular and active participation from all business units in the organization.
• Best practices reviews from the COE are regularly requested by business units.
• Co-development projects are repeatedly executed with success by the COE and members of business units.

500: Efficient
• Training, documentation, and resources are continually updated and improved by the COE to ensure the community has current and reliable information.
• Measurable and tangible business value is gained from the mentoring program by using KPIs or OKRs.

Next steps
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
community of practice.
Microsoft Fabric adoption roadmap:
Community of practice
Article • 11/14/2023

Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

A community of practice is a group of people with a common interest that interacts with,
and helps, each other on a voluntary basis. Using a tool such as Microsoft Fabric to
produce effective analytics is a common interest that can bring people together across
an organization.

The following diagram provides an overview of an internal community.

The above diagram shows the following:

The community of practice includes everyone with an interest in Fabric.
The Center of Excellence (COE) forms the nucleus of the community. The COE
oversees the entire community and interacts most closely with its champions.
Self-service content creators and subject matter experts (SMEs) produce, publish,
and support content that's used by their colleagues, who are consumers.
Content consumers view content produced by both self-service creators and
enterprise business intelligence (BI) developers.
Champions are a subset of the self-service content creators. Champions are in an
excellent position to support their fellow content creators to generate effective
analytics solutions.

Champions are the smallest group among creators and SMEs. Self-service content
creators and SMEs represent a larger number of people. Content consumers represent
the largest number of people in most organizations.

Note

All references to the Fabric community in this adoption series of articles refer to
internal users, unless explicitly stated otherwise. There's an active and vibrant
worldwide community of bloggers and presenters who produce a wealth of
knowledge about Fabric. However, internal users are the focus of this article.

For information about related topics including resources, documentation, and training
provided for the Fabric community, see the Mentoring and user enablement article.

Champions network
One important part of a community of practice is its champions. A champion is a self-
service content creator who works in a business unit that engages with the COE. A
champion is recognized by their peers as the go-to expert. A champion continually
builds and shares their knowledge even if it's not an official part of their job role.
Champions influence and help their colleagues in many ways including solution
development, learning, skills improvement, troubleshooting, and keeping up to date.

Champions emerge as leaders of the community of practice who:

Have a deep interest in analytics being used effectively and adopted successfully
throughout the organization.
Possess strong technical skills as well as domain knowledge for their functional
business unit.
Have an inherent interest in getting involved and helping others.
Are early adopters who are enthusiastic about experimenting and learning.
Can effectively translate business needs into solutions.
Communicate well with colleagues.

 Tip

To add an element of fun, some organizations refer to their champions network as ambassadors, Jedis, ninjas, or rangers. Microsoft has an internal community called BI
Champs.

Often, people aren't directly asked to become champions. Commonly, champions are
identified by the COE and recognized for the activities they're already doing, such as
frequently answering questions on an internal discussion channel or participating in
lunch and learn sessions.

Different approaches will be more effective for different organizations, and each
organization will find what works best for them as their maturity level increases.

Important

Someone very well might be acting in the role of a champion without even
knowing it, and without any formal recognition. The COE should always be on the
lookout for champions. COE members should actively monitor the discussion
channel to see who is particularly helpful. The COE should deliberately encourage
and support potential champions, and when appropriate, invite them into a
champions network to make the recognition formal.

Knowledge sharing
The overriding objective of a community of practice is to facilitate knowledge sharing
among colleagues and across organizational boundaries. There are many ways
knowledge sharing occurs. It could be during the normal course of work. Or, it could be
during a more structured activity, such as:

Discussion channel: A Q&A forum where anyone in the community can post and view messages. Often used for help and announcements. For more information, see the User support article.

Lunch and learn sessions: Regularly scheduled sessions where someone presents a short session about something they've learned or a solution they've created. The goal is to get a variety of presenters involved, because it's a powerful message to hear firsthand what colleagues have achieved.

Office hours with the COE: Regularly scheduled times when COE experts are available so the community can engage with them. Community users can receive assistance with minimal process overhead. For more information, see the Mentoring and user enablement article.

Internal blog posts or wiki posts: Short blog posts, usually covering technical how-to topics.

Internal analytics user group: A subset of the community that chooses to meet as a group on a regularly scheduled basis. User group members often take turns presenting to each other to share knowledge and improve their presentation skills.

Book club: A subset of the community selects a book to read on a schedule. They discuss what they've learned and share their thoughts with each other.

Internal analytics conferences or events: An annual or semi-annual internal conference that delivers a series of sessions focused on the needs of self-service content creators, subject matter experts, and stakeholders.

 Tip

Inviting an external presenter can reduce the effort level and bring a fresh
viewpoint for learning and knowledge sharing.

Incentives
A lot of effort goes into forming and sustaining a successful community. It's
advantageous to everyone to empower and reward users who work for the benefit of
the community.

Rewarding community members


Incentives that the entire community (including champions) find particularly rewarding
can include:

Contests with a small gift card or time off: For example, you might hold a
performance tuning event with the winner being the person who successfully
reduced the size of their data model the most.
Ranking based on help points: The more frequently someone participates in Q&A, the higher their status rises on a leaderboard. This type of gamification
promotes healthy competition and excitement. By getting involved in more
conversations, the participant learns and grows personally in addition to helping
their colleagues.
Leadership communication: Reach out to a manager when someone goes above
and beyond so that their leader, who might not be active in the community, sees
the value that their staff member provides.

Rewarding champions
Different types of incentives will appeal to different types of people. Some community
members will be highly motivated by praise and feedback. Some will be inspired by
gamification and a bit of fun. Others will highly value the opportunity to improve their
level of knowledge.

Incentives that champions find particularly rewarding can include:

More direct access to the COE: The ability to have connections in the COE is
valuable. It's depicted in the diagram shown earlier in this article.
Champion of the month: Publicly thank one of your champions for something
outstanding they did recently. It could be a fun tradition at the beginning of a
monthly lunch and learn.
A private experts discussion area: A private area for the champions to share ideas
and learn from each other is usually highly valued.
Specialized or deep dive information and training: Access to additional
information to help champions grow their skillsets (as well as help their colleagues)
will be appreciated. It could include attending advanced training classes or
conferences.

Communication plan
Communication with the community occurs through various types of communication
channels. Common communication channels include:

Internal discussion channel or forum.
Announcements channel.
Organizational newsletter.

The most critical communication objectives include ensuring your community members know the following:

The COE exists.
How to get help and support.
Where to find resources and documentation.
Where to find governance guidelines.
How to share suggestions and ideas.

 Tip

Consider requiring a simple quiz before a user is granted a Power BI or Fabric license. Quiz is a bit of a misnomer because it doesn't focus on any technical skills. Rather, it's a short series of questions to verify that the user knows where to find
help and resources. It sets them up for success. It's also a great opportunity to have
users acknowledge any governance policies or data privacy and protection
agreements you need them to be aware of. For more information, see the System
oversight article.

Types of communication
There are generally four types of communication to plan for:

New employee communications can be directed to new employees (and contractors). It's an excellent opportunity to provide onboarding materials for how
to get started. It can include articles on topics like how to get Power BI Desktop
installed, how to request a license, and where to find introductory training
materials. It can also include general data governance guidelines that all users
should be aware of.
Onboarding communications can be directed to employees who are just acquiring
a license or are getting involved with the community of practice. It presents an
excellent opportunity to provide the same materials as given to new employee
communications (as mentioned above).
Ongoing communications can include regular announcements and updates
directed to all users, or subsets of users, like:
Announcements about changes that are planned to key organizational content.
For example, changes are to be published for a critical shared semantic model
(previously known as a dataset) that's used heavily throughout the organization.
It can also include the announcement of new features. For more information
about planning for change, see the Tenant-level monitoring article.
Feature announcements, which are more likely to receive attention from the
reader if the message includes meaningful context about why it's important.
(Although an RSS feed can be a helpful technique, with the frequent pace of
change, it can become noisy and might be ignored.)
Situational communications can be directed to specific users or groups based on
a specific occurrence discovered while monitoring the platform. For example,
perhaps you notice a significant amount of sharing from the personal workspace of a particular user, so you choose to send them some information about the benefits
of workspaces and Power BI apps.

 Tip

One-way communication to the user community is important. Don't forget to also include bidirectional communication options to ensure the user community has an
opportunity to provide feedback.

Community resources
Resources for the internal community, such as documentation, templates, and training,
are critical for adoption success. For more information about resources, see the
Mentoring and user enablement article.

Considerations and key actions

Checklist - Considerations and key actions you can take for the community of practice
follow.

Initiate, grow, and sustain your champions network:

" Clarify goals: Clarify what your specific goals are for cultivating a champions
network. Make sure these goals align with your overall data and BI strategy, and
that your executive sponsor is on board.
" Create a plan for the champions network: Although some aspects of a champions
network will always be informally led, determine to what extent the COE will
purposefully cultivate and support champion efforts throughout individual business
units. Consider how many champions are ideal for each functional business area. Usually, one or two champions per area work well, but it can vary based on the size of the
team, the needs of the self-service community, and how the COE is structured.
" Decide on commitment level for champions: Decide what level of commitment
and expected time investment will be required of champions. Be aware that the
time investment will vary from person to person, and team to team due to different
responsibilities. Plan to clearly communicate expectations to people who are
interested in getting involved. Obtain manager approval when appropriate.
" Decide how to identify champions: Determine how you will respond to requests to
become a champion, and how the COE will seek out champions. Decide if you will
openly encourage interested employees to self-identify as a champion and ask to
learn more (less common). Or, whether the COE will observe efforts and extend a
private invitation (more common).
" Determine how members of the champions network will be managed: One
excellent option for managing who the champions are is with a security group.
Consider:
How you will communicate with the champions network (for example, in a Teams
channel, a Yammer group, and/or an email distribution list).
How the champions network will communicate and collaborate with each other
directly (across organizational boundaries).
Whether a private and exclusive discussion forum for champions and COE
members is appropriate.
" Plan resources for champions: Ensure members of the champions network have
the resources they need, including:
Direct access to COE members.
Influence on data policies being implemented (for example, requirements for a
semantic model certification policy).
Influence on the creation of best practices and guidance (for example,
recommendations for accessing a specific source system).
" Involve champions: Actively involve certain champions as satellite members of the
COE. For more information about ways to structure the COE, see the Center of
Excellence article.
" Create a feedback loop for champions: Ensure that members of the champions
network can easily provide information or submit suggestions to the COE.
" Routinely provide recognition and incentives for champions: Not only is praise an
effective motivator, but the act of sharing examples of successful efforts can
motivate and inspire others.

Improve knowledge sharing:

" Identify knowledge sharing activities: Determine what kind of activities for knowledge sharing fit well into the organizational data culture. Ensure that all
planned knowledge sharing activities are supportable and sustainable.
" Confirm roles and responsibilities: Verify who will take responsibility for
coordinating all knowledge sharing activities.

Introduce incentives:
" Identify incentives for champions: Consider what type of incentives you could offer
to members of your champions network.
" Identify incentives for community members: Consider what type of incentives you
could offer to your broader internal community.

Improve communications:

" Establish communication methods: Evaluate which methods of communication fit well in your data culture. Set up different ways to communicate, including history
retention and search.
" Identify responsibility: Determine who will be responsible for different types of
communication, how, and when.

Questions to ask

Use questions like those found below to assess the community of practice.

Is there a centralized portal for a community of practice to engage in knowledge sharing?
Do technical questions and requests for support always go through central teams
like the COE or support? Alternatively, to what extent is the community of practice
engaging in knowledge sharing?
Do any incentives exist for people to engage in knowledge sharing or improve
their skills with data and BI tools?
Is there a system of recognition to acknowledge significant self-service efforts in
teams?
Are champions recognized among the user community? If so, what explicit
recognition do they get for their expertise? How are they identified?
If no champions are recognized, are there any potential candidates?
What role do central teams envision that champions play in the community of practice?
How often do central data and BI teams engage with the user community? What
medium do these interactions take? Are they bidirectional discussions or
unidirectional communications?
How are changes and announcements communicated within the community of
practice?
Among the user community, who is the most enthusiastic about analytics and BI
tools? Who is the least enthusiastic, or the most negative, and why?

Maturity levels

The following maturity levels will help you assess the current state of your community of
practice.

100: Initial
• Some self-service content creators are doing great work throughout the organization. However, their efforts aren't recognized.
• Efforts to purposefully share knowledge across organizational boundaries are rare and unstructured.
• Communication is inconsistent, without a purposeful plan.

200: Repeatable
• The first set of champions is identified.
• The goals for a champions network are identified.
• Knowledge sharing practices are gaining traction.

300: Defined
• Knowledge sharing in multiple forms is a normal occurrence. Information sharing happens frequently and purposefully.
• Goals for transparent communication with the user community are defined.

400: Capable
• Champions are identified for all business units. They actively support colleagues in their self-service efforts.
• Incentives to recognize and reward knowledge sharing efforts are a common occurrence.
• Regular and frequent communication occurs based on a predefined communication plan.

500: Efficient
• Bidirectional feedback loops exist between the champions network and the COE.
• Key performance indicators measure community engagement and satisfaction.
• Automation is in place when it adds direct value to the user experience (for example, automatic access to a group that provides community resources).

Next steps
In the next article in the Microsoft Fabric adoption roadmap series, learn about user
support.
Microsoft Fabric adoption roadmap:
User support
Article • 11/14/2023

Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

This article addresses user support. It focuses primarily on the resolution of issues.

The first sections of this article focus on user support aspects you have control over
internally within your organization. The final topics focus on external resources that are
available.

For a description of related topics, including skills mentoring, training, documentation, and co-development assistance provided to the internal Fabric user community, see the
Mentoring and user enablement article. The effectiveness of those activities can
significantly reduce the volume of formal user support requests and improve the user experience overall.

Types of user support


If a user has an issue, do they know what their options are to resolve it?

The following diagram shows some common types of user support that organizations
employ successfully:

The six types of user support shown in the above diagram include:

Intra-team support (internal): Very informal. Support occurs when team members learn from each other during the natural course of their job.

Internal community support (internal): Can be organized informally, formally, or both. It occurs when colleagues interact with each other via internal community channels.

Help desk support (internal): Handles formal support issues and requests.

Extended support (internal): Involves handling complex issues escalated by the help desk.

Microsoft support (external): Includes support for licensed users and Fabric administrators. It also includes comprehensive documentation.

Community support (external): Includes the worldwide community of experts, Microsoft Most Valuable Professionals (MVPs), and enthusiasts who participate in forums and publish content.

In some organizations, intra-team and internal community support are most relevant for self-service data and business intelligence (BI), where content is owned and managed by creators and owners in decentralized business units. Conversely, the help desk and extended support are reserved for technical issues and enterprise data and BI (content is owned and managed by a centralized BI team or Center of Excellence). In some organizations, all four internal types of support could be relevant for any type of content.

 Tip

For more information about business-led self-service, managed self-service, and enterprise data and BI concepts, see the Content ownership and management
article.

Each of the six types of user support introduced above is described in further detail in this article.

Intra-team support
Intra-team support refers to when team members learn from and help each other during
their daily work. Self-service content creators who emerge as your champions tend to
take on this type of informal support role voluntarily because they have an intrinsic
desire to help. Although it's an informal support mode, it shouldn't be undervalued.
Some estimates indicate that a large percentage of learning at work is peer learning,
which is particularly helpful for analysts who are creating domain-specific analytics
solutions.
Note

Intra-team support does not work well for individuals who are the only data analyst
within a department. It's also not effective for those who don't have very many
connections yet in their organization. When there aren't any close colleagues to
depend on, other types of support, as described in this article, become more
important.

Internal community support


Assistance from your fellow community members often takes the form of messages in a
discussion channel, or a forum set up specifically for the community of practice. For
example, someone posts a message that they're having problems getting a DAX
calculation to work or are looking for the right Python module to import. They then
receive a response from someone in the organization with suggestions or links.

 Tip

The goal of an internal Fabric community is to be self-sustaining, which can lead to reduced formal support demands and costs. It can also facilitate managed self-
service content creation occurring on a broader scale versus a purely centralized
approach. However, there will always be a need to monitor, manage, and nurture
the internal community. Here are two specific tips:

Be sure to cultivate multiple experts in the more difficult topics like T-SQL,
Python, Data Analysis Expressions (DAX), and the Power Query M formula
language. When a community member becomes a recognized expert, they
could become overburdened with too many requests for help.
A greater number of community members might readily answer certain types
of questions (for example, report visualizations), whereas a smaller number of
members will answer others (for example, complex T-SQL or DAX). It's
important for the COE to allow the community a chance to respond yet also
be willing to promptly handle unanswered questions. If users repeatedly ask
questions and don't receive an answer, it will significantly hinder growth of
the community. In this case, a user is likely to leave and never return if they
don't receive any responses to their questions.
An internal community discussion channel is commonly set up as a Teams channel or a
Yammer group. The technology chosen should reflect where users already work, so that
the activities occur within their natural workflow.

One benefit of an internal discussion channel is that responses can come from people
that the original requester has never met before. In larger organizations, a community of
practice brings people together based on a common interest. It can offer diverse
perspectives for getting help and learning in general.

Use of an internal community discussion channel allows the Center of Excellence (COE)
to monitor the kind of questions people are asking. It's one way the COE can understand
the issues users are experiencing (commonly related to content creation, but it could
also be related to consuming content).

Monitoring the discussion channel can also reveal additional analytics experts and
potential champions who were previously unknown to the COE.

Important

It's a best practice to continually identify emerging champions, and to engage with
them to make sure they're equipped to support their colleagues. As described in
the Community of practice article, the COE should actively monitor the discussion
channel to see who is being helpful. The COE should deliberately encourage and
support community members. When appropriate, invite them into the champions
network.

Another key benefit of a discussion channel is that it's searchable, which allows other
people to discover the information. It is, however, a change of habit for people to ask
questions in an open forum rather than private messages or email. Be sensitive to the
fact that some individuals aren't comfortable asking questions in such a public way. It
openly acknowledges what they don't know, which might be embarrassing. This
reluctance might reduce over time by promoting a friendly, encouraging, and helpful
discussion channel.

 Tip

You might be tempted to create a bot to handle some of the most common,
straightforward questions from the community. A bot can work for uncomplicated
questions such as "How do I request a license?" or "How do I request a
workspace?" Before taking this approach, consider if there are enough routine and
predictable questions that would make the user experience better rather than
worse. Often, a well-created FAQ (frequently asked questions) works better, and it's
faster to develop and easier to maintain.

Help desk support


The help desk is usually operated as a shared service, staffed by the IT department.
Users who will likely rely on a more formal support channel include those who are:

Less experienced users.
Newer to the organization.
Reluctant to post a message to the internal discussion community.
Lacking connections and colleagues within the organization.

There are also certain technical issues that can't be fully resolved without IT involvement,
like software installation and upgrade requests when machines are IT-managed.

Busy help desk personnel are usually dedicated to supporting multiple technologies. For
this reason, the easiest types of issues to support are those which have a clear resolution
and can be documented in a knowledgebase. For instance, software installation
prerequisites or requirements to get a license.

Some organizations ask the help desk to handle only very simple break-fix issues. Other
organizations have the help desk get involved with anything that is repeatable, like new
workspace requests, managing gateway data sources, or requesting a new capacity.

Important

Your Fabric governance decisions will directly impact the volume of help desk
requests. For example, if you choose to limit workspace creation permissions in
the tenant settings, it will result in users submitting help desk tickets. While it's a
legitimate decision to make, you must be prepared to satisfy the request very
quickly. Respond to this type of request within 1-4 hours, if possible. If you delay
too long, users will use what they already have or find a way to work around your
requirements. That might not be the ideal scenario. Promptness is critical for certain
help desk requests. Consider that automation by using Power Apps and Power
Automate can help make some processes more efficient. For more information, see
Tenant-level workspace planning.

Over time, troubleshooting and problem resolution skills become more effective as help
desk personnel expand their knowledgebase and experience with supporting Fabric. The
best help desk personnel are those who have a good grasp of what users need to
accomplish.

 Tip

Purely technical issues, for example data refresh failure or the need to add a new
user to a gateway data source, usually involve straightforward responses
associated with a service-level agreement (SLA). For instance, there could be an SLA
to respond to blocking issues within one hour and resolve them within eight hours.
It's generally more difficult to define SLAs for troubleshooting issues, like data
discrepancies.

Extended support
Since the COE has deep insight into how Fabric is used throughout the organization,
they're a great option for extended support should a complex issue arise. Involving the
COE in the support process should happen through an escalation path.

Managing requests purely as an escalation path from the help desk is difficult to
enforce because COE members are often well-known to business users. To encourage the
habit of going through the proper channels, COE members should redirect users to
submit a help desk ticket. Doing so will also improve the data quality for analyzing help
desk requests.

Microsoft support
In addition to the internal user support approaches discussed in this article, there are
valuable external support options directly available to users and Fabric administrators
that shouldn't be overlooked.

Microsoft documentation
Check the Fabric support website for high-priority issues that broadly affect all
customers. Global Microsoft 365 administrators have access to additional support issue
details within the Microsoft 365 portal.

Refer to the comprehensive Fabric documentation. It's an authoritative resource that can
aid troubleshooting and help you find information. You can prioritize results from the
documentation site by entering a site-targeted search request into your web
search engine, like power bi gateway site:learn.microsoft.com .
Power BI Pro and Premium Per User end-user support
Licensed users are eligible to log a support ticket with Microsoft.

 Tip

Make it clear to your internal user community whether you prefer technical issues
to be reported to the internal help desk. If your help desk is equipped to handle the
workload, having a centralized internal area to collect user issues can provide a
superior user experience versus every user trying to resolve issues on their own.
Visibility into support issues, and the ability to analyze them, is also helpful for the COE.

Administrator support
There are several support options available for Fabric administrators.

For customers who have a Microsoft Unified Support contract, consider granting help
desk and COE members access to the Microsoft Services Hub. One advantage of the
Microsoft Services Hub is that your help desk and COE members can be set up to
submit and view support requests.

Worldwide community support


In addition to the internal user support approaches described in this article and the
Microsoft support options described previously, you can leverage the worldwide Fabric
community.

The worldwide community is useful when a question can be easily understood by
someone without domain knowledge, and when it doesn't involve confidential data or
sensitive internal processes.

Publicly available community forums


There are several public community forums where users can post issues and receive
responses from any user in the world. Getting answers from anyone, anywhere, can be
very powerful and exceedingly helpful. However, as is the case with any public forum, it's
important to validate the advice and information posted on the forum. The advice
posted on the internet might not be suitable for your situation.

Publicly available discussion areas


It's very common to see people posting Fabric technical questions on social media
platforms. You might find discussions, announcements, and users helping each
other.

Community documentation
The Fabric global community is vibrant. Every day, there are a great number of Fabric
blog posts, articles, webinars, and videos published. When relying on community
information for troubleshooting, watch out for:

How recent the information is. Try to verify when it was published or last updated.
Whether the situation and context of the solution found online truly fits your
circumstance.
The credibility of the information being presented. Rely on reputable blogs and
sites.

Considerations and key actions

Checklist - Considerations and key actions you can take for user support follow.

Improve your intra-team support:

" Provide recognition and encouragement: Provide incentives to your champions as


described in the Community of practice article.
" Reward efforts: Recognize and praise meaningful grassroots efforts when you see
them happening.
" Create formal roles: If informal intra-team efforts aren't adequate, consider
formalizing the roles you want to enact in this area. Include the expected
contributions and responsibilities in the HR job description, when appropriate.

Improve your internal community support:

" Continually encourage questions: Encourage users to ask questions in the


designated community discussion channel. As the habit builds over time, it will
become normalized to use that as the first option. Over time, it will evolve to
become more self-supporting.
" Actively monitor the discussion area: Ensure that the appropriate COE members
actively monitor this discussion channel. They can step in if a question remains
unanswered, improve upon answers, or make corrections when appropriate. They
can also post links to additional information to raise awareness of existing
resources. Although the goal of the community is to become self-supporting, it still
requires dedicated resources to monitor and nurture it.
" Communicate options available: Make sure your user population knows the
internal community support area exists. It could include the prominent display of
links. You can include a link in regular communications to your user community. You
can also customize the help menu links in the Fabric portal to direct users to your
internal resources.
" Set up automation: Ensure that all licensed users automatically have access to the
community discussion channel. It's possible to automate license setup by using
group-based licensing.
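
To make the last item concrete, here's a minimal Python sketch of group-based licensing
automation through the Microsoft Graph assignLicense endpoint. It assumes an app
registration with the Group.ReadWrite.All application permission; the tenant, client,
group, and SKU IDs, plus the secret, are placeholders you'd replace with your own values.

```python
# Hypothetical sketch: assign a license to a Microsoft Entra security group so
# that all group members inherit it (group-based licensing).
# Assumes an app registration with the Group.ReadWrite.All application permission.
import msal
import requests

TENANT_ID = "00000000-0000-0000-0000-000000000000"   # placeholder
CLIENT_ID = "00000000-0000-0000-0000-000000000000"   # placeholder
CLIENT_SECRET = "your-secret"                        # placeholder
GROUP_ID = "00000000-0000-0000-0000-000000000000"    # community channel group
SKU_ID = "00000000-0000-0000-0000-000000000000"      # license SKU to assign

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

response = requests.post(
    f"https://graph.microsoft.com/v1.0/groups/{GROUP_ID}/assignLicense",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    json={"addLicenses": [{"skuId": SKU_ID}], "removeLicenses": []},
)
response.raise_for_status()
print("License assigned to group; members inherit it automatically.")
```

Because the license is attached to the group rather than to individuals, anyone added to
the community group later inherits the license automatically.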

Improve your internal help desk support:

" Determine help desk responsibilities: Decide what the initial scope of Fabric
support topics that the help desk will handle.
" Assess the readiness level: Determine whether your help desk is prepared to
handle Fabric support. Identify whether there are readiness gaps to be addressed.
" Arrange for additional training: Conduct knowledge transfer sessions or training
sessions to prepare the help desk staff.
" Update the help desk knowledgebase: Include known questions and answers in a
searchable knowledgebase. Ensure someone is responsible for regular updates to
the knowledgebase to reflect new and enhanced features over time.
" Set up a ticket tracking system: Ensure a good system is in place to track requests
submitted to the help desk.
" Decide whether anyone will be on-call for any issues related to Fabric: If
appropriate, ensure the expectations for 24/7 support are clear.
" Determine what SLAs will exist: When a specific service level agreement (SLA)
exists, ensure that expectations for response and resolution are clearly documented
and communicated.
" Be prepared to act quickly: Be prepared to address specific common issues
extremely quickly. Slow support response will result in users finding workarounds.

Improve your internal COE extended support:

" Determine how escalated support will work: Decide what the escalation path will
be for requests the help desk cannot directly handle. Ensure that the COE (or
equivalent personnel) is prepared to step in when needed. Clearly define where
help desk responsibilities end, and where COE extended support responsibilities
begin.
" Encourage collaboration between COE and system administrators: Ensure that
COE members and Fabric administrators have a direct escalation path to reach
global administrators for Microsoft 365 and Azure. It's critical to have a
communication channel when a widespread issue arises that's beyond the scope of
Fabric.
" Create a feedback loop from the COE back to the help desk: When the COE learns
of new information, the IT knowledgebase should be updated. The goal is for the
primary help desk personnel to continually become better equipped at handling
more issues in the future.
" Create a feedback loop from the help desk to the COE: When support personnel
observe redundancies or inefficiencies, they can communicate that information to
the COE, who might choose to improve the knowledgebase or get involved
(particularly if it relates to governance or security).

Questions to ask

Use questions like those found below to assess user support.

Who is responsible for supporting enterprise data and BI solutions? What about
self-service solutions?
How are the business impact and urgency of issues identified to effectively detect
and prioritize critical issues?
Is there a clear process for business users to report issues with data and BI
solutions? How does this differ between enterprise and self-service solutions?
What are the escalation paths?
What types of issues do content creators and consumers typically experience? For
example, do they experience data quality issues, performance issues, access issues,
and others?
Are any issues closed without them being resolved? Are there "known issues" in
data items or reports today?
Is a process in place for data asset owners to escalate issues with self-service BI
solutions to central teams like the COE?
How frequent are issues in the data and existing solutions? What proportion of
these issues are found before they impact business end users?
How long does it typically take to resolve issues? Is this timing sufficient for
business users?
What are examples of recent issues and the concrete impact on the business?
Do enterprise teams and content creators know how to report Fabric issues to
Microsoft? Can enterprise teams effectively leverage community resources to
unblock critical issues?

U Caution

When assessing user support and describing risks or issues, be careful to use
neutral language that doesn't place blame on individuals or teams. Ensure
everyone's perspective is fairly represented in an assessment. Focus on objective
facts to accurately understand and describe the context.

Maturity levels

The following maturity levels will help you assess the current state of your Fabric user
support.

Level State of user support

100: Initial • Individual business units find effective ways of supporting each other. However,
the tactics and practices are siloed and not consistently applied.

• An internal discussion channel is available. However, it's not monitored closely.
Therefore, the user experience is inconsistent.

200: Repeatable • The COE actively encourages intra-team support and growth of the champions
network.

• The internal discussion channel gains traction. It's become known as the default
place for questions and discussions.

• The help desk handles a small number of the most common technical support
issues.

300: Defined • The internal discussion channel is popular and largely self-sustaining. The COE
actively monitors and manages the discussion channel to ensure that all questions
are answered quickly and correctly.

• A help desk tracking system is in place to monitor support frequency, response
topics, and priorities.

• The COE provides appropriate extended support when required.

400: Capable • The help desk is fully trained and prepared to handle a broader number of
known and expected technical support issues.

• SLAs are in place to define help desk support expectations, including extended
support. The expectations are documented and communicated so they're clear to
everyone involved.

500: Efficient • Bidirectional feedback loops exist between the help desk and the COE.

• Key performance indicators measure satisfaction and support methods.

• Automation is in place to allow the help desk to react faster and reduce errors
(for example, use of APIs and scripts).

Next steps
In the next article in the Microsoft Fabric adoption roadmap series, learn about system
oversight and administration activities.
Microsoft Fabric adoption roadmap:
System oversight
Article • 11/24/2023

7 Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

System oversight—also known as Fabric administration—comprises the ongoing, day-to-day
administrative activities. It's specifically concerned with:

Governance: Enact governance guidelines and policies to support self-service and
enterprise data and business intelligence (BI) scenarios.
User empowerment: Facilitate and support the internal processes and systems that
empower the internal user community to the extent possible, while adhering to the
organization's regulations and requirements.
Adoption: Allow for broader organizational adoption of Fabric with effective
governance and data management practices.

) Important

Your organizational data culture objectives provide direction for your governance
decisions, which in turn dictate how Fabric administration activities take place and
by whom.

System oversight is a broad and deep topic. The goal of this article is to introduce some
of the most important considerations and actions to help you become successful with
your organizational adoption objectives.

Fabric administrators
The Fabric administrator role is a defined role in Microsoft 365, which delegates a subset
of management activities. Global Microsoft 365 administrators are implicitly Fabric
administrators. Power Platform administrators are also implicitly Fabric administrators.

A key governance decision is who to assign as a Fabric administrator. It's a centralized
role that affects your entire tenant. Ideally, there are two to four people in the
organization who are capable of managing Fabric. Your administrators should operate in
close coordination with the Center of Excellence (COE).

High privilege role


The Fabric administrator role is a high privilege role because:

User experience: Settings that are managed by a Fabric administrator have a
significant effect on user capabilities and user experience. For more information,
see Govern tenant settings.
Full security access: Fabric administrators can update access permissions for
workspaces in the tenant. The result is that an administrator can allow permission
to view or download data and reports as they see fit. For more information, see
Govern tenant settings.
Personal workspace access: Fabric administrators can access contents and govern
the personal workspace of any user.
Metadata: Fabric administrators can view all tenant metadata, including all user
activities that occur in the Fabric portal (described in the Auditing and monitoring
section below).

) Important

Having too many Fabric administrators is a risk. It increases the probability of
unapproved, unintended, or inconsistent management of the tenant.

Roles and responsibilities


The types of activities that an administrator will do on a day-to-day basis will differ
between organizations. What's important, and given priority in your data culture, will
heavily influence what an administrator does to support business-led self-service,
managed self-service, and enterprise data and BI scenarios. For more information, see
the Content ownership and management article.

 Tip

The best type of person to serve as a Fabric administrator is one who has enough
knowledge about the tools and workloads to understand what self-service users
need to accomplish. With this understanding, the administrator can balance user
empowerment and governance.
In addition to the Fabric administrator, there are other roles that use the term
administrator. The following roles are commonly and regularly used.

Fabric administrator (scope: tenant): Manages tenant settings and other settings in the
Fabric portal. All general references to administrator in this article refer to this type
of administrator.

Capacity administrator (scope: one capacity): Manages workspaces and workloads, and
monitors the health of a Fabric capacity.

Data gateway administrator (scope: one gateway): Manages gateway data source
configuration, credentials, and user assignments. Might also handle gateway software
updates (or collaborate with the infrastructure team on updates).

Workspace administrator (scope: one workspace): Manages workspace settings and access.

The Fabric ecosystem of workloads is broad and deep. There are many ways that Fabric
integrates with other systems and platforms. From time to time, it'll be necessary to
work with other administrators and IT professionals. For more information, see
Collaborate with other administrators.

The remainder of this article provides an overview of the most common activities that a
Fabric administrator does. It focuses on activities that are important to carry out
effectively when taking a strategic approach to organizational adoption.

Service management
Overseeing the tenant is a crucial aspect to ensure that all users have a good experience
with Power BI. A few of the key governance responsibilities of a Fabric administrator
include:

Tenant settings: Control which Power BI features and capabilities are enabled, and
for which users in your organization.
Domains: Group together two or more workspaces that have similar
characteristics.
Workspaces: Review and manage workspaces in the tenant.
Embed codes: Govern which reports have been published publicly on the internet.
Organizational visuals: Register and manage organizational visuals.
Azure connections: Integrate with Azure services to provide additional
functionality.
For more information, see Tenant administration.

User machines and devices


The adoption of Fabric depends directly on content creators and consumers having the
tools and applications they need. Here are some important questions to consider.

How will users request access to new tools? Will access to licenses, data, and
training be available to help users use tools effectively?
How will content consumers view content that's been published by others?
How will content creators develop, manage, and publish content? What's your
criteria for deciding which tools and applications are appropriate for which use
cases?
How will you install and set up tools? Does that include related prerequisites and
data connectivity components?
How will you manage ongoing updates for tools and applications?

For more information, see User tools and devices.

Architecture
In the context of Fabric, architecture relates to data architecture, capacity management,
and data gateway architecture and management.

Data architecture
Data architecture refers to the principles, practices, and methodologies that govern and
define what data is collected, and how it's ingested, stored, managed, integrated,
modeled, and used.

There are many data architecture decisions to make. Frequently the COE engages in
data architecture design and planning. It's common for administrators to get involved as
well, especially when they manage databases or Azure infrastructure.

) Important

Data architecture decisions have a significant impact on Fabric adoption, user
satisfaction, and individual project success rates.

A few data architecture considerations that affect adoption include:


Where does Fabric fit into the organization's entire data architecture? Are there
other existing components such as an enterprise data warehouse (EDW) or a data
lake that will be important to factor into plans?
Is Fabric used end-to-end for data preparation, data modeling, and data
presentation or is Fabric used for only some of those capabilities?
Are managed self-service patterns followed to find the best balance between data
reusability and report creator flexibility?
Where will users consume the content? Generally, the three main ways to deliver
content are: the Fabric portal, Power BI Report Server, and embedded in custom
applications. Additionally, Microsoft Teams is a convenient alternative for users
who spend a lot of time in Teams.
Who is responsible for managing and maintaining the data architecture? Is it a
centralized team, or a decentralized team? How is the COE represented in this
team? Are certain skillsets required?
What data sources are the most important? What types of data will we be
acquiring?
What semantic model connectivity mode and storage mode choices (for example,
Direct Lake, import, live connection, DirectQuery, or composite model frameworks)
are the best fit for the use cases?
To what extent is data reusability encouraged using lakehouses, warehouses, and
shared semantic models?
To what extent is the reusability of data preparation logic and advanced data
preparation encouraged by using data pipelines, notebooks, and dataflows?

It's important for administrators to become fully aware of Fabric's technical capabilities
—as well as the needs and goals of their stakeholders—before they make architectural
decisions.

 Tip

Get into the good habit of completing a technical proof of concept (POC) to test
out assumptions and ideas. Some organizations also call them micro-projects when
the goal is to deliver a small unit of work. The goal of a POC is to address
unknowns and reduce risk as early as possible. A POC doesn't have to be
throwaway work, but it should be narrow in scope. Best practices reviews, as
described in the Mentoring and user enablement article, are another useful way to
help content creators with important architectural decisions.

Capacity management
Capacity includes features and capabilities to deliver analytics solutions at scale. There
are two types of Fabric organizational licenses: Premium per User (PPU) and capacity.
There are several types of capacity licenses. The type of capacity license determines
which Fabric workloads are supported.

The use of capacity can play a significant role in your strategy for creating, managing,
publishing, and distributing content. A few of the top reasons to invest in capacity
include:

Unlimited Power BI content distribution to large numbers of read-only users.
Content consumption by users with a free Power BI license is available in Premium
capacity only, not PPU. Content consumption by free users is also available with an
F64 Fabric capacity license or higher.
Access to Fabric experiences for producing end-to-end analytics.
Deployment pipelines to manage the publication of content to development, test,
and production workspaces. They're highly recommended for critical content to
improve release stability.
XMLA endpoint, which is an industry standard protocol for managing and
publishing a semantic model, or querying the semantic model from any XMLA-
compliant tool.
Increased model size limits, including large semantic model support.
More frequent data refreshes.
Storage of data in a specific geographic area that's different from the home region.

The above list isn't all-inclusive. For a complete list, see Power BI Premium features.

Manage Fabric capacity

Overseeing the health of Fabric capacity is an essential ongoing activity for
administrators. Each capacity SKU includes a set of resources. Capacity units (CUs) are
used to measure compute resources for each SKU.

U Caution

Lack of management, and consistently exceeding the limits of your capacity
resources, can often result in performance and user experience challenges. If not
managed correctly, both types of challenges can negatively impact adoption efforts.

Suggestions for managing Fabric capacity:


Define who is responsible for managing the capacity. Confirm the roles and
responsibilities so that it's clear what action will be taken, why, when, and by
whom.
Create a specific set of criteria for content that will be published to capacity. It's
especially relevant when a single capacity is used by multiple business units
because the potential exists to disrupt other users if the capacity isn't well-
managed. Consider requiring a best practices review (such as reasonable semantic
model size and efficient calculations) before publishing new content to a
production capacity.
Regularly use the Fabric capacity metrics app to understand resource utilization
and patterns for the capacity. Most importantly, look for consistent patterns of
overutilization, which will contribute to user disruptions. An analysis of usage
patterns should also make you aware if the capacity is underutilized, indicating
more value could be gained from the investment.
Set the tenant setting so Fabric notifies you if the capacity becomes overloaded,
or if an outage or incident occurs.

Autoscale
Autoscale is intended to handle occasional or unexpected bursts in capacity usage
levels. Autoscale can respond to these bursts by automatically increasing CPU resources
to support the increased workload.

Automated scaling up reduces the risk of performance and user experience challenges
in exchange for a financial impact. If the capacity isn't well-managed, autoscale might
trigger more often than expected. In this case, the metrics app can help you to
determine underlying issues and do capacity planning.

Decentralized capacity management

Capacity administrators are responsible for assigning workspaces to a specific capacity.

Be aware that workspace administrators can also assign a workspace to PPU if the
workspace administrator possesses a PPU license. However, all other workspace users
would also need a PPU license to collaborate on, or view, Power BI content in the
workspace. Other Fabric workloads can't be included in a workspace assigned to PPU.

It's possible to set up multiple capacities to facilitate decentralized management by
different business units. Decentralizing management of certain aspects of Fabric is a
great way to balance agility and control.
Here's an example that describes one way you could manage your capacity.

Purchase a P3 capacity node in Microsoft 365. It includes 32 virtual cores (v-cores).
Use 16 v-cores to create the first capacity. It will be used by the Sales team.
Use 8 v-cores to create the second capacity. It will be used by the Operations team.
Use the remaining 8 v-cores to create the third capacity. It will support general use.

The previous example has several advantages.

Separate capacity administrators can be set up for each capacity. Therefore, it
facilitates decentralized management situations.
If a capacity isn't well-managed, the effect is confined to that capacity only. The
other capacities aren't impacted.
Billing and chargebacks to other business units are straightforward.
Different workspaces can be easily assigned to the separate capacities.

However, the previous example has disadvantages, too.

The limits per capacity are lower. The maximum memory size allowed for semantic
models isn't the entire P3 capacity node size that was purchased. Rather, it's the
assigned capacity size where the semantic model is hosted.
It's more likely one of the smaller capacities will need to be scaled up at some
point in time.
There are more capacities to manage in the tenant.

7 Note

Resources for Power BI Premium per Capacity are referred to as v-cores. However, a
Fabric capacity refers to them as capacity units (CUs). The scale for CUs and v-cores
is different for each SKU. For more information, see the Fabric licensing
documentation.

Data gateway architecture and management


A data gateway facilitates the secure and efficient transfer of data between
organizational data sources and the Fabric service. A data gateway is needed for data
connectivity to on-premises or cloud services when a data source is:

Located within the enterprise data center.
Configured behind a firewall.
Within a virtual network.
Within a virtual machine.
There are three types of gateways.

On-premises data gateway (standard mode) is a gateway service that supports
connections to registered data sources for many users to use. The gateway
software, including updates, is installed on a machine that's managed by
the customer.
On-premises data gateway (personal mode) is a gateway service that supports
data refresh only. This gateway mode is typically installed on the PC of a content
creator. It supports use by one user only. It doesn't support live connection or
DirectQuery connections.
Virtual network data gateway is a Microsoft managed service that supports
connectivity for many users. Specifically, it supports connectivity for semantic
models and dataflows stored in workspaces assigned to Premium capacity or
Premium Per User.

 Tip

The decision of who can install gateway software is a governance decision. For
most organizations, use of the data gateway in standard mode, or a virtual network
data gateway, should be strongly encouraged. They're far more scalable,
manageable, and auditable than data gateways in personal mode.

Decentralized gateway management


The On-premises data gateway (standard mode) and Virtual network data gateway
support specific data source types that can be registered, together with connection
details and how credentials are stored. Users can be granted permission to use the
gateway data source so that they can schedule a refresh or run DirectQuery queries.

Certain aspects of gateway management can be done effectively on a decentralized
basis to balance agility and control. For example, the Operations group might have a
gateway dedicated to its team of self-service content creators and data owners.

Decentralized gateway management works best when it's a joint effort as follows.

Managed by the decentralized data owners:

Departmental data source connectivity information and privacy levels.
Departmental data source stored credentials (including responsibility for updating
routine password changes).
Departmental data source users who are permitted to use each data source.
Managed by centralized data owners (includes data sources that are used broadly across
the organization; management is centralized to avoid duplicated data sources):

Centralized data source connectivity information and privacy levels.
Centralized data source stored credentials (including responsibility for updating
routine password changes).
Centralized data source users who are permitted to use each data source.

Managed by IT:

Gateway software updates (gateway updates are usually released monthly).
Installation of drivers and custom connectors (the same ones that are installed on
user machines).
Gateway cluster management (number of machines in the gateway cluster for high
availability, disaster recovery, and to eliminate a single point of failure, which can
cause significant user disruptions).
Server management (for example, operating system, RAM, CPU, or networking
connectivity).
Management and backup of gateway encryption keys.
Monitoring of gateway logs to assess when scale-up or scale-out is necessary.
Alerting of downtime or persistent low resources on the gateway machine.

 Tip

Allowing a decentralized team to manage certain aspects of the gateway means
they can move faster. The tradeoff of decentralized gateway management does
mean running more gateway servers so that each can be dedicated to a specific
area of the organization. If gateway management is handled entirely by IT, it's
imperative to have a good process in place to quickly handle requests to add data
sources and apply user updates.

User licenses
Every user needs a commercial license, which is integrated with a Microsoft Entra
identity. The user license could be Free, Pro, or Premium Per User (PPU).

A user license is obtained via a subscription, which authorizes a certain number of
licenses with a start and end date.

7 Note
Although each user requires a license, a Pro or PPU license is only required to share
Power BI content. Users with a free license can create and share Fabric content
other than Power BI items.

There are two approaches to procuring subscriptions.

Centralized: A Microsoft 365 billing administrator purchases a subscription for Pro or
PPU. It's the most common way to manage subscriptions and assign licenses.
Decentralized: Individual departments purchase a subscription via self-service
purchasing.

Self-service purchasing
An important governance decision relates to what extent self-service purchasing will be
allowed or encouraged.

Self-service purchasing is useful for:

Larger organizations with decentralized business units that have purchasing
authority and want to handle payment directly with a credit card.
Organizations that intend to make it as easy as possible to purchase subscriptions
on a monthly commitment.

Consider disabling self-service purchasing when:

Centralized procurement processes are in place to meet regulatory, security, and
governance requirements.
Discounted pricing is obtained through an Enterprise Agreement (EA).
Existing processes are in place to handle intercompany chargebacks.
Existing processes are in place to handle group-based licensing assignments.
Prerequisites are required for obtaining a license, such as approval, justification,
training, or a governance policy requirement.
There's a valid need, such as a regulatory requirement, to control access closely.

User license trials


Another important governance decision is whether user license trials are allowed. By
default, trials are enabled. That means when content is shared with a colleague, if the
recipient doesn't have a Pro or PPU license, they'll be prompted to start a trial to view
the content (if the content doesn't reside within a workspace backed by capacity). The
trial experience is intended to be a convenience that allows users to continue with their
normal workflow.
Generally, disabling trials isn't recommended. It can encourage users to seek
workarounds, perhaps by exporting data or working outside of supported tools and
processes.

Consider disabling trials only when:

There are serious cost concerns that would make it unlikely to grant full licenses at
the end of the trial period.
Prerequisites are required for obtaining a license (such as approval, justification, or
a training requirement). It's not sufficient to meet this requirement during the trial
period.
There's a valid need, such as a regulatory requirement, to control access to the
Fabric service closely.

 Tip

Don't introduce too many barriers to obtaining a Fabric license. Users who need to
get work done will find a way, and that way might involve workarounds that aren't
ideal. For instance, without a license to use Fabric, people might rely far too much
on sharing files on a file system or via email when significantly better approaches
are available.

Cost management
Managing and optimizing the cost of cloud services, like Fabric, is an important activity.
Here are several activities you can consider.

Analyze who is using—and, more to the point, not using—their allocated Fabric
licenses and make necessary adjustments. Fabric usage is analyzed using the
activity log (see the sketch after this list).
Analyze the cost effectiveness of capacity or Premium Per User. Beyond the
additional features, perform a cost/benefit analysis to determine whether capacity
licensing is more cost-effective when there's a large number of consumers.
Carefully monitor and manage Fabric capacity. Understanding usage patterns over
time will allow you to predict when to purchase more capacity. For example, you
might choose to scale up a single capacity from a P1 to P2, or scale out from one
P1 capacity to two P1 capacities.
If there are occasional spikes in the level of usage, use of autoscale with Fabric is
recommended to ensure the user experience isn't interrupted. Autoscale will scale
up capacity resources for 24 hours, then scale them back down to normal levels (if
sustained activity isn't present). Manage autoscale cost by constraining the
maximum number of v-cores, and/or with spending limits set in Azure. Due to the
pricing model, autoscale is best suited to handle occasional unplanned increases in
usage.
For Azure data sources, co-locate them in the same region as your Fabric tenant
whenever possible. It will avoid incurring Azure egress charges. Data egress
charges are minimal, but at scale they can add up to considerable unplanned costs.
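
For the license utilization item above, a starting point is to summarize which users
generated activity. The following sketch assumes activity events were already exported
to a local JSON file (for example, with the REST API sketch shown later in the REST APIs
section); the file name is a placeholder, and the UserId field name should be verified
against your export of the activity event schema.

```python
# Hypothetical sketch: count activities per user from exported activity events,
# as a starting point for spotting unused licenses. The input file name is a
# placeholder; verify field names against your own export.
import json
from collections import Counter

with open("activity-events-2024-01-01.json") as f:
    events = json.load(f)

# Tally activity counts per user; users absent from this tally may hold
# licenses they aren't using.
activity_by_user = Counter(e.get("UserId", "unknown") for e in events)
for user, count in activity_by_user.most_common(10):
    print(f"{user}: {count} activities")
```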

Security, information protection, and data loss prevention
Security, information protection, and data loss prevention (DLP) are joint responsibilities
among all content creators, consumers, and administrators. That's no small task because
there's sensitive information everywhere: personal data, customer data, or customer-
authored data, protected health information, intellectual property, proprietary
organizational information, just to name a few. Governmental, industry, and contractual
regulations could have a significant impact on the governance guidelines and policies
that you create related to security.

The Power BI security whitepaper is an excellent resource for understanding the breadth
of considerations, including aspects that Microsoft manages. This section will introduce
several topics that customers are responsible for managing.

User responsibilities
Some organizations ask Fabric users to accept a self-service user acknowledgment. It's a
document that explains the user's responsibilities and expectations for safeguarding
organizational data.

One way to automate its implementation is with a Microsoft Entra terms of use policy.
The user is required to view and agree to the policy before they're permitted to visit the
Fabric portal for the first time. You can also require it to be acknowledged on a recurring
basis, like an annual renewal.

Data security
In a cloud shared responsibility model, securing the data is always the responsibility of
the customer. With a self-service data platform, self-service content creators have
responsibility for properly securing the content that they share with colleagues.

The COE should provide documentation and training where relevant to assist content
creators with best practices (particularly situations for dealing with ultra-sensitive data).
Administrators can help by following best practices themselves. Administrators can
also raise concerns when they see issues that could be discovered when managing
workspaces, auditing user activities, or managing gateway credentials and users. There
are also several tenant settings that are usually restricted to a small number of users (for
instance, the ability to publish to web or the ability to publish apps to the entire
organization).

External guest users


External users—such as partners, customers, vendors, and consultants—are a common
occurrence for some organizations, and rare for others. How you handle external users is
a governance decision.

External user access is controlled by tenant settings and certain Microsoft Entra ID
settings. For details of external user considerations, review the Distribute Power BI
content to external guest users using Microsoft Entra B2B whitepaper.

Information protection and data loss prevention


Fabric supports capabilities for information protection and data loss prevention (DLP) in
the following ways.

Information protection: Microsoft Purview Information Protection (formerly known
as Microsoft Information Protection) includes capabilities for discovering,
classifying, and protecting data. A key principle is that data can be better protected
once it's been classified. The key building block for classifying data is sensitivity
labels. For more information, see Information protection for Power BI planning.
Data loss prevention for Power BI: Microsoft Purview Data Loss Prevention
(formerly known as Office 365 Data Loss Prevention) supports DLP policies for
Power BI. By using sensitivity labels or sensitive information types, DLP policies for
Power BI help an organization locate sensitive semantic models. For more
information, see Data loss prevention for Power BI planning.
Microsoft Defender for Cloud Apps: Microsoft Defender for Cloud Apps (formerly
known as Microsoft Cloud App Security) supports policies that help protect data,
including real-time controls when users interact with the Power BI service. For
more information, see Defender for Cloud Apps for Power BI planning.

Data residency
For organizations with requirements to store data within a geographic region, Fabric
capacity can be set for a specific region that's different from the home region of the
Fabric tenant.

Encryption keys
Microsoft handles encryption of data at rest in Microsoft data centers with transparent
server-side encryption and auto-rotation of certificates. For customers with regulatory
requirements to manage the Premium encryption key themselves, Premium capacity can
be configured to use Azure Key Vault. Using customer-managed keys—also known as
bring-your-own-key or BYOK—is a precaution to ensure that, in the event of a human
error by a service operator, customer data can't be exposed.

Be aware that Premium Per User (PPU) only supports BYOK when it's enabled for the
entire Fabric tenant.

Auditing and monitoring


It's critical that you make use of auditing data to analyze adoption efforts, understand
usage patterns, educate users, support users, mitigate risk, improve compliance, manage
license costs, and monitor performance. For more information about why auditing your
data is valuable, see Auditing and monitoring overview.

There are different ways to approach auditing and monitoring depending on your role
and your objectives. The following articles describe various considerations and planning
activities.

Report-level auditing: Techniques that report creators can use to understand
which users are using the reports that they create, publish, and share.
Data-level auditing: Methods that data creators can use to track the performance
and usage patterns of data assets that they create, publish, and share.
Tenant-level auditing: Key decisions and actions administrators can take to create
an end-to-end auditing solution.
Tenant-level monitoring: Tactical actions administrators can take to monitor the
Power BI service, including updates and announcements.

REST APIs
The Power BI REST APIs and the Fabric REST APIs provide a wealth of information about
your Fabric tenant. Retrieving data by using the REST APIs should play an important role
in managing and governing a Fabric implementation. For more information about
planning for the use of REST APIs for auditing, see Tenant-level auditing.
You can retrieve auditing data to build an auditing solution, manage content
programmatically, or increase the efficiency of routine actions. The following list
presents some actions you can perform with the REST APIs, together with the relevant
documentation resource.

Audit user activities: REST API to get activity events
Audit workspaces, items, and permissions: Collection of asynchronous metadata scanning
REST APIs to obtain a tenant inventory
Audit content shared to the entire organization: REST API to check use of widely shared
links
Audit tenant settings: REST API to check tenant settings
Publish content: REST API to deploy items from a deployment pipeline or clone a report
to another workspace
Manage content: REST API to refresh a semantic model or take over ownership of a
semantic model
Manage gateway data sources: REST API to update credentials for a gateway data source
Export content: REST API to export a report
Create workspaces: REST API to create a new workspace
Manage workspace permissions: REST API to assign user permissions to a workspace
Update workspace name or description: REST API to update workspace attributes
Restore a workspace: REST API to restore a deleted workspace
Programmatically retrieve a query result from a semantic model: REST API to run a DAX
query against a semantic model
Assign workspaces to capacity: REST API to assign workspaces to capacity
Programmatically change a data model: Tabular Object Model (TOM) API
Embed Power BI content in custom applications: Power BI embedded analytics client APIs
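
As a hedged illustration of the first action in the list above, the following Python
sketch retrieves one day of activity events from the Power BI admin REST API. It assumes
a service principal that has been granted read-only admin API access; the tenant, client,
and secret values are placeholders.

```python
# Hypothetical sketch: retrieve one day of activity events with the Power BI
# admin REST API. Assumes a service principal with read-only admin API access;
# IDs and the secret are placeholders.
import msal
import requests

TENANT_ID = "00000000-0000-0000-0000-000000000000"   # placeholder
CLIENT_ID = "00000000-0000-0000-0000-000000000000"   # placeholder
CLIENT_SECRET = "your-secret"                        # placeholder

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(
    scopes=["https://analysis.windows.net/powerbi/api/.default"]
)
headers = {"Authorization": f"Bearer {token['access_token']}"}

# The API returns at most one day per request and pages via continuationUri.
url = (
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
    "?startDateTime='2024-01-01T00:00:00'&endDateTime='2024-01-01T23:59:59'"
)
events = []
while url:
    result = requests.get(url, headers=headers).json()
    events.extend(result.get("activityEventEntities", []))
    url = result.get("continuationUri")  # None when all pages are retrieved

print(f"Retrieved {len(events)} activity events.")
```

Because the endpoint accepts a window of at most one day, a production auditing solution
would loop over dates and persist the raw JSON for later transformation.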

 Tip

There are many other Power BI REST APIs. For a complete list, see Using the Power
BI REST APIs.
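
Similarly, here's a sketch (under the same authentication assumptions as the previous
example) of retrieving a query result from a semantic model with the executeQueries
endpoint; the dataset ID and the DAX query are placeholders.

```python
# Hypothetical sketch: run a DAX query against a semantic model with the
# Power BI REST API executeQueries endpoint. The dataset ID and DAX query are
# placeholders; reuse the MSAL token acquisition shown in the previous sketch.
import requests

ACCESS_TOKEN = "<token from MSAL, as in the previous sketch>"  # placeholder
DATASET_ID = "00000000-0000-0000-0000-000000000000"            # placeholder

response = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}/executeQueries",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        "queries": [{"query": "EVALUATE TOPN(10, 'Sales')"}],  # placeholder DAX
        "serializerSettings": {"includeNulls": True},
    },
)
response.raise_for_status()

# The response nests rows under results -> tables -> rows.
rows = response.json()["results"][0]["tables"][0]["rows"]
print(f"Query returned {len(rows)} rows.")
```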
Planning for change
Every month, Microsoft releases new Fabric features and capabilities. To be effective, it's
crucial that everyone involved with system oversight stays current. For more
information, see Tenant-level monitoring.

) Important

Don't underestimate the importance of staying current. If you get a few months
behind on announcements, it can become difficult to properly manage Fabric and
support your users.

Considerations and key actions

Checklist - Considerations and key actions you can take for system oversight follow.

Improve system oversight:

" Verify who is permitted to be a Fabric administrator: If possible, reduce the


number of people granted the Fabric administrator role if it's more than a few
people.
" Use PIM for occasional administrators: If you have people who occasionally need
Fabric administrator rights, consider implementing Privileged Identity Management
(PIM) in Microsoft Entra ID. It's designed to assign just-in-time role permissions that
expire after a few hours.
" Train administrators: Check the status of cross-training and documentation in place
for handling Fabric administration responsibilities. Ensure that a backup person is
trained so that needs can be met timely, in a consistent way.

Improve management of the Fabric service:

" Review tenant settings: Conduct a review of all tenant settings to ensure they're
aligned with data culture objectives and governance guidelines and policies. Verify
which groups are assigned for each setting.
" Document the tenant settings: Create documentation of your tenant settings for
the internal Fabric community and post it in the centralized portal. Include which
groups a user would need to request to be able to use a feature. Use the Get Tenant
Settings REST API to make the process more efficient, and to create snapshots of
the settings on a regular basis.
" Customize the Get Help links: When user resources are established, as described in
the Mentoring and user enablement article, update the tenant setting to customize
the links under the Get Help menu option. It will direct users to your
documentation, community, and help.
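
As referenced in the checklist above, the following sketch shows one way to snapshot
tenant settings with the Fabric admin REST API. It assumes a Fabric administrator
identity and reuses the MSAL token acquisition pattern from the earlier sketches; the
endpoint and scope shown are based on the Fabric Get Tenant Settings API and should be
verified for your environment.

```python
# Hypothetical sketch: snapshot tenant settings with the Fabric admin REST API
# (the Get Tenant Settings API referenced above). Acquire the token with MSAL
# as in the earlier sketches, using the Fabric API scope.
import json
from datetime import date
import requests

ACCESS_TOKEN = "<token from MSAL for scope https://api.fabric.microsoft.com/.default>"

response = requests.get(
    "https://api.fabric.microsoft.com/v1/admin/tenantsettings",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
response.raise_for_status()

# Persist a dated snapshot so setting changes can be tracked over time.
snapshot_file = f"tenant-settings-{date.today().isoformat()}.json"
with open(snapshot_file, "w") as f:
    json.dump(response.json(), f, indent=2)
print(f"Saved snapshot to {snapshot_file}")
```

Scheduling this script (for example, weekly) produces the regular snapshots the checklist
item recommends, and diffing two snapshot files reveals what changed and when.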

Improve management of user machines and devices:

" Create a consistent onboarding process: Review your process for how onboarding
of new content creators is handled. Determine if new requests for software, such as
Power BI Desktop, and user licenses (Free, Pro, or PPU) can be handled together. It
can simplify onboarding since new content creators won't always know what to ask
for.
" Handle user machine updates: Ensure an automated process is in place to install
and update software, drivers, and settings to ensure all users have the same version.

Data architecture planning:

" Assess what your end-to-end data architecture looks like: Make sure you're clear
on:
How Fabric is currently used by the different business units in your organization
versus how you want Fabric to be used. Determine if there's a gap.
If there are any risks that should be addressed.
If there are any high-maintenance situations to be addressed.
What data sources are important for Fabric users, and how they're documented
and discovered.
" Review existing data gateways: Find out what gateways are being used throughout
your organization. Verify that gateway administrators and users are set correctly.
Verify who is supporting each gateway, and that there's a reliable process in place
to keep the gateway servers up to date.
" Verify use of personal gateways: Check the number of personal gateways that are
in use, and by whom. If there's significant usage, take steps to move towards use of
the standard mode gateway.

Improve management of user licenses:

" Review the process to request a user license: Clarify what the process is, including
any prerequisites, for users to obtain a license. Determine whether there are
improvements to be made to the process.
" Determine how to handle self-service license purchasing: Clarify whether self-
service licensing purchasing is enabled. Update the settings if they don't match
your intentions for how licenses can be purchased.
" Confirm how user trials are handled: Verify user license trials are enabled or
disabled. Be aware that all user trials are Premium Per User. They apply to Free
licensed users signing up for a trial, and Pro users signing up for a Premium Per
User trial.

Improve cost management:

" Determine your cost management objectives: Consider how to balance cost,


features, usage patterns, and effective utilization of resources. Schedule a routine
process to evaluate costs, at least annually.
" Obtain activity log data: Ensure you have access to the activity log data to assist
with cost analysis. It can be used to understand who is—or isn't—using the license
assigned to them.

Improve security and data protection:

" Clarify exactly what the expectations are for data protection: Ensure the
expectations for data protection, such as how to use sensitivity labels, are
documented and communicated to users.
" Determine how to handle external users: Understand and document the
organizational policies around sharing Fabric content with external users. Ensure
that settings in Fabric support your policies for external users.
" Set up monitoring: Investigate the use of Microsoft Defender for Cloud Apps to
monitor user behavior and activities in Fabric.

Improve auditing and monitoring:

" Plan for auditing needs: Collect and document the key business requirements for
an auditing solution. Consider your priorities for auditing and monitoring. Make key
decisions related to the type of auditing solution, permissions, technologies to be
used, and data needs. Consult with IT to clarify what auditing processes currently
exist, and what preferences of requirements exist for building a new solution.
" Consider roles and responsibilities: Identify which teams will be involved in
building an auditing solution, as well as the ongoing analysis of the auditing data.
" Extract and store user activity data: If you aren't currently extracting and storing
the raw data, begin retrieving user activity data.
" Extract and store snapshots of tenant inventory data: Begin retrieving metadata to
build a tenant inventory, which describes all workspaces and items.
" Extract and store snapshots of users and groups data: Begin retrieving metadata
about users, groups, and service principals.
" Create a curated data model: Perform data cleansing and transformations of the
raw data to create a curated data model that'll support analytical reporting for your
auditing solution.
" Analyze auditing data and act on the results: Create analytic reports to analyze the
curated auditing data. Clarify what actions are expected to be taken, by whom, and
when.
" Include additional auditing data: Over time, determine whether other auditing
data would be helpful to complement the activity log data, such as security data.

 Tip

For more information, see Tenant-level auditing.
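
For the tenant inventory item mentioned in the checklist above, here's a sketch using the
asynchronous metadata scanning (scanner) APIs. It assumes the same Power BI admin token
as the earlier activity events sketch; in practice you'd batch workspace IDs (up to 100
per scan request) and handle scan failures rather than polling indefinitely.

```python
# Hypothetical sketch: build a tenant inventory with the asynchronous metadata
# scanning (scanner) APIs. Reuses a Power BI admin token acquired as in the
# earlier activity events sketch.
import time
import requests

ACCESS_TOKEN = "<Power BI admin token from MSAL>"  # placeholder
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
base = "https://api.powerbi.com/v1.0/myorg/admin/workspaces"

# 1. List workspace IDs (the 'modified' endpoint can also filter by date).
workspace_ids = [w["id"] for w in requests.get(f"{base}/modified", headers=headers).json()]

# 2. Trigger a scan for up to 100 workspaces per request.
scan = requests.post(
    f"{base}/getInfo?lineage=True&datasourceDetails=True",
    headers=headers,
    json={"workspaces": workspace_ids[:100]},
).json()

# 3. Poll until the scan succeeds, then fetch the full result.
while requests.get(f"{base}/scanStatus/{scan['id']}", headers=headers).json()["status"] != "Succeeded":
    time.sleep(5)
inventory = requests.get(f"{base}/scanResult/{scan['id']}", headers=headers).json()
print(f"Scanned {len(inventory['workspaces'])} workspaces.")
```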

Use the REST APIs:

" Plan for your use of the REST APIs: Consider what data would be most useful to
retrieve from the Power BI REST APIs and the Fabric REST APIs.
" Conduct a proof of concept: Do a small proof of concept to validate data needs,
technology choices, and permissions.

Questions to ask

Use questions like those found below to assess system oversight.

Are there atypical administration settings enabled or disabled? For example, is the
entire organization allowed to publish to the web? (We strongly advise restricting this
feature.)
Do administration settings and policies align with, or inhibit, the way users work?
Is there a process in place to critically appraise new settings and decide how to set
them? Alternatively, are only the most restrictive settings set as a precaution?
Are Microsoft Entra ID security groups used to manage who can do what?
Do central teams have visibility of effective auditing and monitoring tools?
Do monitoring solutions depict information about the data assets, user activities,
or both?
Are auditing and monitoring tools actionable? Are there clear thresholds and
actions set, or do monitoring reports simply describe what's in the data estate?
Is Azure Log Analytics used (or planned to be used) for detailed monitoring of
Fabric capacities? Are the potential benefits and cost of Azure Log Analytics clear
to decision makers?
Are sensitivity labels and data loss prevention policies used? Are the potential
benefits and cost of these clear to decision makers?
Do administrators know the current number of licenses and licensing cost? What
proportion of the total BI spend goes to Fabric capacity, and to Pro and PPU
licenses? If the organization is only using Pro licenses for Power BI content, could
the number of users and usage patterns warrant a cost-effective switch to Power BI
Premium or Fabric capacity?

Maturity levels

The following maturity levels will help you assess the current state of your Fabric
system oversight.

Level State of system oversight

100: Initial • Tenant settings are configured independently by one or more administrators
based on their best judgment.

• Architecture needs, such as gateways and capacities, are satisfied on an as-needed
basis. However, there isn't a strategic plan.

• Fabric activity logs are unused, or selectively used for tactical purposes.

200: Repeatable • The tenant settings purposefully align with established governance guidelines
and policies. All tenant settings are reviewed regularly.

• A small number of specific administrators are selected. All administrators have a
good understanding of what users are trying to accomplish in Fabric, so they're in
a good position to support users.

• A well-defined process exists for users to request licenses and software. Request
forms are easy for users to find. Self-service purchasing settings are specified.

• Sensitivity labels are configured in Microsoft 365. However, use of labels remains
inconsistent. The advantages of data protection aren't well understood by users.

300: Defined • The tenant settings are fully documented in the centralized portal for users to
reference, including how to request access to the correct groups.

• Cross-training and documentation exist for administrators to ensure continuity,
stability, and consistency.

• Sensitivity labels are assigned to content consistently. The advantages of using
sensitivity labels for data protection are understood by users.

• An automated process is in place to export Fabric activity log and API data to a
secure location for reporting and auditing.

400: Capable • Administrators work closely with the COE and governance teams to provide
oversight of Fabric. A balance of user empowerment and governance is
successfully achieved.

• Decentralized management of data architecture (such as gateways or capacity
management) is effectively handled to balance agility and control.

• Automated policies are set up and actively monitored in Microsoft Defender for
Cloud Apps for data loss prevention.

• Activity log and API data is actively analyzed to monitor and audit Fabric
activities. Proactive action is taken based on the data.

500: Efficient • The Fabric administrators work closely with the COE to actively stay current. Blog
posts and release plans from the Fabric product team are reviewed frequently to
plan for upcoming changes.

• Regular cost management analysis is done to ensure user needs are met in a
cost-effective way.

• The Fabric REST API is used to retrieve tenant setting values on a regular basis.

• Activity log and API data is actively used to inform and improve adoption and
governance efforts.

Next steps
For more information about system oversight and Fabric administration, see the
following resources.

Administer Microsoft Fabric


Administer Power BI - Part 1
Administer Power BI - Part 2
Administrator in a Day Training – Day 1
Administrator in a Day Training – Day 2
Power BI security whitepaper
External guest users whitepaper
Planning a Power BI enterprise deployment whitepaper

In the next article in the Microsoft Fabric adoption roadmap series, learn about effective
change management.
Microsoft Fabric adoption roadmap:
Change management
Article • 11/14/2023

7 Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

When working toward improved data and business intelligence (BI) adoption, you
should plan for effective change management. In the context of data and BI, change
management includes procedures that address the impact of change for people in an
organization. These procedures safeguard against disruption and productivity loss due
to changes in solutions or processes.

Note

Effective change management is particularly important when you migrate to Power BI.

Effective change management improves adoption and productivity because it:

Helps content creators and consumers use analytics more effectively and sooner.
Limits redundancy in data, analytical tools, and solutions.
Reduces the likelihood of risk-creating behaviors that affect shared resources (like
Fabric capacity) or organizational compliance (like data security and privacy).
Mitigates resistance to change that obstructs planning and inhibits user adoption.
Mitigates the impact of change and improves user wellbeing by reducing the
potential for disruption, stress, and conflict.

Effective change management is critical for successful adoption at all levels. To successfully manage change, consider the key actions and activities described in the following sections.

Important

Change management is a fundamental obstacle to success in many organizations. Effective change management requires that you understand that it's about people—not tools or processes.

Successful change management involves empathy and communication. Ensure that change isn't forced and that resistance to change isn't ignored, because either can widen organizational divides and further inhibit effectiveness.

Tip

Whenever possible, we recommend that you describe and promote change as improvement—it's much less threatening. For many people, change implies a cost in terms of effort, focus, and time. Alternatively, improvement means a benefit because it's about making something better.

Types of change to manage


When implementing data and BI solutions, you should manage different types of
change. Also, depending on the scale and scope of your implementation, you should
address different aspects of change.

Consider the following types of change to manage when you plan for Fabric adoption.

Process-level changes
Process-level changes are changes that affect a broader user community or the entire
organization. These changes typically have a larger impact, and so they require more
effort to manage. Specifically, this change management effort includes specific plans
and activities.

Here are some examples of process-level changes.

Change from centralized to decentralized approaches to ownership (change in content ownership and management).
Change from enterprise to departmental, or from team to personal content
delivery (change in content delivery scope).
Change of central team structure (for example, forming a Center of Excellence).
Changes in governance policies.
Migration from other analytics products to Fabric, and the changes this migration
involves, like:
The separation of semantic models and reports, and a model-based approach
to analytics.
Transitioning from exports or static reports to interactive analytical reports,
which can involve filtering and cross-filtering.
Moving from distributing reports as PowerPoint files or flat files to accessing
reports directly from the Fabric portal.
Shifting from information in tables, paginated reports, and spreadsheets to
interactive visualizations and charts.
Changing from an on-premises or platform as a service (PaaS) platform to a
software as a service (SaaS) tool.

Note

Typically, giving up export-based processes or Excel reporting is a significant challenge. That's because these methods are usually deeply engrained in the organization and are tied to the autonomy and data skills of your users.

Solution-level changes
Solution-level changes are changes that affect a single solution or set of solutions.
These changes limit their impact to the user community of those solutions and their
dependent processes. Although solution-level changes typically have a lower impact,
they also tend to occur more frequently.

Note

In the context of this article, a solution is built to address specific business needs for
users. A solution can take many forms, such as a data pipeline, a lakehouse, a
semantic model, or a report. The considerations for change management described
in this article are relevant for all types of solutions, and not only reporting projects.

Here are some examples of solution-level changes.

Changes in calculation logic for KPIs or measures.


Changes in how master data or hierarchies for business attributes are mapped,
grouped, or described.
Changes in data freshness, detail, format, or complexity.
Introduction of advanced analytics concepts, like predictive analytics or
prescriptive analytics, or general statistics (if the user community isn't familiar with
these concepts, already).
Changes in the presentation of data, like:
Styling, colors, and other formatting choices for visuals.
The type of visualization.
How data is grouped or summarized (such as changing from different measures
of central tendency, like average, median, or geometric mean).
Changes in how content consumers interact with data (like connecting to a shared
semantic model instead of exporting information for personal usage scenarios).

How you prepare change management plans and activities will depend on the types of
change. To successfully and sustainably manage change, we recommend that you
implement incremental changes.

Address change incrementally


Change management can be a significant undertaking. Taking an incremental approach
can help you facilitate change in a way that's sustainable. To adopt an incremental
approach, you identify the highest priority changes and break them into manageable
parts, implementing each part with iterative phases and action plans.

The following steps outline how you can incrementally address change.

1. Define what's changing: Describe the change by outlining the before and after
states. Clarify the specific parts of the process or situation that you'll change,
remove, or introduce. Justify why this change is necessary, and when it should
occur.
2. Describe the impact of the change: For each of these changes, estimate the
business impact. Identify which processes, teams, or individuals the change affects,
and how disruptive it will be for them. Also consider any downstream effects the
change has on other dependent solutions or processes. Downstream effects might
result in other changes. Additionally, consider how long the situation remained the
same before it was changed. Changes to longer-standing processes tend to have a
higher impact, as preferences and dependencies arise over time.
3. Identify priorities: Focus on the changes with the highest potential impact. For
each change, outline a more detailed description of the changes and how it will
affect people.
4. Plan how to incrementally implement the change: Identify whether any high-
impact changes can be broken into stages or parts. For each part, describe how it
might be incrementally implemented in phases to limit its impact. Determine
whether there are any constraints or dependencies (such as when changes can be
made, or by whom).
5. Create an action plan for each phase: Plan the actions you will take to implement
and support each phase of the change. Also, plan for how you can mitigate
disruption in high-impact phases. Be sure to include a rollback plan in your action
plan, whenever possible.
Tip

Iteratively plan how you'll implement each phase of these incremental changes as
part of your quarterly tactical planning.

When you plan to mitigate the impact of changes on Power BI adoption, consider the
activities described in the following sections.

Effectively communicate change


Ensure that you clearly and concisely describe planned changes for the user community.
Important communication should originate from the executive sponsor, or another
leader with relevant authority. Be sure to communicate the following details.

What's changing: What the situation is now and what it will be after the change.
Why it's changing: The benefit and value of the change for the audience.
When it's changing: An estimation of when the change will take effect.
Further context: Where people can go for more information.
Contact information: Who people should contact to provide feedback, ask questions,
or raise concerns.

Consider maintaining a history of communications in your centralized portal. That way, it's easy to find communications, timings, and details of changes after they've occurred.

Important

You should communicate change with sufficient advance notice so that people are prepared. The higher the potential impact of the change, the earlier you should communicate it. If unexpected circumstances prevent advance notice, be sure to explain why in your communication.

Plan training and support


Changes to tools, processes, and solutions typically require training to use them
effectively. Additionally, extra support might be required to address questions or
respond to support requests.

Here are some actions you can take to plan for training and support.

Centralize training and support by using a centralized portal. The portal can help
organize discussions, collect feedback, and distribute training materials or
documentation by topic.
Consider incentives to encourage self-sustaining support within a community.
Schedule recurring office hours to answer questions and provide mentorship.
Create and demonstrate end-to-end scenarios for people to practice a new
process.
For high-impact changes, prepare training and support plans that realistically
assess the effort and actions needed to prevent the change from causing
disruption.

Note

These training and support actions will differ depending on the scale and scope of
the change. For high-impact, large-scale changes (like transitioning from enterprise
to managed self-service approaches to data and BI), you'll likely need to plan
iterative, multi-phase plans that span multiple planning periods. In this case,
carefully consider the effort and resources needed to deliver success.

Involve executive leadership


Executive support is critical to effective change management. When an executive
supports a change, it demonstrates its strategic importance or benefit to the rest of the
organization. This top-down endorsement and reinforcement is particularly important
for high-impact, large-scale changes, which have a higher potential for disruption. For
these scenarios, ensure that you actively engage and involve your executive sponsor to
endorse and reinforce the change.

Caution

Resistance to change from the executive leadership is often a warning sign that
stronger business alignment is needed between the business and BI strategies. In
this scenario, consider specific alignment sessions and change management actions
with executive leadership.

Involve stakeholders
To effectively manage change, you can also take a bottom-up approach by engaging the
stakeholders, who are the people the change affects. When you create an action plan to
address the changes, identify and engage key stakeholders in focused, limited sessions.
In this way you can understand the impact of the change on the people whose work will
be affected by the change. Take note of their concerns and their ideas for how you
might lessen the impact of this change. Ensure that you identify any potentially
unexpected effects of the change on other people and processes.

Handle resistance to change


It's important to address resistance to change, as it can have substantial negative
impacts on adoption and productivity. When you address resistance to change, consider
the following actions and activities.

Involve your executive sponsor: The authority, credibility, and influence of the
executive sponsor is essential to support change management and resolve
disputes.
Identify blocking issues: When change disrupts the way people work, this change
can prevent people from effectively completing tasks in their regular activities. For
such blocking issues, identify potential workarounds that take the changes into account.
Focus on data and facts instead of opinions: Resistance to change is sometimes
due to opinions and preferences, because people are familiar with the situation
prior to the change. Understand why people have these opinions and preferences.
Perhaps it's due to convenience, because people don't want to invest time and
effort in learning new tools or processes.
Focus on business questions and processes instead of requirements: Changes
often introduce new processes to address problems and complete tasks. New
processes can lead to a resistance to change because people focus on what they
miss instead of fully understanding what's new and why.

Additionally, you can have a significant impact on change resistance by engaging promoters and detractors.

Identify and engage promoters


Promoters are vocal, credible individuals in a user community who advocate in favor of a
tool, solution, or initiative. Promoters can have a positive impact on adoption because
they can influence peers to understand and accept change.

To effectively manage change, you should identify and engage promoters early in the
process. You should involve them and inform them about the change to better utilize
and amplify their advocacy.

Tip
The promoters you identify might also be great candidates for your champions
network.

Identify and engage detractors


Detractors are the opposite of promoters. They are vocal, credible individuals in a user
community who advocate against a tool, solution, or initiative. Detractors can have a
significant negative influence on adoption because they can convince peers that the
change isn't beneficial. Additionally, detractors can advocate for alternatives, or for solutions marked for retirement, making it more difficult to decommission old tools, solutions, or processes.

To effectively manage change, you should identify and engage detractors early in the
process. That way, you can mitigate the potential negative impact they have.
Furthermore, if you address their concerns, you might convert these detractors into
promoters, helping your adoption efforts.

Tip

A common source of detractors is content owners for solutions that are going to be
modified or replaced. The change can sometimes threaten these content owners,
who are incentivized to resist the change in the hope that their solution will remain
in use. In this case, identify these content owners early and involve them in the
change. Giving these individuals a sense of ownership of the implementation will
help them embrace, and even advocate in favor of, the change.

Questions to ask

Use questions like those found below to assess change management.

Is there a role or team responsible for change management in the organization? If so, how are they involved in data and BI initiatives?
Is change seen as an obstacle to achieving strategic success among people in the
organization? Is the importance of change management acknowledged in the
organization?
Are there any significant promoters for data and BI solutions and processes in the
user community? Conversely, are there any significant detractors?
What communication and training efforts are performed to launch new data tools
and solutions? How long do they last?
How is change in the user community handled (for example, with new hires or
promoted individuals)? What onboarding activities introduce these new individuals
to existing solutions, processes, and policies?
Do people who create Excel reports feel threatened or frustrated by initiatives to
automate reporting with BI tools?
To what extent do people associate their identities with the tools they use and the
solutions they have created and own?
How are changes to existing solutions planned and managed? Are changes
planned, with a visible roadmap, or are they reactive? Do people get sufficient
notification about upcoming changes?
How frequently do changes disrupt existing processes and tools?
How long does it take to decommission legacy systems or solutions when new
ones become available? How long does it take to implement changes to existing
solutions?
To what extent do people agree with the statement I am overwhelmed with the
amount of information I am required to process? To what extent do people agree
with the sentiment things are changing too much, too quickly?

Maturity levels

An assessment of change management evaluates how effectively the organization can enact and respond to change.

The following maturity levels will help you assess your current state of change
management, as it relates to data and BI initiatives.

100: Initial
• Change is usually reactive, and it's also poorly communicated.
• The purpose or benefits of change aren't well understood, and resistance to change causes conflict and disruption.
• No clear teams or roles are responsible for managing change for data initiatives.

200: Repeatable
• Executive leadership and decision makers recognize the need for change management in data and BI projects and initiatives.
• Some efforts are taken to plan or communicate change, but they're inconsistent and often reactive. Resistance to change is still common. Change often disrupts existing processes and tools.

300: Defined
• Formal change management plans or roles are in place. These plans include communication tactics and training, but they're not consistently or reliably followed. Change occasionally disrupts existing processes and tools.
• Successful change management is championed by key individuals who bridge organizational boundaries.

400: Capable
• Empathy and effective communication are integral to change management strategies.
• Change management efforts are owned by particular roles or teams, and effective communication results in a clear understanding of the purpose and benefits of change. Change rarely interrupts existing processes and tools.

500: Efficient
• Change is an integral part of the organization. People in the organization understand the inevitability of change, and see it as a source for momentum instead of disruption. Change almost never unnecessarily interrupts existing processes or tools.
• Systematic processes address change as a challenge of people and not processes.

Next steps
In the next and final article in the Microsoft Fabric adoption roadmap series, learn about adoption-related resources that you might find valuable.
Microsoft Fabric adoption roadmap
conclusion
Article • 11/14/2023

7 Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

This article concludes the series on Microsoft Fabric adoption. The strategic and tactical
considerations and action items presented in this series will assist you in your analytics
adoption efforts, and with creating a productive data culture in your organization.

This series covered the following aspects of Fabric adoption.

Adoption introduction
Adoption maturity levels
Data culture
Executive sponsorship
Business alignment
Content ownership and management
Content delivery scope
Center of Excellence
Governance
Mentoring and enablement
Community of practice
User support
System oversight
Change management

The rest of this article includes suggested next actions to take. It also includes other
adoption-related resources that you might find valuable.

Next actions to take


It can be overwhelming to decide where to start. The following series of steps provides a
process to help you approach your next actions.
1. Learn: First, read this series of articles end-to-end. Become familiar with the
strategic and tactical considerations and action items that directly lead to
successful analytics adoption. They'll help you to build a data culture in your
organization. Discuss the concepts with your colleagues.
2. Assess current state: For each area of the adoption roadmap, assess your current
state. Document your findings. Your goal is to have full clarity on where you are now
so that you can make informed decisions about what to do next.
3. Clarify your strategic goals: Ensure that you're clear on what your organization's
goals are for adopting Fabric. Confirm that your adoption and data culture goals
align with your organization's broader strategic goals for the use of data, analytics,
and business intelligence (BI) in general. Focus on what your immediate strategy is
for the next 3-12 months. For more information about defining your goals, see the
strategic planning article.
4. Prioritize: Clarify what's most important to achieve in the next 12-18 months. For
instance, you might identify specific user enablement or risk reduction areas that
are a higher priority than other areas. Determine which advancements in maturity
levels you should prioritize first. For more information about defining your
priorities, see the strategic planning article.
5. Identify future state: For each area of the roadmap, identify the gaps between
what you want to happen (your future state) and what's happening (your current
state). Focus on the next 12-18 months for identifying your desired future state.
6. Customize maturity levels: Using the information you have on your strategy and
future state, customize the maturity levels for each area of the roadmap. Update or
delete the description for each maturity level so that they're realistic, based on
your goals and strategy. Your current state, priorities, staffing, and funding will
influence the time and effort it will take to advance to higher maturity levels.
7. Define measurable objectives: Create KPIs (key performance indicators) or OKRs
(objectives and key results) to define specific goals for the next quarter. Ensure that
the objectives have clear owners, are measurable, time-bound, and achievable.
Confirm that each objective aligns with your strategic BI goals and priorities.
8. Create tactical plans: Add specific action items to your project plan. Action items
will identify who will do what, and when. Include short, medium, and longer-term
(backlog) items in your project plan to make it easy to track and reprioritize.
9. Track action items: Use your preferred project planning software to track
continual, incremental progress of your action items. Summarize progress and
status every quarter for your executive sponsor.
10. Adjust: As new information becomes available—and as priorities change—
reevaluate and adjust your focus. Reexamine your strategic goals, objectives, and
action items once a quarter so you're certain that you're focusing on the right
actions.
11. Celebrate: Pause regularly to appreciate your progress. Celebrate your wins.
Reward and recognize people who take the initiative and help achieve your goals.
Encourage healthy partnerships between IT and the different areas of the business.
12. Repeat: Continue learning, experimenting, and adjusting as you progress with your
implementation. Use feedback loops to continually learn from everyone in the
organization. Ensure that continual, gradual, improvement is a priority.

A few key points are implied within the previous suggestions.

Focus on the near term: Although it's important to have an eye on the big picture,
we recommend that you focus primarily on the next quarter, next semester, and
next year. It's easier to assess, plan, and act when you focus on the near term.
Progress will be incremental: Changes that happen every day, every week, and
every month add up over time. It's easy to become discouraged and sense a lack
of progress when you're working on a large adoption initiative that takes time. If
you keep track of your incremental progress, you'll be surprised at how much you
can accomplish over the course of a year.
Changes will continually happen: Be prepared to reconsider decisions that you
make, perhaps every quarter. It's easier to cope with continual change when you
expect the plan to change.
Everything correlates together: As you progress through each of the steps listed
above, it's important that everything's correlated from the high-level strategic
organizational objectives, all the way down to more detailed action items. That
way, you'll know that you're working on the right things.

Power BI implementation planning


Successfully implementing analytics throughout the organization requires deliberate
thought and planning. The Power BI implementation planning series of articles, which is
a work in progress, is intended to complement the Microsoft Fabric adoption roadmap.
It includes key considerations, actions, decision-making criteria, recommendations, and
it describes implementation patterns for important common usage scenarios.

Power BI adoption framework


The Power BI adoption framework describes additional aspects of how to adopt Power
BI in more detail. The original intent of the framework was to support Microsoft partners
with a lightweight set of resources for use when helping their customers deploy and
adopt Power BI.
The framework can augment this Microsoft Fabric adoption roadmap series. The
roadmap series focuses on the why and what of adopting Fabric, more so than the how.

Note

When completed, the Power BI implementation planning series (described in the previous section) will replace the Power BI adoption framework.

Enterprise deployment whitepaper


The Planning a Power BI enterprise deployment whitepaper was published in 2020 as
an overview for Power BI implementers. It has a strong focus on technology. Its primary
goal is awareness of options, key considerations, decisions, and best practices. Because
of the breadth of content, different sections of the whitepaper will appeal to managers,
IT professionals, and self-service content creators.

Note

The Enterprise deployment whitepaper won't be updated again. When completed, the Power BI implementation planning series (described in the previous section) will replace the Enterprise deployment whitepaper.

Microsoft's BI transformation
Consider reading about Microsoft's journey and experience with driving a data culture.
This article describes the importance of two terms: discipline at the core and flexibility at
the edge. It also shares Microsoft's views and experience about the importance of
establishing a COE.

Power Platform adoption


The Power Platform team has an excellent set of adoption-related content. Its primary
focus is on Power Apps, Power Automate, and Power Virtual Agents. Many of the ideas
presented in this content can be applied to Power BI also.

The Power CAT Adoption Maturity Model , published by the Power CAT team,
describes repeatable patterns for successful Power Platform adoption.
The Power Platform Center of Excellence Starter Kit is a collection of components and
tools to help you develop a strategy for adopting and supporting Microsoft Power
Platform.

The Power Platform adoption best practices includes a helpful set of documentation and
best practices to help you align business and technical strategies.

The Power Platform adoption framework is a community-driven project with excellent resources on adoption of Power Platform services at scale.

Microsoft 365 and Azure adoption


You might also find useful adoption-related guidance published by other Microsoft
technology teams.

The Maturity Model for Microsoft 365 provides information and resources to use
capabilities more fully and efficiently.
Microsoft Learn has a learning path for using the Microsoft service adoption
framework to drive adoption in your enterprise.
The Microsoft Cloud Adoption Framework for Azure is a collection of
documentation, implementation guidance, best practices, and tools to accelerate
your cloud adoption journey.

A wide variety of other adoption guides for individual technologies can be found online.
A few examples include:

Microsoft Teams adoption guide.
Microsoft Security and Compliance adoption guide.
SharePoint Adoption Resources.

Industry guidance
The Data Management Body of Knowledge (DMBOK2) is a book available for purchase from DAMA International. It contains a wealth of information about maturing your data management practices.

Note

The additional resources provided in this article aren't required to take advantage
of the guidance provided in this Fabric adoption series. They're reputable resources
should you wish to continue your journey.
Partner community
Experienced partners are available to help your organization succeed with adoption
initiatives. To engage a partner, visit the Power BI partner portal .
Discover data items in the OneLake data
hub
Article • 09/13/2023

The OneLake data hub makes it easy to find, explore, and use the Fabric data items in
your organization that you have access to. It provides information about the items and
entry points for working with them.

The data hub provides:

A filterable list of all the data items you can access


A gallery of recommended data items
A way of finding data items by workspace
A way to display only the data items of a selected domain
An options menu of things you can do with the data item

This article explains what you see on the data hub and describes how to use it.

Open the data hub


To open the data hub, select the OneLake data hub icon in the navigation pane.
Note

The OneLake data hub icon and label you see may differ slightly from that shown
above, and may also differ slightly from that seen by other users. The data hub
functionality is the same, however, no matter which icon/label appears. For more
information, see Considerations and limitations.

Find items in the data items list


The data items list displays all the data items you have access to. To shorten the list, you
can filter by keyword or data-item type using the filters at the top of the list. If you
select the name of an item, you'll get to the item's details page. If you hover over an
item, you'll see three dots that open the options menu when you select them.

The list has three tabs to narrow down the list of data items.

Tab | Description
All | Data items that you're allowed to find.
My data | Data items that you own.
Endorsed in your org | Endorsed data items in your organization that you're allowed to find. Certified data items are listed first, followed by promoted data items. For more information about endorsement, see the Endorsement overview.
The columns of the list are described below.

Column | Description
Name | The data item name. Select the name to open the item's details page.
Endorsement | Endorsement status.
Owner | Data item owner (listed in the All and Endorsed in your org tabs only).
Workspace | The workspace the data item is located in.
Refreshed | Last refresh time (rounded to hour, day, month, and year; see the details section on the item's detail page for the exact time of the last refresh).
Next refresh | The time of the next scheduled refresh (My data tab only).
Sensitivity | Sensitivity, if set. Select the info icon to view the sensitivity label description.

Find items by workspace


Related data items are often grouped together in a workspace. To see the data items by
workspace, expand the Explorer pane and select the workspace you're interested in. The
data items you're allowed to see in that workspace will be displayed in the data items
list.
Note

The Explorer pane may list workspaces that you don't have access to if the
workspace contains items that you do have access to (through explicitly granted
permissions, for example). If you select such a workspace, only the items you have
access to will be displayed in the data items list.

Find recommended items


Use the tiles across the top of the data hub to find and explore recommended data
items. Recommended data items are data items that have been certified or promoted by
someone in your organization or have recently been refreshed or accessed. Each tile
contains information about the item and an options menu for doing things with the
item. When you select a recommended tile, you are taken to the item's details page.
Display only data items belonging to a
particular domain
If domains have been defined in your organization, you can use the domain selector to
select a domain so that only data items belonging to that domain will be displayed. If an
image has been associated with the domain, you’ll see that image on the data hub to
remind you of the domain you're viewing.

For more information about domains, see the Domains overview.

Open an item's options menu


Each item shown in the data hub has an options menu that enables you to do things,
such as open the item's settings, manage item permissions, etc. The options available
depend on the item and your permissions on the item.

To display the options menu, select More options (...) on one of the items shown in the
data items list or a recommended item. In the data items list, you need to hover over the
item to reveal More options.

Considerations and limitations


The OneLake data hub's icon and label are currently undergoing evaluation, and their appearance may vary slightly for different users. However, data hub functionality is not affected and is the same no matter which icon/label variation appears. The icon/label
variations you might encounter are shown in the following images.

Next steps
Navigate to your items from Microsoft Fabric Home
Endorsement



Promote or certify items
Article • 11/15/2023

Fabric provides two ways you can endorse your valuable, high-quality items to increase
their visibility: promotion and certification.

Promotion: Promotion is a way to highlight items you think are valuable and
worthwhile for others to use. It encourages the collaborative use and spread of
content within an organization.

Any item owner, as well as anyone with write permissions on the item, can
promote the item when they think it's good enough for sharing.

Certification: Certification means that the item meets the organization's quality
standards and can be regarded as reliable, authoritative, and ready for use across
the organization.

Only authorized reviewers (defined by the Fabric administrator) can certify items.
Item owners who wish to see their item certified and aren't authorized to certify it
themselves need to follow their organization's guidelines about getting items
certified.

Currently it's possible to endorse all Fabric items except Power BI dashboards.

This article describes how to promote items, how to certify items if you're an authorized
reviewer, and how to request certification if you're not.

See the endorsement overview to learn more about endorsement.

Promote items
To promote an item, you must have write permissions on the item you want to promote.

1. Go to the settings of the content you want to promote.

2. Expand the endorsement section and select Promoted.

If you're promoting a Power BI semantic model and see a Make discoverable checkbox, it means you can make it possible for users who don't have access to the semantic model to find it. See semantic model discovery for more detail.
3. Select Apply.

Certify items
Item certification is a significant responsibility, and only authorized users can certify
items. Other users can request item certification. This section describes how to certify an
item.

1. Get write permissions on the item you want to certify. You can request these permissions from the item owner or from anyone who has an admin role in the workspace where the item is located.

2. Carefully review the item and determine whether it meets your organization's
certification standards.

3. If you decide to certify the item, go to the workspace where it resides, and open
the settings of the item you want to certify.

4. Expand the endorsement section and select Certified.

If you're certifying a Power BI semantic model and see a Make discoverable checkbox, it means you can make it possible for users who don't have access to the semantic model to find it. See semantic model discovery for more detail.
5. Select Apply.

Request item certification


If you would like to certify your item but aren't authorized to do so, follow the steps
below.

1. Go to the workspace where the item you want to be certified is located, and then
open the settings of that item.

2. Expand the endorsement section. The Certified button is greyed out since you
aren't authorized to certify content. Select the link about how to get your item
certified.
Note

If you clicked the link above but got redirected back to this note, it means that
your Fabric admin has not made any information available. In this case,
contact the Fabric admin directly.

Next steps
Read more about endorsement
Enable content certification (Fabric admins)
Read more about semantic model discoverability



Share items in Microsoft Fabric
Article • 09/06/2023

Workspaces are the central places where you collaborate with your colleagues in
Microsoft Fabric. Besides assigning workspace roles, you can also use item sharing to
grant and manage item-level permissions in scenarios where:

You want to collaborate with colleagues who don't have a role in the workspace.
You want to grant additional item level-permissions for colleagues who already
have a role in the workspace.

This document describes how to share an item and manage its permissions.

Share an item via link


1. In the list of items, or in an open item, select the Share button.

2. The Create and send link dialog opens. Select People in your organization can
view.
3. The Select permissions dialog opens. Choose the audience for the link you're
going to share.
You have the following options:

People in your organization This type of link allows people in your organization to access this item. It doesn't work for external users or guest
users. Use this link type when:
You want to share with someone in your organization.
You're comfortable with the link being shared with other people in your
organization.
You want to ensure that the link doesn't work for external or guest users.

People with existing access This type of link generates a URL to the item, but
it doesn't grant any access to the item. Use this link type if you just want to
send a link to somebody who already has access.

Specific people This type of link allows specific people or groups to access
the report. If you select this option, enter the names or email addresses of the
people you wish to share with. This link type also lets you share to guest
users in your organization's Azure Active Directory (Azure AD). You can't
share to external users who aren't guests in your organization.

Note

If your admin has disabled shareable links to People in your organization, you
can only copy and share links using the People with existing access and
Specific people options.

4. Choose the permissions you want to grant via the link.

Links that give access to People in your organization or Specific people always
include at least read access. However, you can also specify if you want the link to
include additional permissions as well.

Note

The Additional permissions settings vary for different items. Learn more
about the item permission model.

Links for People with existing access don't have additional permission
settings because these links don't give access to the item.

Select Apply.

5. In the Create and send link dialog, you have the option to copy the sharing link,
generate an email with the link, or share it via Teams.
Copy link: This option automatically generates a shareable link. Select Copy
in the Copy link dialog that appears to copy the link to your clipboard.
by Email: This option opens the default email client app on your computer
and creates an email draft with the link in it.

by Teams: This option opens Teams and creates a new Teams draft message
with the link in it.

6. You can also choose to send the link directly to Specific people or groups
(distribution groups or security groups). Enter their name or email address,
optionally type a message, and select Send. An email with the link is sent to your
specified recipients.
When your recipients receive the email, they can access the report through the
shareable link.

Manage item links


1. To manage links that give access to the item, in the upper right of the sharing
dialog, select the Manage permissions icon:
2. The Manage permissions pane opens, where you can copy or modify existing links
or grant users direct access. To modify a given link, select Edit.
3. In the Edit link pane, you can modify the permissions included in the link, people
who can use this link, or delete the link. Select Apply after your modification.

This image shows the Edit link pane when the selected audience is People in your
organization can view and share.
This image shows the Edit link pane when the selected audience is Specific people
can view and share. Note that the pane enables you to modify who can use the
link.
4. For more access management capabilities, select the Advanced option in the
footer of the Manage permissions pane. On the management page that opens,
you can:

View, manage, and create links.


View and manage who has direct access and grant people direct access.
Apply filters or search for specific links or people.


Grant and manage access directly
In some cases, you need to grant permission directly instead of sharing a link, such as when granting permission to a service account.

1. Select Manage permission from the context menu.

2. Select Direct access.

3. Select Add user.


4. Enter the names of people or accounts that you need to grant access to directly.
Select the permissions that you want to grant. You can also optionally notify
recipients by email.

5. Select Grant.

6. You can see all the people, groups, and accounts with access in the list on the
permission management page. You can also see their workspace roles,
permissions, and so on. By selecting the context menu, you can modify or remove
the permissions.

Note

You can't modify or remove permissions that are inherited from a workspace
role in the permission management page. Learn more about workspace roles
and the item permission model.

Item permission model


Depending on the item being shared, you may find a different set of permissions that you can grant to recipients when you share. Read permission is always granted during sharing, so the recipient can discover the shared item in the OneLake data hub and open it.

Permission granted while sharing | Effect
Read | Recipient can discover the item in the data hub and open it. Connect to SQL endpoints of Lakehouse and Data warehouse.
Edit | Recipient can edit the item or its content.
Share | Recipient can share the item and grant permissions up to the permissions that they have. For example, if the original recipient has Share, Edit, and Read permissions, they can at most grant Share, Edit, and Read permissions to the next recipient.
Read All with SQL endpoint | Read Lakehouse or Data warehouse data through SQL endpoints.
Read all with Apache Spark | Read Lakehouse or Data warehouse data through OneLake APIs and Spark. Read Lakehouse data through Lakehouse explorer.
Build | Build new content on the dataset.
Execute | Execute or cancel execution of the item.

Considerations and limitations


When a user's permission on an item is revoked through the manage permissions
experience, it can take up to two hours for the change to take effect if the user is
signed in. If the user is not signed in, their permissions will be evaluated the next
time they sign in, and any changes will only take effect at that time.

The Shared with me option in the Browse pane currently only displays Power BI
items that have been shared with you. It doesn't show you non-Power BI Fabric
items that have been shared with you.

Next steps
Workspace roles



Apply sensitivity labels to Fabric items
Article • 11/15/2023

Sensitivity labels from Microsoft Purview Information Protection on items can guard
your sensitive content against unauthorized data access and leakage. They're a key
component in helping your organization meet its governance and compliance
requirements. Labeling your data correctly with sensitivity labels ensures that only
authorized people can access your data. This article shows you how to apply sensitivity
labels to your Microsoft Fabric items.

Note

For information about applying sensitivity labels in Power BI Desktop, see Apply
sensitivity labels in Power BI Desktop.

Prerequisites
Requirements needed to apply sensitivity labels to Fabric items:

Power BI Pro or Premium Per User (PPU) license


Edit permissions on the item you wish to label.

Note

If you can't apply a sensitivity label, or if the sensitivity label is greyed out in the
sensitivity label menu, you may not have permissions to use the label. Contact your
organization's tech support.

Apply a label
There are two common ways of applying a sensitivity label to an item: from the flyout
menu in the item header, and in the item settings.

From the flyout menu - select the sensitivity indication in the header to display the
flyout menu:
In item settings - open the item's settings, find the sensitivity section, and then
choose the desired label:

Next steps
Sensitivity label overview


Delta Lake table format interoperability
Article • 11/29/2023

In Microsoft Fabric, the Delta Lake table format is the standard for analytics. Delta Lake is an
open-source storage layer that brings ACID (Atomicity, Consistency, Isolation, Durability)
transactions to big data and analytics workloads.

All Fabric experiences generate and consume Delta Lake tables, driving interoperability and a
unified product experience. Delta Lake tables produced by one compute engine, such as
Synapse Data warehouse or Synapse Spark, can be consumed by any other engine, such as
Power BI. When you ingest data into Fabric, Fabric stores it as Delta tables by default. You can
easily integrate external data containing Delta Lake tables by using OneLake shortcuts.
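
To illustrate this interoperability, the following PySpark sketch (for a Fabric notebook attached to a lakehouse) writes a Delta table that any other Delta-capable engine, such as the SQL analytics endpoint or a Direct Lake semantic model, can then read. The table name sales_orders is hypothetical.

```python
# Illustrative sketch for a Fabric notebook attached to a lakehouse.
# The `spark` session is provided by the notebook runtime; `sales_orders`
# is a hypothetical table name.
df = spark.createDataFrame(
    [(1, "2023-11-01", 250.0), (2, "2023-11-02", 125.5)],
    schema="order_id INT, order_date STRING, amount DOUBLE",
)

# Fabric stores managed lakehouse tables in the Delta format by default.
df.write.format("delta").mode("overwrite").saveAsTable("sales_orders")

# Any Delta-capable reader (Spark here, but also the SQL analytics
# endpoint or Power BI Direct Lake) can consume the same table.
spark.read.table("sales_orders").show()
```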

Delta Lake features and Fabric experiences


To achieve interoperability, all the Fabric experiences align on the Delta Lake features and
Fabric capabilities. Some experiences can only write to Delta Lake tables, while others can read from them.

Writers: Data warehouses, eventstreams, and exported Power BI semantic models into
OneLake
Readers: SQL analytics endpoint and Power BI direct lake semantic models
Writers and readers: Fabric Spark runtime, dataflows, data pipelines, and Kusto Query
Language (KQL) databases

The following matrix shows key Delta Lake features and their support on each Fabric capability.

Fabric capability | Name-based column mappings | Deletion vectors | V-order writing | Table optimization and maintenance | Write partitions | Read partitions | Delta reader/writer version and default table features
Data warehouse Delta Lake export | No | Yes | Yes | Yes | No | Yes | Reader: 3, Writer: 7, Deletion Vectors
SQL analytics endpoint | No | Yes | N/A (not applicable) | N/A (not applicable) | N/A (not applicable) | Yes | N/A (not applicable)
Fabric Spark runtime 1.2 | Yes | Yes | Yes | Yes | Yes | Yes | Reader: 1, Writer: 2
Fabric Spark runtime 1.1 | Yes | No | Yes | Yes | Yes | Yes | Reader: 1, Writer: 2
Dataflows | Yes | Yes | Yes | No | Yes | Yes | Reader: 1, Writer: 2
Data pipelines | No | No | Yes | No | Yes, overwrite only | Yes | Reader: 1, Writer: 2
Power BI direct lake semantic models | Yes | Yes | N/A (not applicable) | N/A (not applicable) | N/A (not applicable) | Yes | N/A (not applicable)
Export Power BI semantic models into OneLake | Yes | N/A (not applicable) | Yes | No | Yes | N/A (not applicable) | Reader: 2, Writer: 5
KQL databases | No | No | No | No | Yes | Yes | Reader: 1, Writer: 1
Eventstreams | No | No | No | No | Yes | N/A (not applicable) | Reader: 1, Writer: 2

Note

Fabric doesn't write name-based column mappings by default. The default Fabric experience generates tables that are compatible across the service. Delta Lake tables produced by third-party services may have incompatible table features.
Some Fabric experiences don't have inherited table optimization and maintenance capabilities, such as bin-compaction, V-order, and clean up of old unreferenced files. To keep Delta Lake tables optimal for analytics, follow the techniques in Use table maintenance feature to manage delta tables in Fabric for tables ingested using those experiences.
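
For those experiences, routine maintenance can be run from a Fabric Spark notebook with standard Delta Lake SQL commands, as in the following sketch. The table name my_table is a placeholder, and the retention period shown is the conventional Delta Lake 7-day default rather than a required value.

```python
# Sketch of routine Delta table maintenance from a Fabric Spark notebook.
# `my_table` is a placeholder; the `spark` session is provided by the
# notebook runtime.

# Bin-compaction: coalesce small files into larger ones for faster scans.
spark.sql("OPTIMIZE my_table")

# Remove old, unreferenced files. 168 hours (7 days) is the conventional
# Delta Lake default retention period.
spark.sql("VACUUM my_table RETAIN 168 HOURS")
```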

Current limitations
Currently, Fabric doesn't support these Delta Lake features:

Column mapping using IDs


Delta Lake 3.x Uniform
Delta Lake 3.x Liquid clustering
TIMESTAMP_NTZ data type
Identity columns writing (proprietary Databricks feature)
Delta Live Tables (proprietary Databricks feature)

Next steps
What is Delta Lake?
Learn more about Delta Lake tables in Fabric Lakehouse and Synapse Spark.
Learn about Direct Lake in Power BI and Microsoft Fabric.
Learn more about querying tables from the Warehouse through its published Delta Lake
Logs.



Fabric known issues
Article • 11/24/2023

This page lists known issues for Fabric features. Before submitting a Support request,
review this list to see if the issue that you're experiencing is already known and being
addressed. Known issues are also available as an interactive Power BI report .

For service level outages or degradation notifications, check https://support.fabric.microsoft.com/.

Currently active known issues


Select the Title to view more information about that specific known issue.

Issue ID | Product experience | Title | Issues publish date
563 | Data Engineering | Lakehouse doesn't recognize table names with special characters | November 22, 2023
553 | OneLake | OneLake compute transactions not reported in Metrics app | November 15, 2023
549 | Data Warehouse | Making model changes to a semantic model might not work | November 15, 2023
536 | Administration & Management | Feature Usage and Adoption report activity missing | November 9, 2023
530 | Administration & Management | Creating or updating Fabric items is blocked | October 23, 2023
529 | Data Warehouse | Data warehouse with more than 20,000 tables fails to load | October 23, 2023
519 | Administration & Management | Capacity Metrics app shows variance between workload summary and operations | October 13, 2023
521 | Administration & Management | New throttling logic delayed for Power BI and eventstream | October 5, 2023
508 | Data Warehouse | User column incorrectly shows as System in Fabric capacity metrics app | October 5, 2023
506 | Data Warehouse | InProgress status shows in Fabric capacity metrics app for completed queries | October 5, 2023
483 | Administration & Management | Admin monitoring dataset refresh fails and credentials expire | August 24, 2023
454 | Data Warehouse | Warehouse's object explorer doesn't support case-sensitive object names | July 10, 2023
447 | Data Warehouse | Temp tables in Data Warehouse and SQL analytics endpoint | July 5, 2023

Recently closed known issues


Select the Title to view more information about that specific known issue. Fixed issues
are removed after 46 days.

Issue ID | Product experience | Title | Issues publish date | Status
453 | Data Warehouse | Data Warehouse only publishes Delta Lake Logs for Inserts | July 10, 2023 | Fixed: November 15, 2023
446 | Data Warehouse | OneLake table folder not removed when table dropped in data warehouse | July 5, 2023 | Fixed: November 15, 2023
514 | Data Engineering | Unable to start new Spark session after deleting all libraries | September 25, 2023 | Fixed: October 13, 2023
507 | Administration & Management | Selecting view account link in account manager shows wrong page | September 25, 2023 | Fixed: October 13, 2023
467 | Data Engineering | Notebook fails to load after workspace migration | August 3, 2023 | Fixed: October 13, 2023
463 | Data Warehouse | Failure occurs when accessing a renamed Lakehouse or Warehouse | August 3, 2023 | Fixed: October 13, 2023
462 | Administration & Management | Fabric users see the workspace git status column display synced for unsupported items | July 26, 2023 | Fixed: October 13, 2023
458 | Data Factory | Not able to add Lookup activity output to body object of Office 365 | July 26, 2023 | Fixed: October 13, 2023
Next steps
Go to the Power BI report version of this page
Service level outages
Get your questions answered by the Fabric community



Known issue - Lakehouse doesn't
recognize table names with special
characters
Article • 11/24/2023

The Lakehouse explorer doesn't correctly identify Data Warehouse table names containing spaces and special characters, such as non-Latin characters.

Status: Open

Product Experience: Data Engineering

Symptoms
In the Lakehouse Explorer user interface, you see tables whose names contain spaces and special characters in the "Unidentified tables" section.

Solutions and workarounds


To correctly see the table, you can use the SQL analytics endpoint on the Lakehouse. You can also query the tables using Spark notebooks. When using a Spark notebook, you must use the backtick notation and directly reference the table on disk in Spark commands.
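
For example, in a PySpark notebook cell, a table whose name contains spaces or non-Latin characters can be referenced by wrapping the name in backticks; the table name below is hypothetical.

```python
# Illustrative: query a table whose name contains spaces or special
# characters by wrapping the name in backticks in Spark SQL.
# `sales 2023 données` is a hypothetical table name.
df = spark.sql("SELECT * FROM `sales 2023 données`")
df.show()
```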

Next steps
About known issues



Known issue - OneLake compute
transactions not reported in Metrics app
Article • 11/16/2023

The Microsoft Fabric Capacity Metrics app doesn't show data for OneLake transaction
usage reporting. OneLake compute doesn't appear in the Fabric Capacity Metrics app
and doesn't count against capacity limits. OneLake storage reporting doesn't have any
issues and is reported correctly.

Status: Open

Product Experience: OneLake

Symptoms
You don't see OneLake compute usage in the Microsoft Fabric Capacity Metrics app.

Solutions and workarounds


No workarounds at this time. This article will be updated when the fix is released.

Next steps
About known issues



Known issue - Making model changes
to a semantic model might not work
Article • 11/16/2023

Making model changes in a Fabric Data Warehouse's semantic model might not work.
The types of model changes include changes to relationships, measures, and more.
semantic model error appears.

Status: Open

Product Experience: Data Warehouse

Symptoms
If impacted, you experience the following error while attempting to model the Semantic
Model: "You cannot use Direct Lake mode together with other storage modes in the
same model. Composite model does not support Direct Lake mode. Please remove the
unsupported tables or switch them to Direct Lake mode. See https://go.microsoft.com/fwlink/?linkid=2215281 to learn more."

Solutions and workarounds


If impacted, perform the following actions in the semantic model to work around the
problem:

1. Select the Manage Default Semantic Model button


2. Unselect all objects
3. Select the Save button
4. Re-add objects to the semantic model
5. Select the Save button

Next steps
About known issues



Known issue - Feature Usage and
Adoption report activity missing
Article • 11/15/2023

In the Feature Usage and Adoption report, you see all usage activity for workspaces on
Premium Per User (PPU) and Shared capacities filtered out. When viewing the report,
you see less than expected activity levels for the affected workspaces. For workspaces
not on PPU and Shared capacities, usage activity should be considered accurate.

Status: Open

Product Experience: Administration & Management

Symptoms
In the Feature Usage and Adoption report, you notice gaps in audit log activity for certain workspaces hosted on Premium Per User (PPU) and Shared capacities. The report also shows less activity than actually occurred for the affected workspaces.

Solutions and workarounds


Until a fix is released, you can use the usage metrics reports to measure usage at the workspace level, or pull usage data by using the activity events API.

Next steps
Monitor report usage metrics
Monitor usage metrics in the workspaces (preview)
Admin - Get Activity Events
About known issues



Known issue - Creating or updating
Fabric items is blocked
Article • 10/25/2023

You can't create or update a Fabric or Power BI item because your organization's compute capacity has exceeded its limits. You don't receive an error message when the creation or update is blocked; however, when the compute capacity exceeds its limits, a notification is sent to your company's Fabric admin.

Status: Open

Product Experience: Administration & Management

Symptoms
You can't load a page, create an item, or update anything in Fabric.

Solutions and workarounds


In the future, we'll improve the notifications for this known issue. In the meantime, wait a while and retry your request to see if capacity is available. For more information on capacity usage, reach out to your capacity admin.

Next steps
About known issues



Known issue - Data warehouse with
more than 20,000 tables fails to load
Article • 11/15/2023

A data warehouse or SQL analytics endpoint that has more than 20,000 tables fails to
load in the portal. If connecting through any other client tools, you can load the tables.
The issue is only observed while accessing the data warehouse through the portal.

Status: Open

Product Experience: Data Warehouse

Symptoms
Your data warehouse or SQL analytics endpoint fails to load in the portal with the error
message "Batch was canceled," but the same connection strings are reachable using
other client tools.

Solutions and workarounds


If you're impacted, use a client tool such as SQL Server Management Studio or Azure Data Studio to query the data warehouse.

Next steps
About known issues



Known issue - Capacity Metrics app
shows variance between workload
summary and operations
Article • 10/15/2023

Fabric capacities support a breakdown of the capacity usage by workload meter. The
meter usage is derived from the workload summary usage, which contains smoothed
data over multiple time periods. Due to a rounding issue with this summary usage, it
appears lower than the usage from the workload operations in the Capacity Metrics app.
Until this issue is fixed, you can't correlate your operation level usage to your Azure bill
breakdown. While the difference doesn't change the total Fabric capacity bill, the usage
attributed to Fabric workloads might be under-reported.

Status: Open

Product Experience: Administration & Management

Symptoms
You can use the Capacity Metrics app to look at workload usage for any item, such as a data warehouse or a lakehouse. Over a 14-day period in the Capacity Metrics app, the usage appears lower than the bill for that workload meter. Note: Because of this issue, the available capacity meter shows higher usage, giving the erroneous impression that the capacity is more underutilized than it actually is.

Solutions and workarounds


No workarounds at this time. This article will be updated when the fix is released.

Next steps
About known issues

Known issue - New throttling logic
delayed for Power BI and eventstream
Article • 10/06/2023

On October 1, Fabric launched a new capacity throttling logic to reduce throttling for
intermittent usage spikes and prevent overloading capacity. All experiences use the new
throttling logic except for Power BI and eventstream.

Status: Open

Product Experience: Administration & Management

Symptoms
Power BI and eventstream users don't currently see the benefits of the new throttling
logic.

Solutions and workarounds


No workarounds at this time. Power BI and eventstream will move to the new throttling
logic on October 13.

Next steps
About known issues



Known issue - User column incorrectly
shows as System in Fabric capacity
metrics app
Article • 10/06/2023

In a limited number of cases, when you make a user-initiated request to the data
warehouse, the user identity isn't correctly reported to the Fabric capacity metrics app.
In the capacity metrics app, the User column shows as System.

Status: Open

Product Experience: Data Warehouse

Symptoms
In the interactive operations table on the timepoint page, you incorrectly see the value
System under the User column.

Solutions and workarounds


No workarounds at this time. When the fix is released, we'll update this article.

Next steps
About known issues



Known issue - InProgress status shows
in Fabric capacity metrics app for
completed queries
Article • 11/15/2023

In the Fabric capacity metrics app, completed queries in the Data Warehouse SQL analytics endpoint appear with the status "InProgress" in the interactive operations table on the timepoint page.

Status: Open

Product Experience: Data Warehouse

Symptoms
In the interactive operations table on the timepoint page, completed queries in the Data
Warehouse SQL analytics endpoint appear with the status InProgress

Solutions and workarounds


No workarounds at this time. When the fix is released, we'll update this article.

Next steps
About known issues



Known issue - Unable to start new Spark
session after deleting all libraries
Article • 10/15/2023

You might not be able to start a new Spark session in a Fabric notebook; you receive a message that the session has failed. The failure occurs when you install libraries through Workspace settings > Data engineering > Library management and then remove all of the libraries.

Status: Fixed: October 13, 2023

Product Experience: Data Engineering

Symptoms
You're unable to start a new Spark session in a Fabric notebook and receive the error
message: "SparkCoreError/PersonalizingFailed: Livy session has failed. Error code:
SparkCoreError/PersonalizingFailed. SessionInfo.State from SparkCore is Error:
Encountered an unexpected error while personalizing session. Failed step:
'LM_LibraryManagementPersonalizationStatement'. Source: SparkCoreService."

Solutions and workarounds


To work around this issue, install any library (from PyPI, Conda, or a custom library) through Workspace settings > Data engineering > Library management. You don't need to use the library in a Fabric notebook or Spark job definition for it to resolve the error.

If you aren't using any library in your workspace, adding a library from PyPI or Conda to work around this issue might slow your session start time slightly. Instead, you can install a small custom library for faster session start times. You can download a simple JAR file from Fabric Samples/SampleCustomJar and install it.

Next steps
About known issues


Known issue - Selecting View account
link in account manager shows wrong
page
Article • 10/15/2023

When you open the Account manager on any Microsoft Fabric page and select the View account link, you're redirected to the wrong page instead of the user account information page.

Status: Fixed: October 13, 2023

Product Experience: Administration & Management

Symptoms
Selecting the View account link in Account manager doesn't show your account
information.

Solutions and workarounds


You can see this information via the Office 365 portal or by directly accessing your account information.

Next steps
About known issues



Known issue - Admin monitoring
semantic model refresh fails and
credentials expire
Article • 11/15/2023

In some workspaces, the credentials for the admin monitoring workspace semantic
model expire, which shouldn't happen. As a result, the semantic model refresh fails, and
the Feature Usage and Adoption report doesn't work.

Status: Open

Product Experience: Administration & Management

Symptoms
In the admin monitoring workspace, you receive refresh failures. Although the semantic
model refreshed in the past, now the semantic model refresh fails with the error: Data
source error: The credentials provided for the data source are invalid.

Solutions and workarounds


To fix the semantic model refresh, reinitialize the admin monitoring workspace.

Next steps
About known issues



Known issue - Fabric items can't be
created in a workspace moved to a
capacity in a different region
Article • 09/28/2023

If a workspace ever contained a Fabric item other than a Power BI item, even if all the
Fabric items have since been deleted, then moving that workspace to a different
capacity in a different region isn't supported. If you do move the workspace cross-
region, you can't create any Fabric items. In addition, the same behavior occurs if you
configure Spark compute settings in the Data Engineering or Data Science section of the
workspace settings.

Status: Fixed: September 28, 2023

Product Experience: OneLake

Symptoms
After moving the workspace to a different region, you can't create a new Fabric item.
The creation fails, sometimes showing an error message of "Unknown error."

Solutions and workarounds


To work around the issue, you can create a new workspace in the capacity in the
different region and create Fabric items there. Alternatively, you can move the
workspace back to the original capacity in the original region.

Next steps
About known issues



Known issue - Notebook fails to load
after workspace migration
Article • 10/15/2023

If you migrate a workspace that contains Reflex or Kusto items to another capacity, you may see issues loading a notebook within that workspace.

Status: Fixed: October 13, 2023

Product Experience: Data Engineering

Symptoms
When you try to open your notebook, you see an error message similar to "Loading
Notebook... Failed to get content of notebook. TypeError: Failed to fetch".

Solutions and workarounds


To mitigate the issue, you can either:

Migrate your workspace back to its original capacity, or
Delete the Reflex or Kusto item and then migrate your workspace.

Next steps
About known issues



Known issue - Failure occurs when
accessing a renamed Lakehouse or
Warehouse
Article • 10/15/2023

After renaming your Lakehouse or Warehouse items in Microsoft Fabric, you may experience a failure when trying to access the SQL analytics endpoint or Warehouse item using client tools or the web user experience. The failure happens when the underlying SQL file system isn't properly updated after the rename operation, resulting in different names in the portal and the SQL file system.

Status: Fixed: October 13, 2023

Product Experience: Data Warehouse

Symptoms
You see an HTTP status code 500 failure when trying to access the renamed Lakehouse or Warehouse items.

Solutions and workarounds


There's no workaround at this time. We're aware of the issue and are working on a fix. We apologize for the inconvenience and encourage you not to rename Lakehouse or Warehouse items.

Next steps
About known issues



Known issue - Fabric users see
workspace git status column display
synced for unsupported items
Article • 10/15/2023

Fabric users in the admin portal see the workspace git status column as Synced for
unsupported items.

Status: Fixed: October 13, 2023

Product Experience: Administration & Management

Symptoms
Fabric users see the workspace git status column display Synced for unsupported items. To identify which items are supported, see Git integration.

Solutions and workarounds


Not applicable.

Next steps
About known issues



Known issue - not able to add Lookup
activity output to body object of Office
365
Article • 10/15/2023

Currently, there's a bug when you add the output of a Lookup activity as dynamic content to the body object of an Office 365 activity: the Office 365 activity hangs indefinitely.

Status: Fixed: October 13, 2023

Product Experience: Data Factory

Symptoms
The Office 365 activity hangs indefinitely when the output of a Lookup activity is added as dynamic content to its body object.

Solutions and workarounds


Wrap the output of the Lookup activity in a string, for example: @string(activity('Lookup 1').output.firstRow).

Next steps
About known issues



Known issue - OneLake file explorer
doesn't contain items under "My
workspace"
Article • 08/25/2023

OneLake file explorer for the Windows Desktop application doesn't contain items under
the "My workspace" folder.

Status: Fixed: August 24, 2023

Product Experience: OneLake

Symptoms
The "My workspace" appears in the list of workspaces but the folder is empty, when user
view OneLake file explorer from the Windows Desktop application.

Solutions and workarounds


None

Next steps
About known issues


Known issue - The Data Warehouse
Object Explorer doesn't support case-
sensitive object names
Article • 07/14/2023

The object explorer fails to display Fabric Data Warehouse objects (for example, tables and views) that share the same case-insensitive name (for example, table1 and Table1). If two objects have the same case-insensitive name, one of them displays in the object explorer; if there are three or more, none of them display. The objects still exist and can be used from system views (for example, sys.tables), but they aren't available in the object explorer.

Status: Open

Product Experience: Data Warehouse

Symptoms
If an object shares the same case-insensitive name as another object, works as intended, and is listed in a system view but not in the object explorer, you've encountered this known issue.

Solutions and workarounds


We recommend giving objects distinct names rather than relying on case sensitivity. Doing so avoids the inconsistency of an object being listed in system views but missing from the object explorer.
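
As a quick check, a query like the following (a minimal sketch; adjust the filtering to your needs) lists table names that collide when compared case-insensitively:

SQL

-- Find table names that differ only by case, which the object explorer can't display.
SELECT LOWER(name) AS case_insensitive_name, COUNT(*) AS matching_tables
FROM sys.tables
GROUP BY LOWER(name)
HAVING COUNT(*) > 1;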

Next steps
About known issues
Known issue - Data Warehouse only
publishes Delta Lake logs for inserts
Article • 11/16/2023

Delta tables referencing Lakehouse shortcuts that were created from Data Warehouse tables don't update when an update or delete operation is performed on the Data Warehouse table. This limitation is listed in our public documentation: (/fabric/data-warehouse/query-delta-lake-logs#limitations)

Status: Fixed: November 15, 2023

Product Experience: Data Warehouse

Symptoms
The data you see when querying the Delta table, either through a shortcut or through the Delta Lake log, doesn't match the data shown in the Data Warehouse.

Solutions and workarounds


To ensure that the data referenced by the shortcut matches the Data Warehouse table, try the following steps (a T-SQL sketch of steps 1 through 3 follows the list):

1. Create Table as Select (CTAS) into a new table.
2. Drop the old table.
3. CTAS again to the original table name.
4. Drop the existing shortcut.
5. Re-create the shortcut to the Lakehouse.
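
As a minimal sketch of steps 1 through 3 (dbo.Sales and dbo.Sales_copy are hypothetical table names), the T-SQL could look like this:

SQL

-- Step 1: CTAS into a new table, which republishes the table's Delta Lake log.
CREATE TABLE dbo.Sales_copy AS SELECT * FROM dbo.Sales;
-- Step 2: drop the old table.
DROP TABLE dbo.Sales;
-- Step 3: CTAS back to the original table name.
CREATE TABLE dbo.Sales AS SELECT * FROM dbo.Sales_copy;
-- Optional cleanup of the intermediate copy.
DROP TABLE dbo.Sales_copy;

Steps 4 and 5, dropping and re-creating the shortcut, are performed in the Lakehouse user interface rather than in T-SQL.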

Next steps
About known issues



Known issue - pipeline not loading if
user deployed with update app via
public API
Article • 08/25/2023

When a deployment pipeline is deployed via the public API with the 'update app' option (/rest/api/power-bi/pipelines/deploy-all#pipelineupdateappsettings), opening the pipeline page gets stuck on loading.

Status: Fixed: August 24, 2023

Product Experience: Administration & Management

Symptoms
The pipeline page gets stuck on loading.

Solutions and workarounds


You can update the pipeline via the public APIs: /rest/api/power-bi/pipelines.

Next steps
About known issues
Known issue - Temp table usage in Data
Warehouse and SQL analytics endpoint
Article • 11/15/2023

You can create temp tables in the Data Warehouse and in the SQL analytics endpoint, but data from user tables can't be inserted into temp tables, and temp tables can't be joined to user tables.

Status: Open

Product Experience: Data Warehouse

Symptoms
You may notice that data from your user tables can't be inserted into a temp table, and that temp tables can't be joined to user tables.

Solutions and workarounds


Use regular user tables instead of temp tables.
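
For example (a minimal sketch with hypothetical table names), a staging pattern can use an ordinary table where you might otherwise reach for a #temp table:

SQL

-- Stage rows in a regular user table instead of a #temp table.
CREATE TABLE dbo.Orders_staging (OrderId INT, Amount DECIMAL(10, 2));
INSERT INTO dbo.Orders_staging (OrderId, Amount)
SELECT OrderId, Amount FROM dbo.Orders WHERE Amount > 100;

-- Unlike temp tables under this issue, regular tables join to user tables normally.
SELECT o.OrderId, o.Amount
FROM dbo.Orders AS o
JOIN dbo.Orders_staging AS s ON o.OrderId = s.OrderId;

-- Drop the staging table when finished, mimicking temp-table cleanup.
DROP TABLE dbo.Orders_staging;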

Next steps
About known issues



Known issue - OneLake Table folder not
removed when table dropped in Data
Warehouse
Article • 11/16/2023

When you drop a table in the Data Warehouse, it isn't removed from the folder in
OneLake.

Status: Fixed: November 15, 2023

Product Experience: Data Warehouse

Symptoms
After you drop a table in the Data Warehouse by using a T-SQL query, the corresponding folder in OneLake, under Tables, isn't removed automatically and can't be deleted manually.

Solutions and workarounds


None

Next steps
About known issues



Known issue - 'Affected rows' number
displayed doesn't match the real row
number
Article • 07/31/2023

In SQL Server Management Studio (SSMS), the COPY statement may report an incorrect row count in the Messages tab.

Status: Fixed: July 19, 2023

Product Experience: Data Warehouse

Symptoms
Incorrect row count is reported when using the COPY command to ingest data from
SSMS.

Solutions and workarounds


To get an accurate count of the rows ingested with the COPY statement, use the query editor in Microsoft Fabric.
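
As a simple cross-check (dbo.Staging is a hypothetical target table), you can count the rows in the COPY target directly instead of relying on the SSMS Messages tab:

SQL

-- Count the rows actually present in the COPY target.
SELECT COUNT(*) AS rows_ingested FROM dbo.Staging;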

Next steps
About known issues
Known issue - Moving files from outside
of OneLake to OneLake with file
explorer doesn't sync files
Article • 08/01/2023

Within OneLake file explorer, moving a folder (cut and paste, or drag and drop) from outside of OneLake into OneLake fails to sync the contents of that folder. The contents move locally, but only the top-level folder syncs to OneLake. To trigger a sync, either open the files and save them, or move them back out of OneLake and then copy and paste them (rather than moving them).

APPLIES TO: ✔️OneLake

Status: Fixed: July 31, 2023

Product Experience: Administration & Management

Symptoms
You continuously see the sync-pending arrows on the folder and its underlying files, indicating that the files aren't synced to OneLake.

Solutions and workarounds


The fix is available in OneLake file explorer v1.0.9.0 and later versions.

Next steps
About known issues
Known issue - 'Get Tenant Settings' API
returns default values instead of user
configured values
Article • 06/28/2023

When users call the admin API to retrieve tenant settings, it currently returns default values instead of the user-configured values and security groups. This issue is limited to the API and doesn't affect the functionality of the tenant settings page in the admin portal.

APPLIES TO: ✔️Fabric

Status: Fixed: June 28, 2023

Product Experience: Administration & Management

Symptoms
This bug currently affects a large number of customers, so the symptoms are widely observed. Because the API returns default values, the properties and corresponding values obtained through the API may not match what users see in the admin portal. Additionally, the API response doesn't include the security group sections, because the default security group is always empty. Here's an example comparing the expected response with the faulty response:

JSON

Expected response:

{
  "settingName": "AllowServicePrincipalsUseReadAdminAPIs",
  "title": "Allow service principals to use read-only admin APIs",
  "enabled": true,
  "canSpecifySecurityGroups": true,
  "enabledSecurityGroups": [
    { "graphId": "494a15ab-0c40-491d-ab15-xxxxxxxxxxx", "name": "testgroup" }
  ],
  "tenantSettingGroup": "Admin API settings"
}

Faulty response:

{
  "settingName": "AllowServicePrincipalsUseReadAdminAPIs",
  "title": "Allow service principals to use read-only admin APIs",
  "enabled": false,
  "canSpecifySecurityGroups": true,
  "tenantSettingGroup": "Admin API settings"
}

Solutions and workarounds


There's no viable workaround, so we recommend waiting for the bug to be fixed before using this API.
Next steps
About known issues
Microsoft Fabric product, experience,
and item icons
Article • 11/21/2023

This article provides information about the official collection of icons for Microsoft
Fabric that you can use in architectural diagrams, training materials, or documentation.

Do's
Use the icons to illustrate how products can work together.
In diagrams, we recommend including a label with the product, experience, or item name close to the icon.
Use the icons as they appear within the product.

Don'ts
Don't crop, flip, or rotate icons.
Don't distort or change the icon shape in any way.
Don't use Microsoft product icons to represent your product or service.

Terms
Microsoft permits the use of these icons in architectural diagrams, training materials, or
documentation. You may copy, distribute, and display the icons only for the permitted
use unless granted explicit permission by Microsoft. Microsoft reserves all other rights.

Download icons from GitHub

Related content
Microsoft Power Platform icons
Azure icons
Dynamics 365 icons



What's new in Microsoft Fabric? archive
Article • 11/17/2023

This archive page is periodically updated with content archived from What's new in Microsoft Fabric?

To follow the latest in Fabric news and features, see the Microsoft Fabric Blog. Also follow the latest in Power BI at What's new in Power BI?

Community
This section summarizes previous Microsoft Fabric community opportunities for prospective and current influencers and MVPs. To learn about the Microsoft MVP Award and to find MVPs, see mvp.microsoft.com.

Month: May 2023
Feature: Learn about Microsoft Fabric from MVPs
Learn more: Prior to our official announcement of Microsoft Fabric at Build 2023, MVPs had the opportunity to familiarize themselves with the product. For several months, they have been actively testing Fabric and gaining valuable insights. Now, their enthusiasm for the product is evident as they eagerly share their knowledge and thoughts about Microsoft Fabric with the community.

Data Factory in Microsoft Fabric


This section summarizes archived new features and capabilities of Data Factory in Microsoft Fabric. Follow issues and feedback through the Data Factory Community Forum.

Month: May 2023
Feature: Introducing Data Factory in Microsoft Fabric
Learn more: Data Factory enables you to develop enterprise-scale data integration solutions with next-generation dataflows and data pipelines.

Synapse Data Engineering in Microsoft Fabric


This section summarizes archived new features and capabilities of data engineering,
including Data Factory in Microsoft Fabric.
Month: May 2023
Feature: Introducing Data Engineering in Microsoft Fabric
Learn more: With Synapse Data Engineering, one of the core experiences of Microsoft Fabric, data engineers feel right at home, able to leverage the power of Apache Spark to transform their data at scale and build out a robust lakehouse architecture.

Synapse Data Science in Microsoft Fabric


This section summarizes archived improvements and features for the Data Science
experience in Microsoft Fabric.

Month: May 2023
Feature: Introducing Synapse Data Science in Microsoft Fabric
Learn more: With data science in Microsoft Fabric, you can utilize the power of machine learning features to seamlessly enrich data as part of your data and analytics workflows.

Synapse Data Warehouse


This section summarizes archived improvements and features for Synapse Data
Warehouse in Microsoft Fabric.

Month: May 2023
Feature: Introducing Synapse Data Warehouse in Microsoft Fabric
Learn more: Synapse Data Warehouse is the next generation of data warehousing in Microsoft Fabric and the first transactional data warehouse to natively support an open data format, Delta-Parquet.

Synapse Real-Time Analytics in Microsoft Fabric


This section summarizes archived improvements and features for real-time analytics in
Microsoft Fabric.

Month: May 2023
Feature: What's New in Kusto – Build 2023!
Learn more: Announcing Synapse Real-Time Analytics in Microsoft Fabric (Preview)!

Synapse Real-Time Analytics samples and guidance


Month: May 2023
Feature: Ingest, transform, and route real-time events with Microsoft Fabric eventstreams
Learn more: You can now ingest, capture, transform, and route real-time events to various destinations in Microsoft Fabric with a no-code experience using Microsoft Fabric eventstreams.

Fabric and Microsoft 365


This section includes articles and announcements about Microsoft Fabric integration
with Microsoft Graph and Microsoft 365.

Month: May 2023
Feature: Step-by-Step Guide to Enable Microsoft Fabric for Microsoft 365 Developer Account
Learn more: This blog reviews how to enable Microsoft Fabric with a Microsoft 365 Developer Account and the Fabric Free Trial.

Month: May 2023
Feature: Microsoft 365 Data + Microsoft Fabric better together
Learn more: Microsoft 365 Data Integration for Microsoft Fabric enables you to manage your Microsoft 365 data alongside your other data sources in one place with a suite of analytical experiences.

Related content
Modernization Best Practices and Reusable Assets Blog
Azure Data Explorer Blog
Get started with Microsoft Fabric
Microsoft Training Learning Paths for Fabric
End-to-end tutorials in Microsoft Fabric
Fabric Known Issues
Microsoft Fabric Blog
Microsoft Fabric terminology
What's new in Power BI?

Next step
What's new in Microsoft Fabric?
