Fabric Get Started
What is Fabric?
Microsoft Fabric is an all-in-one analytics solution for enterprises that covers everything
from data movement to data science, Real-Time Analytics, and business intelligence. It
offers a comprehensive suite of services, including data lake, data engineering, and data
integration, all in one place.
With Fabric, you don't need to piece together different services from multiple vendors.
Instead, you can enjoy a highly integrated, end-to-end, and easy-to-use product that is
designed to simplify your analytics needs.
SaaS foundation
Microsoft Fabric brings together new and existing components from Power BI, Azure
Synapse, and Azure Data Factory into a single integrated environment. These
components are then presented in various customized user experiences.
Fabric brings together experiences such as Data Engineering, Data Factory, Data Science,
Data Warehouse, Real-Time Analytics, and Power BI onto a shared SaaS foundation. This
integration provides the following advantages:
Fabric allows creators to concentrate on producing their best work, freeing them from
the need to integrate, manage, or understand the underlying infrastructure that
supports the experience.
Data Factory - Data Factory in Microsoft Fabric combines the simplicity of Power Query with the
scale and power of Azure Data Factory. You can use more than 200 native
connectors to connect to data sources on-premises and in the cloud. For more
information, see What is Data Factory in Microsoft Fabric?
Data Science - Data Science experience enables you to build, deploy, and
operationalize machine learning models seamlessly within your Fabric experience.
It integrates with Azure Machine Learning to provide built-in experiment tracking
and model registry. Data scientists are empowered to enrich organizational data
with predictions and allow business analysts to integrate those predictions into
their BI reports, shifting the organization from descriptive to predictive insights. For more
information, see What is Data science in Microsoft Fabric?
Fabric brings together all these experiences into a unified platform to offer the most
comprehensive big data analytics platform in the industry.
Microsoft Fabric enables organizations, and individuals, to turn large and complex data
repositories into actionable workloads and analytics, and is an implementation of data
mesh architecture. To learn more about data mesh, visit the article that explains data
mesh architecture.
OneLake
The data lake is the foundation on which all the Fabric services are built. Microsoft Fabric
Lake is also known as OneLake. It's built into the Fabric service and provides a unified
location to store all organizational data where the experiences operate.
OneLake is built on top of ADLS (Azure Data Lake Storage) Gen2. It provides a single
SaaS experience and a tenant-wide store for data that serves both professional and
citizen developers. The OneLake SaaS experience simplifies data management, eliminating
the need for users to understand infrastructure concepts such as resource groups,
RBAC (Role-Based Access Control), Azure Resource Manager, redundancy, or regions.
It doesn't even require the user to have an Azure account.
OneLake eliminates today's pervasive and chaotic data silos, which individual developers
create when they provision and configure their own isolated storage accounts. Instead,
OneLake provides a single, unified storage system for all developers, where discovery
and data sharing are trivial and compliance with policy and security settings is enforced
centrally and uniformly. For more information, see What is OneLake?
The tenant maps to the root of OneLake and is at the top level of the hierarchy. You can
create any number of workspaces within a tenant, which can be thought of as folders.
The following image shows the various Fabric items where data is stored. It's an example
of how various items within Fabric would store data inside OneLake. As displayed, you
can create multiple workspaces within a tenant and multiple lakehouses within each
workspace. A lakehouse is a collection of files, folders, and tables that represents a
database over a data lake. To learn more, see What is a lakehouse?.
Every developer and business unit in the tenant can instantly create their own
workspaces in OneLake. They can ingest data into their own lakehouses, start
processing, analyzing, and collaborating on the data, just like OneDrive in Office.
All the Microsoft Fabric compute experiences are prewired to OneLake, just like the
Office applications are prewired to use the organizational OneDrive. The experiences
such as Data Engineering, Data Warehouse, Data Factory, Power BI, and Real-Time
Analytics use OneLake as their native store. They don't need any extra configuration.
OneLake is designed to allow instant mounting of existing PaaS storage accounts into
OneLake with the Shortcut feature. There's no need to migrate or move any of the
existing data. Using shortcuts, you can access the data stored in Azure Data Lake
Storage.
Additionally, shortcuts allow you to easily share data between users and applications
without moving or duplicating information. The shortcut capability extends to other
storage systems, allowing you to compose and analyze data across clouds with
transparent, intelligent caching that reduces egress costs and brings data closer to
compute.
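Because OneLake exposes the same APIs as ADLS Gen2, any path in it, including one that resolves through a shortcut, can be addressed with a standard abfss URI. As a minimal sketch (the helper function below is hypothetical; only the documented onelake.dfs.fabric.microsoft.com endpoint format is assumed):

```python
def onelake_uri(workspace: str, item: str, item_type: str, path: str) -> str:
    """Build an ADLS Gen2-compatible abfss URI for data in OneLake.

    OneLake addresses data as
      abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<item>.<ItemType>/<path>
    so existing ADLS Gen2 tooling can read it without migrating any data.
    """
    return (f"abfss://{workspace}@onelake.dfs.fabric.microsoft.com/"
            f"{item}.{item_type}/{path}")

# Example: a CSV stored in the Files section of a lakehouse named "Orders"
# in a workspace named "Sales" (both names are made up for illustration).
uri = onelake_uri("Sales", "Orders", "Lakehouse", "Files/raw/orders.csv")
print(uri)
# abfss://Sales@onelake.dfs.fabric.microsoft.com/Orders.Lakehouse/Files/raw/orders.csv
```

Any client that already speaks the ADLS Gen2 protocol (Spark, the Azure Storage SDKs, and so on) can be pointed at such a URI.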
Interop - Integrate your solution with the OneLake Foundation and establish basic
connections and interoperability with Fabric.
Develop on Fabric - Build your solution on top of the Fabric platform or seamlessly
embed Fabric's functionalities within your existing applications. It allows you to
actively leverage Fabric capabilities.
Build a Fabric workload - Create customized workloads and experiences in Fabric.
Tailor your offerings to deliver their value proposition while leveraging the Fabric
ecosystem.
Next steps
Microsoft Fabric terminology
Create a workspace
Navigate to your items from Microsoft Fabric Home page
End-to-end tutorials in Microsoft Fabric
Microsoft Fabric has launched as a public preview and is temporarily provided free of
charge when you sign up for the Microsoft Fabric (Preview) trial. Your use of the
Microsoft Fabric (Preview) trial includes access to the Fabric product experiences and the
resources to create and host Fabric items. The Fabric (Preview) trial lasts for a period of
60 days, but may be extended by Microsoft, at our discretion. The Microsoft Fabric
(Preview) trial experience is subject to certain capacity limits as further explained below.
This document helps you understand and start a Fabric (Preview) trial.
5. Open your Account manager again. Notice that you now have a heading for Trial
status. Your Account manager keeps track of the number of days remaining in your
trial. You also see the countdown in your Fabric menu bar when you work in a
product experience.
Congratulations! You now have a Fabric (Preview) trial that includes a Power BI individual
trial (if you didn't already have a Power BI paid license) and a Fabric (Preview) trial
capacity.
With a Fabric (Preview) trial, you get full access to all of the Fabric experiences and
features. You also get OneLake storage up to 1 TB. Create Fabric items and collaborate
with others in the same Fabric trial capacity. With a Fabric (Preview) trial, you can:
You don't have access to your capacity until you put something into it. To begin using
your Fabric (Preview) trial, add items to My workspace or create a new workspace.
Assign that workspace to your trial capacity using the "Trial" license mode, and then all
the items in that workspace are saved and executed in that capacity.
To learn more about workspaces and license mode settings, see Workspaces.
Capacity units
When you start a Fabric (Preview) trial, Microsoft provisions one 64 capacity unit (CU)
trial capacity. These CUs allow users of your trial capacity to consume 64 × 60 = 3,840 CU-seconds
every minute. Every time the Fabric trial capacity is used, it consumes CUs. The Fabric
platform aggregates consumption from all experiences and applies it to your reserved
capacity. Not all functions have the same consumption rate. For example, running a Data
Warehouse might consume more capacity units than authoring a Power BI report. When
the capacity consumption exceeds its size, Microsoft slows down the experience similar
to slowing down CPU performance.
There's no limit on the number of workspaces or items you can create within your
capacity. The only constraint is the availability of capacity units and the rate at which you
consume them.
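The arithmetic above can be made concrete with a small illustrative sketch. Only the 64-CU trial size and the CU-seconds-per-minute relationship come from the text; the function names and sample consumption numbers are made up for illustration:

```python
TRIAL_CUS = 64  # size of the provisioned trial capacity described above

def cu_second_budget(capacity_units: int, seconds: int = 60) -> int:
    """CU-seconds available over a window: each CU supplies one CU-second per second."""
    return capacity_units * seconds

def is_throttled(consumed_cu_seconds: int, capacity_units: int) -> bool:
    """When consumption exceeds the budget, Fabric slows the experience
    (similar to slowing down CPU performance) rather than rejecting work."""
    return consumed_cu_seconds > cu_second_budget(capacity_units)

print(cu_second_budget(TRIAL_CUS))   # 64 * 60 = 3840 CU-seconds per minute
print(is_throttled(3000, TRIAL_CUS)) # under budget: runs at full speed
print(is_throttled(5000, TRIAL_CUS)) # over budget: experience is slowed
```

This is why the number of workspaces and items is unconstrained: the only real limit is how quickly workloads burn through the per-minute CU-second budget.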
You're the capacity owner for your trial capacity. As your own capacity administrator for
your Fabric trial capacity, you have access to a detailed and transparent report for how
capacity units are consumed via the Capacity Metrics app. For more information about
administering your trials, see Administer a Fabric trial capacity.
Share Fabric items, such as machine learning models, warehouses, and notebooks,
and collaborate on them with other Fabric users.
Additionally, if you cancel your trial, you may not be able to start another trial. If you
want to retain your data and continue to use Microsoft Fabric (Preview), you can
purchase a capacity and migrate your workspaces to that capacity. To learn more about
workspaces and license mode settings, see Workspaces.
Each trial user is the capacity admin for their trial capacity. Microsoft currently doesn't
support multiple capacity administrators per trial capacity. Therefore, Fabric
administrators can't view metrics for individual capacities. We do have plans to support
this capability in an upcoming admin monitoring feature.
If you don't see the Start trial button in your Account manager:
Your Fabric administrator may have disabled access, and you can't start a Fabric
(Preview) trial. Contact your Fabric administrator to request access. You can also
start a trial using your own tenant. For more information, see Sign up for Power BI
with a new Microsoft 365 account.
If you're an existing Power BI trial user, you don't see Start trial in your Account
manager. You can start a Fabric (Preview) trial by attempting to create a Fabric
item. When you attempt to create a Fabric item, you're prompted to start a Fabric
(Preview) trial. If you don't see this prompt, your Fabric administrator may have
disabled the Fabric (Preview) feature.
You might not be able to start a trial if your tenant has exhausted its limit of trial
capacities. If that is the case, you have the following options:
Request another trial capacity user to share their trial capacity workspace with
you. For more information, see Give users access to workspaces.
To increase tenant trial capacity limits, reach out to your Fabric administrator to
create a Microsoft support ticket.
This known bug occurs when the Fabric administrator turns off trials after you start a
trial. To add your workspace to the trial capacity, open the Admin portal by selecting it
from the gear icon in the top menu bar. Then, select Trial > Capacity settings and
choose the name of the capacity. If you don't see your workspace assigned, add it here.
What is the region for my Fabric (Preview) trial capacity?
If you start the trial using the Account manager, your trial capacity is located in the
home region for your tenant. See Find your Fabric home region for information about
how to find your home region, where your data is stored.
Not all regions are available for the Fabric (Preview) trial. Start by looking up your home
region and then check to see if your region is supported for the Fabric (Preview) trial. If
your home region doesn't have Fabric enabled, don't use the Account manager to start
a trial. To start a trial in a region that is not your home region, follow the steps in Other
ways to start a Fabric (Preview) trial. If you've already started a trial from Account
manager, cancel that trial and follow the steps in Other ways to start a Fabric (Preview)
trial instead.
You can't move your organization's tenant between regions by yourself. If you need to
change your organization's default data location from the current region to another
region, you must contact support to manage the migration for you. For more
information, see Move between regions.
If you don't upgrade to a paid Fabric capacity at the end of the trial period, non-Power
BI Fabric items are removed according to the retention policy.
How is the Fabric (Preview) trial different from an individual trial of Power BI paid?
A per-user trial of Power BI paid allows access to the Fabric landing page. Once you sign
up for the Fabric (Preview) trial, you can use the trial capacity for storing Fabric
workspaces and items and for running Fabric experiences. All rules guiding Power BI
licenses and what you can do in the Power BI experience remain the same. The key
difference is that a Fabric capacity is required to access non-Power BI experiences and
items.
During the Fabric preview, you can't create Fabric items in the trial capacity if you or
your tenant have private links enabled and public access is disabled. This limitation is a
known bug for Fabric preview.
Autoscale
The Fabric (Preview) trial capacity doesn't support autoscale. If you need more compute
capacity, you can purchase a Fabric capacity in Azure.
The Fabric (Preview) trial is different from a Proof of Concept (POC). A Proof of
Concept (POC) is a standard enterprise vetting process that requires financial
investment and months of work customizing the platform and feeding it data. The Fabric
(Preview) trial is free for users through public preview and doesn't require
customization. Users can sign up for a free trial and start running product
experiences immediately, within the confines of available capacity units.
You don't need an Azure subscription to start a Fabric (Preview) trial. If you have an
existing Azure subscription, you can purchase a (paid) Fabric capacity.
You can migrate your existing workspaces into a trial capacity using workspace settings
and choosing "Trial" as the license mode. To learn how to migrate workspaces, see
create workspaces.
Next steps
Learn about licenses
Review Fabric terminology
This article describes the meaning of preview in Microsoft Fabric, and explains how
preview experiences and features can be used.
Preview experiences and features are released with limited capabilities, but are made
available on a preview basis so customers can get early access and provide feedback.
Are not subject to SLAs. However, Microsoft Support is eager to get your feedback on
the preview functionality, and might provide best-effort support in certain cases.
Global administrator
Fabric administrator
Learn the definitions of terms used in Microsoft Fabric, including terms specific to
Synapse Data Warehouse, Synapse Data Engineering, Synapse Data Science, Synapse
Real-Time Analytics, Data Factory, and Power BI.
General terms
Capacity: Capacity is a dedicated set of resources that is available at a given time
to be used. Capacity defines the ability of a resource to perform an activity or to
produce output. Different items consume different capacity at a certain time. Fabric
offers capacity through the Fabric SKU and Trials. For more information, see What
is capacity?
Item: An item is a set of capabilities within an experience. Users can create, edit, and
delete them. Each item type provides different capabilities. For example, the Data
Engineering experience includes the lakehouse, notebook, and Spark job definition
items.
Apache Spark job: A Spark job is part of a Spark application that is run in parallel
with other jobs in the application. A job consists of multiple tasks. For more
information, see Spark job monitoring.
Apache Spark job definition: A Spark job definition is a set of parameters, set by
the user, indicating how a Spark application should be run. It allows you to submit
batch or streaming jobs to the Spark cluster. For more information, see What is an
Apache Spark job definition?
V-order: A write optimization to the parquet file format that enables fast reads and
provides cost efficiency and better performance. All the Fabric engines write v-
ordered parquet files by default.
Data Factory
Connector: Data Factory offers a rich set of connectors that allow you to connect
to different types of data stores. Once connected, you can transform the data. For
more information, see connectors.
Data pipeline: In Data Factory, a data pipeline is used for orchestrating data
movement and transformation. These pipelines are different from the deployment
pipelines in Fabric. For more information, see Pipelines in the Data Factory
overview.
Dataflow Gen2: Dataflows provide a low-code interface for ingesting data from
hundreds of data sources and transforming your data. Dataflows in Fabric are
referred to as Dataflow Gen2. Dataflow Gen1 exists in Power BI. Dataflow Gen2
offers extra capabilities compared to Dataflows in Azure Data Factory or Power BI.
You can't upgrade from Gen1 to Gen2. For more information, see Dataflows in the
Data Factory overview.
KQL Queryset: The KQL Queryset is the item used to run queries, view results, and
manipulate query results on data from your Data Explorer database. The queryset
includes the databases and tables, the queries, and the results. The KQL Queryset
allows you to save queries for future use, or export and share queries with others.
For more information, see Query data in the KQL Queryset.
Event stream: The Microsoft Fabric event streams feature provides a centralized
place in the Fabric platform to capture, transform, and route real-time events to
destinations with a no-code experience. An event stream consists of various
streaming data sources, ingestion destinations, and an event processor when the
transformation is needed. For more information, see Microsoft Fabric event
streams.
OneLake
Shortcut: Shortcuts are embedded references within OneLake that point to other
file store locations. They provide a way to connect to existing data without having
to directly copy it. For more information, see OneLake shortcuts.
Next steps
Navigate to your items from Microsoft Fabric Home page
Discover data items in the OneLake data hub
End-to-end tutorials in Microsoft Fabric
What's new in Microsoft Fabric?
Article • 11/30/2023
This page is continuously updated with a recent review of what's new in Microsoft
Fabric. To follow the latest in Fabric news and features, see the Microsoft Fabric Blog .
Also follow the latest in Power BI at What's new in Power BI?
November 2023 - Microsoft Fabric, explained for existing Synapse users: A focus on what customers using the current Platform-as-a-Service (PaaS) version of Synapse can expect. We'll explain what the general availability of Fabric means for your current investments (spoiler: we fully support them), but also how to think about the future.

November 2023 - Microsoft Fabric is now generally available: Microsoft Fabric is now generally available for purchase. Microsoft Fabric can reshape how your teams work with data by bringing everyone together on a single, AI-powered platform built for the era of AI. This includes the experiences of Fabric: Power BI, Data Factory, Data Engineering, Data Science, Real-Time Analytics, Data Warehouse, and the overall Fabric platform.

November 2023 - Fabric workloads are now generally available: Microsoft Fabric is now generally available! Microsoft Fabric Synapse Data Warehouse, Data Engineering & Data Science, Real-Time Analytics, Data Factory, OneLake, and the overall Fabric platform are now generally available.

October 2023 - Announcing the Fabric roadmap: Announcing the Fabric Roadmap. One place you can see what we are working on and when you can expect it to be available.

October 2023 - Get started with semantic link: Explore how semantic link seamlessly connects Power BI semantic models with Synapse Data Science within Microsoft Fabric. Learn more at Semantic link in Microsoft Fabric: Bridging BI and Data Science. You can also check out the semantic link sample notebooks that are now available in the fabric-samples GitHub repository. These notebooks showcase the use of semantic link's Python library, SemPy, in Microsoft Fabric.

September 2023 - Fabric Capacities, everything you need to know about what's new and what's coming: Read more about the improvements we're making to the Fabric capacity management platform for Fabric and Power BI users.

August 2023 - Strong, useful, beautiful: Designing a new way of getting data: From the Data Integration Design Team, learn about the strong, creative, and functional design of Microsoft Fabric, as Microsoft designs for the future of data integration.

August 2023 - Learn Live: Get started with Microsoft Fabric: Calling all professionals, enthusiasts, and learners! On August 29, we'll kick off the "Learn Live: Get started with Microsoft Fabric" series in partnership with Microsoft's Data Advocacy teams and Microsoft Worldwide Learning teams to deliver nine live-streamed lessons covering topics related to Microsoft Fabric.

July 2023 - Step-by-Step Tutorial: Building ETLs with Microsoft Fabric: In this comprehensive guide, we walk you through the process of creating Extract, Transform, Load (ETL) pipelines using Microsoft Fabric.

July 2023 - Free preview usage of Microsoft Fabric: We're extending the free preview usage of Fabric experiences (other than Power BI). These experiences won't …

June 2023 - Get skilled on Microsoft Fabric, the AI-powered analytics platform: Who is Fabric for? How can I get skilled? This blog post answers these questions about Microsoft Fabric, a comprehensive data analytics solution that unifies many experiences on a single platform.

June 2023 - Introducing the end-to-end scenarios in Microsoft Fabric: In this blog, we explore four end-to-end scenarios that are typical paths our customers take to extract value and insights from their data using Microsoft Fabric.

May 2023 - Get Started with Microsoft Fabric, all-in-one place for all your analytical needs: A technical overview and introduction to everything from data movement to data science, real-time analytics, and business intelligence in Microsoft Fabric.

May 2023 - Microsoft OneLake in Fabric, the OneDrive for data: Microsoft OneLake brings the first multicloud SaaS data lake for the entire organization.
Copilot in notebooks (preview): The Copilot in Fabric Data Science and Data Engineering notebooks is designed to accelerate productivity, provide helpful answers and guidance, and generate code for common tasks like data exploration, data preparation, and machine learning. You can interact and engage with the AI from either the chat panel or from within notebook cells, using magic commands to get insights from data faster. For more information, see Copilot in notebooks.

Data Activator (preview): We are thrilled to announce that Data Activator is now in preview and is enabled for all existing Microsoft Fabric users.

Data Engineering: Environment (preview): We are thrilled to announce the preview of the Environment in Fabric. The Environment is a centralized item that allows you to configure all the required settings for running a Spark job in one place.

Data Wrangler for Spark DataFrames (preview): Data Wrangler now supports Spark DataFrames in preview. Until now, users have been able to explore and transform pandas DataFrames using common operations that can be converted to Python code in real time. The new release allows users to edit Spark DataFrames in addition to pandas DataFrames with Data Wrangler.

Lakehouse support for git integration and deployment pipelines (preview): The Lakehouse now integrates with the lifecycle management capabilities in Microsoft Fabric, providing standardized collaboration between all development team members throughout the product's life. Lifecycle management facilitates an effective product versioning and release process by continuously delivering features and bug fixes into multiple environments.

Microsoft 365 connector now supports ingesting data into Lakehouse (preview): The Microsoft 365 connector now supports ingesting data into Lakehouse tables.

Microsoft Fabric User APIs (preview): We're happy to announce the preview of Microsoft Fabric User APIs. The Fabric user APIs are a major enabler for both enterprises and partners to use Microsoft Fabric, as they enable end-to-end, fully automated interaction with the service, enable integration of Microsoft Fabric into external web applications, and generally enable customers and partners to scale their solutions more easily.

Notebook Git integration (preview): Fabric notebooks now offer Git integration for source control using Azure DevOps. It allows users to easily control the notebook code versions and manage the git branches by leveraging the Fabric Git functions and Azure DevOps.

Notebook in deployment pipelines (preview): Now you can also use notebooks to deploy your code across different environments, such as development, test, and production. You can also use deployment rules to customize the behavior of your notebooks when they are deployed, such as changing the default Lakehouse of a notebook. Get started with deployment pipelines to set up your deployment pipeline; notebooks show up in the deployment content automatically.

Splunk add-on (preview): The Microsoft Fabric add-on for Splunk allows users to ingest logs from the Splunk platform into a Fabric KQL DB using the Kusto Python SDK.

VNet Gateways in Dataflow Gen2 (preview): VNet Data Gateway support for Dataflows Gen2 in Fabric is now in preview. The VNet data gateway helps to connect from Fabric Dataflows Gen2 to Azure data services within a VNet, without the need of an on-premises data gateway.
Community
This section summarizes new Microsoft Fabric community opportunities for prospective
and current influencers and MVPs.
November 2023 - Microsoft Fabric MVP Corner, Special Edition (Ignite): A special edition of the "Microsoft Fabric MVP Corner" blog series highlights selected content related to Fabric and created by MVPs around the Microsoft Ignite 2023 conference, when we announced Microsoft Fabric generally available.

Cloud Skills Challenge: Win a VIP pass to the next Microsoft Ignite. The challenge is on until January 15, 2024, and helps you prepare for the Microsoft Certified: Fabric Analytics Engineer Associate certification and the new Microsoft Applied Skills credentials covering the lakehouse and data warehouse scenarios, which are coming in the next months.

October 2023 - Microsoft Fabric MVP Corner, October 2023: Highlights of selected content related to Fabric and created by MVPs from October 2023.

September 2023 - Microsoft Fabric MVP Corner, September 2023: Highlights of selected content related to Fabric and created by MVPs from September 2023.

August 2023 - Microsoft Fabric MVP Corner, August 2023: Highlights of selected content related to Fabric and created by MVPs from August 2023.

July 2023 - Microsoft Fabric MVP Corner, July 2023: Highlights of selected content related to Fabric and created by MVPs in July 2023.

June 2023 - Microsoft Fabric MVP Corner, June 2023: The Fabric MVP Corner blog series highlights selected content related to Fabric and created by MVPs in June 2023.

May 2023 - Fabric User Groups: Power BI User Groups are now Fabric User Groups!
Power BI
Updates to Power BI Desktop and the Power BI service are summarized at What's new in
Power BI?
November 2023 - Semantic Link: OneLake integrated semantic models: Semantic Link adds support for the recently released OneLake integrated semantic models! You can now directly access data using your semantic model's name via OneLake, using the read_table function and the new mode parameter set to onelake.

November 2023 - Integrate your SAP data into Microsoft Fabric: Using the built-in connectivity of Microsoft Fabric is, of course, the easiest and least-effort way of adding SAP data to your Fabric data estate.

November 2023 - Fabric Changing the game: Validate dependencies with Semantic Link, Data Quality: Follow this step-by-step example of how to explore the functional dependencies between columns in a table using semantic link. Semantic link is a feature that allows you to establish a connection between Power BI datasets and Synapse Data Science in Microsoft Fabric.

October 2023 - Fabric Change the Game: Exploring the data: Follow this realistic example of reading data from Azure Data Lake Storage using shortcuts, organizing raw data into structured tables, and basic data exploration. Our data exploration uses as a source the diverse and captivating city of London, with information extracted from data.london.gov.uk/.

September 2023 - Announcing an end-to-end workshop: Analyzing Wildlife Data with Microsoft Fabric: A new workshop guides you in building a hands-on, end-to-end data analytics solution for the Snapshot Serengeti dataset using Microsoft Fabric. The dataset consists of approximately 1.68M wildlife images and image annotations provided in .json files.

September 2023 - New learning path: Implement a Lakehouse with Microsoft Fabric: The new Implement a Lakehouse with Microsoft Fabric learning path introduces the foundational components of implementing a data lakehouse with Microsoft Fabric, with seven in-depth modules.

July 2023 - Connecting to OneLake: How do I connect to OneLake? This blog covers how to connect and interact with OneLake, including how OneLake achieves its compatibility with any tool used over ADLS Gen2!

June 2023 - Using Azure Databricks with Microsoft Fabric and OneLake: How does Azure Databricks work with Microsoft Fabric? This blog post answers that question and gives more details on how the two systems can work together.
Data Factory in Microsoft Fabric
This section summarizes recent new features and capabilities of Data Factory in
Microsoft Fabric. Follow issues and feedback through the Data Factory Community
Forum .
November Dataflow Gen2 The connectors for Lakehouse, Warehouse, and KQL
2023 General availability of Database are now generally available . We encourage
Fabric connectors you to use these connectors when trying to connect to
data from any of these Fabric experiences.
November Dataflow Gen2 Column binding support is enabled for SAP HANA. This
2023 Support for column optional parameter results in significantly improved
binding for SAP performance. For more information, see Support for
HANA connector column binding for SAP HANA connector .
November Dataflow Gen2 When using a Dataflow Gen2 in Fabric, the system will
2023 staging artifacts automatically create a set of staging artifacts. Now, these
hidden staging artifacts will be abstracted from the Dataflow
Gen2 experience and will be hidden from the workspace
list. No action is required by the user and this change has
no impact on existing Dataflows.
November Dataflow Gen2 VNet Data Gateway support for Dataflows Gen2 in Fabric
2023 Support for VNet is now in preview. The VNet data gateway helps to
Gateways preview connect from Fabric Dataflows Gen2 to Azure data
services within a VNet, without the need of an on-
premises data gateway.
November Cross workspace You can now clone your data pipelines across workspaces
2023 "Save as" by using the "Save as" button .
November Dynamic content In the Email and Teams activities, you can now add
2023 flyout integration dynamic content with ease. With this new pipeline
with Email and expression integration, you'll see a flyout menu to
Teams activity help you select and build your message content quickly
without needing to learn the pipeline expression
language.
November Copy activity now The Copy activity in data pipelines now supports fault
2023 supports fault tolerance for Fabric Warehouse . Fault tolerance allows
tolerance for Fabric you to handle certain errors without interrupting data
Data Warehouse movement. By enabling fault tolerance, you can continue
connector to copy data while skipping incompatible data like
duplicated rows.
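The skip-instead-of-fail behavior can be sketched in plain Python. This is a hypothetical helper for illustration only, not the Copy activity's implementation; here "incompatible" simply means a duplicated key:

```python
def copy_with_fault_tolerance(rows, key):
    """Copy rows to a destination, skipping rows whose key was
    already seen (a stand-in for 'incompatible' data) instead of
    aborting the whole copy."""
    copied, skipped, seen = [], [], set()
    for row in rows:
        k = row[key]
        if k in seen:           # duplicate: record and skip, don't fail
            skipped.append(row)
            continue
        seen.add(k)
        copied.append(row)
    return copied, skipped

rows = [{"id": 1}, {"id": 2}, {"id": 1}]
copied, skipped = copy_with_fault_tolerance(rows, "id")
```

The copy completes with two rows written and the duplicate reported rather than raising an error.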
November MongoDB and MongoDB and MongoDB Atlas connectors are now
2023 MongoDB Atlas available to use in your Data Factory data pipelines as
connectors sources and destinations.
November Microsoft 365 The Microsoft 365 connector now supports ingesting data
2023 connector now into Lakehouse tables .
supports ingesting
data into Lakehouse
(preview)
November Multi-task support for You can now open and edit data pipelines from different
2023 editing pipelines in workspaces and navigate between them using the
the designer multi-tasking capabilities in Fabric.
November String interpolation You can now edit your data connections within your data
2023 added to pipeline pipelines . Previously, a new tab would open when
return value connections needed editing. Now, you can remain within
your pipeline and seamlessly update your connections.
October Category redesign of We've redesigned the way activities are categorized to
2023 activities make it easier for you to find the activities you're looking
for with new categories like Control flow, Notifications,
and more.
October Integer data type We now support variables as integers! When creating a
2023 available for variables new variable, you can now choose to set the variable type
to Integer, making it easier to use arithmetic functions
with your variables.
October Pipeline name now We've added a new system variable called Pipeline Name
2023 supported in System so that you can inspect and pass the name of your
variables. pipeline inside of the pipeline expression editor, enabling
a more powerful workflow in Fabric Data Factory.
Month Feature Learn more
October Support for Type You can now edit column types when you land data into
2023 editing in Copy your Lakehouse table(s). This makes it easier to customize
activity Mappings the schema of your data in your destination. Simply
navigate to the Mapping tab, import your schemas if you
don't see any mappings, and use the drop-down list to
make changes.
October New certified Announcing the release of the new Emplifi Metrics
2023 connector: Emplifi connector. The Power BI Connector is a layer between
Metrics Emplifi Public API and Power BI itself. For more
information, see Emplifi Public API documentation .
October SAP HANA The update enhances the SAP HANA connector with the
2023 (Connector Update) capability to consume HANA Calculation Views deployed
in SAP Datasphere by taking into account SAP
Datasphere's additional security concepts.
October Set Activity State to Activity State is now available in Fabric Data Factory data
2023 "Comment Out" Part pipelines , giving you the ability to comment out part of
of Pipeline your pipeline without deleting the definition.
August Staging labels The concept of staging data was introduced in Dataflows
2023 Gen2 for Microsoft Fabric and now you have the ability to
define whether queries within your Dataflow should use the
staging mechanisms.
August Secure input/output We've added advanced settings for the Set Variable
2023 for logs activity called Secure input and Secure output. When you
enable secure input or output, you can hide sensitive
information from being captured in logs.
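Conceptually, secure input/output works like masking known sensitive values before they reach the logs. A minimal stdlib sketch (the mask_secrets helper is hypothetical, not a Fabric API):

```python
def mask_secrets(message, secrets, mask="****"):
    """Replace any known secret substring in a log message with a
    mask, mimicking how Secure input/output keeps sensitive values
    out of run logs."""
    for secret in secrets:
        message = message.replace(secret, mask)
    return message

# The token value is masked before the line is written to the log.
log_line = mask_secrets("connecting with token=abc123", ["abc123"])
```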
August Pipeline run status We've recently added Pipeline status so that developers
2023 added to Output can easily see the status of the pipeline run. You can now
panel view your Pipeline run status from the Output panel.
August Data pipelines FTP The FTP connector is now available to use in your Data
2023 connector Factory data pipelines in Microsoft Fabric. Look for it in
the New connection menu.
August Maximum number of The new maximum number of entities that can be part of
2023 entities in a Dataflow a Dataflow has been raised to 50.
August Manage connections The Manage Connections option now allows you to view
2023 feature the linked connections to your dataflow, unlink a
connection, or edit connection credentials and gateway.
July 2023 New modern data An improved experience aims to expedite the process of
connectivity and discovering data in Dataflow, Dataflow Gen2, and
discovery experience Datamart .
in Dataflows
October Microsoft Fabric Data You are invited to join our October webinar
2023 Factory Webinar Series – series, where we will show you how to use Data
October 2023 Factory to transform and orchestrate your data in
various scenarios.
September Notify Outlook and Teams Learn how to send notifications to both Teams
2023 channel/group from a channels/groups and Outlook emails .
Microsoft Fabric pipeline
September Microsoft Fabric Data Join our Data Factory webinar series where we
2023 Factory Webinar Series – will show you how to use Data Factory to transform
September 2023 and orchestrate your data in various scenarios.
August Using Data pipelines for Real-Time Analytics' KQL DB is supported as both a
2023 copying data to/from KQL destination and a source with data pipelines ,
Databases and crafting allowing you to build and manage various extract,
workflows with the Lookup transform, and load (ETL) activities, leveraging the
activity power and capabilities of KQL DBs.
August Incrementally amass With Dataflows Gen2, which comes with support
2023 data for data destinations, you can set up your own
pattern to load new data incrementally, replace
some old data, and keep your reports up to date
with your source data.
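The incremental pattern described here boils down to a watermark: remember the highest modified value already loaded and append only newer rows next time. A minimal sketch with a hypothetical incremental_load helper (field names are illustrative):

```python
def incremental_load(source_rows, destination, watermark):
    """Append only rows newer than the stored watermark, then
    advance the watermark - the basic incremental-load pattern."""
    new_rows = [r for r in source_rows if r["modified"] > watermark]
    destination.extend(new_rows)
    # Advance the watermark to the newest row loaded (or keep it).
    return max((r["modified"] for r in new_rows), default=watermark)

dest = []
wm = incremental_load([{"id": 1, "modified": 5}, {"id": 2, "modified": 9}], dest, 5)
```

On the next run, only rows with modified greater than the returned watermark would be loaded.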
August Data Pipeline Performance Learn how to account for pagination given the
2023 Improvement Part 3: Gaining current state of Fabric Data Pipelines in preview.
more than 50% improvement This pipeline is performant when the number of
for Historical Loads paginated pages isn't too large. Read more at
Gaining more than 50% improvement for
Historical Loads.
August Data Pipeline Performance Examples from this blog series include how to
2023 Improvements Part 2: merge two arrays into an array of JSON objects,
Creating an Array of JSONs and how to take a date range and create multiple
subranges then store these as an array of JSONs.
Read more at Creating an Array of JSONs .
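The date-range-to-subranges trick from the post can be sketched in Python: split a range into fixed-size windows and emit them as an array of JSON objects (the function and field names are illustrative, not the blog's exact pipeline expressions):

```python
from datetime import date, timedelta
import json

def date_subranges(start, end, days):
    """Split [start, end] into consecutive subranges of at most
    `days` days each and return them as an array of JSON objects."""
    ranges = []
    cur = start
    while cur <= end:
        stop = min(cur + timedelta(days=days - 1), end)
        ranges.append({"start": cur.isoformat(), "end": stop.isoformat()})
        cur = stop + timedelta(days=1)
    return ranges

subranges = date_subranges(date(2023, 7, 1), date(2023, 7, 10), 4)
as_json = json.dumps(subranges)  # the "array of JSONs" to fan out over
```

Each subrange can then drive its own Copy activity in parallel.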
July 2023 Data Pipeline Performance Part one of a series of blogs on moving data with
Improvements Part 1: How multiple Copy Activities moving smaller volumes in
to convert a time interval parallel: How to convert a time interval
(dd.hh:mm:ss) into seconds (dd.hh:mm:ss) into seconds .
July 2023 Construct a data analytics A blog covering data pipelines in Data Factory and
workflow with a Fabric Data the advantages you find by using pipelines to
Factory data pipeline orchestrate your Fabric data analytics projects and
activities .
July 2023 Data Pipelines Tutorial: In this blog, we will act in the persona of an AVEVA
Ingest files into a Lakehouse customer who needs to retrieve operations data
from a REST API with from AVEVA Data Hub into a Microsoft Fabric
pagination ft. AVEVA Data Lakehouse .
Hub
July 2023 Data Factory Spotlight: This blog spotlight covers the two primary high-
Dataflow Gen2 level features Data Factory implements: dataflows
and pipelines .
November Upgraded DataGrid An upgraded DataGrid for the Lakehouse table preview
2023 capabilities in experience now features sorting, filtering, and resizing of
Lakehouse columns.
November SQL analytics You can now retry the SQL analytics endpoint provisioning
2023 endpoint re- directly within the Lakehouse experience . This means that
provisioning if your initial provisioning attempt fails, you have the option
to try again without the need to create an entirely new
Lakehouse.
November Multiple Runtimes With the introduction of Runtime 1.2, Fabric supports
2023 Support multiple runtimes , offering users the flexibility to
seamlessly switch between them, minimizing the risk of
incompatibilities or disruptions. When changing runtimes, all
system-created items within the workspace, including
Lakehouses, SJDs, and Notebooks, will operate using the
newly selected workspace-level runtime version starting from
the next Spark Session.
November Intelligent Cache By default, the newly revamped and optimized Intelligent
2023 Cache feature is enabled in Fabric Spark. The intelligent
cache works seamlessly behind the scenes and caches data
to help speed up the execution of Spark jobs in Microsoft
Fabric as it reads from your OneLake or ADLS Gen2 storage
via shortcuts.
November Monitoring Hub for The latest enhancements in the monitoring hub are designed
2023 Spark to provide a comprehensive and detailed view of Spark and
enhancements Lakehouse activities, including executor allocations.
November Monitoring for Users can now view the progress and status of Lakehouse
2023 Lakehouse maintenance jobs and table load activities.
operations
November REST API support REST Public APIs for Spark Job Definition are now available,
2023 for Spark Job making it easy for users to manage and manipulate SJD
Definition preview items .
November REST API support As a key requirement for workload integration, REST Public
2023 for Lakehouse APIs for Lakehouse are now available. The Lakehouse REST
artifact, Load to Public APIs make it easy for users to manage and
tables and table manipulate Lakehouse items programmatically.
maintenance
November Lakehouse support The Lakehouse now integrates with the lifecycle
2023 for git integration management capabilities in Microsoft Fabric , providing a
and deployment standardized collaboration between all development team
pipelines (preview) members throughout the product's life. Lifecycle
management facilitates an effective product versioning and
release process by continuously delivering features and bug
fixes into multiple environments.
November Embed a Power BI We are thrilled to announce that the powerbiclient Python
2023 report in Notebook package is now natively supported in Fabric notebooks.
This means you can easily embed and interact with Power BI
reports in your notebooks with just a few lines of code. To
learn more, see how to use the powerbiclient package to
embed a Power BI component.
November Notebook We now support uploading .jar files in the Notebook
2023 resources .JAR file Resources explorer. You can add your own compiled
support libraries.
November Notebook Git Fabric notebooks now offer Git integration for source control
2023 integration preview using Azure DevOps . It allows users to easily control the
notebook code versions and manage the git branches by
leveraging the Fabric Git functions and Azure DevOps.
November Notebook in Now you can also use notebooks to deploy your code across
2023 Deployment different environments , such as development, test, and
Pipeline Preview production. You can also use deployment rules to customize
the behavior of your notebooks when they are deployed,
such as changing the default Lakehouse of a Notebook. Get
started with deployment pipelines to set up your deployment
pipeline; notebooks will show up in the deployment content
automatically.
November Notebook REST With REST Public APIs for the Notebook items, data
2023 APIs Preview engineers/data scientists can automate their pipelines and
establish CI/CD conveniently and efficiently. The notebook
Restful Public API can make it easy for users to manage
and manipulate Fabric notebook items and integrate
notebook with other tools and systems.
November Synapse VS Code With support for the Synapse VS Code extension on
2023 extension in vscode.dev, users can now seamlessly edit and execute Fabric
vscode.dev preview notebooks without ever leaving their browser window.
Additionally, all the native pro-developer features of VS Code
are now accessible to end-users in this environment.
October Create multiple Creating multiple OneLake shortcuts just got easier. Rather
2023 OneLake shortcuts than creating shortcuts one at a time, you can now browse to
at once your desired location and select multiple targets at once. All
your selected targets then get created as new shortcuts in a
single operation .
October Delta-RS The OneLake team worked with the Delta-RS community to
2023 introduces native help introduce support for recognizing OneLake URLs in
support for both Delta-RS and the Rust Object Store .
OneLake
September Import notebook The new "Import Notebook" entry on the Workspace -> New
2023 to your Workspace menu lets you easily import new Fabric Notebook items
into your workspace.
September Notebook file The Synapse VS Code extension now supports notebook File
2023 system support in System for Data Engineering and Data Science in Microsoft
Synapse VS Code Fabric. The Synapse VS Code extension empowers users to
extension develop their notebook artifacts directly within the Visual
Studio Code environment.
September Notebook sharing We now support selecting the "Run" operation separately
2023 execute-only mode when sharing a notebook. If you select only the "Run"
operation, the recipient sees an "Execution-only"
notebook.
September Notebook save We now support viewing and comparing the differences
2023 conflict resolution between two versions of the same notebook when there
are saving conflicts.
September Mssparkutils new We now support a new method in mssparkutils,
2023 API for fast data mssparkutils.fs.fastcp(), that makes moving or copying
copy large volumes of data much faster. You can use
mssparkutils.fs.help("fastcp") to check the detailed usage.
August Introducing High High concurrency mode allows you to run notebooks
2023 Concurrency Mode simultaneously on the same cluster without compromising
in Notebooks for performance or security when paying for a single session.
Data Engineering High concurrency mode offers several benefits for Fabric
and Data Science Spark users.
workloads in
Microsoft Fabric
August Service principal Azure service principal has been added as an authentication
2023 support to connect type for a set of data sources that can be used in Dataset,
to data in Dataflow, Dataflow, Dataflow Gen2 and Datamart.
Datamart, Dataset
and Dataflow Gen
2
August Announcing XMLA Direct Lake datasets now support XMLA-Write operations.
2023 Write support for Now you can use your favorite BI Pro tools and scripts to
Direct Lake create and manage Direct Lake datasets using XMLA
datasets endpoints .
November Fabric Changing the game: A step-by-step guide to use your own Python library
2023 Using your own library with in the Lakehouse . It is quite simple to create your
Microsoft Fabric own library with Python and even simpler to reuse it
on Fabric.
August Fabric changing the game: Learn more about logging your workload into
2023 Logging your workload OneLake using notebooks , using the OneLake API
using Notebooks Path inside the notebook.
July 2023 Lakehouse Sharing and Share a lakehouse and manage permissions so
Access Permission that users can access lakehouse data through the
Management Data Hub, the SQL analytics endpoint, and the
default semantic model.
June 2023 Virtualize your existing Connect data silos without moving or copying data
data into OneLake with with OneLake, which allows you to create special
shortcuts folders called shortcuts that point to other storage
locations .
November Copilot in notebooks The Copilot in Fabric Data Science and Data Engineering
2023 preview notebooks is designed to accelerate productivity,
provide helpful answers and guidance, and generate code
for common tasks like data exploration, data preparation
and machine learning. You can interact and engage
with the AI from either the chat panel or even from within
notebooks cells using magic commands to get insights
from data faster. For more information, see Copilot in
notebooks .
November Data Wrangler for Data Wrangler now supports Spark DataFrames in preview.
2023 Spark DataFrames Until now, users have been able to explore and transform
preview pandas DataFrames using common operations.
November MLFlow Notebook The MLflow inline authoring widget enables users to
2023 Widget effortlessly track their experiment runs along with metrics
and parameters, all directly from within their notebook .
November New Model & New enhancements to our model and experiment tracking
2023 Experiment Item features are based on valuable user feedback. The new
Usability tree-control in the run details view makes tracking easier
Improvements by showing which run is selected. We've enhanced the
comparison feature, allowing you to easily adjust the
comparison pane for a more user-friendly experience. Now
you can select the run name to see the Run Details view.
November Recent Experiment It is now simpler for users to check out recent runs for an
2023 Runs experiment directly from the workspace list view . This
update makes it easier to keep track of recent activity,
quickly jump to the related Spark application, and apply
filters based on the run status.
November SynapseML v1.0 SynapseML v1.0 is now released. SynapseML v1.0 makes it
2023 easy to build production ready machine learning systems
on Fabric and has been in use at Microsoft for over six
years.
November Prebuilt AI models in We are excited to announce the preview for prebuilt AI
2023 Microsoft Fabric models in Fabric . Azure OpenAI Service , Text
preview Analytics , and Azure AI Translator are prebuilt models
available in Fabric, with support for both RESTful API and
SynapseML. You can also use the OpenAI Python Library
to access Azure OpenAI service in Fabric.
November Reusing existing We have added support for a new connection method
2023 Spark Session in called "synapse" in sparklyr, which enables users to
sparklyr connect to an existing Spark session. Additionally, we have
contributed this connection method to the OSS sparklyr
project. Users can now use both sparklyr and SparkR in the
same session and easily share data between them.
November REST API Support for REST APIs for ML Experiment and ML Model are now
2023 ML Experiments and available. These REST APIs for ML Experiments and ML
ML Models Models begin to empower users to create and manage
machine-learning artifacts programmatically, a key
requirement for pipeline automation and workload
integration.
October Get started with Explore how semantic link seamlessly connects Power BI
2023 semantic link semantic models with Synapse Data Science within
(preview) Microsoft Fabric. Learn more at Semantic link in Microsoft
Fabric: Bridging BI and Data Science .
August Harness the Power of Harness the potential of Microsoft Fabric and SynapseML
2023 LangChain in LLM capabilities to effectively summarize and organize
Microsoft Fabric for your own documents.
Advanced Document
Summarization
July 2023 Unleashing the Power In this blog post, we delve into the exciting functionalities
of SynapseML and and features of Microsoft Fabric and SynapseML to
Microsoft Fabric: A demonstrate how to leverage Generative AI models or
Guide to Q&A on PDF Large Language Models (LLMs) to perform question and
Documents answer (Q&A) tasks on any PDF document .
November New data We've updated the Data Science Happy Path tutorial for
2023 science happy Microsoft Fabric . This new comprehensive tutorial
path tutorial in demonstrates the entire data science workflow , using a bank
Microsoft Fabric customer churn problem as the context.
November New data We've expanded our collection of data science samples to
2023 science samples include new end-to-end R samples and new quick tutorial
samples for "Explaining Model Outputs" and "Visualizing Model
Behavior."
November New data The new Data Science sample on sales forecasting was
2023 science developed in collaboration with Sonata Software . This new
forecasting sample encompasses the entire data science workflow,
sample spanning from data cleaning to Power BI visualization. The
notebook covers the steps to develop, evaluate, and score a
forecasting model for superstore sales, harnessing the power of
the SARIMAX algorithm.
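The sample's model is SARIMAX (from statsmodels); as a self-contained illustration of seasonal forecasting, here is a seasonal-naive baseline in plain Python, which simply repeats the last observed season. It is a stand-in for intuition, not the sample's algorithm:

```python
def seasonal_naive_forecast(history, season_length, steps):
    """Forecast each future step with the value observed one full
    season earlier - a common baseline that seasonal models such
    as SARIMAX are compared against."""
    forecast = []
    for step in range(steps):
        forecast.append(history[-season_length + (step % season_length)])
    return forecast

monthly_sales = [100, 120, 90, 110, 105, 125, 95, 115]  # two 4-month "seasons"
seasonal_naive_forecast(monthly_sales, season_length=4, steps=4)
```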
August New Machine More samples have been added to the Microsoft Fabric Synapse
2023 failure and Data Science Use a sample menu. To check these Data Science
Customer churn samples, select Synapse Data Science, then Use a sample.
samples
August Use Semantic Learn how Fabric allows data scientists to use Semantic Kernel
2023 Kernel with with Lakehouse in Microsoft Fabric .
Lakehouse in
Microsoft Fabric
November Mirroring in Microsoft Any database can be accessed and managed centrally
2023 Fabric from within Fabric without having to switch database
clients. By just providing connection details, your
database is instantly available in Fabric as a Mirrored
database . Azure Cosmos DB, Azure SQL Database,
and Snowflake customers will be able to use Mirroring.
SQL Server, Azure PostgreSQL, Azure MySQL,
MongoDB, and other databases and data warehouses
will be coming in CY24.
November TRIM T-SQL support You can now use the TRIM command to remove spaces
2023 or specific characters from strings by using the
keywords LEADING, TRAILING or BOTH in TRIM
(Transact-SQL).
November SSD metadata caching File and rowgroup metadata are now also cached with
2023 in-memory and SSD cache, further improving
performance.
November PARSER 2.0 CSV file parser version 2.0 for COPY INTO builds on
2023 improvements for CSV innovation from Microsoft Research's Data Platform
ingestion and Analytics group to make CSV file ingestion blazing
fast on Fabric Warehouse. For more information, see
COPY INTO (Transact-SQL).
November Fast compute resource All query executions in Fabric Warehouse are now
2023 assignment enabled powered by the new technology recently deployed as
part of the Global Resource Governance component
that assigns compute resources in milliseconds.
November REST API support for With the Warehouse public APIs, SQL developers can
2023 Warehouse now automate their pipelines and establish CI/CD
conveniently and efficiently. The Warehouse REST
Public APIs make it easy for users to manage and
manipulate Fabric Warehouse items.
November Power BI semantic Microsoft has renamed the Power BI dataset content
2023 models type to semantic model. This applies to Microsoft Fabric
semantic models as well. For more information, see
New name for Power BI datasets.
November SQL analytics endpoint Microsoft has renamed the SQL endpoint of a
2023 Lakehouse to the SQL analytics endpoint of a
Lakehouse.
November Dynamic data masking Dynamic Data Masking (DDM) is now supported for
2023 Fabric Warehouse and the SQL analytics endpoint of
the Lakehouse. For more information and samples, see
Dynamic data masking in Fabric data warehousing and
How to implement dynamic data masking in Synapse
Data Warehouse.
November Clone tables with time You can now use table clones to create a clone of a
2023 travel table based on data up to seven calendar days in the
past .
November User experience updates Several user experiences in Warehouse have landed. For
2023 more information, see Fabric Warehouse user
experience updates .
November Automatic data Automatic data compaction will rewrite many smaller
2023 compaction parquet files into a few larger parquet files, which will
improve the performance of reading the table. Data
Compaction is one of the ways that we help your
Warehouse to provide you with great performance and
no effort on your part.
October Support for sp_rename Support for the T-SQL sp_rename syntax is now
2023 available for both Warehouse and SQL analytics
endpoint. For more information, see Fabric Warehouse
support for sp_rename .
October Query insights The query insights feature is a scalable, sustainable, and
2023 extendable solution to enhance the SQL analytics
experience. With historic query data, aggregated
insights, and access to actual query text, you can
analyze and tune your query performance.
October Full DML to Delta Lake Fabric Warehouse now publishes all Inserts, Updates
2023 Logs and Deletes for each table to their Delta Lake Log in
OneLake.
October Throttling and A new article details the throttling and smoothing
2023 smoothing in Synapse behavior in Synapse Data Warehouse, where almost all
Data Warehouse activity is classified as background to take advantage of
the 24-hr smoothing window before throttling takes
effect. Learn more about how to observe utilization in
Synapse Data Warehouse.
September Default semantic model The default semantic model no longer automatically
2023 improvements adds new objects . This can be enabled in the
Warehouse item settings.
September SQL Projects support for Microsoft Fabric Data Warehouse is now supported in
2023 Warehouse in Microsoft the SQL Database Projects extension available inside of
Fabric Azure Data Studio and Visual Studio Code .
September Usage reporting Utilization and billing reporting is available for Fabric
2023 data warehousing in the Microsoft Fabric Capacity
Metrics app. For more information, read about
Utilization and billing reporting Fabric data
warehousing .
August SSD Caching enabled Local SSD caching stores frequently accessed data on
2023 local disks in highly optimized format, significantly
reducing I/O latency. This benefits you immediately,
with no action required or configuration necessary.
July 2023 Sharing Any Admin or Member within a workspace can share a
Warehouse with another recipient within your
organization. You can also grant these permissions
using the "Manage permissions" experience.
July 2023 Table clone A zero-copy clone creates a replica of the table by
copying the metadata, while referencing the same data
files in OneLake. This avoids the need to store multiple
copies of data, thereby saving on storage costs when
you clone a table in Microsoft Fabric.
May 2023 Introducing Synapse Synapse Data Warehouse is the next generation of data
Data Warehouse in warehousing in Microsoft Fabric that is the first
Microsoft Fabric transactional data warehouse to natively support an
open data format, Delta-Parquet.
November Migrate from Azure Synapse A detailed guide with a migration runbook is
2023 dedicated SQL pools available for migrations from Azure Synapse Data
Warehouse dedicated SQL pools into Microsoft
Fabric.
August Efficient Data Partitioning A proposed method for data partitioning using
2023 with Microsoft Fabric: Best Fabric notebooks . Data partitioning is a data
Practices and management technique used to divide a large
Implementation Guide dataset into smaller, more manageable subsets
called partitions or shards.
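The shard idea can be sketched with simple hash partitioning in Python. This is an illustrative helper, not the notebook code from the guide:

```python
def partition_by_hash(rows, key, num_partitions):
    """Assign each row to one of `num_partitions` shards by hashing
    its key - the basic mechanism behind dividing a large dataset
    into smaller, more manageable partitions."""
    partitions = [[] for _ in range(num_partitions)]
    for row in rows:
        shard = hash(row[key]) % num_partitions  # deterministic within a run
        partitions[shard].append(row)
    return partitions

rows = [{"id": i} for i in range(10)]
parts = partition_by_hash(rows, "id", 3)
```

Partitioning by a date or region column instead of a hash follows the same shape, just with a different shard-assignment rule.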
May 2023 Microsoft Fabric - How can a This blog reviews how to connect to a SQL analytics
SQL user or DBA connect endpoint of the Lakehouse or the Warehouse
through the Tabular Data Stream, or TDS
endpoint , familiar to all modern web applications
that interact with a SQL Server endpoint.
November Announcing Delta You can now enable availability of KQL Database in Delta
2023 Lake support in Lake format . Delta Lake is the unified data lake table
Real-Time Analytics format chosen to achieve seamless data access across all
KQL Database compute engines in Microsoft Fabric.
November Delta Parquet As part of the one logical copy promise, we are excited to
2023 support in KQL announce that data in KQL Database can now be made
Database available in OneLake in delta parquet format . You can
now access this Delta table by creating a OneLake shortcut
from Lakehouse, Warehouse, or directly via Power BI Direct
Lake mode.
November Open Source Several open-source connectors for real-time analytics are
2023 Connectors for KQL now supported to enable users to ingest data from
Database various sources and process it using KQL DB.
November REST API Support We're excited to announce the launch of REST Public APIs
2023 for KQL Database for KQL DB. The Public REST APIs of KQL DB enables
users to manage and automate their flows
programmatically.
November Eventstream Data Now, you can transform your data streams in real time
2023 Transformation for within Eventstream before they are sent to your KQL
KQL Database Database . When you create a KQL Database destination
in the eventstream, you can set the ingestion mode to
"Event processing before ingestion" and add event
processing logics such as filtering and aggregation to
transform your data streams.
November Splunk add-on Microsoft Fabric add-on for Splunk allows users to ingest
2023 preview logs from Splunk platform into a Fabric KQL DB using the
Kusto python SDK.
November Get Data from If you're working on other Fabric items and are looking to
2023 Eventstream ingest data from Eventstream, our new "Get Data from
anywhere in Fabric Eventstream" feature simplifies the process: you can get
data from Eventstream while you are working with a KQL
database or Lakehouse.
November Two ingestion We've introduced two distinct ingestion modes for your
2023 modes for Lakehouse Destination: Rows per file and Duration .
Lakehouse
Destination
November Optimize Tables The table optimization shortcut is now available inside
2023 Before Ingesting Eventstream Lakehouse destination to compact
Data to Lakehouse numerous small streaming files generated on a Lakehouse
table.
November Get Data in Real- A new Get Data experience simplifies the data ingestion
2023 Time Analytics: A process in your KQL database.
New and Improved
Experience
October Expanded Custom New custom app connections provide more
2023 App Connections flexibility when it comes to bringing your data streams into
Eventstream.
October Eventstream Kafka The Custom App feature has new endpoints in sources and
2023 Endpoints and destinations , including sample Java code for your
Sample Code convenience. Simply add it to your application, and you'll
be all set to stream your real-time event to Eventstream.
October KQL Database Auto Users do not need to worry about how many resources are
2023 scale algorithm needed to support their workloads in a KQL database. KQL
improvements Database has a sophisticated in-built, multi-dimensional,
auto scaling algorithm. We recently implemented some
optimizations that will make some time series analysis more
efficient .
October Understanding Read more about how a KQL database is billed in the
2023 Fabric KQL DB SaaS world of Microsoft Fabric.
Capacity
September OneLake shortcut to Now you can create a shortcut from KQL DB to delta tables
2023 delta tables from in OneLake, allowing in-place data queries. You can now query
KQL DB delta tables in your Lakehouse or Warehouse directly from
KQL DB.
September Model and Query Kusto Query Language (KQL) now allows you to model and
2023 data as graphs using query data as graphs. This feature is currently in
KQL preview. Learn more at Introduction to graph semantics
in KQL and Graph operators and functions.
September Easily connect to Power BI desktop has released two new ways to easily
2023 KQL Database from connect to a KQL database, in the Get Data dialogue and in
Power BI desktop the OneLake data hub menus.
September Eventstream now AMQP stands for Advanced Message Queuing Protocol, a
2023 supports AMQP protocol that supports a wide range of messaging patterns.
format connection In Eventstream, you can now create a Custom App source
string for data or destination and select AMQP format connection string
ingestion for ingesting data into Fabric or consuming data from
Fabric.
August KQL Database Fabric KQL Database supports running Python code
2023 support for inline embedded in Kusto Query Language (KQL) using the
Python python() plugin. The plugin is disabled by default. Before
you start, enable the Python plugin in your KQL database.
July 2023 Stream Real-time Eventstreams under Real-Time Analytics are a centralized
Events to Microsoft platform within Fabric, allowing you to capture, transform,
Fabric with and route real-time events to multiple destinations
eventstreams from a effortlessly, all through a user-friendly, no-code experience.
custom application
June 2023 Unveiling the Epic As part of the Kusto Detective Agency Season 2 , we're
Opportunity: A Fun excited to introduce an epic opportunity for all investigators
Game to Explore the and data enthusiasts to learn about the new portfolio in a
Synapse Real-Time fun and engaging way. Recruiting now at
Analytics https://detective.kusto.io/ !
November 2023: Semantic link: data validation using Great Expectations. Great Expectations Open Source (GX OSS) is a popular Python library that provides a framework for describing and validating the acceptable state of data. With the recent integration of Microsoft Fabric semantic link, GX can now access semantic models, further enabling seamless collaboration between data scientists and business analysts.

November 2023: Explore data transformation in Eventstream for KQL Database integration. Dive into a practical scenario using real-world bike-sharing data and learn to compute the number of bikes rented every minute on each street using Eventstream's powerful event processor, mastering real-time data transformations and effortlessly directing the processed data to your KQL Database.

October 2023: Stream Azure IoT Hub data into Fabric Eventstream for email alerting. A demo of using Fabric Eventstream to seamlessly ingest and transform real-time data streams before they reach various Fabric destinations such as Lakehouse, KQL Database, and Reflex. Then, configure email alerts in Reflex with Data Activator triggers.

September 2023: Quick start: sending data to Synapse Real-Time Analytics in Fabric from Apache Kafka ecosystems using Java. Learn how to send data from Kafka to Synapse Real-Time Analytics in Fabric.

June 2023: From raw data to insights: how to ingest data from Azure Event Hubs into a KQL database. Learn about the integration between Azure Event Hubs and your KQL database.

June 2023: From raw data to insights: how to ingest data from eventstreams into a KQL database. Learn about the integration between eventstreams and a KQL database, both of which are part of the Real-Time Analytics experience.

June 2023: Discovering the best ways to get data into a KQL database. This blog covers different options for bringing data into a KQL database.

June 2023: Get started with exploring your data with KQL, a purpose-built tool for petabyte-scale data analytics. In this blog, we focus on the different ways of querying data in Synapse Real-Time Analytics.
November 2023: Copilot for Power BI in Microsoft Fabric preview. We are thrilled to announce the preview of Copilot in Microsoft Fabric, including the experience for Power BI, which helps users quickly get started by helping them create reports in the Power BI web experience. For more information, see Copilot for Power BI.

October 2023: Chat your data in Microsoft Fabric with Semantic Kernel. Learn how to construct Copilot tools based on business data in Microsoft Fabric.

November 2023: Fabric workloads are now generally available! Microsoft Fabric is now generally available! Microsoft Fabric Synapse Data Warehouse, Data Engineering & Data Science, Real-Time Analytics, Data Factory, OneLake, and the overall Fabric platform are now generally available.

November 2023: Microsoft Fabric User APIs. We're happy to announce the preview of Microsoft Fabric User APIs. The Fabric user APIs are a major enabler for both enterprises and partners to use Microsoft Fabric, as they enable end-to-end fully automated interaction with the service, enable integration of Microsoft Fabric into external web applications, and generally enable customers and partners to scale their solutions more easily.
October 2023: Item type icons. Our design team has completed a rework of the item type icons across the platform to improve visual parsing.

September 2023: Monitoring hub column options. Column options inside the Monitoring hub give users a better customization experience and more room to operate.

September 2023: OneLake file explorer v1.0.10. The OneLake file explorer automatically syncs all Microsoft OneLake items that you have access to in Windows File Explorer. With the latest version, you can seamlessly transition between using the OneLake file explorer app and the Fabric web portal. You can also right-click the OneLake icon in the Windows notification area and select Diagnostic Operations to view client-side logs. Learn more about easy access to open workspaces and items online.

August 2023: Multitasking navigation improvement. Now, all Fabric items open in a single browser tab on the left navigation bar, even in the event of a page refresh. This ensures you can refresh the page without the concern of losing context.

July 2023: New OneLake file explorer update with support for switching organizational accounts. With OneLake file explorer v1.0.9.0, it's simple to choose and switch between different Microsoft Entra ID (formerly Azure Active Directory) accounts.

July 2023: Help pane. The Help pane is feature-aware and displays articles about the actions and features available on the current Fabric screen. For more information, see Help pane in the monthly Fabric update.
November 2023: Microsoft Fabric User APIs. Microsoft Fabric User APIs are now available for Fabric experiences. The Fabric user APIs are a major enabler for both enterprises and partners to use Microsoft Fabric, as they enable end-to-end fully automated interaction with the service, enable integration of Microsoft Fabric into external web applications, and generally enable customers and partners to scale their solutions more easily.

November 2023: Notebook in deployment pipeline preview. Now you can also use notebooks to deploy your code across different environments, such as development, test, and production. You can also use deployment rules to customize the behavior of your notebooks when they're deployed, such as changing the default Lakehouse of a notebook. Get started with deployment pipelines to set up your deployment pipeline; notebooks show up in the deployment content automatically.

November 2023: Notebook Git integration preview. Fabric notebooks now offer Git integration for source control using Azure DevOps. It allows users to easily control the notebook code versions and manage the Git branches by leveraging the Fabric Git functions and Azure DevOps.

November 2023: Notebook REST APIs preview. With public REST APIs for Notebook items, data engineers and data scientists can automate their pipelines and establish CI/CD conveniently and efficiently. The Notebook public REST API makes it easy for users to manage and manipulate Fabric notebook items and integrate notebooks with other tools and systems.

November 2023: Lakehouse support for Git integration and deployment pipelines (preview). The Lakehouse artifact now integrates with the lifecycle management capabilities in Microsoft Fabric, providing standardized collaboration between all development team members throughout the product's life. Lifecycle management facilitates an effective product versioning and release process by continuously delivering features and bug fixes into multiple environments.

September 2023: SQL Projects support for Warehouse in Microsoft Fabric. Microsoft Fabric Data Warehouse is now supported in the SQL Database Projects extension available inside Azure Data Studio and Visual Studio Code.

September 2023: Notebook file system support in Synapse VS Code extension. The Synapse VS Code extension now supports the notebook file system for Data Engineering and Data Science in Microsoft Fabric. The Synapse VS Code extension empowers users to develop their notebook artifacts directly within the Visual Studio Code environment.

September 2023: Git integration with paginated reports in Power BI. You can now publish a Power BI paginated report and keep it in sync with your Git workspace. Developers can apply their development processes, tools, and best practices.

August 2023: Introducing the dbt adapter for Synapse Data Warehouse. The dbt adapter allows you to connect and transform data into Synapse Data Warehouse using the Data Build Tool (dbt).

May 2023: Introducing Git integration in Microsoft Fabric for seamless source control management. While developing in Fabric, developers can back up and version their work, roll back as needed, and collaborate or work in isolation using Git branches. Read more about connecting the workspace to an Azure repo.
October 2023: Announcing the Data Activator preview. We are thrilled to announce that Data Activator is now in preview and is enabled for all existing Microsoft Fabric users.

August 2023: Updated preview experience for trigger design. We have been working on a new experience for designing triggers, and it's now available in our preview! You now see three cards in every trigger: Select, Detect, and Act.

May 2023: Driving actions from your data with Data Activator. Data Activator is a new no-code Microsoft Fabric experience that empowers the business analyst to drive actions automatically from your data. To learn more, sign up for the Data Activator limited preview.

November 2023: Fabric + Microsoft 365 data: better together. Microsoft Graph is the gateway to data and intelligence in Microsoft 365. Microsoft 365 Data Integration for Microsoft Fabric enables you to manage your Microsoft 365 data alongside your other data sources in one place with a suite of analytical experiences.

November 2023: Microsoft 365 connector now supports ingesting data into Lakehouse (preview). The Microsoft 365 connector now supports ingesting data into Lakehouse tables.

October 2023: Microsoft OneLake adds shortcut support to Power Platform and Dynamics 365. You can now create shortcuts directly to your Dynamics 365 and Power Platform data in Dataverse and analyze it with Microsoft Fabric alongside the rest of your OneLake data. There's no need to export data, build ETL pipelines, or use third-party integration tools.
Migration
This section includes guidance and documentation updates on migration to Microsoft
Fabric.
July 2023: Fabric changing the game: OneLake integration. This blog post covers OneLake integrations and multiple scenarios to ingest data inside of Fabric OneLake, including ADLS, ADF, OneLake Explorer, and Databricks.

June 2023: Microsoft Fabric changing the game: exporting data and building the lakehouse. This blog post covers the scenario to export data from Azure SQL Database into OneLake.

June 2023: Copy data to Azure SQL at scale with Microsoft Fabric. Did you know that you can use Microsoft Fabric to copy data at scale from supported data sources to Azure SQL Database or Azure SQL Managed Instance within minutes?

June 2023: Bring your mainframe DB2 z/OS data to Microsoft Fabric. In this blog, we review the convenience and ease of opening DB2 for z/OS data in Microsoft Fabric.
Monitoring
This section includes guidance and documentation updates on monitoring your
Microsoft Fabric capacity and utilization, including the Monitoring hub.
October 2023: Throttling and smoothing in Synapse Data Warehouse. A new article helps you understand Fabric capacity throttling. Throttling occurs when a tenant's capacity consumes more capacity resources than it has purchased over a period of time.

September 2023: Monitoring hub column options. Users can select and reorder the columns according to their customized needs in the Monitoring hub.

September 2023: Fabric capacities: everything you need to know about what's new and what's coming. Read more about the improvements we're making to the Fabric capacity management platform for Fabric and Power BI users.

September 2023: Microsoft Fabric Capacity Metrics. The Microsoft Fabric Capacity Metrics app is available in AppSource for a variety of billing and utilization reporting.

August 2023: Monitoring hub support for personalized column options. We have updated the Monitoring hub to allow users to personalize activity-specific columns. You now have the flexibility to display columns that are relevant to the activities you're focused on.

May 2023: Capacity metrics in Microsoft Fabric. Learn more about the universal compute capacities and Fabric's capacity metrics governance features that admins can use to monitor usage and make data-driven scale-up decisions.
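The throttling rule described above can be sketched minimally in Python. This is only an illustration of the concept (consumption exceeding purchased capacity over a period); the numbers are invented, and Fabric's actual algorithm also involves smoothing, which is not modeled here.

```python
# Hedged sketch of the throttling idea: a period is throttled when consumed
# capacity units exceed the purchased capacity units. Values are made up;
# this is not Fabric's actual algorithm (which also applies smoothing).

def is_throttled(usage_cu, purchased_cu):
    """Flag a period in which consumed capacity units exceed purchased ones."""
    return usage_cu > purchased_cu

usage_by_hour = [55, 70, 64]   # consumed capacity units per hour (made up)
purchased = 64                 # purchased capacity units
print([is_throttled(u, purchased) for u in usage_by_hour])
# [False, True, False]
```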
Microsoft Purview
This section summarizes recent announcements about governance and compliance
capabilities with Microsoft Purview in Microsoft Fabric. Learn more about Information
protection in Microsoft Fabric.
May 2023: Administration, security, and governance in Microsoft Fabric. Microsoft Fabric provides built-in enterprise-grade governance and compliance capabilities, powered by Microsoft Purview.
Related content
For older updates, review previous updates in Microsoft Fabric.
Next step
Microsoft Fabric community
Multi-experience tutorials
The following table lists tutorials that span multiple Fabric experiences.
Lakehouse: In this tutorial, you ingest, transform, and load the data of a fictional retail company, Wide World Importers, into the lakehouse and analyze sales data across various dimensions.

Data Science: In this tutorial, you explore, clean, and transform a taxicab trip semantic model, and build a machine learning model to predict trip duration at scale on a large semantic model.

Real-Time Analytics: In this tutorial, you use the streaming and query capabilities of Real-Time Analytics to analyze the New York Yellow Taxi trip semantic model. You uncover essential insights into trip statistics, taxi demand across the boroughs of New York, and other related insights.

Data warehouse: In this tutorial, you build an end-to-end data warehouse for the fictional Wide World Importers company. You ingest data into the data warehouse, transform it using T-SQL and pipelines, run queries, and build reports.
Experience-specific tutorials
The following tutorials walk you through scenarios within specific Fabric experiences.
Power BI: In this tutorial, you build a dataflow and pipeline to bring data into a lakehouse, create a dimensional model, and generate a compelling report.

Data Factory: In this tutorial, you ingest data with data pipelines and transform data with dataflows, then use the automation and notification capabilities to create a complete data integration scenario.

Data Science end-to-end AI samples: In this set of tutorials, learn about the different Data Science experience capabilities and examples of how ML models can address your common business problems.

Data Science - Price prediction with R: In this tutorial, you build a machine learning model to analyze and visualize the avocado prices in the US and predict future prices.

Application lifecycle management: In this tutorial, you learn how to use deployment pipelines together with Git integration to collaborate with others in the development, testing, and publication of your data and reports.
Next steps
Create a workspace
Discover data items in the OneLake data hub
Use this reference guide and the example scenarios to help you in deciding whether you
need a copy activity, a dataflow, or Spark for your workloads using Microsoft Fabric.
Use case:
- Copy activity: data lake and data warehouse migration, data ingestion, lightweight transformation
- Dataflow: data ingestion, data transformation, data wrangling, data profiling
- Spark: data ingestion, data transformation, data processing, data profiling
Review the following three scenarios for help with choosing how to work with your data
in Fabric.
Scenario 1
Leo, a data engineer, needs to ingest a large volume of data from external systems, both
on-premises and cloud. These external systems include databases, file systems, and APIs.
Leo doesn't want to write and maintain code for each connector or data movement
operation. He wants to follow medallion architecture best practices, with bronze, silver,
and gold layers. Leo doesn't have any experience with Spark, so he prefers the drag-and-drop
UI as much as possible, with minimal coding. And he also wants to process the data on a
schedule.
The first step is to get the raw data into the bronze layer lakehouse from Azure data
resources and various third-party sources (like Snowflake, Web, REST, AWS S3, GCS, and so on).
He wants a consolidated lakehouse, so that all the data from various LOB, on-premises,
and cloud sources resides in a single place. Leo reviews the options and selects the pipeline
copy activity as the appropriate choice for his raw binary copy. This pattern applies to
both historical and incremental data refresh. With copy activity, Leo can load gold data
to a data warehouse with no code if the need arises, and pipelines provide high-scale
data ingestion that can move petabyte-scale data. Copy activity is the best low-code
and no-code choice to move petabytes of data to lakehouses and warehouses from
a variety of sources, either ad hoc or via a schedule.
Scenario 2
Mary is a data engineer with a deep knowledge of the multiple LOB analytic reporting
requirements. An upstream team has successfully implemented a solution to migrate
multiple LOBs' historical and incremental data into a common lakehouse. Mary has been
tasked with cleaning the data, applying business logic, and loading it into multiple
destinations (such as Azure SQL DB, ADX, and a lakehouse) in preparation for their
respective reporting teams.
Mary is an experienced Power Query user, and the data volume is in the low to medium
range where dataflows achieve the desired performance. Dataflows provide no-code or low-code
interfaces for ingesting data from hundreds of data sources. With dataflows, you can
transform data using 300+ data transformation options and write the results into
multiple destinations with an easy-to-use, highly visual user interface. Mary reviews the
options and decides that it makes sense to use Dataflow Gen 2 as her preferred
transformation option.
Scenario 3
Adam is a data engineer working for a large retail company that uses a lakehouse to
store and analyze its customer data. As part of his job, Adam is responsible for building
and maintaining the data pipelines that extract, transform, and load data into the
lakehouse. One of the company's business requirements is to perform customer review
analytics to gain insights into their customers' experiences and improve their services.
Adam decides the best option is to use Spark to build the extract and transformation
logic. Spark provides a distributed computing platform that can process large amounts
of data in parallel. He writes a Spark application using Python or Scala, which reads
structured, semi-structured, and unstructured data from OneLake for customer reviews
and feedback. The application cleanses, transforms, and writes data to Delta tables in
the lakehouse. The data is then ready to be used for downstream analytics.
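The cleansing step Adam's Spark application performs can be sketched in plain Python for illustration. In practice this logic would run as PySpark DataFrame operations writing to Delta tables; the field names and rules below are hypothetical examples, not taken from the article.

```python
# Illustrative sketch of the cleanse/transform step described above.
# Field names ("review_text", "rating") and validation rules are invented.

def clean_reviews(records):
    """Drop empty reviews, trim text, and coerce ratings to ints in 1-5."""
    cleaned = []
    for rec in records:
        text = (rec.get("review_text") or "").strip()
        if not text:
            continue  # discard records with no usable review text
        try:
            rating = int(rec.get("rating"))
        except (TypeError, ValueError):
            continue  # discard records with a non-numeric rating
        if not 1 <= rating <= 5:
            continue  # discard out-of-range ratings
        cleaned.append({"customer_id": rec.get("customer_id"),
                        "review_text": text,
                        "rating": rating})
    return cleaned

raw = [
    {"customer_id": 1, "review_text": "  Great service!  ", "rating": "5"},
    {"customer_id": 2, "review_text": "", "rating": 4},
    {"customer_id": 3, "review_text": "Slow delivery", "rating": "six"},
]
print(clean_reviews(raw))
```

The same per-record validation translates directly to distributed execution: Spark applies it partition by partition across the cluster, which is what makes the approach scale to large review datasets.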
Next steps
How to copy data using copy activity
Quickstart: Create your first dataflow to get and transform data
How to create an Apache Spark job definition in Fabric
Microsoft Fabric decision guide: choose a data store
Article • 09/18/2023
Use this reference guide and the example scenarios to help you choose a data store for
your Microsoft Fabric workloads.
Security:
- Data warehouse: object level (table, view, function, stored procedure, etc.), column level, row level, DDL/DML
- Lakehouse: row level, table level (when using T-SQL), none for Spark
- Power BI datamart: built-in RLS editor
- KQL Database: row-level security

Query across items:
- Data warehouse: yes, query across lakehouse and warehouse tables
- Lakehouse: yes, query across lakehouse and warehouse tables; query across lakehouses (including shortcuts using Spark)
- Power BI datamart: no
- KQL Database: yes, query across KQL databases, lakehouses, and warehouses with shortcuts

Ingestion latency:
- KQL Database: queued ingestion; streaming ingestion has a couple of seconds latency
Scenarios
Review these scenarios for help with choosing a data store in Fabric.
Scenario 1
Susan, a professional developer, is new to Microsoft Fabric. They're ready to get started
cleaning, modeling, and analyzing data, but need to decide whether to build a data warehouse or
a lakehouse. After reviewing the details in the previous table, the primary decision points
are the available skill set and the need for multi-table transactions.
Susan has spent many years building data warehouses on relational database engines,
and is familiar with SQL syntax and functionality. Thinking about the larger team, the
primary consumers of this data are also skilled with SQL and SQL analytical tools. Susan
decides to use a data warehouse, which allows the team to interact primarily with T-
SQL, while also allowing any Spark users in the organization to access the data.
Scenario 2
Rob, a data engineer, needs to store and model several terabytes of data in Fabric. The
team has a mix of PySpark and T-SQL skills. Most of the team running T-SQL queries are
consumers, and therefore don't need to write INSERT, UPDATE, or DELETE statements.
The remaining developers are comfortable working in notebooks, and because the data
is stored in Delta, they're able to interact with a similar SQL syntax.
Rob decides to use a lakehouse, which allows the data engineering team to use their
diverse skills against the data, while allowing the team members who are highly skilled
in T-SQL to consume the data.
Scenario 3
Ash, a citizen developer, is a Power BI developer. They're familiar with Excel, Power BI,
and Office. They need to build a data product for a business unit. They know they don't
quite have the skills to build a data warehouse or a lakehouse, and those seem like too
much for their needs and data volumes. They review the details in the previous table
and see that the primary decision points are their own skills, their need for a self-service,
no-code capability, and a data volume under 100 GB.
Ash works with business analysts familiar with Power BI and Microsoft Office, and knows
that they already have a Premium capacity subscription. As they think about their larger
team, they realize the primary consumers of this data may be analysts familiar with no-code
and SQL analytical tools. Ash decides to use a Power BI datamart, which lets the
team build the capability fast, using a no-code experience. Queries can be
executed via Power BI and T-SQL, while also allowing any Spark users in the organization
to access the data as well.
Scenario 4
Daisy is a business analyst experienced in using Power BI to analyze supply chain
bottlenecks for a large global retail chain. They need to build a scalable data solution
that can handle billions of rows of data, and that can be used to build dashboards and
reports for making business decisions. The data comes from plants,
suppliers, shippers, and other sources in various structured, semi-structured, and
unstructured formats.
Daisy decides to use a KQL Database because of its scalability, quick response times,
advanced analytics capabilities including time series analysis, geospatial functions, and
fast direct query mode in Power BI. Queries can be executed using Power BI and KQL to
compare between current and previous periods, quickly identify emerging problems, or
provide geo-spatial analytics of land and maritime routes.
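The kind of current-versus-previous-period comparison Daisy runs can be illustrated with a small Python sketch. In Fabric this would be a KQL query over billions of rows; the route names and delay figures below are entirely made up for the example.

```python
# Hypothetical sketch: compare average shipment delays (in hours) between
# the current and previous week to flag emerging bottlenecks, mirroring the
# current-vs-previous-period analysis described above. Data is invented.

def period_over_period(current, previous):
    """Return {route: percent change} for routes present in both periods."""
    changes = {}
    for route, cur in current.items():
        prev = previous.get(route)
        if prev:  # skip routes with no baseline in the previous period
            changes[route] = round(100 * (cur - prev) / prev, 1)
    return changes

current_week = {"Rotterdam-Hamburg": 18, "Shanghai-LA": 30}
previous_week = {"Rotterdam-Hamburg": 12, "Shanghai-LA": 29}
print(period_over_period(current_week, previous_week))
# {'Rotterdam-Hamburg': 50.0, 'Shanghai-LA': 3.4}
```

A 50% week-over-week jump on one route and a 3% change on another is exactly the kind of signal that surfaces an emerging problem quickly.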
Next steps
What is data warehousing in Microsoft Fabric?
Create a warehouse in Microsoft Fabric
Create a lakehouse in Microsoft Fabric
Introduction to Power BI datamarts
Create a KQL database
This article gives a high level view of navigating to your items and actions from
Microsoft Fabric Home. Each product experience has its own Home, and there are
similarities that they all share. Those similarities are described in this article. For detailed
information about Home for a particular product experience, such as Data Factory
Home, visit the relevant page for that product experience.
Overview of Home
On Home, you see items that you create and that you have permission to use. These
items are from all the workspaces that you access. That means that the items available
on everyone's Home are different. At first, you might not have much content, but that
changes as you start to create and share Microsoft Fabric items.
Note

Home is not workspace-specific. For example, the Recent area on Home might
include items from many different workspaces.
In Microsoft Fabric, the term item refers to: apps, lakehouses, warehouses, reports, and
more. Your items are accessible and viewable in Microsoft Fabric, and often the best
place to start working in Microsoft Fabric is from Home. However, once you've created
at least one new workspace, been granted access to a workspace, or you've added an
item to My workspace, you might find it more convenient to navigate directly to a
workspace. One way to navigate to a workspace is by using the nav pane and workspace
selector.
To open Home, select it from the top of your left navigation pane.
Note

Power BI Home is different from the other product experiences. To learn more, visit
Power BI Home.
1. The left navigation pane (nav pane) for your product experience links you to
different views of your items and to creator resources.
2. The selector for switching product experiences.
3. The top menu bar for orienting yourself in Microsoft Fabric, finding items, help,
and sending Microsoft feedback. The Account manager control is a critical icon for
looking up your account information and managing your Fabric trial.
4. Options for creating new items.
5. Links to recommended content. This content helps you get started using the
product experience and links to items and workspaces that you visit often.
6. Your items organized by recent, favorites, and items shared with you by your
colleagues. The items that appear here are the same across product experiences,
except for the Power BI experience.
Important

Only the content that you can access appears on your Home. For example, if you
don't have permissions to a report, that report doesn't appear on Home. The
exception is if your subscription or license changes to one with less access;
in that case, you receive a prompt asking you to start a trial or upgrade your license.
Locate items from Home
Microsoft Fabric offers many ways of locating and viewing your content. All approaches
access the same pool of content in different ways. Sometimes searching is the easiest
and quickest way to find something; at other times, using the nav pane to open a
workspace or selecting a card on the Home canvas is your best option.
In the bottom section of the nav pane is where you find and open your workspaces. Use
the workspace selector to view a list of your workspaces and select one to open. Below
the workspace selector is the name of the currently open workspace.
- By default, you see the Workspaces selector and My workspace.
- When you open a workspace, its name replaces My workspace.
- Whenever you create a new item, it's added to the open workspace.
The nav pane is there when you open Home and remains there as you open other areas
of Microsoft Fabric. Every Microsoft Fabric product experience nav pane includes Home,
Browse, OneLake data hub, Create, and Workspaces.
There are different ways to find and open your workspaces. If you know the name or
owner, you can search. Or you can select the Workspaces icon in the nav pane and
choose which workspace to open.
The workspace opens on your canvas, and the name of the workspace is listed on your
nav pane. When you open a workspace, you can view its content. It includes items such
as notebooks, pipelines, reports, and lakehouses.
Microsoft Fabric provides context sensitive help in the right rail of your browser. In this
example, we've selected Browse from the nav pane and the Help pane automatically
updates to show us articles about the features of the Browse screen. For example, we're
shown articles on View recent content and See content that others have shared with you.
If there are community posts related to the current view, they're displayed under Forum
topics.
Leave the Help pane open as you work, and use the suggested topics to learn how to
use Microsoft Fabric features and terminology. Or, select the X to close the Help pane
and save screen space.
The Help pane is also a great place to search for answers to your questions. Type your
question or keywords in the Search field.
To return to the default Help pane, select the left arrow.
For more information about the Help pane, see Get in-product help.
Note

Power BI Home is different from the other product experiences. To learn more, visit
Power BI Home.
The Recommended area might include getting started content as well as items and
workspaces that you use frequently.
Next steps
Power BI Home
Start a Fabric trial
Self-help with the Fabric contextual Help pane
Article • 05/23/2023
This article explains how to use the Fabric Help pane. The Help pane is feature-aware
and displays articles about the actions and features available on the current Fabric
screen. The Help pane is also a search engine that quickly finds answers to questions in
the Fabric documentation and Fabric community forums.
Forum topics: This section shows topics from the community forums that are
related to the features on the current screen. Select a topic to open it in a separate
browser tab.
Other resources: This section has links for feedback and Support.
1. From the upper-right corner of Fabric, select the ? icon to open the Help pane.
2. Open Browse and select the Recent feature. The Fabric Help pane displays
documents about the Recent feature. Select a document to learn more. The
document opens in a separate browser tab.
3. Forum posts often provide interesting context. Select one that looks helpful or
interesting.
6. Close the Help pane by selecting the X icon in the upper-right corner of the pane.
Still need help?
If you still need help, select Ask the community and submit a question. If you have an
idea for a new feature, let us know by selecting Submit an idea. To open the Support
site, select Get help in Other Resources.
Global search
Article • 06/21/2023
When you're new to Microsoft Fabric, you have only a few items (workspaces, reports,
apps, lakehouses). But as you begin creating and sharing items, you can end up with
long lists of content. That's when searching, filtering, and sorting become helpful.
Search is available from Home and also most other areas of Microsoft Fabric. Just look
for the Search field.
In the Search field, type all or part of the name of an item, creator, keyword, or
workspace. You can even enter your colleague's name to search for content that they've
shared with you. The search finds matches in all the items that you own or have access
to.
In addition to the Search field, most experiences on the Microsoft Fabric canvas also
include a Filter by keyword field. Similar to search, use Filter by keyword to narrow
down the content on your canvas to find what you need. The keywords you enter in the
Filter by keyword pane apply to the current view only. For example, if you open Browse
and enter a keyword in the Filter by keyword pane, Microsoft Fabric searches only the
content that appears on the Browse canvas.
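The distinction above can be sketched in a few lines of Python: Filter by keyword narrows only the items currently in view, matching on names case-insensitively. The item names are invented for the example.

```python
# Illustration of "Filter by keyword": it narrows only the items in the
# current view (unlike search, which spans everything you can access).
# Item names are invented.

def filter_by_keyword(items, keyword):
    """Case-insensitive substring match against item names in this view."""
    kw = keyword.lower()
    return [item for item in items if kw in item["name"].lower()]

browse_view = [
    {"name": "Sales Lakehouse", "type": "Lakehouse"},
    {"name": "Sales Report", "type": "Report"},
    {"name": "HR Pipeline", "type": "Pipeline"},
]
print(filter_by_keyword(browse_view, "sales"))
```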
Sorting is also available in other areas of Microsoft Fabric. In this example, the
workspaces are sorted by the Refreshed date. To set sorting criteria for workspaces,
select a column header, and then select again to change the sorting direction.
Not all columns can be sorted. Hover over the column headings to discover which can
be sorted.
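The select-a-header-then-select-again behavior described above amounts to sorting by a column with a direction toggle. A minimal sketch, with illustrative workspace data:

```python
# Illustrative sketch of column sorting with a direction toggle
# (not the Fabric implementation; data is made up).
workspaces = [
    {"name": "Finance", "refreshed": "2023-06-10"},
    {"name": "Sales", "refreshed": "2023-06-14"},
    {"name": "HR", "refreshed": "2023-06-01"},
]

def sort_by_column(rows, column, descending=False):
    """Sort rows by the given column; descending=True flips the direction."""
    return sorted(rows, key=lambda r: r[column], reverse=descending)

# First select on the header: ascending by Refreshed date.
asc = sort_by_column(workspaces, "refreshed")
# Select the header again: the direction flips to descending.
desc = sort_by_column(workspaces, "refreshed", descending=True)
```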
The Fabric settings pane provides links to various kinds of settings you can configure.
This article shows how to open the Fabric settings pane and describes the kinds of
settings you can access from there.
Preferences
In the preferences section, individual users can set their user preferences, specify the
language of the Fabric user interface, manage their account and notifications, and
configure settings for their personal use throughout the system.
Link - Description
General - Opens the general settings page, where you can set the display language for the Fabric interface and parts of visuals.
Notifications - Opens the notifications settings page, where you can view your subscriptions and alerts.
Item settings - Opens the item settings page, where you can configure per-item-type settings.
Developer settings - Opens the developer settings page, where you can configure developer mode settings.
Manage personal/group storage - Opens the personal/group storage management page, where you can see and manage data items that you own or that have been shared with you.
Power BI settings - Opens the Power BI settings page, where you can get to the settings pages for the Power BI items (dashboards, datasets, workbooks, reports, datamarts, and dataflows) that are in the current workspace.
Manage connections and gateways - Opens a page where you can manage connections, on-premises data gateways, and virtual network data gateways.
Manage embed codes - Opens a page where you can manage embed codes you have created.
Azure Analysis Services migrations - Opens a page where you can migrate your Azure Analysis Services datasets to Power BI Premium.
Admin portal - Opens the Fabric admin portal, where admins perform various management tasks and configure Fabric tenant settings. For more information, see What is the admin portal?
Microsoft Purview hub (preview) - Currently available to Fabric admins only. Opens the Microsoft Purview hub, where you can view Purview insights about your organization's sensitive data. The Microsoft Purview hub also provides links to Purview governance and compliance capabilities and has links to documentation to help you get started with Microsoft Purview governance and compliance in Fabric.
Next steps
What is Fabric
What is Microsoft Fabric admin?
Workspaces
Article • 06/14/2023
Workspaces are places to collaborate with colleagues to create collections of items such
as lakehouses, warehouses, and reports. This article describes workspaces, how to
manage access to them, and what settings are available.
Pin workspaces to the top of the workspace flyout list to quickly access your favorite workspaces. Read more about pinning workspaces.
Use granular workspace roles for flexible permissions management in the
workspaces: Admin, Member, Contributor, and Viewer. Read more about
workspace roles.
Navigate to the current workspace from anywhere by selecting its icon on the left nav pane. Read more about the current workspace in this article.
Workspace settings: As workspace admin, you can update and manage your
workspace configurations in workspace settings.
Manage a workspace in Git: Git integration in Microsoft Fabric enables Pro
developers to integrate their development processes, tools, and best practices
straight into the Fabric platform. Learn how to manage a workspace with Git.
Contact list: Specify who receives notification about workspace activity. Read more
about workspace contact lists in this article.
Current workspace
After you select and open a workspace, that workspace becomes your current workspace. You can quickly navigate to it from anywhere by selecting the workspace icon from the left nav pane.
Workspace settings
Workspace admins can use workspace settings to manage and update the workspace.
The settings include general settings of the workspace, like the basic information of the
workspace, contact list, OneDrive, license, Azure connections, storage, and other
experiences' specific settings.
To open the workspace settings, you can select the workspace in the nav pane, then
select More options (...) > Workspace settings next to the workspace name.
You can also open it from the workspace page.
Note
Creating Microsoft 365 Groups may be restricted in your environment, or the ability
to create them from your OneDrive site may be disabled. If this is the case, speak
with your IT department.
You can configure OneDrive in workspace settings by typing in the name of the
Microsoft 365 group that you created earlier. Type just the name, not the URL. Microsoft
Fabric automatically picks up the OneDrive for the group.
License mode
By default, workspaces are created in your organization's shared capacity. When your organization has other capacities, workspaces, including My Workspaces, can be assigned to any capacity in your organization. You can configure this while creating a workspace or in Workspace settings > Premium. Read more about licenses.
Azure connections configuration
Workspace admins can configure dataflow storage to use Azure Data Lake Gen 2
storage and Azure Log Analytics (LA) connection to collect usage and performance logs
for the workspace in workspace settings.
With the integration of Azure Data Lake Gen 2 storage, you can bring your own storage
to dataflows, and establish a connection at the workspace level. Read Configure
dataflow storage to use Azure Data Lake Gen 2 for more detail.
After you connect to Azure Log Analytics (LA), activity log data is sent continuously and becomes available in Log Analytics in approximately 5 minutes. Read Using Azure Log Analytics for more detail.
System storage
System storage is the place to manage your dataset storage in your individual or
workspace account so you can keep publishing reports and datasets. Your own datasets,
Excel reports, and those items that someone has shared with you, are included in your
system storage.
In the system storage, you can view how much storage you have used and free up the
storage by deleting the items in it.
Keep in mind that you or someone else may have reports and dashboards based on a
dataset. If you delete the dataset, those reports and dashboards don't work anymore.
Remove the workspace
As an admin for a workspace, you can delete it. When you delete the workspace,
everything contained within the workspace is deleted for all group members, and the
associated app is also removed from AppSource.
In the Workspace settings pane, select Other > Remove this workspace.
Admins can also see the state of all the workspaces in their organization. They can
manage, recover, and even delete workspaces. Read about managing the workspaces
themselves in the "Admin portal" article.
Auditing
Microsoft Fabric audits the following activities for workspaces.
Next steps
Create workspaces
Give users access to workspaces
Create a workspace
Article • 11/15/2023
This article explains how to create workspaces in Microsoft Fabric. In workspaces, you
create collections of items such as lakehouses, warehouses, and reports. For more
background, see the Workspaces article.
To create a workspace:
1. Select Workspaces > New workspace. The Create a workspace pane opens.
If you are a domain contributor for the workspace, you can associate the
workspace to a domain, or you can change an existing association. For
information about domains, see Domains in Fabric.
Advanced settings
Expand Advanced and you see advanced setting options:
Contact list
Contact list is a place where you can put the names of people as contacts for
information about the workspace. Accordingly, people in this contact list receive system
email notifications for workspace level changes.
By default, the first workspace admin who created the workspace is the contact. You can add other users or groups according to your needs. Enter a name directly in the input box; Fabric automatically searches for and matches users or groups in your org.
License mode
Different license modes provide different sets of features for your workspace. After creation, you can still change the workspace license type in the workspace settings, but some migration effort is needed.
Note
Currently, if you want to downgrade the workspace license type from Premium
capacity to Pro (Shared capacity), you must first remove any non-Power BI Fabric
items that the workspace contains. Only after you remove such items will you be
allowed to downgrade the capacity. For more information, see Moving data
around.
Template apps
Power BI template apps are developed for sharing outside your organization. If you check this option, a special type of workspace (template app workspace) is created. It can't be reverted to a normal workspace after creation.
Dataflow storage (preview)
Data used with Power BI is stored in internal storage provided by Power BI by default.
With the integration of dataflows and Azure Data Lake Storage Gen 2 (ADLS Gen2), you
can store your dataflows in your organization's Azure Data Lake Storage Gen2 account.
Learn more about dataflows in Azure Data Lake Storage Gen2 accounts.
Pin workspaces
Quickly access your favorite workspaces by pinning them to the top of the workspace
flyout list.
1. Open the workspace flyout from the nav pane and hover over the workspace you
want to pin. Select the Pin to top icon.
2. The workspace is added in the Pinned list.
3. To unpin a workspace, select the unpin button. The workspace is unpinned.
Next steps
Read about workspaces
Feedback
Was this page helpful? Yes No
Workspace roles let you manage who can do what in a Microsoft Fabric workspace.
Microsoft Fabric workspaces sit on top of OneLake and divide the data lake into
separate containers that can be secured independently. Workspace roles in Microsoft
Fabric extend the Power BI workspace roles by associating new Microsoft Fabric
capabilities such as data integration and data exploration with existing workspace roles.
For more information on Power BI roles, see Roles in workspaces in Power BI.
You can assign roles to individuals, or to security groups, Microsoft 365 groups, and distribution lists. To grant access to a workspace, assign those user groups or individuals to one of the workspace roles: Admin, Member, Contributor, or Viewer.
Here's how to give users access to workspaces.
Everyone in a user group gets the role that you've assigned. If someone is in several user
groups, they get the highest level of permission that's provided by the roles that they're
assigned. If you nest user groups and assign a role to a group, all the contained users have that role's permissions.
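The highest-role-wins rule can be sketched as taking the maximum over a privilege ranking. The role names come from the article; the ranking logic below is an illustrative sketch, not Fabric's implementation:

```python
# Sketch of role resolution when a user belongs to several groups:
# the highest level of permission among the assigned roles wins.
ROLE_RANK = {"Viewer": 0, "Contributor": 1, "Member": 2, "Admin": 3}

def effective_role(roles_from_groups):
    """Return the highest-privilege role assigned via any group membership."""
    return max(roles_from_groups, key=lambda r: ROLE_RANK[r])

# A user who is a Viewer via one group and a Member via another
# effectively has the Member role.
role = effective_role(["Viewer", "Member"])
```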
Users in workspace roles have the following Microsoft Fabric capabilities, in addition to
the existing Power BI capabilities associated with these roles.
1 Contributors and Viewers can also share items in a workspace, if they have Reshare permissions.
2 Additional permissions are needed to read data from a shortcut destination. Learn more about the shortcut security model.
3 Admins, Members, and Contributors can grant viewers granular SQL permissions to query the SQL analytics endpoint of the Lakehouse and the Warehouse via TDS endpoints for SQL connections.
4 Keep in mind that you also need permissions on the gateway. Those permissions are managed elsewhere, independent of workspace roles and permissions.
Next steps
Roles in workspaces in Power BI
Create workspaces
Give users access to workspaces
OneLake security
OneLake shortcuts
Data warehouse security
Data engineering security
Data science roles and permissions
Feedback
Was this page helpful? Yes No
After you create a workspace in Microsoft Fabric, or if you have an admin or member
role in a workspace, you can give others access to it by adding them to the different
roles. Workspace creators are automatically admins. For an explanation of the different
roles, see Roles in workspaces.
Note
To enforce row-level security (RLS) on Power BI items for Microsoft Fabric Pro users
who browse content in a workspace, assign them the Viewer Role.
After you add or remove workspace access for a user or a group, the permission
change only takes effect the next time the user logs into Microsoft Fabric.
This article walks you through the following basic tasks in Microsoft Fabric’s Git
integration tool:
It’s recommended to read the overview of Git integration before you begin.
Prerequisites
To integrate Git with your Microsoft Fabric workspace, you need to set up the following
prerequisites in both Azure DevOps and Fabric.
Fabric prerequisites
To access the Git integration feature, you need one of the following:
Power BI Premium license. Your Power BI Premium license still works for all Power BI features.
Fabric capacity. A Fabric capacity is required to use all supported Fabric items.
In addition, your organization’s administrator has to enable the Fabric switch. If this
switch is disabled, contact your administrator.
Connect a workspace to an Azure repo
Only a workspace admin can connect a workspace to an Azure Repo, but once
connected, anyone with permission can work in the workspace. If you're not an admin,
ask your admin for help with connecting. To connect a workspace to an Azure Repo,
follow these steps:
1. Sign into Power BI and navigate to the workspace you want to connect with.
2. Go to Workspace settings
Note
If you don't see the Workspace settings icon, select the ellipsis (three dots), then select Workspace settings.
3. Select Git integration. You’re automatically signed into the Azure Repos account
registered to the Azure AD user signed into Fabric.
4. From the dropdown menu, specify the following details about the branch you want
to connect to:
Note
You can only connect a workspace to one branch and one folder at a time.
Organization
Project
Git repository
Branch (Select an existing branch using the drop-down menu, or select +
New Branch to create a new branch. You can only connect to one branch at a
time.)
Folder (Select an existing folder in the branch or enter a name to create a
new folder. If you don’t select a folder, content will be created in the root
folder. You can only connect to one folder at a time.)
5. Select Connect and sync.
During the initial sync, if either the workspace or Git branch is empty, content is copied
from the nonempty location to the empty one. If both the workspace and Git branch
have content, you’re asked which direction the sync should go. For more information on
this initial sync, see Connect and sync.
After you connect, the Workspace displays information about source control that allows
the user to view the connected branch, the status of each item in the branch and the
time of the last sync.
To keep your workspace synced with the Git branch, commit any changes you make in
the workspace to the Git branch, and update your workspace whenever anyone creates
new commits to the Git branch.
Commit to Git
To commit your changes to the Git branch, follow these steps:
1. Go to the workspace.
2. Select the Source control icon. This icon shows the number of uncommitted
changes.
3. Select the Changes tab of the Source control pane. A list appears with all the
items you changed, and an icon indicating if the item is new , modified ,
conflict , or deleted .
4. Select the items you want to commit. To select all items, check the top box.
5. Add a comment in the box. If you don't add a comment, a default message is
added automatically.
6. Select Commit.
After the changes are committed, the items that were committed are removed from
the list, and the workspace will point to the new commit that it's synced to.
After the commit is completed successfully, the status of the selected items changes
from Uncommitted to Synced.
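The status transition in the commit flow above can be sketched as follows. This is an illustrative model of the described behavior, not Fabric's implementation; the item names and the default message text are made up:

```python
# Sketch of item status during a commit: selected items go from
# "Uncommitted" to "Synced" once the commit succeeds, and a default
# message is used when no comment is provided (illustrative only).
def commit(items, selected, message=None):
    """Mark the selected items as Synced and return the commit message."""
    message = message or "Default commit message"  # added automatically if empty
    for name in selected:
        items[name] = "Synced"
    return message

items = {"Report A": "Uncommitted", "Lakehouse B": "Uncommitted", "Notebook C": "Synced"}
msg = commit(items, ["Report A", "Lakehouse B"])
```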
1. Go to the workspace.
2. Select the Source control icon.
3. Select the Updates tab of the Source control pane. A list appears with all the items
that were changed in the branch since the last update.
4. Select Update all.
After it updates successfully, the list of items is removed, and the workspace will point to
the new commit that it's synced to.
After the update is completed successfully, the status of the items changes to Synced.
1. Go to Workspace settings
Permissions
The actions you can take on a workspace depend on the permissions you have in both
the workspace and Azure DevOps. For a more detailed discussion of permissions, see
Permissions.
Manually changing the item definition file. These changes are valid, but might
be different than if done through the editors. For example, if you rename a
dataset column in Git and import this change to the workspace, the next time
you commit changes to the dataset, the bim file will register as changed and the
modified column pushed to the back of the columns array. This is because the
AS engine that generates the bim files pushes renamed columns to the end of
the array. This change doesn't affect the way the item operates.
Committing a file that uses CRLF line breaks. The service uses LF (line feed) line
breaks. If you had item files in the Git repo with CRLF line breaks, when you
commit from the service these files are changed to LF. For example, if you open
a report in Power BI Desktop, save the .pbip project, and upload it to Git with CRLF
line breaks, committing from the service later changes those files to LF.
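The line-ending change described above is a simple normalization: every CRLF sequence becomes LF, so the file content no longer matches the CRLF original and Git reports it as modified. A minimal sketch:

```python
# Sketch of the CRLF-to-LF normalization the service applies on commit.
def normalize_line_endings(text):
    """Convert Windows-style CRLF line breaks to the LF breaks the service uses."""
    return text.replace("\r\n", "\n")

crlf_file = "line one\r\nline two\r\n"
lf_file = normalize_line_endings(crlf_file)

# The normalized content differs from the CRLF original, which is why
# the file registers as changed even though nothing else was edited.
changed = crlf_file != lf_file
```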
If you're having trouble with these actions, make sure you understand the
limitations of the Git integration feature.
Next steps
Understand the Git integration process
Manage Git branches
Git integration best practices
Feedback
Was this page helpful? Yes No
With Copilot and other generative AI features in preview, Microsoft Fabric brings a new
way to transform and analyze data, generate insights, and create visualizations and
reports.
Before your business starts using Copilot capabilities in Fabric, you may have questions
about how it works, how it keeps your business data secure and adheres to privacy
requirements, and how to use generative AI responsibly. Read on for answers to these
and other questions.
The article Privacy, security, and responsible use for Copilot (preview) offers guidance on
responsible use.
Copilot features in Fabric are built to meet the Responsible AI Standard, which means
that they're reviewed by multidisciplinary teams for potential harms, and then refined to
include mitigations for those harms.
Before you use Copilot, your admin needs to enable Copilot in Fabric. See the article
Enable Copilot in Fabric (preview) for details. Also, keep in mind the limitations of
Copilot:
Next steps
What is Microsoft Fabric?
Copilot for Fabric: FAQ
Feedback
Was this page helpful? Yes No
Before your business can start using Copilot capabilities in Microsoft Fabric, you need to
enable Copilot. Copilot doesn't work for trial SKUs. You need an F64 or P1 capacity to use
Copilot. With Copilot and other generative AI features in preview, Microsoft Fabric
brings a new way to transform and analyze data, generate insights, and create
visualizations and reports.
The preview of Copilot in Microsoft Fabric is rolling out in stages with the goal that all
customers with a paid Fabric capacity (F64 or higher) or Power BI Premium capacity (P1
or higher) have access to the Copilot preview. It becomes available to you automatically
as a new setting in the Fabric admin portal when it's rolled out to your tenant. When
charging begins for the Copilot in Fabric experiences, you can count Copilot usage
against your existing Fabric or Power BI Premium capacity.
Note
Copilot in Microsoft Fabric isn't supported on trial SKUs. Only paid SKUs (F64 or
higher, or P1 or higher) are supported.
We're enabling Copilot in stages. Everyone will have access by March 2024.
Next steps
What is Microsoft Fabric?
Copilot in Fabric and Power BI: FAQ
Feedback
Was this page helpful? Yes No
This article answers frequently asked questions about Copilot for Microsoft Fabric and
Power BI.
Power BI
I loaded my semantic model, but it doesn't meet
all the criteria listed in the data evaluation. What
should I do?
The criteria listed in Update your data model to work well with Copilot for Power BI are
important because they help you get a better quality report. As long as you meet seven of
the eight points, including Consistency, the quality of the reports generated should be
good.
If your data doesn't meet those criteria, we recommend spending the time to bring it into
compliance.
Next, when you select a Copilot-enabled URL, you first have to load the semantic model. When the semantic model finishes loading, the Copilot button appears. See the article Create a report with Copilot for the Power BI service.
If you load a semantic model and still can't see the Copilot button, file a bug here:
Copilot Bug Template.
Next steps
What is Microsoft Fabric?
Privacy, security, and responsible use for Copilot
Feedback
Was this page helpful? Yes No
With Copilot and other generative AI features in preview, Microsoft Fabric brings a new
way to transform and analyze data, generate insights, and create visualizations and
reports.
Before your business starts using Copilot in Fabric, you may have questions about how it
works, how it keeps your business data secure and adheres to privacy requirements, and
how to use generative AI responsibly.
This article provides answers to common questions related to business data security and
privacy to help your organization get started with Copilot in Fabric.
Overview
Review the supplemental preview terms for Fabric , which includes terms of use
for Microsoft Generative AI Service Previews.
In general, these features are designed to generate natural language, code, or other
content based on:
For example, Power BI, Data Factory, and Data Science offer Copilot chats where you can
ask questions and get responses that are contextualized on your data. Copilot for Power
BI can also create reports and other visualizations. Copilot for Data Factory can
transform your data and explain what steps it has applied. Data Science offers Copilot
features outside of the chat pane, such as custom IPython magic commands in
notebooks. Copilot chats may be added to other experiences in Fabric, along with other
features that are powered by Azure OpenAI under the hood.
This information is sent to Azure OpenAI Service, where it's processed and an output is
generated. Therefore, data processed by Azure OpenAI can include:
Grounding data may include a combination of dataset schema, specific data points, and
other information relevant to the user's current task. Review each experience section for
details on what data is accessible to Copilot features in that scenario.
Interactions with Copilot are specific to each user. This means that Copilot can only
access data that the current user has permission to access, and its outputs are only
visible to that user unless that user shares the output with others, such as sharing a
generated Power BI report or generated code. Copilot doesn't use data from other users
in the same tenant or other tenants.
Copilot uses Azure OpenAI—not the publicly available OpenAI services—to process all
data, including user inputs, grounding data, and Copilot outputs. Copilot currently uses
a combination of GPT models, including GPT 3.5. Microsoft hosts the OpenAI models in
the Microsoft Azure environment, and the Service doesn't interact with any services by
OpenAI, such as ChatGPT or the OpenAI API. Your data isn't used to train models and
isn't available to other customers. Learn more about Azure OpenAI.
Data from Copilot in Fabric is stored by Microsoft for up to 30 days (as outlined in the
Preview Terms of Use ) to help monitor and prevent abusive or harmful uses or outputs
of the service. Authorized Microsoft employees may review data that has triggered our
automated systems to investigate and verify potential abuse.
1. Copilot receives a prompt from a user. This prompt could be in the form of a
question that a user types into a chat pane, or in the form of an action such as
selecting a button that says "Create a report."
2. Copilot preprocesses the prompt through an approach called grounding.
Depending on the scenario, this might include retrieving relevant data such as
dataset schema or chat history from the user's current session with Copilot.
Grounding improves the specificity of the prompt, so the user gets responses that
are relevant and actionable to their specific task. Data retrieval is scoped to data
that is accessible to the authenticated user based on their permissions. See the
section What data does Copilot use and how is it processed? in this article for
more information.
3. Copilot takes the response from Azure OpenAI and postprocesses it. Depending
on the scenario, this postprocessing might include responsible AI checks, filtering
with Azure content moderation, or additional business-specific constraints.
4. Copilot returns a response to the user in the form of natural language, code, or
other content. For example, a response might be in the form of a chat message or
generated code, or it might be a contextually appropriate form such as a Power BI
report or a Synapse notebook cell.
5. The user reviews the response before using it. Copilot responses can include
inaccurate or low-quality content, so it's important for subject matter experts to
check outputs before using or sharing them.
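The five steps above can be sketched as a small pipeline. Everything below is an illustrative stand-in with invented function names; it is not a real Fabric or Azure OpenAI API:

```python
# High-level sketch of the five-step Copilot request flow (illustrative stubs only).
def ground(prompt, user):
    """Step 2: retrieve context (schema, chat history) scoped to the user's permissions."""
    context = [d for d in user["accessible_data"] if d["relevant"]]
    return {"prompt": prompt, "context": context}

def call_model(grounded):
    """Send the grounded prompt for processing (stubbed; no real model call)."""
    return f"response to: {grounded['prompt']} ({len(grounded['context'])} context items)"

def postprocess(raw):
    """Step 3: responsible-AI checks and content filtering (stubbed)."""
    return raw if "blocked" not in raw else None

def copilot_request(prompt, user):
    grounded = ground(prompt, user)   # Step 2: grounding
    raw = call_model(grounded)        # model processing
    response = postprocess(raw)       # Step 3: postprocessing
    return response                   # Step 4: returned; Step 5: user reviews it

user = {"accessible_data": [{"relevant": True}, {"relevant": False}]}
answer = copilot_request("Create a report", user)
```

Note how grounding only draws from `accessible_data`, mirroring the point that data retrieval is scoped to what the authenticated user is permitted to see.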
Just as each experience in Fabric is built for certain scenarios and personas—from data
engineers to data analysts—each Copilot feature in Fabric has also been built with
unique scenarios and users in mind. For capabilities, intended uses, and limitations of
each feature, review the section for the experience you're working in.
Definitions
Prompt or input
The text or action submitted to Copilot by a user. This could be in the form of a question
that a user types into a chat pane, or in the form of an action such as selecting a button
that says "Create a report."
Grounding
A preprocessing technique where Copilot retrieves additional data that's contextual to
the user's prompt, and then sends that data along with the user's prompt to Azure
OpenAI in order to generate a more relevant and actionable response.
Response or output
The content that Copilot returns to a user. For example, a response might be in the form
of a chat message or generated code, or it might be contextually appropriate content
such as a Power BI report or a Synapse notebook cell.
To allow data to be processed elsewhere, your admin can turn on the setting Data sent
to Azure OpenAI can be processed outside your tenant's geographic region,
compliance boundary, or national cloud instance. Learn more about admin settings for
Copilot.
Notes by release
Additional information for future releases or feature updates will appear here.
Next steps
What is Microsoft Fabric?
Copilot in Fabric and Power BI: FAQ
Feedback
Was this page helpful? Yes No
With Copilot for Data Factory in Microsoft Fabric and other generative AI features in
preview, Microsoft Fabric brings a new way to transform and analyze data, generate
insights, and create visualizations and reports in Data Science and the other workloads.
Before your business starts using Copilot in Fabric, you may have questions about how it
works, how it keeps your business data secure and adheres to privacy requirements, and
how to use generative AI responsibly.
The article Privacy, security, and responsible use for Copilot (preview) provides an
overview of Copilot in Fabric. Read on for details about Copilot for Data Factory.
Notes by release
Additional information for future releases or feature updates will appear here.
Next steps
What is Microsoft Fabric?
Copilot in Fabric: FAQ
Feedback
Was this page helpful? Yes No
With Copilot for Data Science in Microsoft Fabric and other generative AI features in
preview, Microsoft Fabric brings a new way to transform and analyze data, generate
insights, and create visualizations and reports in Data Science and the other workloads.
Before your business starts using Copilot in Fabric, you may have questions about how it
works, how it keeps your business data secure and adheres to privacy requirements, and
how to use generative AI responsibly.
The article Privacy, security, and responsible use for Copilot (preview) provides an
overview of Copilot in Fabric. Read on for details about Copilot for Data Science.
Keep in mind that code generation with fast-moving or recently released libraries
may include inaccuracies or fabrications.
Notes by release
Additional information for future releases or feature updates will appear here.
Next steps
What is Microsoft Fabric?
Copilot in Fabric: FAQ
Feedback
Was this page helpful? Yes No
With Copilot and other generative AI features in preview, Microsoft Fabric brings a new
way to transform and analyze data, generate insights, and create visualizations and
reports in Power BI and the other workloads.
Before your business starts using Copilot in Fabric, you may have questions about how it
works, how it keeps your business data secure and adheres to privacy requirements, and
how to use generative AI responsibly.
The article Privacy, security, and responsible use for Copilot (preview) provides an
overview of Copilot in Fabric. Read on for details about Copilot for Power BI.
Copilot provides a summary of your dataset and an outline of suggested pages for
your report. Then it generates those pages for the report. After you open a blank
report with a semantic model, Copilot can generate:
Suggested topics.
A report outline: for example, what each page in the report will be about, and
how many pages it will create.
The visuals for the individual pages.
Notes by release
Additional information for future releases or feature updates will appear here.
Next steps
What is Microsoft Fabric?
Copilot in Fabric and Power BI: FAQ
Feedback
Was this page helpful? Yes No
Copilot in Fabric enhances productivity, unlocks profound insights, and facilitates the
creation of custom AI experiences tailored to your data. As a component of the Copilot
in Fabric experience, Copilot in Data Factory empowers customers to use natural
language to articulate their requirements for creating data integration solutions using
Dataflow Gen2. Essentially, Copilot in Data Factory operates like a subject-matter expert
(SME) collaborating with you to design your dataflows.
Copilot for Data Factory is an AI-enhanced toolset that supports both citizen and
professional data wranglers in streamlining their workflow. It provides intelligent
Mashup code generation to transform data using natural language input and generates
code explanations to help you better understand earlier generated complex queries and
tasks.
Supported capabilities
With Copilot in Dataflow Gen2, you can:
Generate a new query that may include sample data or a connection to a data
source that requires configuring authentication.
Provide a summary of the query and the applied steps.
Generate new transformation steps for an existing query.
Get started
1. Create a new Dataflow Gen2.
4. In the Get data window, search for OData and select the OData connector.
5. In the Connect to data source for the OData connector, input the following text
into the URL field:
https://services.odata.org/V4/Northwind/Northwind.svc/
6. From the navigator, select the Orders table and then Select related tables. Then
select Create to bring multiple tables into the Power Query editor.
7. Select the Customers query, and in the Copilot pane type this text: Only keep
European customers, then press Enter or select the Send message icon.
Your input is now visible in the Copilot pane along with a returned response card.
You can validate the step with the corresponding step title in the Applied steps list
and review the formula bar or the data preview window for accuracy of your
results.
8. Select the Employees query, and in the Copilot pane type this text: Count the
total number of employees by City, then press Enter or select the Send message
icon. Your input is now visible in the Copilot pane along with a returned response
card and an Undo button.
9. Select the column header for the Total Employees column and choose the option
Sort descending. The Undo button disappears because you modified the query.
10. Select the Order_Details query, and in the Copilot pane type this text: Only keep
orders whose quantities are above the median value, then press Enter or select
the Send message icon. Your input is now visible in the Copilot pane along with a
returned response card.
11. Either select the Undo button or type the text Undo (any text case) and press Enter
in the Copilot pane to remove the step.
12. To leverage the power of Azure OpenAI when creating or transforming your data,
ask Copilot to create sample data by typing this text:
Create a new query with sample data that lists all the Microsoft OS versions
Copilot adds a new query to the Queries pane list, containing the results of your
input. At this point, you can either transform data in the user interface, continue to
edit with Copilot text input, or delete the query with an input such as Delete my
current query.
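To make the median filter in step 10 concrete, here's a hedged sketch of the transformation logic. In Dataflow Gen2 Copilot actually generates Power Query M, not Python; the pandas code and the sample Order_Details data below are illustrative stand-ins only.

```python
import pandas as pd

# Illustrative stand-in for the Order_Details table; the real Northwind data differs.
order_details = pd.DataFrame({
    "OrderID": [1, 2, 3, 4, 5],
    "Quantity": [10, 40, 25, 5, 60],
})

# "Only keep orders whose quantities are above the median value"
median_qty = order_details["Quantity"].median()  # 25 for this sample
above_median = order_details[order_details["Quantity"] > median_qty]
print(len(above_median))  # rows with Quantity 40 and 60 remain
```

The key point is that the median is computed over the whole column first, then used as a row filter, which is exactly what the generated applied step does.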
Next steps
Important
Copilot for Data Science and Data Engineering is an AI assistant that helps analyze and
visualize data. It works with Lakehouse tables and files, Power BI Datasets, and
pandas/spark/fabric dataframes, providing answers and code snippets directly in the
notebook. The most effective way of using Copilot is to add your data as a dataframe.
You can ask your questions in the chat panel, and the AI provides responses or code to
copy into your notebook. It understands your data's schema and metadata, and if data
is loaded into a dataframe, it has awareness of the data inside the dataframe as well.
You can ask Copilot to provide insights on data, create code for visualizations, or
provide code for data transformations, and it recognizes file names for easy reference.
Copilot streamlines data analysis by eliminating complex coding.
Note
Chart creation
Filtering data
Applying transformations
Machine learning models
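As a rough illustration of the filtering and transformation requests above, the code Copilot returns often resembles a short pandas snippet like the following; the dataframe and column names here are hypothetical, and your actual results depend on your data.

```python
import pandas as pd

# Hypothetical dataframe standing in for data loaded in the notebook.
df = pd.DataFrame({
    "city": ["Seattle", "London", "Seattle", "Paris"],
    "sales": [100, 200, 150, 50],
})

# "Filter to sales above 75, then count rows by city" -- the sort of
# transformation Copilot can generate on request.
filtered = df[df["sales"] > 75]
counts = filtered.groupby("city").size()
print(counts.to_dict())
```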
First select the Copilot icon in the notebooks ribbon. The Copilot chat panel opens, and
a new cell appears at the top of your notebook. This cell must run each time a Spark
session loads in a Fabric notebook; otherwise, the Copilot experience won't operate
properly. We are evaluating other mechanisms for handling this required
initialization in future releases.
Run the cell at the top of the notebook. After the cell successfully executes, you can use
Copilot. You must rerun the cell at the top of the notebook each time your session in the
notebook closes.
Copilot responds with the answer or the code, which you can copy and paste into
your notebook. Copilot for Data Science and Data Engineering is a convenient,
interactive way to explore and analyze your data.
As you use Copilot, you can also invoke magic commands inside a notebook cell
to obtain output directly in the notebook. For example, for natural language
responses, you can ask questions using the "%%chat" command, such as:
%%chat
What are some machine learning models that may fit this dataset?
or
%%code
Can you generate code for a logistic regression that fits this data?
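For context, the code returned by a %%code request like the one above might resemble the minimal logistic-regression sketch below. The actual output depends entirely on your dataset; this version uses plain NumPy gradient descent on synthetic data so it is self-contained.

```python
import numpy as np

# Synthetic, linearly separable data standing in for "this data".
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):                          # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
print(float(np.mean(pred == y)))             # training accuracy
```

In practice Copilot would generate code against the columns of your own dataframe, typically using a library such as scikit-learn rather than hand-rolled gradient descent.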
Copilot for Data Science and Data Engineering also has schema and metadata
awareness of tables in the lakehouse. Copilot can provide relevant information in
context of your data in an attached lakehouse. For example, you can ask:
"How many tables are in the lakehouse?"
"What are the columns of the table customers?"
Copilot responds with the relevant information if you added the lakehouse to the
notebook. Copilot also has awareness of the names of files added to any lakehouse
attached to the notebook. You can refer to those files by name in your chat. For
example, if you have a file named sales.csv in your lakehouse, you can ask "Create a
dataframe from sales.csv". Copilot generates the code and displays it in the chat panel.
With Copilot for notebooks, you can easily access and query your data from different
sources. You don't need the exact command syntax to do it.
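For example, the code Copilot returns for "Create a dataframe from sales.csv" would look roughly like the sketch below. The lakehouse mount path and column names are assumptions, and the snippet substitutes an in-memory CSV so it runs anywhere.

```python
import pandas as pd
from io import StringIO

# In a Fabric notebook the file would be read from the attached lakehouse,
# for example pd.read_csv("/lakehouse/default/Files/sales.csv") -- that path
# and the file's contents are assumptions for illustration.
csv_data = StringIO("order_id,amount\n1,100\n2,250\n3,75\n")
df = pd.read_csv(csv_data)
print(df.shape)
```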
Tips
"Clear" your conversation in the Copilot chat panel with the broom located at the
top of the chat panel. Copilot retains knowledge of any inputs or outputs during
the session, but this helps if you find the current content distracting.
Use the chat magics library to configure settings about Copilot, including privacy
settings. The default sharing mode is designed to maximize the context sharing
Copilot has access to, so limiting the information provided to Copilot can directly
and significantly impact the relevance of its responses.
When Copilot first launches, it offers a set of starter prompts that can kickstart
your conversation with Copilot. To refer to prompts later, you can use the sparkle
button at the bottom of the chat panel.
You can "drag" the sidebar of the copilot chat to expand the chat panel, to view
code more clearly or for readability of the outputs on your screen.
Next steps
How to use Chat-magics
How to use the Copilot Chat Pane
Important
The Chat-magics Python library enhances your data science and engineering workflow
in Microsoft Fabric notebooks. It integrates with the Fabric environment and
allows execution of specialized IPython magic commands in a notebook cell to
provide real-time outputs. For more background on IPython magic commands, see
https://ipython.readthedocs.io/en/stable/interactive/magics.html.
Note
Capabilities of Chat-magics
Dataframe descriptions
The %%describe command provides summaries and descriptions of loaded dataframes.
This simplifies the data exploration phase.
You can configure where the output appears: in the current cell, in a new cell,
as cell output, or into a variable.
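The %%describe summary is Copilot-generated, but conceptually it resembles the built-in pandas summary below; the dataframe here is illustrative.

```python
import pandas as pd

# Small illustrative dataframe; %%describe would summarize your real data.
df = pd.DataFrame({"qty": [1, 2, 3, 4], "price": [10.0, 20.0, 15.0, 30.0]})
summary = df.describe()          # count, mean, std, min, quartiles, max
print(summary.loc["mean", "qty"])
```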
Next steps
How to use Copilot Pane
Important
Copilot for Data Science and Data Engineering notebooks is an AI assistant that helps
you analyze and visualize data. It works with lakehouse tables, Power BI Datasets, and
pandas/spark dataframes, providing answers and code snippets directly in the
notebook. The most effective way of using Copilot is to load your data as a dataframe.
You can use the chat panel to ask your questions, and the AI provides responses or code
to copy into your notebook. It understands your data's schema and metadata, and if
data is loaded into a dataframe, it has awareness of the data inside the dataframe as
well. You can ask Copilot to provide insights on data, create code for visualizations, or
provide code for data transformations, and it recognizes file names for easy reference.
Copilot streamlines data analysis by eliminating complex coding.
Prerequisites
Note
If your workspace is provisioned in a region without GPU capacity, and your data is
not enabled to flow cross-geo, Copilot will not function properly and you will see
errors.
Important
If your Spark session terminates, the context for Chat-magics also terminates,
which wipes the context for the Copilot pane as well.
2. Verify that all these conditions are met before proceeding with the Copilot Chat
Pane.
3. The Copilot chat panel opens on the right side of your notebook.
Key capabilities
AI assistance: Generate code, query data, and get suggestions to accelerate your
workflow.
Data insights: Quick data analysis and visualization capabilities.
Explanations: Copilot can provide natural language explanations of notebook cells,
and can provide an overview for notebook activity as it runs.
Fixing errors: Copilot can also fix notebook run errors as they arise. Copilot shares
context with the notebook cells (executed output) and can provide helpful
suggestions.
Important notices
Inaccuracies: Potential for inaccuracies exists. Review AI-generated content
carefully.
Data storage: Customer data is temporarily stored to identify harmful use of AI.
2. Each of these selections outputs chat text in the text panel. As the user, you must
fill out the specific details of the data you'd like to use.
3. You can then input any type of request you have in the chat box.
Next steps
How to use Chat-magics
The goal of this series of articles is to provide a roadmap. The roadmap presents a series
of strategic and tactical considerations and action items that lead to the successful
adoption of Microsoft Fabric, and help build a data culture in your organization.
Advancing adoption and cultivating a data culture is about more than implementing
technology features. Technology can assist an organization in making the greatest
impact, but a healthy data culture involves many considerations across the spectrum of
people, processes, and technology.
Note
While reading this series of articles, we recommend that you also take into
consideration Power BI implementation planning guidance. After you're familiar
with the concepts in the Microsoft Fabric adoption roadmap, consider reviewing
the usage scenarios. Understanding the diverse ways Power BI is used can
influence your implementation strategies and decisions for all of Microsoft Fabric.
The diagram depicts the following areas of the Microsoft Fabric adoption roadmap:
Area Description
Data culture: Data culture refers to a set of behaviors and norms in the organization that
encourages a data-driven culture. Building a data culture is closely related to adopting
Fabric, and it's often a key aspect of an organization's digital transformation.
Business Alignment: How well the data culture and data strategy enable business users to
achieve business objectives. An effective BI data strategy aligns with the business strategy.
Content ownership and management: There are three primary strategies for how business
intelligence (BI) and analytics content is owned and managed: business-led self-service BI,
managed self-service BI, and enterprise BI. These strategies have a significant influence on
adoption, governance, and the Center of Excellence (COE) operating model.
Content delivery scope: There are four primary strategies for content and data delivery:
personal, team, departmental, and enterprise. These strategies have a significant influence
on adoption, governance, and the COE operating model.
Center of Excellence: A Fabric COE is an internal team of technical and business experts.
These experts actively assist others who are working with data within the organization. The
COE forms the nucleus of the broader community to advance adoption goals that are
aligned with the data culture vision.
Governance: Data governance is a set of policies and procedures that define the ways in
which an organization wants data to be used. When adopting Fabric, the goal of
governance is to empower the internal user community to the greatest extent possible,
while adhering to industry, governmental, and contractual requirements and regulations.
Mentoring and user enablement: A critical objective for adoption efforts is to enable users
to accomplish as much as they can within the guardrails established by governance
guidelines and policies. The act of mentoring users is one of the most important
responsibilities of the COE. It has a direct influence on adoption efforts.
User support: User support includes both informally organized and formally organized
methods of resolving issues and answering questions. Both formal and informal support
methods are critical for adoption.
Your organizational data culture vision will strongly influence the strategies that
you follow for self-service and enterprise content ownership and management
and content delivery scope.
These strategies will, in turn, have a big impact on the operating model for your
Center of Excellence and governance decisions.
The established governance guidelines, policies, and processes affect the
implementation methods used for mentoring and enablement, the community of
practice, and user support.
Governance decisions will dictate the day-to-day system oversight (administration)
activities.
Adoption and governance decisions are implemented alongside change
management to mitigate the impact and disruption of change on existing business
processes.
All data culture and adoption-related decisions and actions are accomplished more
easily with guidance and leadership from an executive sponsor, who facilitates
business alignment between the business strategy and data strategy. This
alignment in turn informs data culture and governance decisions.
Each individual article in this series discusses key topics associated with the items in the
diagram. Considerations and potential action items are provided. Each article concludes
with a set of maturity levels to help you assess your current state so you can decide
what action to take next.
Important
Whenever possible, adoption efforts should be aligned across analytics platforms and BI
services.
Note
The remaining articles in this Power BI adoption series discuss the following aspects of
adoption.
Important
You might be wondering how this Fabric adoption roadmap is different from the
Power BI adoption framework. The adoption framework was created primarily to
support Microsoft partners. It's a lightweight set of resources to help partners
deploy Power BI solutions for their customers.
This adoption series is more current. It's intended to guide any person or
organization that is using—or considering using—Fabric. If you're seeking to
improve your existing Power BI or Fabric implementation, or planning a new Power
BI or Fabric implementation, this adoption roadmap is a great place to start.
Target audience
This series of articles is intended for readers who are interested in one or more
of the following outcomes.
Improving their organization's ability to effectively use analytics.
Increasing their organization's maturity level related to the delivery of analytics.
Understanding and overcoming adoption-related challenges faced when scaling
and growing.
Increasing their organization's return on investment (ROI) in data and analytics.
This series of articles will be most helpful to those who work in an organization with one
or more of the following characteristics.
To fully benefit from the information provided in these articles, you should have an
understanding of Power BI foundational concepts and Fabric foundational concepts.
Next steps
In the next article in this series, learn about the Fabric adoption maturity levels. The
maturity levels are referenced throughout the entire series of articles. Also, see the
conclusion article for additional adoption-related resources.
Experienced partners are available to help your organization succeed with adoption
initiatives. To engage with a partner, visit the Power BI partner portal.
Acknowledgments
The Microsoft Fabric adoption roadmap articles are written by Melissa Coates, Kurt
Buhler, and Peter Myers. Matthew Roche, from the Fabric Customer Advisory Team,
provides strategic guidance and feedback to the subject matter experts. Reviewers
include Cory Moore, James Ward, Timothy Bindas, Greg Moir, and Chuy Varela.
Microsoft Fabric adoption roadmap
maturity levels
Article • 11/14/2023
Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
Type Description
User adoption: User adoption is the extent to which consumers and creators continually
increase their knowledge. It's concerned with whether they're actively using analytics tools,
and whether they're using them in the most effective way.
Solution adoption: Solution adoption refers to the impact and business value achieved for
individual requirements and analytical solutions.
As the four arrows in the previous diagram indicate, the three types of adoption
are all strongly interrelated.
The remainder of this article introduces the three types of adoption in more detail.
It's helpful to think about organizational adoption from the perspective of a maturity
model. For consistency with the Power CAT adoption maturity model and the maturity
model for Microsoft 365, this Microsoft Fabric adoption roadmap aligns with the five
levels from the Capability Maturity Model, which were later enhanced by the Data
Management Maturity (DMM) model from ISACA (note that the DMM was a paid
resource that has since been retired).
Every organization has limited time, funding, and people, so it must be selective
about where it prioritizes its efforts. To get the most from your investment in
analytics, seek to attain at least maturity level 300 or 400, as discussed below. It's
common that different business units in the organization evolve and mature at different
rates, so be conscious of the organizational state as well as progress for key business
units.
Note
Pockets of success and experimentation with Fabric exist in one or more areas of
the organization.
Achieving quick wins has been a priority, and solutions have been delivered with
some success.
Organic growth has led to the lack of a coordinated strategy or governance
approach.
Practices are undocumented, with significant reliance on tribal knowledge.
There are few formal processes in place for effective data management.
Risk exists due to a lack of awareness of how data is used throughout the
organization.
The potential for a strategic investment with analytics is acknowledged. However,
there's no clear path forward for purposeful, organization-wide execution.
Certain analytics content is now critical in importance and/or it's broadly used by
the organization.
There are attempts to document and define repeatable practices. These efforts are
siloed, reactive, and deliver varying levels of success.
There's an over-reliance on individuals having good judgment and adopting
healthy habits that they learned on their own.
Analytics adoption continues to grow organically and produces value. However, it
takes place in an uncontrolled way.
Resources for an internal community are established, such as a Teams channel or
Yammer group.
Initial planning for a consistent analytics governance strategy is underway.
There's recognition that a Center of Excellence (COE) can deliver value.
Note
The characteristics above are generalized. When considering maturity levels and
designing a plan, you'll want to consider each topic or goal independently. In
reality, it's probably not possible to reach maturity level 500 for every aspect
of Fabric adoption for the entire organization. So, assess maturity levels
independently per goal. That way, you can prioritize your efforts where they will
deliver the most value. The remainder of the articles in this Fabric adoption series
present maturity levels on a per-topic basis.
Note
User adoption encompasses how consumers view content, as well as how self-service
creators generate content for others to consume.
User adoption occurs on an individual user basis, but it's measured and analyzed in the
aggregate. Individual users progress through the four stages of user adoption at their
own pace. An individual who adopts a new technology will take some time to achieve
proficiency. Some users will be eager; others will be reluctant to learn yet another tool,
regardless of the promised productivity improvements. Advancing through the user
adoption stages involves time and effort, and it involves behavioral changes to become
aligned with organizational adoption objectives. The extent to which the organization
supports users advancing through the user adoption stages has a direct correlation to
the organizational-level adoption maturity.
An individual has heard of, or been initially exposed to, analytics in some way.
An individual might have access to a tool, such as Fabric, but isn't yet actively using
it.
It's easy to underestimate the effort it takes to progress from stage 2 (understanding) to
stage 4 (proficiency). Typically, it takes the longest time to progress from stage 3
(momentum) to stage 4 (proficiency).
Important
By the time a user reaches the momentum and proficiency stages, the organization
needs to be ready to support them in their efforts. You can consider some proactive
efforts to encourage users to progress through stages. For more information, see
the community of practice and the user support articles.
Note
Tip
Exploration and experimentation are the main approaches to testing out new
ideas. Exploration of new ideas can occur through informal self-service efforts, or
through a formal proof of concept (POC), which is purposely narrow in scope. The
goal is to confirm requirements, validate assumptions, address unknowns, and
mitigate risks.
A small group of users test the proof of concept solution and provide useful
feedback.
For simplicity, all exploration—and initial feedback—could occur within local user
tools (such as Power BI Desktop or Excel) or within a single Fabric workspace.
Target users find the solution to be valuable and experience tangible benefits.
The solution is promoted to a production workspace that's managed, secured, and
audited.
Validations and testing occur to ensure data quality, accurate presentation,
accessibility, and acceptable performance.
Content is endorsed, when appropriate.
Usage metrics for the solution are actively monitored.
User feedback loops are in place to facilitate suggestions and improvements that
can contribute to future releases.
Solution documentation is generated to support the needs of information
consumers (such as the data sources used or how metrics are calculated). The
documentation also helps future content creators, for example, by documenting
any future maintenance or planned enhancements.
Ownership and subject matter experts for the content are clear.
Report branding and theming are in place and in line with governance guidelines.
Target users actively and routinely use the solution, and it's considered essential
for decision-making purposes.
The solution resides in a production workspace well separated from development
and test content. Change management and release management are carefully
controlled due to the impact of changes.
A subset of users regularly provides feedback to ensure the solution continues to
meet evolving requirements.
Expectations for the success of the solution are clear and are measured.
Expectations for support of the solution are clear, especially if there are service
level agreements.
The solution aligns with organizational governance guidelines and practices.
Most content is certified due to its critical nature.
Formal user acceptance testing for new changes might occur, particularly for IT-
managed content.
Next steps
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
organizational data culture and its impact on adoption efforts.
Microsoft Fabric adoption roadmap:
Data culture
Article • 11/14/2023
Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
Building a data culture is closely related to adopting analytics, and it's often a key aspect
of an organization's digital transformation. The term data culture can be defined in
different ways by different organizations. In this series of articles, data culture
means a set of behaviors and norms in an organization that encourages regular,
informed, data-driven decision-making.
Important
Think of data culture as what you do, not what you say. Your data culture is not a
set of rules (that's governance). So, data culture is a somewhat abstract concept. It's
the behaviors and norms that are allowed, rewarded, and encouraged—or those
that are disallowed and discouraged. Bear in mind that a healthy data culture
motivates employees at all levels of the organization to generate and distribute
actionable knowledge.
Within an organization, certain business units or teams are likely to have their own
behaviors and norms for getting things done. The specific ways to achieve data culture
objectives can vary across organizational boundaries. What's important is that they
should all align with the organizational data culture objectives. You can think of this
structure as aligned autonomy.
The following circular diagram conveys the interrelated aspects that influence your data
culture:
The diagram depicts the somewhat ambiguous relationships among the following items:
Data culture is the outer circle. All topics within it contribute to the state of the
data culture.
Organizational adoption (including the implementation aspects of mentoring and
user enablement, user support, community of practice, governance, and system
oversight) is the inner circle. All topics are major contributors to the data culture.
Executive support and the Center of Excellence are drivers for the success of
organizational adoption.
Data literacy, data democratization, and data discovery are data culture aspects
that are heavily influenced by organizational adoption.
Content ownership and management, and content delivery scope, are closely
related to data democratization.
The elements of the diagram are discussed throughout this series of articles.
Data culture outcomes aren't specifically mandated. Rather, the state of the data culture
is the result of following the governance rules as they're enforced (or the lack of
governance rules). Leaders at all levels need to actively demonstrate through their
actions what's important to them, including how they praise, recognize, and reward staff
members who take initiative.
Tip
If you can take for granted that your efforts to develop a data solution (such as
a semantic model, previously known as a dataset; a lakehouse; or a report) will be
valued and appreciated, that's an excellent indicator of a healthy data culture.
Sometimes, however, it depends on what your immediate manager values most.
The initial motivation for establishing a data culture often comes from a specific
strategic business problem or initiative. It might be:
Although technology can help advance the goals of a data culture, implementing
specific tools or features isn't the objective. This series of articles covers a lot of topics
that contribute to adoption of a healthy data culture. The remainder of this article
addresses three essential aspects of data culture: data discovery, data democratization,
and data literacy.
Data discovery
A successful data culture depends on users working with the right data in their day-to-
day activities. To achieve this goal, users need to find and access data sources, reports,
and other items.
Data discovery is the ability to effectively locate relevant data assets across the
organization. Primarily, data discovery is concerned with improving awareness that data
exists, which can be particularly challenging when data is siloed in departmental
systems.
Data discovery allows users to see metadata for an item, like the name of a
semantic model, even if they don't currently have access to it. After a user is aware
of its existence, that user can go through the standard process to request access to
the item.
Search allows users to locate an existing item when they already have security
access to the item.
Tip
It's important to have a clear and simple process so users can request access to
data. Knowing that data exists—but being unable to access it within the guidelines
and processes that the domain owner has established—can be a source of
frustration for users. It can force them to use inefficient workarounds instead of
requesting access through the proper channels.
The OneLake data hub and the use of endorsements are key ways to promote data
discovery in your organization.
Furthermore, data catalog solutions are extremely valuable tools for data discovery.
They can record metadata tags and descriptions to provide deeper context and
meaning. For example, Microsoft Purview can scan and catalog items from a Fabric
tenant (as well as many other sources).
Is there a data hub where business users can search for data?
Is there a metadata catalog that describes definitions and data locations?
Are high-quality data sources endorsed by certifying or promoting them?
To what extent do redundant data sources exist because people can't find the data
they need? What roles are expected to create data items? What roles are expected
to create reports or perform ad hoc analysis?
Can end users find and use existing reports, or do they insist on data exports to
create their own?
Do end users know which reports to use to address specific business questions or
find specific data?
Are people using the appropriate data sources and tools, or resisting them in favor
of legacy ones?
Do analysts understand how to enrich existing certified semantic models with new
data—for example, by using a Power BI composite model?
How consistent are data items in their quality, completeness, and naming
conventions?
Can data item owners follow data lineage to perform impact analysis of data
items?
The following maturity levels can help you assess your current state of data discovery.
100: Initial • Data is fragmented and disorganized, with no clear structures or processes to
find it.
• Users struggle to find and use data they need for their tasks.
200: Repeatable • Scattered or organic efforts to organize and document data are
underway, but only in certain teams or departments.
300: Defined • A central repository, like the OneLake data hub, is used to make data easier to
find for people who need it.
400: Capable • Structured, consistent processes guide users how to endorse, document, and
find data from a central hub. Data silos are the exception instead of the rule.
500: Efficient • Data and metadata is systematically organized and documented with a full view
of the data lineage.
• Cataloging tools, like Microsoft Purview, are used to make data discoverable for
both use and governance.
Data democratization
Data democratization refers to putting data into the hands of more users who are
responsible for solving business problems. It's about enabling more users to make
better data-driven decisions.
The following maturity levels can help you assess your current state of data
democratization.
100: Initial
• Data and analytics are limited to a small number of roles, who gatekeep access for others.
• Business users must request access to data or tools to complete tasks. They struggle with delays or bottlenecks.
• Self-service initiatives are taking place with some success in various areas of the organization. These activities occur in a somewhat chaotic manner, with few formal processes and no strategic plan. There's a lack of oversight and visibility into these self-service activities, and the success or failure of each solution isn't well understood.
• The enterprise data team can't keep up with the needs of the business. A significant backlog of requests exists for this team.
200: Repeatable
• There are limited efforts underway to expand access to data and tools.
• Multiple teams have had measurable success with self-service solutions. People in the organization are starting to pay attention.
• Investments are being made to identify the ideal balance of enterprise and self-service solutions.
300: Defined
• Many people have access to the data and tools they need, although not all users are equally enabled or held accountable for the content they create.
400: Capable
• Healthy partnerships exist among enterprise and self-service solution creators. Clear, realistic user accountability and policies mitigate the risks of self-service analytics and BI.
• Clear and consistent processes are in place for users to request access to data and tools.
• Individuals who take the initiative in building valuable solutions are recognized and rewarded.
500: Efficient
• User accountability and effective governance give central teams confidence in what users do with data.
Data literacy
Data literacy refers to the ability to interpret, create, and communicate with data and
analytics accurately and effectively.
Training efforts, as described in the mentoring and user enablement article, often focus
on how to use the technology itself. Technology skills are important to producing high-
quality solutions, but it's also important to consider how to purposely advance data
literacy throughout the organization. Put another way, successful adoption takes a lot
more than merely providing software and licenses to users.
How you go about improving data literacy in your organization depends on many
factors, such as current user skillsets, complexity of the data, and the types of analytics
that are required. You might choose to focus on these types of activities related to data
literacy:
Tip
Getting the right stakeholders to agree on the problem is usually the first step.
Then, it's a matter of getting the stakeholders to agree on the strategic approach to
a solution, along with the solution details.
Does a common analytical vocabulary exist in the organization to talk about data
and BI solutions? Alternatively, are definitions fragmented and different across
silos?
How comfortable are people with making decisions based on data and evidence
compared to intuition and subjective experience?
When people who hold an opinion are confronted with conflicting evidence, how
do they react? Do they critically appraise the data, or do they dismiss it? Can they
alter their opinion, or do they become entrenched and resistant?
Do training programs exist to support people in learning about data and analytical
tools?
Is there significant resistance to visual analytics and interactive reporting in favor of
static spreadsheets?
Are people open to new analytical methods and tools to potentially address their
business questions more effectively? Alternatively, do they prefer to keep using
existing methods and tools to save time and energy?
Are there methods or programs to assess or improve data literacy in the
organization? Does leadership have an accurate understanding of the data literacy
levels?
Are there roles, teams, or departments where data literacy is particularly strong or
weak?
The following maturity levels can help you assess your current state of data literacy.
100: Initial
• Decisions are frequently made based on intuition and subjective experience. When confronted with data that challenges existing opinions, the data is often dismissed.
• Report consumers have a strong preference for static tables. These consumers dismiss interactive visualizations or sophisticated analytical methods as "fancy" or unnecessary.
200: Repeatable
• Some teams and individuals inconsistently incorporate data into their decision making. There are clear cases where misinterpretation of data has led to flawed decisions or wrong conclusions.
300: Defined
• The majority of teams and individuals understand data relevant to their business area and use it implicitly to inform decisions.
• Visualizations and advanced analytics are more widely accepted, though not always used effectively.
400: Capable
• Data literacy is recognized explicitly as a necessary skill in the organization. Some training programs address data literacy. Specific efforts are taken to help departments, teams, or individuals that have particularly weak data literacy.
• Most individuals can effectively use and apply data to make objectively better decisions and take actions.
• Visual and analytical best practices are documented and followed in strategically important data solutions.
500: Efficient
• Data literacy, critical thinking, and continuous learning are strategic skills and values in the organization. Effective programs monitor progress toward improving data literacy in the organization.
• Visual and analytical best practices are seen as essential to generating business value with data.
Checklist - Here are some considerations and key actions that you can take to
strengthen your data culture.
• Align your data culture goals and strategy: Give serious consideration to the type of data culture that you want to cultivate. Ideally, it's more from a position of user empowerment than a position of command and control.
• Understand your current state: Talk to stakeholders in different business units to understand which analytics practices are currently working well and which practices aren't working well for data-driven decision-making. Conduct a series of workshops to understand the current state and to formulate the desired future state.
• Speak with stakeholders: Talk to stakeholders in IT, BI, and the COE to understand which governance constraints need consideration. These conversations can present an opportunity to educate teams on topics like security and infrastructure. You can also use the opportunity to educate stakeholders on the features and capabilities included in Fabric.
• Verify executive sponsorship: Verify the level of executive sponsorship and support that you have in place to advance data culture goals.
• Make purposeful decisions about your data strategy: Decide what the ideal balance of business-led self-service, managed self-service, and enterprise data, analytics, and BI use cases should be for the key business units in the organization (covered in the content ownership and management article). Also consider how the data strategy relates to the extent of published content for personal, team, departmental, and enterprise analytics and BI (described in the content delivery scope article). Define your high-level goals and priorities for this strategic planning. Determine how these decisions affect your tactical planning.
• Create a tactical plan: Begin creating a tactical plan for immediate, short-term, and long-term action items. Identify business groups and problems that represent "quick wins" and can make a visible difference.
• Create goals and metrics: Determine how you'll measure effectiveness for your data culture initiatives. Create key performance indicators (KPIs) or objectives and key results (OKRs) to validate the results of your efforts.
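To make the KPI idea above concrete, here's a minimal, hypothetical sketch that computes monthly active users of analytics content from a list of usage events. The event structure, names, and the choice of metric are illustrative assumptions, not something the roadmap prescribes; in practice, the inputs would come from your tenant's auditing or usage data.

```python
from collections import defaultdict
from datetime import date

# Hypothetical usage events: (user, date of a report view).
# Real data would come from your tenant's audit or activity exports.
events = [
    ("alice", date(2024, 1, 3)),
    ("alice", date(2024, 1, 15)),
    ("bob",   date(2024, 1, 20)),
    ("alice", date(2024, 2, 2)),
    ("carol", date(2024, 2, 9)),
]

def monthly_active_users(events):
    """Count distinct users per (year, month) as a simple adoption KPI."""
    users_by_month = defaultdict(set)
    for user, day in events:
        users_by_month[(day.year, day.month)].add(user)
    return {month: len(users) for month, users in sorted(users_by_month.items())}

print(monthly_active_users(events))  # {(2024, 1): 2, (2024, 2): 2}
```

A metric like this is only a starting point; pairing it with a target (an OKR key result such as "grow monthly active users 20% this quarter") is what makes it useful for validating data culture initiatives.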
The following maturity levels will help you assess the current state of your data culture.
100: Initial
• Enterprise data teams can't keep up with the needs of the business. A significant backlog of requests exists.
• Self-service data and BI initiatives are taking place with some success in various areas of the organization. These activities occur in a somewhat chaotic manner, with few formal processes and no strategic plan.
200: Repeatable
• Multiple teams have had measurable successes with self-service solutions. People in the organization are starting to pay attention.
• Investments are being made to identify the ideal balance of enterprise and self-service data, analytics, and BI.
300: Defined
• Specific goals are established for advancing the data culture. These goals are implemented incrementally.
400: Capable
• The data culture goals to employ informed decision-making are aligned with organizational objectives. They're actively supported by the executive sponsor and the COE, and they have a direct impact on adoption strategies.
• A healthy and productive partnership exists between the executive sponsor, COE, business units, and IT. The teams are working towards shared goals.
• Individuals who take the initiative in building valuable data solutions are recognized and rewarded.
500: Efficient
• The business value of data, analytics, and BI solutions is regularly evaluated and measured. KPIs or OKRs are used to track data culture goals and the results of these efforts.
• Feedback loops are in place, and they encourage ongoing data culture improvements.
Next steps
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
importance of an executive sponsor.
Microsoft Fabric adoption roadmap:
Executive sponsorship
Article • 11/14/2023
7 Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
When planning to advance the data culture and the state of organizational adoption for data and analytics, it's crucial to have executive support. An executive sponsor is imperative because analytics adoption is far more than just a technology project. The responsibilities of the executive sponsor can include:
Formulating a strategic vision, goals, and priorities for data, analytics, and business intelligence (BI).
Providing top-down guidance and reinforcement for the data strategy by regularly
promoting, motivating, and investing in strategic and tactical planning.
Leading by example by actively using data and analytics in a way that's consistent
with data culture and adoption goals.
Allocating staffing and prioritizing resources.
Approving funding (for example, Fabric licenses).
Removing barriers to enable action.
Communicating announcements that are of critical importance, to help them gain
traction.
Decision-making, particularly for strategic-level governance decisions.
Dispute resolution (for escalated issues that can't be resolved by operational or
tactical personnel).
Supporting organizational change initiatives (for example, creating or expanding
the Center of Excellence).
Important
The ideal executive sponsor has sufficient credibility, influence, and authority
throughout the organization. They also have an invested stake in data efforts and
the data strategy. When the BI strategy is successful, the ideal executive sponsor
also experiences success in their role.
Top-down pattern
An executive sponsor might be selected by a more senior executive. For example, the
Chief Executive Officer (CEO) could hire a Chief Data Officer (CDO) or Chief Analytics
Officer (CAO) to explicitly advance the organization's data culture objectives or lead
digital transformation efforts. The CDO or CAO then becomes the ideal candidate to
serve as the executive sponsor for Fabric (or for data and analytics in general).
Here's another example: The CEO might empower an existing executive, such as the
Chief Financial Officer (CFO), because they have a good track record leading data and
analytics in their organization. As the new executive sponsor, the CFO could then lead
efforts to replicate the finance team's success to other areas of the organization.
Bottom-up pattern
Alternatively, a candidate for the executive sponsor role could emerge due to the
success they've experienced with creating data solutions. For example, a business unit
within the organization, such as Finance, has organically achieved great success with
their use of data and analytics. Essentially, they've successfully formed their own data
culture on a smaller scale. A junior-level leader who hasn't reached the executive level
(such as a director) might then grow into the executive sponsor role by sharing
successes with other business units across the organization.
With a bottom-up approach, the sponsor might be able to make some progress, but
they won't have formal authority over other business units. Without clear authority, it's
only a matter of time until challenges occur that are beyond their level of authority. For
this reason, the top-down approach has a higher probability of success. However, initial
successes with a bottom-up approach can convince leadership to increase their level of
sponsorship, which might start a healthy competition across other business units in the
adoption of data and BI.
Checklist - Here's a list of considerations and key actions you can take to establish or
strengthen executive support for analytics.
Questions to ask
Use questions like those found below to assess executive support.
Maturity levels
The following maturity levels will help you assess your current state of executive
support.
100: Initial
• There might be awareness from at least one executive about the strategic importance of how analytics can advance the organization's data culture goals. However, neither a sponsor nor an executive-level decision-maker is identified.
200: Repeatable
• Informal executive support exists for analytics through informal channels and relationships.
300: Defined
• An executive sponsor is identified. Expectations are clear for the role.
400: Capable
• An executive sponsor is well established, with sufficient authority across organizational boundaries.
• A healthy and productive partnership exists between the executive sponsor, COE, business units, and IT. The teams are working towards shared data culture goals.
500: Efficient
• The executive sponsor is highly engaged. They're a key driver for advancing the organization's data culture vision.
Next steps
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
importance of business alignment with organizational goals.
Microsoft Fabric adoption roadmap:
Business alignment
Article • 11/14/2023
7 Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
Business intelligence (BI) activities and solutions have the best potential to deliver value
when they're well aligned to organizational business goals. In general, effective business
alignment helps to improve adoption. With effective business alignment, the data
culture and data strategy enable business users to achieve their business objectives.
Effective business alignment of data activities and solutions results in:
Improved adoption, because content consumers are more likely to use solutions
that enable them to achieve their objectives.
Increased business return on investment (ROI) for analytics initiatives and
solutions, because these initiatives and solutions will be more likely to directly
advance progress toward business goals.
Less effort and fewer resources spent on change management and changing
business requirements, due to an improved understanding of business data needs.
Communication alignment
Effective and consistent communication is critical to aligning processes. Consider the
following actions and activities when you want to improve communication for successful
business alignment.
Make a communication plan for central teams and the user community to follow.
Plan regular alignment meetings between different teams and groups. For
example, central teams can plan regular planning and priority alignments with
business units. Another example is when central teams schedule regular meetings
to mentor and enable self-service users.
Set up a centralized portal to consolidate communication and documentation for
user communities. For strategic solutions and initiatives, consider using a
communication hub.
Limit complex business and technical terminology in cross-functional
communications.
Strive for concise communication and documentation that's formatted and well
organized. That way, people can easily find the information that they need.
Consider maintaining a visible roadmap that shows the planned solutions and
activities relevant to the user community in the next quarter.
Be transparent when communicating policies, decisions, and changes.
Create a process for people to provide feedback, and review that feedback
regularly as part of regular planning activities.
Important
To achieve effective business alignment, you should make it a priority to identify
and dismantle any communication barriers between business teams and technical
teams.
Strategic alignment
Your business strategy should be well aligned with your data and BI strategy. To
incrementally achieve this alignment, we recommend that you commit to following
structured, iterative planning processes.
Strategic planning: Define data, analytics, and BI goals and priorities based on the
business strategy and current state of adoption and implementation. Typically,
strategic planning occurs every 12-18 months to iteratively define high-level
desired outcomes. You should synchronize strategic planning with key business
planning processes.
Tactical planning: Define objectives, action plans, and a backlog of solutions that
help you to achieve your data and BI goals. Typically, tactical planning occurs
quarterly to iteratively re-evaluate and align the data strategy and activities to the
business strategy. This alignment is informed by business feedback and changes to
business objectives or technology. You should synchronize tactical planning with
key project planning processes.
Solution planning: Design, develop, test, and deploy solutions that support
content creators and consumers in achieving their business objectives. Both
centralized content creators and self-service content creators conduct solution
planning to ensure that the solutions they create are well aligned with business
objectives. You should synchronize solution planning with key adoption and
governance planning processes.
Caution
A governance strategy that's poorly aligned with business objectives can result in
more conflicts and compliance risk, because users will often pursue workarounds to
complete their tasks.
Executive alignment
Executive leadership plays a key role in defining the business strategy and business
goals. To this end, executive engagement is an important part of achieving top-down
business alignment.
To achieve executive alignment, consider the following key considerations and activities.
Work with your executive sponsor to organize short, quarterly executive feedback
sessions about the use of data in the organization. Use this feedback to identify
changes in business objectives, re-assess the data strategy, and inform future
actions to improve business alignment.
Schedule regular alignment meetings with the executive sponsor to promptly
identify any potential changes in the business strategy or data needs.
Deliver monthly executive summaries that highlight relevant information,
including:
Key performance indicators (KPIs) that measure progress toward data, analytics,
and BI goals.
Fabric adoption and implementation milestones.
Technology changes that might impact organizational business goals.
Important
Don't underestimate the importance of the role your executive sponsor has in
achieving and maintaining effective business alignment.
Assign a responsible team: A working team reviews feedback and organizes re-
alignment sessions. This team is responsible for the alignment of planning and
priorities between the business and data strategy.
Create and support a feedback process: Your user community requires the means
to provide feedback. Examples of feedback can include requests to change existing
solutions, or to create new solutions and initiatives. This feedback is essential for
bottom-up business user alignment, and it drives iterative and continuous
improvement cycles.
Measure the success of business alignment: Consider using surveys, sentiment
analysis, and usage metrics to assess the success of business alignment. When
combined with other concise feedback mechanisms, this can provide valuable
input to help define future actions and activities to improve business alignment
and Fabric adoption.
Schedule regular re-alignment sessions: Ensure that data strategic planning and tactical planning occur alongside relevant business strategy planning (when business leadership reviews business goals and objectives).
Questions to ask
Use questions like those found below to assess business alignment.
Can people articulate the goals of the organization and the business objectives of
their team?
To what extent do descriptions of organizational goals align across the
organization? How do they align between the business user community and
leadership community? How do they align between business teams and technical
teams?
Does executive leadership understand the strategic importance of data in
achieving business objectives? Does the user community understand the strategic
importance of data in helping them succeed in their jobs?
Are changes in the business strategy reflected promptly in changes to the data
strategy?
Are changes in business user data needs addressed promptly in data and BI
solutions?
To what extent do data policies support or conflict with existing business processes
and the way that users work?
Do solution requirements focus more on technical features than addressing
business questions? Is there a structured requirements gathering process? Do
content owners and creators interact effectively with stakeholders and content
consumers during requirements gathering?
How are decisions about data or BI investments made? Who makes these
decisions?
How well do people trust existing data and BI solutions? Is there a single version of
truth, or are there regular debates about who has the correct version?
How are data and BI initiatives and strategy communicated across the
organization?
Maturity levels
A business alignment assessment evaluates integration between the business strategy and the data strategy. Specifically, this assessment focuses on whether data and BI initiatives and solutions help business users achieve strategic business objectives.
The following maturity levels will help you assess your current state of business
alignment.
100: Initial
• Business and data strategies lack formal alignment, which leads to reactive implementation and misalignment between data teams and business users.
200: Repeatable
• There are efforts to align data and BI initiatives with specific data needs, without a consistent approach or understanding of their success.
300: Defined
• Data and BI initiatives are prioritized based on their alignment with strategic business objectives. However, alignment is siloed and typically focuses on local needs.
• Strategic initiatives and changes have clear, structured involvement of both the business and data strategic decision makers. Business teams and technical teams can have productive discussions to meet business and governance needs.
400: Capable
• There's a consistent, organization-wide view of how data initiatives and solutions support business objectives.
• Regular and iterative strategic alignments occur between the business and technical teams. Changes to the business strategy result in clear actions that are reflected by changes to the data strategy to better support business needs.
500: Efficient
• The data strategy and the business strategy are fully integrated. Continuous improvement processes drive consistent alignment, and they are themselves data driven.
7 Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
There are three primary strategies for how data, analytics, and business intelligence (BI)
content is owned and managed: business-led self-service, managed self-service, and
enterprise. For the purposes of this series of articles, the term content refers to any type of data item, like a notebook, semantic model (previously known as a dataset), report, or dashboard.
The organization's data culture is the driver for why, how, and by whom each of these
three content ownership strategies is implemented.
Business-led self-service: All content is owned and managed by the creators and subject matter experts within a business unit. This ownership strategy is also known as a bottom-up, or decentralized, approach.
Managed self-service: The data is owned and managed by a centralized team, whereas business users take responsibility for reports and dashboards. This ownership strategy is also known as discipline at the core and flexibility at the edge.
Enterprise: All content is owned and managed by a centralized team, such as IT, enterprise BI, or the Center of Excellence (COE).
It's unlikely that an organization operates exclusively with one content ownership and
management strategy. Depending on your data culture, one strategy might be far more
dominant than the others. The choice of strategy could differ from solution to solution,
or from team to team. In fact, a single team can actively use multiple strategies if it's
both a consumer of enterprise content and a producer of its own self-service content.
The strategy to pursue depends on factors such as:
How content is owned and managed has a significant effect on governance, the extent
of mentoring and user enablement, needs for user support, and the COE operating
model.
As discussed in the governance article, the level of governance and oversight depends
on:
As stated in the adoption maturity levels article, organizational adoption measures the
state of data management processes and governance. The choices made for content
ownership and management significantly affect how organizational adoption is
achieved.
Data steward: Responsible for defining and/or managing acceptable data quality levels, as well as master data management (MDM).
Subject matter expert (SME): Responsible for defining what the data means, what it's used for, who might access it, and how the data is presented to others. Collaborates with the domain owner as needed and supports colleagues in their use of data.
Technical owner: Responsible for creating, maintaining, publishing, and securing access to data and reporting items.
Note
Be clear about who is responsible for managing data items. It's crucial to ensure a
good experience for content consumers. Specifically, clarity on ownership is helpful
for:
In the Fabric portal, content owners can set the contact list property for many
types of items. The contact list is also used in security workflows. For example,
when a user is sent a URL to open a Power BI app but they don't have permission,
they will be presented with an option to make a request for access.
The remainder of this article covers considerations related to the three content
ownership and management strategies.
Business-led self-service
With a business-led self-service approach to data and BI, all content is owned and
managed by creators and subject matter experts. Because responsibility is retained
within a business unit, this strategy is often described as the bottom-up, or decentralized,
approach. Business-led self-service is often a good strategy for personal BI and team BI
solutions.
Important
The concept of business-led self-service isn't the same as shadow IT. In both
scenarios, data and BI content is created, owned, and managed by business users.
However, shadow IT implies that the business unit is circumventing IT and so the
solution is not sanctioned. With business-led self-service BI solutions, the business
unit has full authority to create and manage content. Resources and support from
the COE are available to self-service content creators. It's also expected that the
business unit will comply with all established data governance guidelines and
policies.
A business-led self-service strategy tends to be a good fit when:
Decentralized data management aligns with the organization's data culture, and the organization is prepared to support these efforts.
Data exploration and freedom to innovate is a high priority.
The business unit wants to have the most involvement and retain the highest level
of control.
The business unit has skilled users who are capable of, and fully committed to, supporting solutions through the entire lifecycle. This commitment covers all types of items, including the data (such as a lakehouse, data warehouse, data pipeline, dataflow, or semantic model), the visuals (such as reports and dashboards), and Power BI apps.
The flexibility to respond to changing business conditions and react quickly
outweighs the need for stricter governance and oversight.
Here are some guidelines to help you become successful with business-led self-service data and BI.
Teach your creators to use the same techniques that IT would use, like shared
semantic models and dataflows. Make use of a well-organized OneLake. Centralize
data to reduce maintenance, improve consistency, and reduce risk.
Focus on providing mentoring, training, resources, and documentation (described
in the Mentoring and user enablement article). The importance of these efforts
can't be overstated. Be prepared for skill levels of self-service content creators to
vary significantly. It's also common for a solution to deliver excellent business value
yet be built in such a way that it won't scale or perform well over time (as historic
data volumes increase). Having the COE available to help when these situations
arise is very valuable.
Provide guidance on the best way to use endorsements. The promoted
endorsement is for content produced by self-service creators. Consider reserving
use of the certified endorsement for enterprise BI content and managed self-
service BI content (described next).
Analyze the activity log to discover situations where the COE could proactively
contact self-service owners to offer helpful information. It's especially useful when
a suboptimal usage pattern is detected. For example, log activity could reveal
overuse of individual item sharing when Power BI app audiences or workspace
roles might be a better choice. The data from the activity log allows the COE to
offer support and advice to the business units. In turn, this information can help
increase the quality of solutions, while allowing the business to retain full
ownership and control of their content. For more information, see Auditing and
monitoring.
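As a hypothetical sketch of what such proactive analysis could look like, the snippet below scans a list of exported activity events for heavy per-item sharing, the pattern described above where a Power BI app audience or workspace role might serve the owner better. The event field names, values, and the threshold are illustrative assumptions; a real export contains many more fields, and the exact schema depends on how you retrieve the log.

```python
from collections import Counter

# Hypothetical exported activity events; real exports have many more fields.
activity_events = [
    {"user": "dana@contoso.com", "activity": "ShareReport", "item": "Sales Summary"},
    {"user": "dana@contoso.com", "activity": "ShareReport", "item": "Sales Summary"},
    {"user": "dana@contoso.com", "activity": "ShareReport", "item": "Sales Summary"},
    {"user": "evan@contoso.com", "activity": "ViewReport",  "item": "Sales Summary"},
]

def flag_heavy_sharing(events, threshold=3):
    """Return (user, item) pairs whose individual share count meets the threshold.

    Heavy per-item sharing can signal that a workspace role or app audience
    would be a better fit than repeated individual item shares.
    """
    shares = Counter(
        (e["user"], e["item"]) for e in events if e["activity"] == "ShareReport"
    )
    return [pair for pair, count in shares.items() if count >= threshold]

print(flag_heavy_sharing(activity_events))
# [('dana@contoso.com', 'Sales Summary')]
```

The output is a contact list, not a compliance report: the point is to give the COE a starting point for a supportive conversation with the content owner, consistent with the goal of helping the business retain ownership of its content.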
Managed self-service
Managed self-service BI is a blended approach to data and BI. The data is owned and
managed by a centralized team (such as IT, enterprise BI, or the COE), while
responsibility for reports and dashboards belongs to creators and subject matter experts
within the business units. Managed self-service BI is frequently a good strategy for team
BI and departmental BI solutions.
This approach is often called discipline at the core and flexibility at the edge. That's because the data architecture is maintained by a single team with an appropriate level of discipline and rigor, while business units have the flexibility to create reports and dashboards based on centralized data. This approach allows report creators to be far more efficient because they can remain focused on delivering value from their data analysis and visuals.
Here are some guidelines to help you become successful with managed self-service BI.
Teach users to separate model and report development. They can use live
connections to create reports based on existing semantic models. When the
semantic model is decoupled from the report, it promotes data reuse by many
reports and many authors. It also facilitates the separation of duties.
Use dataflows to centralize data preparation logic and to share commonly used
data tables—like date, customer, product, or sales—with many semantic model
creators. Refine the dataflow as much as possible, using friendly column names
and correct data types to reduce the downstream effort required by semantic
model authors, who consume the dataflow as a source. Dataflows are an effective
way to reduce the time involved with data preparation and improve data
consistency across semantic models. The use of dataflows also reduces the number
of data refreshes on source systems and reduces the number of users who require
direct access to source systems.
When self-service creators need to augment an existing semantic model with
departmental data, educate them to create composite models. This feature allows
for an ideal balance of self-service enablement while taking advantage of the
investment in data assets that are centrally managed.
Use the certified endorsement for semantic models and dataflows to help content
creators identify trustworthy sources of data.
Include consistent branding on all reports to indicate who produced the content
and who to contact for help. Branding is particularly helpful to distinguish content
that is produced by self-service creators. A small image or text label in the report
footer is valuable when the report is exported from the Fabric portal.
Consider implementing separate workspaces for storing data and reports. This
approach allows for better clarity on who is responsible for content. It also allows
for more restrictive workspace role assignments. That way, report creators can
only publish content to their reporting workspace, while read and build semantic
model permissions allow them to create new reports with row-level security
(RLS) in effect, when applicable. For more information, see Workspace-level
planning. For more information about RLS, see Content creator security planning.
Use the Power BI REST APIs to compile an inventory of Power BI items. Analyze the
ratio of semantic models to reports to evaluate the extent of semantic model
reuse.
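The inventory analysis above can be sketched as follows. Treat this as a hedged illustration: the endpoints shown (`GET /admin/datasets` and `GET /admin/reports`) come from the Power BI admin REST API, but acquiring the admin access token is omitted, and the ratio helper is exercised here with sample data only.

```python
# Sketch: estimate semantic model reuse from a tenant inventory.
# Assumes an admin access token is available; endpoint paths follow the
# Power BI admin REST API. The sample data at the bottom is illustrative.
import json
import urllib.request

BASE = "https://api.powerbi.com/v1.0/myorg/admin"

def fetch_items(endpoint: str, token: str) -> list:
    """Call a Power BI admin endpoint and return its 'value' array."""
    req = urllib.request.Request(
        f"{BASE}/{endpoint}",
        headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["value"]

def model_reuse(datasets: list, reports: list) -> float:
    """Average number of reports per semantic model (higher = more reuse).

    Each report carries a 'datasetId' linking it to its source model."""
    used = [r for r in reports if r.get("datasetId")]
    return len(used) / len(datasets) if datasets else 0.0

# Example with sample inventory data: 2 models backing 3 reports.
datasets = [{"id": "m1"}, {"id": "m2"}]
reports = [{"id": "r1", "datasetId": "m1"},
           {"id": "r2", "datasetId": "m1"},
           {"id": "r3", "datasetId": "m2"}]
print(model_reuse(datasets, reports))  # 1.5
```

A ratio near 1.0 suggests most models back a single report; a higher ratio suggests the centralized models are being reused by many report creators.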
Enterprise
Enterprise is a centralized approach to delivering data and BI solutions in which all
solution content is owned and managed by a centralized team. This team is usually IT,
enterprise BI, or the COE.
Here are some guidelines to help you become successful with enterprise data and BI.
Implement a rigorous process for use of the certified endorsement for content.
Not all enterprise content needs to be certified, but much of it probably should be.
Certified content should indicate that data quality has been validated. Certified
content should also follow change management rules, have formal support, and be
fully documented. Because certified content has passed rigorous standards, the
expectations for trustworthiness are higher.
Include consistent branding on enterprise BI reports to indicate who produced the
content, and who to contact for help. A small image or text label in the report
footer is valuable when the report is exported by a user.
If you use specific report branding to indicate enterprise BI content, be careful with
the save a copy functionality that would allow a user to download a copy of a
report and personalize it. Although this functionality is an excellent way to bridge
enterprise BI with managed self-service BI, it dilutes the value of the branding. A
more seamless solution is to provide a separate Power BI Desktop template file for
self-service authors. The template defines a starting point for report creation with a
live connection to an existing semantic model, and it doesn't include branding. The
template file can be shared as a link within a Power BI app, or from the community
portal.
Ownership transfers
Occasionally, the ownership of a particular solution might need to be transferred to
another team. An ownership transfer from a business unit to a centralized team can
happen for several reasons.
The COE should have well-documented procedures for identifying when a solution is a
candidate for ownership transfer. It's very helpful if help desk personnel know what to
look for as well. Having a customary pattern for self-service creators to build and grow a
solution, and hand it off in certain circumstances, is an indicator of a productive and
healthy data culture. A simple ownership transfer could be addressed during COE office
hours; a more complex transfer could warrant a small project managed by the COE.
Note
There's potential that the new owner will need to do some refactoring and data
validations before they're willing to take full ownership. Refactoring is most likely to
occur with the less visible aspects of data preparation, data modeling, and
calculations. If there are any manual steps or flat file sources, now is an ideal time
to apply those enhancements. The branding of reports and dashboards might also
need to change (for example, if there's a footer indicating report contact or a text
label indicating that the content is certified).
It's also possible for a centralized team to transfer ownership to a business unit. It could
happen when:
The team with domain knowledge is better equipped to own and manage the
content going forward.
The centralized team has created the solution for a business unit that doesn't have
the skills to create it from scratch, but it can maintain and extend the solution
going forward.
Tip
Don't forget to recognize and reward the work of the original creator, particularly if
ownership transfers are a common occurrence.
Considerations and key actions
Checklist - Here's a list of considerations and key actions you can take to strengthen
your approach to content ownership and management.
Questions to ask
Use questions like those found below to assess content ownership and management.
Do central teams that are responsible for Fabric have a clear understanding of who
owns what BI content? Is there a distinction between report and data items, or
different item types (like Power BI semantic models, data science notebooks, or
lakehouses)?
Which usage scenarios are in place, such as personal BI, team BI, departmental BI,
or enterprise BI? How prevalent are they in the organization, and how do they
differ between key business units?
What activities do business analytical teams perform (for example, data
integration, data modeling, or reporting)?
What kinds of roles in the organizations are expected to create and own content?
Is it limited to central teams, analysts, or also functional roles, like sales?
Where does the organization sit on the spectrum of business-led self-service,
managed self-service, or enterprise? Does it differ between key business units?
Do strategic data and BI solutions have ownership roles and stewardship roles that
are clearly defined? Which are missing?
Are content creators and owners also responsible for supporting and updating
content once it's released? How effective is the ownership of content support and
updates?
Is a clear process in place to transfer ownership of solutions (where necessary)? An
example is when an external consultant creates or updates a solution.
Do data sources have data stewards or subject matter experts (SMEs) who serve as
a special point of contact?
If your organization is already using Fabric or Power BI, does the current workspace
setup comply with the content ownership and delivery strategies that are in place?
Maturity levels
The following maturity levels will help you assess the current state of your content
ownership and management.
100: Initial
• Self-service content creators own and manage content in an uncontrolled way,
without a specific strategy.
• A high ratio of semantic models to reports exists. When many semantic models
each support only one report, it indicates opportunities to improve data reusability,
improve trustworthiness, reduce maintenance, and reduce the number of duplicate
semantic models.
200: Repeatable
• A plan is in place for which content ownership and management strategy to use
and in which circumstances.
• Initial steps are taken to improve the consistency and trustworthiness levels for
self-service efforts.
300: Defined
• Guidance for the user community is available that includes expectations for
self-service versus enterprise content.
• Roles and responsibilities are clear and well understood by everyone involved.
400: Capable
• Criteria are defined to align governance requirements for self-service versus
enterprise content.
• There's a plan in place for how to request and handle ownership transfers.
500: Efficient
• Proactive steps to communicate with users occur when any concerning activities
are detected in the activity log. Education and information are provided to make
gradual improvements or reduce risk.
Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
The four delivery scopes described in this article are personal, team, departmental,
and enterprise. Here, the scope of a delivered data and business intelligence (BI)
solution refers to the number of people who might view the solution, though the
impact of scope is much more than that. The scope strongly influences best
practices for not only content distribution, but also content management, security, and
information protection. The scope has a direct correlation to the level of governance
(such as requirements for change management, support, or documentation), the extent
of mentoring and user enablement, and needs for user support. It also influences user
licensing decisions.
The related content ownership and management article makes similar points. Whereas
the focus of that article was on the content creator, the focus of this article is on the
target content usage. Both inter-related aspects need to be considered to arrive at
governance decisions and the Center of Excellence (COE) operating model.
Important
Not all data and solutions are equal. Be prepared to apply different levels of data
management and governance to different teams and various types of content.
Standardized rules are easier to maintain. However, flexibility or customization is
often necessary to apply the appropriate level of oversight for particular
circumstances. Your executive sponsor can prove invaluable by reaching consensus
across stakeholder groups when difficult situations arise.
Personal: Personal solutions are, as the name implies, intended for use by the
creator. Sharing content with others isn't an objective. Therefore, a personal data
and BI solution has the fewest number of target consumers.
Team: Creators collaborate and share content with a relatively small number of
colleagues who work closely together.
Departmental: Delivers content to a large number of consumers, who can belong
to a department or business unit.
Enterprise: Delivers content broadly across organizational boundaries to the
largest number of target consumers. Enterprise content is most often managed by
a centralized team and is subject to additional governance requirements.
Contrast the above four scopes of content delivery with the following diagram, which
has an inverse relationship with respect to the number of content creators.
The four scopes of content creators shown in the above diagram include:
Personal: Represents the largest number of creators because the data culture
encourages any user to work with data using business-led self-service data and BI
methods. Although managed self-service BI methods can be used, it's less
common with personal data and BI efforts.
Team: Colleagues within a team collaborate and share with each other by using
business-led self-service patterns. It has the next largest number of creators in the
organization. Managed self-service patterns could also begin to emerge as skill
levels advance.
Departmental: Involves a smaller population of creators. They're likely to be
considered power users who are using sophisticated tools to create sophisticated
solutions. Managed self-service practices are very common and highly encouraged.
Enterprise: Involves the smallest number of content creators because it typically
includes only professional data and BI developers who work in the BI team, the
COE, or in IT.
The content ownership and management article introduced the concepts of business-
led self-service, managed self-service, and enterprise. The most common alignment
between ownership and delivery scope is:
Business-led self-service ownership: Commonly deployed as personal and team
solutions.
Managed self-service ownership: Can be deployed as personal, team, or
departmental solutions.
Enterprise ownership: Typically deployed as enterprise-scoped solutions.
Some organizations also equate self-service content with community-based support.
That's the case when self-service content creators and owners are responsible for supporting
the content they publish. The user support article describes multiple informal and formal
levels for support.
Note
The term sharing can be interpreted in two ways: It's often used in a general way
to describe sharing content with colleagues, which can be implemented in multiple
ways. It can also reference a specific feature in Fabric, where a user or group is
granted access to a single item. In this
article, the term sharing is meant in a general way to describe sharing content with
colleagues. When the per-item permissions are intended, this article will make a
clear reference to that feature. For more information, see Report consumer
security planning.
Personal
The Personal delivery scope is about enabling an individual to gain analytical value. It's
also about allowing them to more efficiently perform business tasks through the
effective personal use of data, information, and analytics. It could apply to any type of
information worker in the organization, not just data analysts and developers.
Sharing content with others isn't the objective. Personal content can reside in Power BI
Desktop or in a personal workspace in the Fabric portal.
Here are the characteristics of creating content for a personal delivery scope.
The creator's primary intention is data exploration and analysis, rather than report
delivery.
The content is intended to be analyzed and consumed by one person: the creator.
The content might be an exploratory proof of concept that may, or may not, evolve
into a project.
Here are a few guidelines to help you become successful with content developed for
personal use.
Consider personal data and BI solutions to be like an analytical sandbox that has
little formal governance and oversight from the governance team or COE.
However, it's still appropriate to educate content creators that some general
governance guidelines could still apply to personal content. Valid questions to ask
include: Can the creator export the personal report and email it to others? Can the
creator store a personal report on a non-organizational laptop or device? What
limitations or requirements exist for content that contains sensitive data?
See the techniques described for business-led self-service, and managed self-
service in the content ownership and management article. They're highly relevant
techniques that help content creators create efficient and personal data and BI
solutions.
Analyze data from the activity log to discover situations where personal solutions
appear to have expanded beyond the original intended usage. It's usually
discovered by detecting a significant amount of content sharing from a personal
workspace.
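One way to sketch this detection, assuming activity log records have already been pre-processed into simple dictionaries. The field names here (`workspace_type`, `activity`, `workspace`, `user`) are illustrative assumptions for the sketch, not actual API field names:

```python
# Sketch: detect personal workspaces that look like de facto team solutions.
# A personal workspace viewed by many distinct users suggests the content
# has expanded beyond personal use and may warrant a move to a team
# workspace. The min_consumers threshold is an arbitrary assumption.
def outgrown_personal_workspaces(events: list, min_consumers: int = 5) -> set:
    """Return personal workspaces with at least min_consumers distinct
    report viewers."""
    viewers: dict = {}
    for e in events:
        if (e.get("workspace_type") == "Personal"
                and e.get("activity") == "ViewReport"):
            viewers.setdefault(e["workspace"], set()).add(e["user"])
    return {ws for ws, users in viewers.items() if len(users) >= min_consumers}

# Example: six distinct users viewing reports in one personal workspace.
sample = [{"workspace_type": "Personal", "activity": "ViewReport",
           "workspace": "ws-ana", "user": f"user{i}"} for i in range(6)]
print(outgrown_personal_workspaces(sample))  # {'ws-ana'}
```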
Tip
For information about how users progress through the stages of user adoption, see
the Microsoft Fabric adoption roadmap maturity levels. For more information
about using the activity log, see Tenant-level auditing.
Team
The Team delivery scope is focused on a team of people who work closely together, and
who are tasked with solving closely related problems using the same data. Collaborating
and sharing content with each other in a workspace is usually the primary objective.
Content is often shared among the team more informally as compared to departmental
or enterprise content. For instance, the workspace is often sufficient for consuming
content within a small team. It doesn't require the formality of publishing the workspace
to distribute it as an app. There isn't a specific number of users beyond which
team-based delivery becomes too informal; each team can find the right size that
works for them.
Here are the characteristics of creating content for a team delivery scope.
Content is created, managed, and viewed among a group of colleagues who work
closely together.
Collaboration and co-management of content is the highest priority.
Formal delivery of content might occur for report viewers (especially for managers
of the team), but it's usually a secondary priority.
Reports aren't always highly sophisticated or attractive; functionality and accessing
the information is what matters most.
Here are some guidelines to help you become successful with content developed for
team use.
Departmental
Content is delivered to members of a department or business unit. Content distribution
to a larger number of consumers is a priority for departmental delivery scopes.
Ensure that the COE is prepared to support the efforts of self-service creators.
Creators who publish content used throughout their department or business unit
might emerge as candidates to become champions. Or, they might become
candidates to join the COE as a satellite member.
Make purposeful decisions about how workspace management will be handled.
The workspace is a place to organize related content, a permissions boundary, and
the scope for an app. Several workspaces will likely be required to meet all the
needs of a large department or business unit.
Plan how Power BI apps will distribute content to the enterprise. An app can
provide a significantly better user experience for consuming content. In many
cases, content consumers can be granted permissions to view content via the app
only, reserving workspace permissions management for content creators and
reviewers only. The use of app audience groups allows you to mix and match
content and target audience in a flexible way.
Be clear about what data quality validations have occurred. As the importance and
criticality level grows, expectations for trustworthiness grow too.
Ensure that adequate training, mentoring, and documentation is available to
support content creators. Best practices for data preparation, data modeling, and
data presentation will result in better quality solutions.
Provide guidance on the best way to use the promoted endorsement, and when
the certified endorsement could be permitted for departmental solutions.
Ensure that the owner is identified for all departmental content. Clarity on
ownership is helpful, including who to contact with questions, feedback,
enhancement requests, or support requests. In the Fabric portal, content owners
can set the contact list property for many types of items (like reports and
dashboards). The contact list is also used in security workflows. For example, when
a user is sent a URL to open an app but they don't have permission, they'll be
presented with an option to make a request for access.
Consider using deployment pipelines in conjunction with separate workspaces.
Deployment pipelines can support development, test, and production
environments, which provide more stability for consumers.
Consider enforcing the use of sensitivity labels to implement information
protection on all content.
Include consistent branding on reports by:
Using departmental colors and styling to indicate who produced the content.
For more information, see Content ownership and management.
Adding a small image or text label to the report footer, which is valuable when
the report is exported from the Fabric portal.
Using a standard Power BI Desktop template file. For more information, see
Mentoring and user enablement.
Apply the techniques described for business-led self-service and managed self-
service content delivery in the Content ownership and management article. They're
highly relevant techniques that can help content creators to create efficient and
effective departmental solutions.
Enterprise
Enterprise content is typically managed by a centralized team and is subject to
additional governance requirements. Content is delivered broadly across organizational
boundaries.
A centralized team of experts manages the content end-to-end and publishes it for
others to consume.
Formal delivery of data solutions like reports, lakehouses, and Power BI apps is a
high priority to ensure consumers have the best experience.
The content is highly sensitive, subject to regulatory requirements, or is considered
extremely critical.
Published enterprise-level semantic models (previously known as datasets) and
dataflows might be used as a source for self-service creators, thus creating a chain
of dependencies to the source data.
Stability and a consistent experience for consumers are highly important.
Application lifecycle management, such as deployment pipelines and DevOps
techniques, is commonly used. Change management processes to review and
approve changes before they're deployed, such as by a change review board or
similar group, are common for enterprise content.
Processes exist to gather requirements, prioritize efforts, and plan for new projects
or enhancements to existing content.
Integration with other enterprise-level data architecture and management services
could exist, possibly with other Azure services and Power Platform products.
Here are some guidelines to help you become successful with enterprise content
delivery.
Checklist - Considerations and key actions you can take to strengthen your approach to
content delivery.
" Align goals for content delivery: Ensure that guidelines, documentation, and other
resources align with the strategic goals defined for Fabric adoption.
" Clarify the scopes for content delivery in your organization: Determine who each
scope applies to, and how each scope aligns with governance decisions. Ensure that
decisions and guidelines are consistent with how content ownership and
management is handled.
" Consider exceptions: Be prepared for how to handle situations when a smaller
team wants to publish content for an enterprise-wide audience.
Will it require the content be owned and managed by a centralized team? For
more information, see the Content ownership and management article, which
describes an inter-related concept with content delivery scope.
Will there be an approval process? Governance can become more complicated
when the content delivery scope is broader than the owner of the content. For
example, when an app that's owned by a divisional sales team is distributed to
the entire organization.
" Create helpful documentation: Ensure that you have sufficient training
documentation and support so that your content creators understand when it's
appropriate to use workspaces, apps, or per-item sharing (direct access or link) .
" Create a licensing strategy: Ensure that you have a specific strategy in place to
handle Fabric licensing considerations. Create a process for how workspaces could
be assigned each license type, and the prerequisites required for the type of
content that could be assigned to Premium.
Questions to ask
Use questions like those found below to assess content delivery scope.
Do central teams that are responsible for Fabric have a clear understanding of who
creates and delivers content? Does it differ by business area, or for different
content item types?
Which usage scenarios are in place, such as personal BI, team BI, departmental BI,
or enterprise BI? How prevalent are they in the organization? Are there advanced
scenarios, like advanced data preparation or advanced data model management,
or niche scenarios, like self-service real-time analytics?
For the identified content delivery scopes in place, to what extent are guidelines
being followed?
Are there trajectories for helpful self-service content to be "promoted" from
personal to team content delivery scopes and beyond? What systems and
processes enable sustainable, bottom-up scaling and distribution of useful self-
service content?
What are the guidelines for publishing content to, and using, personal
workspaces?
Are personal workspaces assigned to dedicated Fabric capacity? In what
circumstances are personal workspaces intended to be used?
On average, how many reports does someone have access to? How many reports
does an executive have access to? How many reports does the CEO have access
to?
If your organization is using Fabric or Power BI today, does the current workspace
setup comply with the content ownership and delivery strategies that are in place?
Is there a clear licensing strategy? How many licenses are used today? How many
tenants and capacities exist, who uses them, and why?
How do central teams decide what gets published to Premium (or Fabric)
dedicated capacity, and what uses shared capacity? Do development workloads
use separate Premium Per User (PPU) licensing to avoid affecting production
workloads?
Maturity levels
The following maturity levels will help you assess the current state of your content
delivery.
200: Repeatable
• Pockets of good practices exist. However, good practices are overly dependent
on the knowledge, skills, and habits of the content creator.
300: Defined
• Clear guidelines are defined and communicated to describe what can and can't
occur within each delivery scope. These guidelines are followed by some—but not
all—groups across the organization.
400: Capable
• Criteria are defined to align governance requirements for self-service versus
enterprise content.
• Guidelines for content delivery scope are followed by most, or all, groups across
the organization.
• Changes are announced and follow a communication plan. Content creators are
aware of the downstream effects on their content. Consumers are aware of when
reports and apps are changed.
500: Efficient
• Proactive steps to communicate with users occur when any concerning activities
are detected in the activity log. Education and information are provided to make
gradual improvements or reduce risk.
• The business value that's achieved for deployed solutions is regularly evaluated.
Next steps
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
Center of Excellence (COE).
Microsoft Fabric adoption roadmap:
Center of Excellence
Article • 11/14/2023
Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
Important
One of the most powerful aspects of a COE is the cross-departmental insight into
how analytics tools like Fabric are used by the organization. This insight can reveal
which practices work well and which don't, and it can facilitate a bottom-up
approach to governance. A primary goal of the COE is to learn which practices work
well, share that knowledge more broadly, and replicate best practices across the
organization.
Staffing a COE
People who are good candidates as COE members tend to be those who:
Tip
If you have self-service content creators in your organization who constantly push
the boundaries of what can be done, they might be a great candidate to become a
recognized champion, or perhaps even a satellite member of the COE.
When recruiting for the COE, it's important to have a mix of complementary analytical
skills, technical skills, and business skills.
COE leader: Manages the day-to-day operations of the COE. Interacts with the
executive sponsor and other organizational teams, such as the data governance
board, as necessary.
Coach: Coaches and educates others on data and BI skills via office hours
(community engagement), best practices reviews, or co-development projects.
Oversees and participates in the discussion channel of the internal community.
Interacts with, and supports, the champions network.
Trainer: Develops, curates, and delivers internal training materials,
documentation, and resources.
Data analyst: Domain-specific subject matter expert. Acts as a liaison between
the COE and the business unit. Content creator for the business unit. Assists with
content certification. Works on co-development projects and proofs of concept.
Data modeler: Creates and manages data assets (such as shared semantic
models—previously known as datasets—and dataflows) to support other
self-service content creators.
Data engineer: Plans for deployment and architecture, including integration with
other services and data platforms. Publishes data assets that are utilized broadly
across the organization (such as a lakehouse, data warehouse, data pipeline,
dataflow, or semantic model).
User support: Assists with the resolution of data discrepancies and escalated
help desk support issues.
For an overview of additional roles and responsibilities, see the Governance article.
As mentioned previously, the scope of responsibilities for a COE can vary significantly
between organizations. Therefore, the roles found for COE members can vary too.
Structuring a COE
The selected COE structure can vary among organizations. It's also possible for multiple
structures to exist inside of a single large organization. That's particularly true when
there are subsidiaries or when acquisitions have occurred.
Note
The following terms might differ from those defined for your organization, particularly
the meaning of federated, which tends to have many different IT-related meanings.
Centralized COE
A centralized COE comprises a single shared services team.
Pros:
There's a single point of accountability for a single team that manages standards,
best practices, and delivery end-to-end.
The COE is one group from an organizational chart perspective.
It's easy to start with this approach and then evolve to the unified or federated
model over time.
Cons:
Unified COE
A unified COE is a single, centralized, shared services team that has been expanded to
include embedded team members. The embedded team members are dedicated to
supporting a specific functional area or business unit.
Pros:
There's a single point of accountability for a single team that includes cross-
functional involvement from the embedded COE team members. The embedded
COE team members are assigned to various areas of the business.
The COE is one group from an organizational chart perspective.
The COE understands the needs of business units more deeply due to dedicated
members with domain expertise.
Cons:
The embedded COE team members, who are dedicated to a specific business unit,
have a different organizational chart responsibility than the people they serve
directly within the business unit. The organizational structure could potentially lead
to complications, differences in priorities, or necessitate the involvement of the
executive sponsor. Preferably, the executive sponsor has a scope of authority that
includes the COE and all involved business units to help resolve conflicts.
Federated COE
A federated COE comprises a shared services team (the core COE members) plus
satellite members from each functional area or major business unit. A federated team
works in coordination, even though its members reside in different business units.
Typically, satellite members are primarily focused on development activities to support
their business unit while the shared services personnel support the entire community.
Pros:
Cons:
Since core and satellite members span organizational boundaries, the federated
COE approach requires strong leadership, excellent communication, robust project
management, and ultra-clear expectations.
There's a higher risk of encountering competing priorities due to the federated
structure.
This approach typically involves part-time people and/or dotted-line organizational
chart accountability, which can introduce competing time pressures.
Decentralized COE
Decentralized COEs are independently managed by business units.
Pros:
A specialized data culture exists that's focused on the business unit, making it
easier to learn quickly and adapt.
Policies and practices are tailored to each business unit.
Agility, flexibility, and priorities are focused on the individual business unit.
Cons:
There's a risk that decentralized COEs operate in isolation. As a result, they might
not share best practices and lessons learned outside of their business unit.
Collaboration with a centralized team might be informal and/or inconsistent.
Inconsistent policies are created and applied across business units.
It's difficult to scale a decentralized model.
There's potential rework to bring one or more decentralized COEs into alignment
with organization-wide policies.
Larger business units with significant funding might have more resources available
to them, which might not serve cost optimization goals from an organization-wide
perspective.
Important
The funding for a COE can be structured as a:
Cost center.
Profit center with project budget(s).
A combination of cost center and profit center.
When the COE operates as a cost center, it absorbs the operating costs. Generally, it
involves an approved annual budget. Sometimes this is called a push engagement
model.
When the COE operates as a profit center (for at least part of its budget), it could accept
projects throughout the year based on funding from other business units. Sometimes
this is called a pull engagement model.
Funding is important because it impacts the way the COE communicates and engages
with the internal community. As the COE experiences more successes, it might
receive more requests from business units for help. That's especially the case as
awareness grows throughout the organization.
Tip
The choice of funding model can determine how the COE actively grows its
influence and ability to help. The funding model can also have a big impact on
where authority resides and how decision-making works. Further, it impacts the
types of services a COE can offer, such as co-development projects and/or best
practices reviews. For more information, see the Mentoring and user enablement
article.
Some organizations cover the COE operating costs with chargebacks to business units
based on their usage of Fabric. For a shared capacity, chargebacks could be based
on the number of active users. For Premium capacity, chargebacks could be allocated
based on which business units are using the capacity. Ideally, chargebacks are
directly correlated to the business value gained.
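To make the usage-based chargeback idea concrete, the sketch below allocates a capacity's operating cost across business units in proportion to their share of measured usage. This is a hypothetical calculation for illustration only, not a Fabric feature; the unit names, cost, and usage figures are invented.

```python
def allocate_chargebacks(total_cost, usage_by_unit):
    """Split a capacity's operating cost across business units
    in proportion to each unit's share of total measured usage."""
    total_usage = sum(usage_by_unit.values())
    if total_usage == 0:
        return {unit: 0.0 for unit in usage_by_unit}
    return {
        unit: round(total_cost * usage / total_usage, 2)
        for unit, usage in usage_by_unit.items()
    }

# Hypothetical example: monthly capacity cost split by compute usage.
monthly_cost = 10_000.00
usage = {"Finance": 500, "Sales": 300, "Operations": 200}
print(allocate_chargebacks(monthly_cost, usage))
# {'Finance': 5000.0, 'Sales': 3000.0, 'Operations': 2000.0}
```

In practice, the usage measure (active users, compute consumed, or something else) is a policy decision; the point is that the allocation is directly tied to who benefits from the capacity.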
Checklist - Considerations and key actions you can take to establish or improve your
COE.
" Define the scope of responsibilities for the COE: Ensure that you're clear on what
activities the COE can support. Once the scope of responsibilities is known, identify
the skills and competencies required to fulfill those responsibilities.
" Identify gaps in the ability to execute: Analyze whether the COE has the required
systems and infrastructure in place to meet its goals and scope of responsibilities.
" Determine the best COE structure: Identify which COE structure is most
appropriate (centralized, unified, federated, or decentralized). Verify that staffing,
roles and responsibilities, and appropriate organizational chart relationships (HR
reporting) are in place.
" Plan for future growth: If you're starting out with a centralized or decentralized
COE, consider how you will scale the COE over time by using the unified or
federated approach. Plan for any actions that you can take now that'll facilitate
future growth.
" Identify customers: Identify the internal community members, and any external
customers, to be served by the COE. Decide how the COE will generally engage with
those customers, whether it's a push model, pull model, or both models.
" Verify the funding model for the COE: Decide whether the COE is purely a cost
center with an operating budget, whether it will operate partially as a profit center,
and/or whether chargebacks to other business units will be required.
" Create a communication plan: Create you communications strategy to educate the
internal community of users about the services the COE offers, and how to engage
with the COE.
" Create goals and metrics: Determine how you'll measure effectiveness for the COE.
Create KPIs (key performance indicators) or OKRs (objectives and key results) to
validate that the COE consistently provides value to the user community.
Questions to ask
Use questions like those found below to assess the effectiveness of a COE.
Is there a COE? If so, who is in the COE and what's the structure?
If there isn't a COE, is there a central team that performs a similar function? Do
data decision makers in the organization understand what a COE does?
If there isn't a COE, does the organization aspire to create one? Why or why not?
Are there opportunities for federated or decentralized COE models due to a mix of
enterprise and departmental solutions?
Are there any missing roles and responsibilities from the COE?
To what extent does the COE engage with the user community? Do they mentor
users? Do they curate a centralized portal? Do they maintain centralized resources?
Is the COE recognized in the organization? Does the user community consider
them to be credible and helpful?
Do business users see central teams as enabling or restricting their work with data?
What's the COE funding model? Do COE customers financially contribute in some
way to the COE?
How consistent and transparent is the COE with their communication?
Maturity levels
The following maturity levels will help you assess the current state of your COE.
100: Initial
• One or more COEs exist, or the activities are performed within the data team, BI
team, or IT. There's no clarity on specific goals or expectations for
responsibilities.
• Requests for assistance from the COE are handled in an unplanned manner.
200: Repeatable
• The COE is in place with a specific charter to mentor, guide, and educate
self-service users. The COE seeks to maximize the benefits of self-service
approaches to data and BI while reducing the risks.
• The goals, scope of responsibilities, staffing, structure, and funding model are
established for the COE.
300: Defined
• The COE operates with active involvement from all business units in a unified or
federated mode.
400: Capable
• The goals of the COE align with organizational goals, and they're reassessed
regularly.
• The COE is well known throughout the organization, and consistently proves its
value to the internal user community.
500: Efficient
• Regular reviews of KPIs or OKRs evaluate COE effectiveness in a measurable way.
Next steps
In the next article in the Microsoft Fabric adoption roadmap series, learn about
implementing governance guidelines, policies, and processes.
Microsoft Fabric adoption roadmap:
Governance
Article • 11/14/2023
Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
Data governance is a broad and complex topic. This article introduces key concepts and
considerations. It identifies important actions to take when adopting Microsoft Fabric,
but it's not a comprehensive reference for data governance.
As defined by the Data Governance Institute, data governance is "a system of decision
rights and accountabilities for information-related processes, executed according to
agreed-upon models which describe who can take what actions, with what information,
and when, under what circumstances, using what methods."
The term data governance is a misnomer. The primary focus of governance isn't the
data itself; it's what users do with the data. Put another way: the true focus is
on governing users' behavior to ensure organizational data is well managed.
When focused on self-service data and business intelligence (BI), the primary goal
of governance is to achieve the proper balance of user empowerment and control.
The optimal balance between control and empowerment will differ between
organizations. It's also likely to differ among different business units within an
organization. You'll be most successful with a platform like Fabric when you put as much
emphasis on user empowerment as on clarifying its practical usage within established
guardrails.
Tip
Think of governance as a set of established guidelines and formalized policies. All
governance guidelines and policies should align with your organizational data
culture and adoption objectives. Governance is enacted on a day-to-day basis by
your system oversight (administration) activities.
Governance strategy
When considering data governance in any organization, the best place to start is by
defining a governance strategy. By focusing first on the strategic goals for data
governance, all detailed decisions when implementing governance policies and
processes can be informed by the strategy. In turn, the governance strategy will be
defined by the organization's data culture.
Empowering users throughout the organization to use data and make decisions,
within the defined boundaries.
Improving the user experience by providing clear and transparent guidance (with
minimal friction) on what actions are permitted, why, and how.
Ensuring that the data usage is appropriate for the needs of the business.
Ensuring that content ownership and stewardship responsibilities are clear. For
more information, see the Content ownership and management article.
Enhancing the consistency and standardization of working with data across
organizational boundaries.
Reducing the risk of data leakage and misuse of data. For more information, see the
information protection and data loss prevention series of articles.
Meeting regulatory, industry, and internal requirements for the proper use of data.
Tip
A well-executed data governance strategy makes it easier for more users to work
with data. When governance is approached from the perspective of user
empowerment, users are more likely to follow the documented processes.
Accordingly, they become trusted partners too.
Roll out Fabric first, then introduce governance: Fabric is made widely available to
users in the organization as a new self-service data and BI tool. Then, at some time in
the future, a governance effort begins. This method prioritizes agility.
Full governance planning first, then roll out Fabric: Extensive governance planning
occurs prior to permitting users to begin using Fabric. This method prioritizes control
and stability.
Choose method 1 when Fabric is already used for self-service scenarios, and you're
ready to start working in a more efficient manner.
Choose method 3 when you want to have a balance of control and agility. This
balanced approach is the best choice for most organizations and most scenarios.
Pros:
Cons:
Pros:
Cons:
Pros:
Cons:
For more information about up-front planning, see the Preparing to migrate to Power BI
article.
Governance challenges
If your organization has implemented Fabric without a governance approach or strategic
direction (as described above by method 1), there could be numerous challenges
requiring attention. Depending on the approach that you've taken and your current
state, some of the following challenges could be applicable to your organization.
Strategy challenges
Lack of a cohesive data governance strategy that aligns with the business strategy
Lack of executive support for governing data as a strategic asset
Insufficient planning for advancing adoption and the maturity level of BI
and analytics
People challenges
Lack of aligned priorities between centralized teams and business units
Lack of identified champions with sufficient expertise and enthusiasm throughout
the business units to advance organizational adoption objectives
Lack of awareness of self-service best practices
Resistance to following newly introduced governance guidelines and policies
Duplicate effort spent across business units
Lack of clear accountability, roles, and responsibilities
Process challenges
Lack of clearly defined processes resulting in chaos and inconsistencies
Lack of standardization or repeatability
Insufficient ability to communicate and share lessons learned
Lack of documentation and over-reliance on tribal knowledge
Inability to comply with security and privacy requirements
Tip
Identifying your current challenges, as well as your strengths, is essential to
proper governance planning. There's no single straightforward solution to the
challenges listed above. Each organization needs to find the right balance and
approach that solves the challenges that are most important to it. Reviewing the
challenges presented above will help you identify how they might affect your
organization, so you can start thinking about the right solution for your
circumstances.
Governance planning
Some organizations have implemented Fabric without a governance approach or clear
strategic direction (as described above by method 1). In this case, the effort to begin
governance planning can be daunting.
If a formal governance body doesn't currently exist in your organization, then the focus
of your governance planning and implementation efforts will be broader. If, however,
there's an existing data governance board in the organization, then your focus is
primarily to integrate with existing practices and customize them to accommodate the
objectives for self-service and enterprise data and BI scenarios.
Important
Some potential governance planning activities and outputs that you might find valuable
are described next.
Strategy
Key activities:
Conduct a series of workshops to gather information and assess the current state
of data culture, adoption, and data and BI practices. For guidance about how to
gather information and define the current state of BI adoption, including
governance, see BI strategic planning.
Use the current state assessment and information gathered to define the desired
future state, including governance objectives. For guidance about how to use this
current state definition to decide on your desired future state, see BI tactical
planning.
Validate the focus and scope of the governance program.
Identify existing bottom-up initiatives in progress.
Identify immediate pain points, issues, and risks.
Educate senior leadership about governance, and ensure executive sponsorship is
sufficient to sustain and grow the program.
Clarify where Power BI fits into the overall BI and analytics strategy for the
organization.
Assess internal factors such as organizational readiness, maturity levels, and key
challenges.
Assess external factors such as risk, exposure, regulatory, and legal requirements—
including regional differences.
Key output:
People
Key activities:
Key output:
Charter for the governance board
Charter and priorities for the COE
Staffing plan
Roles and responsibilities
Accountability and decision-making matrix
Communication plan
Issue management plan
Analyze immediate pain points, issues, risks, and areas to improve the user
experience.
Prioritize data policies to be addressed in order of importance.
Identify existing processes in place that work well and can be formalized.
Determine how new data policies will be socialized.
Decide to what extent data policies might differ or be customized for different
groups.
Key output:
Process for how data policies and documentation will be defined, approved,
communicated, and maintained
Plan for requesting valid exceptions and departures from documented policies
Project management
The implementation of the governance program should be planned and managed as a
series of projects.
Key activities:
Key output:
Important
The scope of activities listed above that will be useful to take on will vary
considerably between organizations. If your organization doesn't have existing
processes and workflows for creating these types of outputs, refer to the guidance
found in the adoption roadmap conclusion for some helpful resources, as well as
the implementation planning BI strategy articles.
Governance policies
Decision criteria
All governance decisions should be in alignment with the established goals for
organizational adoption. Once the strategy is clear, more tactical governance decisions
will need to be made which affect the day-to-day activities of the self-service user
community. These types of tactical decisions correlate directly to the data policies that
get created.
Who owns and manages the data and BI content? The Content ownership and
management article introduced three types of strategies: business-led self-service,
managed self-service, and enterprise. Who owns and manages the content has a
significant impact on governance requirements.
What is the scope for delivery of the data and BI content? The Content delivery
scope article introduced four scopes for delivery of content: personal, team,
departmental, and enterprise. The scope of delivery has a considerable impact on
governance requirements.
What is the data subject area? The data itself, including its sensitivity level, is an
important factor. Some data domains inherently require tighter controls. For
instance, personally identifiable information (PII), or data subject to regulations,
should be subject to stricter governance requirements than less sensitive data.
Is the data, and/or the BI solution, considered critical? If you can't make an
informed decision easily without this data, you're dealing with critical data
elements. Certain reports and apps could be deemed critical because they meet a
set of predefined criteria. For instance, the content is delivered to executives.
Predefined criteria for what's considered critical helps everyone have clear
expectations. Critical data is usually subject to stricter governance requirements.
Tip
Different combinations of the above four criteria will result in different governance
requirements for Fabric content.
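As a simplified illustration of combining such criteria, the sketch below maps a few content attributes to a governance requirement level. The tier names, input values, and rules are hypothetical and for illustration only; real policies would be defined by your governance team, and would also account for the content ownership strategy.

```python
def governance_tier(delivery_scope, sensitivity, is_critical):
    """Map content attributes to a hypothetical governance tier.

    delivery_scope: 'personal', 'team', 'departmental', or 'enterprise'
    sensitivity:    'public', 'internal', or 'confidential'
    is_critical:    True if the content meets predefined critical criteria
    """
    if sensitivity == "confidential" or is_critical:
        # e.g. certification, security review, and stricter sharing rules
        return "strict"
    if delivery_scope in ("departmental", "enterprise"):
        # e.g. named owner and endorsement required
        return "standard"
    # e.g. self-service guidelines only
    return "light"

print(governance_tier("enterprise", "confidential", False))  # strict
print(governance_tier("departmental", "internal", False))    # standard
print(governance_tier("personal", "internal", False))        # light
```

Encoding the rules this explicitly, even just in a document rather than code, helps everyone have the same clear expectations about which requirements apply to which content.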
The following list includes items that you might choose to prioritize when introducing
governance for Fabric.
Although not every governance decision needs to be made upfront, it's important that
you identify the areas of greatest risk in your organization. Then, incrementally
implement governance policies and processes that will deliver the most impact.
Data policies
A data policy is a document that defines what users can and can't do. You might call it
something different, but the goal remains the same: when decisions—such as those
discussed in the previous section—are made, they're documented for use and reference
by the community of users.
A data policy should be as short as possible. That way, it's easy for people to understand
what is being asked of them.
Note
Here are three common data policy examples you might choose to prioritize.
Policy Description
Data ownership policy: Specifies when an owner is required for a data asset, and
what the data owner's responsibilities include, such as: supporting colleagues who
view the content.
Data certification (endorsement) policy: Specifies the process that is followed to
certify content. Requirements might include activities such as: data accuracy
validation, data source and lineage review, technical review of the data model,
security review, and documentation review.
Data classification and protection policy: Specifies activities that are allowed
and not allowed per classification (sensitivity level). It should specify
activities such as: allowed sharing with external users, with or without a
non-disclosure agreement (NDA), encryption requirements, and the ability to
download the data. Sometimes, it's also called a data handling policy or a data
usage policy. For more information, see the Information protection for Power BI
article.
Caution
Having a lot of documentation can lead to a false sense that everything is under
control, which can lead to complacency. The level of engagement that the COE has
with the user community is one way to improve the chances that governance
guidelines and policies are consistently followed. Auditing and monitoring activities
are also important.
Scope of policies
Governance decisions will rarely be one-size-fits-all across the entire organization. When
practical, it's wise to start with standardized policies, and then implement exceptions as
needed. Having a clearly defined strategy for how policies will be handled for
centralized and decentralized teams will make it much easier to determine how to
handle exceptions.
Cons of organizational-scope policies:
Inflexible
Less autonomy and empowerment
Pros of departmental-scope policies:
Tip
Finding the right balance of standardization and customization for supporting self-
service data and BI across the organization can be challenging. However, by
starting with organizational policies and mindfully watching for exceptions, you can
make meaningful progress quickly.
Important
Regardless of how the governance body is structured, it's important that there's a
person or group with sufficient influence over data governance decisions. This
person should have authority to enforce those decisions across organizational
boundaries.
Level Description
Tactical - Supporting teams: Level 2 includes several groups that support the efforts of
the users in the business units. Supporting teams include the COE, enterprise data and BI,
the data governance office, as well as other ancillary teams. Ancillary teams can include IT,
security, HR, and legal. A change control board is included here as well.
Tactical - Audit and compliance: Level 3 includes internal audit, risk management, and
compliance teams. These teams provide guidance to levels 1 and 2. They also provide
enforcement when necessary.
Strategic - Executive sponsor and steering committee: The top level includes the
executive-level oversight of strategy and priorities. This level handles any escalated issues
that couldn't be solved at lower levels. Therefore, it's important to have a leadership team
with sufficient authority to be able to make decisions when necessary.
Important
Role Description
Chief Data Officer or Chief Analytics Officer: Defines the strategy for use of data
as an enterprise asset. Oversees enterprise-wide governance guidelines and policies.
Data governance board: Steering committee with members from each business unit who,
as domain owners, are empowered to make enterprise governance decisions. They make
decisions on behalf of the business unit and in the best interest of the
organization. Provides approvals, decisions, priorities, and direction to the
enterprise data governance team and working committees.
Data governance team: Creates governance policies, standards, and processes.
Provides enterprise-wide oversight and optimization of data integrity,
trustworthiness, privacy, and usability. Collaborates with the COE to provide
governance education, support, and mentoring to data owners and content creators.
Data governance working committees: Temporary or permanent teams that focus on
individual governance topics, such as security or data quality.
Project management office: Manages individual governance projects and the ongoing
data governance program.
Fabric executive sponsor: Promotes adoption and the successful use of Fabric.
Actively ensures that Fabric decisions are consistently aligned with business
objectives, guiding principles, and policies across organizational boundaries. For
more information, see the Executive sponsorship article.
Center of Excellence: Mentors the community of creators and consumers to promote
the effective use of Fabric for decision-making. Provides cross-departmental
coordination of Fabric activities to improve practices, increase consistency, and
reduce inefficiencies. For more information, see the Center of Excellence article.
Fabric champions: A subset of content creators found within the business units who
help advance the adoption of Fabric. They contribute to data culture growth by
advocating the use of best practices and actively assisting colleagues. For more
information, see the Community of practice article.
Risk management: Reviews and assesses data sharing and security risks. Defines
ethical data policies and standards. Communicates regulatory and legal requirements.
Data steward: Collaborates with governance committee and/or COE to ensure that
organizational data has acceptable data quality levels.
All BI creators and consumers: Adheres to policies for ensuring that data is secure,
protected, and well-managed as an organizational asset.
Tip
Name a backup for each person in key roles, for example, members of the data
governance board. In their absence, the backup person can attend meetings and
make time-sensitive decisions when necessary.
Checklist - Considerations and key actions you can take to establish or strengthen your
governance initiatives.
" Align goals and guiding principles: Confirm that the high-level goals and guiding
principles of the data culture goals are clearly documented and communicated.
Ensure that alignment exists for any new governance guidelines or policies.
" Understand what's currently happening: Ensure that you have a deep
understanding of how Fabric is currently used for self-service and enterprise data
and BI scenarios. Document opportunities for improvement. Also, document
strengths and good practices that would be helpful to scale out more broadly.
" Prioritize new governance guidelines and policies: For prioritizing which new
guidelines or policies to create, select an important pain point, high priority need,
or known risk for a data domain. It should have significant benefit and can be
achieved with a feasible level of effort. When you implement your first governance
guidelines, choose something users are likely to support because the change is low
impact, or because they are sufficiently motivated to make a change.
" Create a schedule to review policies: Determine the cadence for how often data
policies are reevaluated. Reassess and adjust when needs change.
" Decide how to handle exceptions: Determine how conflicts, issues, and requests
for exceptions to documented policies will be handled.
" Understand existing data assets: Confirm that you understand what critical data
assets exist. Create an inventory of ownership and lineage, if necessary. Keep in
mind that you can't govern what you don't know about.
" Verify executive sponsorship: Confirm that you have support and sufficient
attention from your executive sponsor, as well as from business unit leaders.
" Prepare an action plan: Include the following key items:
Initial priorities: Select one data domain or business unit at a time.
Timeline: Work in iterations long enough to accomplish meaningful progress, yet
short enough to periodically adjust.
Quick wins: Focus on tangible, tactical, and incremental progress.
Success metrics: Create measurable metrics to evaluate progress.
Questions to ask
At a high level, what's the current governance strategy? To what extent is the
purpose and importance of this governance strategy clear to both end users and
the central data and BI teams?
In general, is the current governance strategy effective?
What are the key regulatory and compliance criteria that the organization (or
specific business units) must adhere to? Where are these criteria documented? Is
this information readily available to people who work with data and share data
items as a part of their role?
How well does the current governance strategy align with users' ways of working?
Is a specific role or team responsible for governance in the organization?
Who has the authority to create and change governance policies?
Do governance teams use Microsoft Purview or another tool to support
governance activities?
What are the prioritized governance risks, such as risks to security, information
protection, and data loss prevention?
What's the potential business impact of the identified governance risks?
How frequently is the governance strategy re-evaluated? What metrics are used to
evaluate it, and what mechanisms exist for business users to provide feedback?
What types of user behaviors create risk when users work with data? How are
those risks mitigated?
What sensitivity labels are in place, if any? Are data and BI decision makers aware
of sensitivity labels and the benefits to the business?
What data loss prevention policies are in place, if any?
How is "Export to Excel" handled? What steps are taken to prevent data loss
prevention? What's the prevalence of "Export to Excel"? What do people do with
data once they have it in Excel?
Are there practices or solutions that are out of regulatory compliance that must be
urgently addressed? Are these examples justified with an explanation of the
potential business impact, should they not be addressed?
Tip
"Export to Excel" is typically a controversial topic. Often, business users focus on the
requirement to have "Export to Excel" possible in BI solutions. Enabling "Export to
Excel" can be counter-productive because a business objective isn't to get data into
Excel. Instead, define why end users need the data in Excel. Ask what they do with
the data once it's in Excel, which business questions they try to answer, what
decisions they make, and what actions they take with the data.
Focusing on business decisions and actions helps steer focus away from tools and
features and toward helping people achieve their business objectives.
Maturity levels
The following maturity levels will help you assess the current state of your governance
initiatives.
100: Initial
• Due to a lack of governance planning, the good data management and informal
governance practices that do occur rely heavily on the judgment and experience of
individuals.
200: Repeatable
• Some areas of the organization have made a purposeful effort to standardize,
improve, and document their data management and governance practices.
300: Defined
• A complete governance strategy with focus, objectives, and priorities is enacted
and broadly communicated.
• Specific governance guidelines and policies are implemented for the top few
priorities (pain points or opportunities). They're actively and consistently
followed by users.
400: Capable
• All Fabric governance priorities align with organizational goals and business
objectives. Goals are reassessed regularly.
• It's clear where Fabric fits into the overall data and BI strategy for the
organization.
• Fabric activity log and API data is actively analyzed to monitor and audit
Fabric activities. Proactive action is taken based on the data.
500: Efficient
• Regular reviews of KPIs or OKRs evaluate measurable governance goals. Iterative,
continual progress is a priority.
• Fabric activity log and API data is actively used to inform and improve adoption
and governance efforts.
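As an example of the kind of activity log analysis mentioned above, exported events can be summarized to see which operations are most common. The sketch below counts events by operation name from a list of records. The sample records are invented; the field names (`Activity`, `UserId`) follow the general shape of records returned by the Power BI activity events admin API, but verify them against the API or export format you actually use.

```python
from collections import Counter

def summarize_activities(events):
    """Count exported activity log events by operation name."""
    return Counter(event.get("Activity", "Unknown") for event in events)

# Hypothetical sample of exported activity log records.
sample_events = [
    {"Activity": "ViewReport", "UserId": "user1@contoso.com"},
    {"Activity": "ViewReport", "UserId": "user2@contoso.com"},
    {"Activity": "ExportReport", "UserId": "user2@contoso.com"},
]
print(summarize_activities(sample_events))
# Counter({'ViewReport': 2, 'ExportReport': 1})
```

A summary like this can feed the proactive monitoring and auditing described at the 400 and 500 maturity levels, for example by flagging a spike in export operations for follow-up.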
Next steps
In the next article in the Microsoft Fabric adoption roadmap series, learn about
mentoring and user enablement.
Microsoft Fabric adoption roadmap:
Mentoring and user enablement
Article • 11/14/2023
Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
A critical objective for adoption efforts is to enable users to accomplish as much as they
can within the requisite guardrails established by governance guidelines and policies.
For this reason, the act of mentoring users is one of the most important responsibilities
of the Center of Excellence (COE), and it has a direct influence on how user adoption
occurs. For more information about user adoption, see Microsoft Fabric adoption
maturity levels.
Skills mentoring
Mentoring and helping users in the Fabric community become more effective can take
on various forms, such as:
Office hours
Co-development projects
Best practices reviews
Extended support
Office hours
Office hours are a form of ongoing community engagement managed by the COE. As
the name implies, office hours are times of regularly scheduled availability where
members of the community can engage with experts from the COE to receive assistance
with minimal process overhead. Office hours are usually group-based, so Fabric
champions and other members of the community can also help solve an issue if a topic
is in their area of expertise.
Office hours are a very popular and productive activity in many organizations. Some
organizations call them drop-in hours or even a fun name such as Power Hour or Fabric
Fridays. The primary goal is usually to get questions answered, solve problems, and
remove blockers. Office hours can also be used as a platform for the user community to
share ideas, suggestions, and even complaints.
The COE publishes the times for regular office hours when one or more COE members
are available. Ideally, office hours are held on a regular and frequent basis. For instance,
it could be every Tuesday and Thursday. Consider offering different time slots or
rotating times if you have a global workforce.
Tip
One option is to set specific office hours each week. However, users might not
show up, so that can end up being inefficient. Alternatively, consider leveraging
Microsoft Bookings to schedule office hours. It shows the blocks of time when
each COE expert is available, with Outlook integration ensuring availability is up to
date.
Content creators and the COE actively collaborate to answer questions and solve
problems together.
Real work is accomplished while learning and problem solving.
Others might observe, learn, and participate.
Individual groups can head to a breakout room to solve a specific problem.
They're a great way for the COE to identify champions or users with specific skills
that the COE didn't previously know about.
The COE can learn what users throughout the organization are struggling with. It
helps inform whether additional resources, documentation, or training might be
required.
Tip
It's common for some tough issues to come up during office hours that cannot be
solved quickly, such as getting a complex DAX calculation to work, or addressing
performance challenges in a complex solution. Set clear expectations for what's in
scope for office hours, and if there's any commitment for follow up.
Co-development projects
One way the COE can provide mentoring services is during a co-development project. A
co-development project is a form of assistance offered by the COE where a user or
business unit takes advantage of the technical expertise of the COE to solve business
problems with data. Co-development involves stakeholders from the business unit and
the COE working in partnership to build a high-quality self-service analytics or business
intelligence (BI) solution that the business stakeholders couldn't deliver independently.
The goal of co-development is to help the business unit develop expertise over time
while also delivering value. For example, the sales team has a pressing need to develop
a new set of commission reports, but the sales team doesn't yet have the knowledge to
complete it on their own.
A co-development project forms a partnership between the business unit and the COE.
In this arrangement, the business unit is fully invested, deeply involved, and assumes
ownership of the project.
Time involvement from the COE reduces over time until the business unit gains expertise
and becomes self-reliant.
The active involvement shown in the above diagram changes over time, as follows:
Ideally, the period for the gradual reduction in involvement is identified up-front in the
project. This way, both the business unit and the COE can sufficiently plan the timeline
and staffing.
Co-development projects can deliver significant short- and long-term benefits. In the
short term, the involvement from the COE can often result in a better-designed and
better-performing solution that follows best practices and aligns with organizational
standards. In the long term, co-development helps increase the knowledge and
capabilities of the business stakeholder, making them more self-sufficient, and more
confident to deliver quality self-service data and BI solutions in the future.
Important
Essentially, a co-development project helps less experienced users learn the right
way to do things. It reduces the risk that refactoring might be needed later, and it
increases the ability for a solution to scale and grow over time.
Best practices reviews
During a review, an expert from the COE evaluates self-service Fabric content developed
by a member of the community and identifies areas of risk or opportunities for
improvement.
Here are some examples of when a best practices review could be beneficial.
The sales team has a Power BI app that they intend to distribute to thousands of
users throughout the organization. Since the app represents high priority content
distributed to a large audience, they'd like to have it certified. The standard
process to certify content includes a best practices review.
The finance team would like to assign a workspace to a capacity. A review of the
workspace content is required to ensure sound development practices are
followed. This type of review is common when the capacity is shared among
multiple business units. (A review might not be required when the capacity is
assigned to only one business unit.)
The operations team is creating a new Fabric solution they expect to be widely
used. They would like to request a best practices review before it goes into user
acceptance testing (UAT), or before a request is submitted to the change
management board.
A best practices review is most often focused on the semantic model (previously known
as a dataset) design, though the review can encompass all types of data items (such as a
lakehouse, data warehouse, data pipeline, dataflow, or semantic model). The review can
also encompass reporting items (such as reports, dashboards, or metrics).
Before content is deployed, a best practices review can be used to verify other design
decisions, like:
Once the content has been deployed, the best practices review isn't necessarily
complete yet. Completing the remainder of the review could also include items such as:
Extended support
From time to time, the COE might get involved with complex issues escalated from the
help desk. For more information, see the User support article.
Note
Offering mentoring services might be a culture shift for your organization. Your
reaction might be that users don't usually ask for help with a tool like Excel, so why
would they with Power BI? The answer lies in the fact that Power BI and Fabric are
extraordinarily powerful tools. They provide data preparation and data modeling
capabilities in addition to data visualization. Having the ability to aid and enable
users can significantly improve their skills and increase the quality of their solutions
—it reduces risks too.
Centralized portal
A single centralized portal, or hub, is where the user community can find:
Tip
In general, only 10%-20% of your community will go out of their way to actively
seek out training and educational information. These types of users might naturally
evolve to become your champions. Everyone else is usually just trying to get the
job done as quickly as possible, because their time, focus, and energy are needed
elsewhere. Therefore, it's crucial to make information easy for your community
users to find.
The goal is to consistently direct users in the community to the centralized portal to find
information. The corresponding obligation for the COE is to ensure that the information
users need is available in the centralized portal. Keeping the portal updated requires
discipline when everyone is busy.
Important
It takes time for community users to think of the centralized portal as their natural first
stop for finding information. It takes consistent redirection to the portal to change
habits. Sending someone a link to an original document location in the portal builds
better habits than, for instance, including the answer in an email response. It's the same
challenge described in the User support article.
Training
A key factor for successfully enabling self-service users in a Fabric community is training.
It's important that the right training resources are readily available and easily
discoverable. While some users are so enthusiastic about analytics that they'll find
information and figure things out on their own, that isn't true for most of the user
community.
Making sure your self-service users (particularly content creators and owners) have
access to the training resources they need to be successful doesn't mean that you need
to develop your own training content. Developing training content is often
counterproductive due to the rapidly evolving nature of the product. Fortunately, an
abundance of training resources is available in the worldwide community. A curated set
of links goes a long way to help users organize and focus their training efforts, especially
for tool training, which focuses on the technology. All external links should be validated
by the COE for accuracy and credibility. It's a key opportunity for the COE to add value
because COE stakeholders are in an ideal position to understand the learning needs of
the community, and to identify and locate trusted sources of quality learning materials.
You'll find the greatest return on investment by creating custom training materials for
organization-specific processes, while relying on content produced by others for
everything else. It's also useful to have a short training class that focuses primarily on
topics like how to find documentation, getting help, and interacting with the
community.
Tip
One of the goals of training is to help users learn new skills while helping them
avoid bad habits. It can be a balancing act. For instance, you don't want to
overwhelm new users by adding in a lot of complexity and friction to a beginner-
level class for report creators. However, it's a great investment to make newer
content creators aware of things that could otherwise take them a while to figure
out. An ideal example is teaching the ability to use a live connection to report from
an existing semantic model. By teaching this concept at the earliest logical time,
you can save a less experienced creator from thinking they always need one semantic
model for every report (and encourage the good habit of reusing existing semantic
models across reports).
Some larger organizations experience continual employee transfers and turnover. Such
frequent change results in an increased need for a repeatable set of training resources.
Some training might be delivered more formally, such as classroom training with hands-
on labs. Other types of training are less formal, such as:
Lunch and learn presentations
Short how-to videos targeted to a specific goal
Curated set of online resources
Internal user group presentations
One-hour, one-week, or one-month challenges
Hackathon-style events
Important
Each type of user represents a different audience that has different training needs.
The COE will need to identify how best to meet the needs of each audience. For
instance, one audience might find a standard introductory Power BI Desktop class
overwhelming, whereas another will want more challenging information with depth
and detail for end-to-end solutions that include multiple Fabric workloads. If you
have a diverse population of Fabric content creators, consider creating personas
and tailoring the experience to an extent that's practical.
The completion of training can be a leading indicator for success with user adoption.
Some organizations add an element of fun by granting badges, like blue belt or black
belt, as users progress through the training programs.
Give some consideration to how you want to handle users at various stages of user
adoption. Training needs are very different for:
How the COE invests its time in creating and curating training materials will change over
time as adoption and maturity grows. You might also find over time that some
community champions want to run their own tailored set of training classes within their
functional business unit.
Consider using Microsoft Viva Learning , which is integrated into Microsoft Teams. It
includes content from sources such as Microsoft Learn and LinkedIn Learning . Custom
content produced by your organization can be included as well.
Tip
The Help and Support menu in the Fabric portal is customizable. When your
centralized location for training documentation is operational, update the tenant
setting in the Admin portal with the link. The link can then be accessed from the menu
when users select the Get Help option. Also, be sure to teach users about the Help
ribbon tab in Power BI Desktop. It includes links to guided learning, training videos,
documentation, and more.
Documentation
Concise, well-written documentation can be a significant help for users trying to get
things done. Your needs for documentation, and how it's delivered, will depend on how
Fabric is managed in your organization. For more information, see the Content
ownership and management article.
Certain aspects of Fabric tend to be managed by a centralized team, such as the COE.
The following types of documentation are helpful in these situations:
How to request a Power BI license (and whether there are requirements for
manager approval)
How to request a new capacity
How to request a new workspace
How to request a workspace be added to an existing capacity
How to request access to a gateway data source
How to request software installation
Tip
For certain activities that are repeated over and over, consider automating them
using Power Apps and Power Automate. In this case, your documentation will also
include how to access and use the Power Platform functionality.
Tip
When planning for a centralized portal, as described earlier in this article, plan how
to handle situations when guidance or governance policies need to be customized
for one or more business units.
There are also going to be some governance decisions that have been made and should
be documented, such as:
Important
One of the most helpful pieces of documentation you can publish for the
community is a description of the tenant settings, and the group memberships
required for each tenant setting. Users read about features and functionality online,
and sometimes find that it doesn't work for them. When they are able to quickly
look up your organization's tenant settings, it can save them from becoming
frustrated and attempting workarounds. Effective documentation can reduce the
number of help desk tickets that are submitted. It can also reduce the number of
people who need to be assigned the Fabric administrator role (who might have this
role solely for the purpose of viewing settings).
Over time, you might choose to allow certain types of documentation to be maintained
by the community if you have willing volunteers. In this case, you might want to
introduce an approval process for changes.
When you see questions repeatedly arise in the Q&A forum (as described in the User
support article), during office hours, or during lunch and learns, it's a great indicator that
creating new documentation might be appropriate. When the documentation exists, it
allows colleagues to reference it when needed. Documentation contributes to user
enablement and a self-sustaining community.
Tip
Providing Power BI template files for your community is a great way to:
Promote consistency.
Reduce the learning curve.
Show good examples and best practices.
Increase efficiency.
Power BI template files can improve efficiency and help people learn during the normal
course of their work. A few ways that template files are helpful include:
Providing templates not only saves your content creators time, it also helps them
move quickly beyond a blank page in an empty solution.
You can use Power BI project files with Power BI Desktop developer mode for:
Advanced editing and authoring (for example, in a code editor such as Visual
Studio Code).
Purposeful separation of semantic model and report items (unlike the .pbix or .pbit
files).
Enabling multiple content creators and developers to work on the same project
concurrently.
Integrating with source control (such as by using Fabric Git integration).
Using continuous integration and continuous delivery (CI/CD) techniques to
automate integration, testing and deployment of changes, or versions of content.
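As a sketch of how such CI/CD automation might begin, the hypothetical check below verifies that a Power BI project folder contains the definition files a pipeline expects before deploying it. The project name (Sales) and the exact file layout are assumptions; adjust the expected paths to match the structure your version of Power BI Desktop produces.

```python
from pathlib import Path

# Hypothetical expected layout of a Power BI project (.pbip) folder.
# These paths are assumptions; verify them against your own projects.
EXPECTED = [
    "Sales.pbip",
    "Sales.Report/definition.pbir",
    "Sales.SemanticModel/definition.pbism",
]

def missing_files(project_dir, expected=EXPECTED):
    """Return the expected files that are absent from project_dir."""
    root = Path(project_dir)
    return [rel for rel in expected if not (root / rel).exists()]
```

A pipeline step could call `missing_files` and fail the build when the returned list isn't empty, catching incomplete commits before they reach deployment.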
Note
Power BI includes capabilities such as .pbit template files and .pbip project files that
make it simple to share starter resources with authors. Other Fabric workloads
provide different approaches to content development and sharing. Having a set of
starter resources is important regardless of the items being shared. For example,
your portal might include a set of SQL scripts or notebooks that present tested
approaches to solve common problems.
" Consider what mentoring services the COE can support: Decide what types of
mentoring services the COE is capable of offering. Types can include office hours,
co-development projects, and best practices reviews.
" Communicate regularly about mentoring services: Decide how you will
communicate and advertise mentoring services, such as office hours, to the user
community.
" Establish a regular schedule for office hours: Ideally, hold office hours at least once
per week (depending on demand from users as well as staffing and scheduling
constraints).
" Decide what the expectations will be for office hours: Determine what the scope
of allowed topics or types of issues users can bring to office hours. Also, determine
how the queue of office hours requests will work, whether any information should
be submitted ahead of time, and whether any follow up afterwards can be
expected.
" Create a centralized portal: Ensure that you have a well-supported centralized hub
where users can easily find training materials, documentation, and resources. The
centralized portal should also provide links to other community resources such as
the Q&A forum and how to find help.
" Create documentation and resources: In the centralized portal, create, compile,
and publish useful documentation. Identify and promote the top 3-5 resources that
will be most useful to the user community.
" Update documentation and resources regularly: Ensure that content is reviewed
and updated on a regular basis. The objective is to ensure that the information
available in the portal is current and reliable.
" Compile a curated list of reputable training resources: Identify training resources
that target the training needs and interests of your user community. Post the list in
the centralized portal and create a schedule to review and validate the list.
" Consider whether custom in-house training will be useful: Identify whether
custom training courses, developed in-house, will be useful and worth the time
investment. Invest in creating content that's specific to the organization.
" Provide templates and projects: Determine how you'll use templates including
Power BI template files and Power BI project files. Include the resources in your
centralized portal, and in training materials.
" Create goals and metrics: Determine how you'll measure effectiveness of the
mentoring program. Create KPIs (key performance indicators) or OKRs (objectives
and key results) to validate that the COE's mentoring efforts strengthen the
community and its ability to provide self-service BI.
Questions to ask
Use questions like those found below to assess mentoring and user enablement.
Maturity levels
The following maturity levels will help you assess the current state of your mentoring
and user enablement.
Level: State of mentoring and user enablement

100: Initial
• Some documentation and resources exist. However, they're siloed and
inconsistent.
• Few users are aware of, or take advantage of, available resources.

200: Repeatable
• A centralized portal exists with a library of helpful documentation and resources.
• A curated list of training links and resources is available in the centralized
portal.
• Office hours are available so the user community can get assistance from the
COE.

300: Defined
• The centralized portal is the primary hub for community members to locate
training, documentation, and resources. The resources are commonly referenced
by champions and community members when supporting and learning from each
other.
• The COE's skills mentoring program is in place to assist users in the community in
various ways.

400: Capable
• Office hours have regular and active participation from all business units in the
organization.
• Best practices reviews from the COE are regularly requested by business units.
• Co-development projects are repeatedly executed with success by the COE and
members of business units.

500: Efficient
• Training, documentation, and resources are continually updated and improved by
the COE to ensure the community has current and reliable information.
• Measurable and tangible business value is gained from the mentoring program
by using KPIs or OKRs.
Next steps
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
community of practice.
Microsoft Fabric adoption roadmap:
Community of practice
Article • 11/14/2023
Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
A community of practice is a group of people with a common interest that interacts with,
and helps, each other on a voluntary basis. Using a tool such as Microsoft Fabric to
produce effective analytics is a common interest that can bring people together across
an organization.
Champions are the smallest group among creators and SMEs. Self-service content
creators and SMEs represent a larger number of people. Content consumers represent
the largest number of people in most organizations.
Note
All references to the Fabric community in this adoption series of articles refer to
internal users, unless explicitly stated otherwise. There's an active and vibrant
worldwide community of bloggers and presenters who produce a wealth of
knowledge about Fabric. However, internal users are the focus of this article.
For information about related topics including resources, documentation, and training
provided for the Fabric community, see the Mentoring and user enablement article.
Champions network
One important part of a community of practice is its champions. A champion is a self-
service content creator who works in a business unit that engages with the COE. A
champion is recognized by their peers as the go-to expert. A champion continually
builds and shares their knowledge even if it's not an official part of their job role.
Champions influence and help their colleagues in many ways including solution
development, learning, skills improvement, troubleshooting, and keeping up to date.
Have a deep interest in analytics being used effectively and adopted successfully
throughout the organization.
Possess strong technical skills as well as domain knowledge for their functional
business unit.
Have an inherent interest in getting involved and helping others.
Are early adopters who are enthusiastic about experimenting and learning.
Can effectively translate business needs into solutions.
Communicate well with colleagues.
Tip
Often, people aren't directly asked to become champions. Commonly, champions are
identified by the COE and recognized for the activities they're already doing, such as
frequently answering questions on an internal discussion channel or participating in
lunch and learn sessions.
Different approaches will be more effective for different organizations, and each
organization will find what works best for them as their maturity level increases.
Important
Someone very well might be acting in the role of a champion without even
knowing it, and without any formal recognition. The COE should always be on the
lookout for champions. COE members should actively monitor the discussion
channel to see who is particularly helpful. The COE should deliberately encourage
and support potential champions, and when appropriate, invite them into a
champions network to make the recognition formal.
Knowledge sharing
The overriding objective of a community of practice is to facilitate knowledge sharing
among colleagues and across organizational boundaries. There are many ways
knowledge sharing occurs. It could be during the normal course of work. Or, it could be
during a more structured activity, such as:
Activity: Description

Discussion channel: A Q&A forum where anyone in the community can post and view
messages. Often used for help and announcements. For more information, see the User
support article.

Lunch and learn sessions: Regularly scheduled sessions where someone presents a
short session about something they've learned or a solution they've created. The goal is
to get a

Office hours with the COE: Regularly scheduled times when COE experts are available
so the community can engage with them. Community users can receive assistance with
minimal process overhead. For more information, see the Mentoring and user
enablement article.

Internal blog posts or wiki posts: Short blog posts, usually covering technical how-to
topics.

Internal analytics user group: A subset of the community that chooses to meet as a
group on a regularly scheduled basis. User group members often take turns presenting
to each other to share knowledge and improve their presentation skills.

Book club: A subset of the community selects a book to read on a schedule. They
discuss what they've learned and share their thoughts with each other.
Tip
Inviting an external presenter can reduce the effort level and bring a fresh
viewpoint for learning and knowledge sharing.
Incentives
A lot of effort goes into forming and sustaining a successful community. It's
advantageous to everyone to empower and reward users who work for the benefit of
the community.
Contests with a small gift card or time off: For example, you might hold a
performance tuning event with the winner being the person who successfully
reduced the size of their data model the most.
Ranking based on help points: The more frequently someone participates in Q&A,
the higher their status rises on a leaderboard. This type of gamification
promotes healthy competition and excitement. By getting involved in more
conversations, the participant learns and grows personally in addition to helping
their colleagues.
Leadership communication: Reach out to a manager when someone goes above
and beyond so that their leader, who might not be active in the community, sees
the value that their staff member provides.
Rewarding champions
Different types of incentives will appeal to different types of people. Some community
members will be highly motivated by praise and feedback. Some will be inspired by
gamification and a bit of fun. Others will highly value the opportunity to improve their
level of knowledge.
More direct access to the COE: The ability to have connections in the COE is
valuable. It's depicted in the diagram shown earlier in this article.
Champion of the month: Publicly thank one of your champions for something
outstanding they did recently. It could be a fun tradition at the beginning of a
monthly lunch and learn.
A private experts discussion area: A private area for the champions to share ideas
and learn from each other is usually highly valued.
Specialized or deep dive information and training: Access to additional
information to help champions grow their skillsets (as well as help their colleagues)
will be appreciated. It could include attending advanced training classes or
conferences.
Communication plan
Communication with the community occurs through various channels. Common
communication channels include:
The most critical communication objectives include ensuring your community members
know that:
Types of communication
There are generally four types of communication to plan for:
Community resources
Resources for the internal community, such as documentation, templates, and training,
are critical for adoption success. For more information about resources, see the
Mentoring and user enablement article.
Checklist - Considerations and key actions you can take for the community of practice
follow.
" Clarify goals: Clarify what your specific goals are for cultivating a champions
network. Make sure these goals align with your overall data and BI strategy, and
that your executive sponsor is on board.
" Create a plan for the champions network: Although some aspects of a champions
network will always be informally led, determine to what extent the COE will
purposefully cultivate and support champion efforts throughout individual business
units. Consider how many champions is ideal for each functional business area.
Usually, 1-2 champions per area works well, but it can vary based on the size of the
team, the needs of the self-service community, and how the COE is structured.
" Decide on commitment level for champions: Decide what level of commitment
and expected time investment will be required of champions. Be aware that the
time investment will vary from person to person, and team to team due to different
responsibilities. Plan to clearly communicate expectations to people who are
interested in getting involved. Obtain manager approval when appropriate.
" Decide how to identify champions: Determine how you will respond to requests to
become a champion, and how the COE will seek out champions. Decide if you will
openly encourage interested employees to self-identify as a champion and ask to
learn more (less common). Or, whether the COE will observe efforts and extend a
private invitation (more common).
" Determine how members of the champions network will be managed: One
excellent option for managing who the champions are is with a security group.
Consider:
How you will communicate with the champions network (for example, in a Teams
channel, a Yammer group, and/or an email distribution list).
How the champions network will communicate and collaborate with each other
directly (across organizational boundaries).
Whether a private and exclusive discussion forum for champions and COE
members is appropriate.
" Plan resources for champions: Ensure members of the champions network have
the resources they need, including:
Direct access to COE members.
Influence on data policies being implemented (for example, requirements for a
semantic model certification policy).
Influence on the creation of best practices and guidance (for example,
recommendations for accessing a specific source system).
" Involve champions: Actively involve certain champions as satellite members of the
COE. For more information about ways to structure the COE, see the Center of
Excellence article.
" Create a feedback loop for champions: Ensure that members of the champions
network can easily provide information or submit suggestions to the COE.
" Routinely provide recognition and incentives for champions: Not only is praise an
effective motivator, but the act of sharing examples of successful efforts can
motivate and inspire others.
Introduce incentives:
" Identify incentives for champions: Consider what type of incentives you could offer
to members of your champions network.
" Identify incentives for community members: Consider what type of incentives you
could offer to your broader internal community.
Improve communications:
Questions to ask
Use questions like those found below to assess the community of practice.
Maturity levels
The following maturity levels will help you assess the current state of your community of
practice.
100: Initial • Some self-service content creators are doing great work throughout the
organization. However, their efforts aren't recognized.
• Goals for transparent communication with the user community are defined.
400: Capable • Champions are identified for all business units. They actively support colleagues
in their self-service efforts.
500: Efficient • Bidirectional feedback loops exist between the champions network and the COE.
• Automation is in place when it adds direct value to the user experience (for
example, automatic access to a group that provides community resources).
Next steps
In the next article in the Microsoft Fabric adoption roadmap series, learn about user
support.
Microsoft Fabric adoption roadmap:
User support
Article • 11/14/2023
7 Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
This article addresses user support. It focuses primarily on the resolution of issues.
The first sections of this article focus on user support aspects you have control over
internally within your organization. The final topics focus on external resources that are
available.
The following diagram shows some common types of user support that organizations
employ successfully:
The six types of user support shown in the above diagram include:
Intra-team support (internal) is very informal. Support occurs when team members learn
from each other during the natural course of their job.
Internal community support (internal) occurs when community members help each other,
typically through an internal discussion channel.
Help desk support (internal) handles formal support issues and requests.
Extended support (internal) involves handling complex issues escalated by the help desk.
Microsoft support (external) includes support for licensed users and Fabric administrators.
It also includes comprehensive documentation.
Community support (external) includes the worldwide community of experts who publish
blog posts, articles, and videos.
In some organizations, intra-team and internal community support are most relevant for
self-service data and business intelligence (BI), where content is owned and managed by
creators and owners in decentralized business units. Conversely, the help desk and
extended support are reserved for technical issues and enterprise data and BI, where
content is owned and managed by a centralized BI team or Center of Excellence. In some
organizations, all four types of support could be relevant for any type of content.
Tip
Each of the six types of user support introduced above is described in further detail in
this article.
Intra-team support
Intra-team support refers to when team members learn from and help each other during
their daily work. Self-service content creators who emerge as your champions tend to
take on this type of informal support role voluntarily because they have an intrinsic
desire to help. Although it's an informal support mode, it shouldn't be undervalued.
Some estimates indicate that a large percentage of learning at work is peer learning,
which is particularly helpful for analysts who are creating domain-specific analytics
solutions.
7 Note
Intra-team support does not work well for individuals who are the only data analyst
within a department. It's also not effective for those who don't have very many
connections yet in their organization. When there aren't any close colleagues to
depend on, other types of support, as described in this article, become more
important.
Tip
Be sure to cultivate multiple experts in the more difficult topics like T-SQL,
Python, Data Analysis eXpressions (DAX) and the Power Query M formula
language. When a community member becomes a recognized expert, they
could become overburdened with too many requests for help.
A greater number of community members might readily answer certain types
of questions (for example, report visualizations), whereas a smaller number of
members will answer others (for example, complex T-SQL or DAX). It's
important for the COE to allow the community a chance to respond yet also
be willing to promptly handle unanswered questions. If users repeatedly ask
questions and don't receive an answer, it will significantly hinder growth of
the community. In this case, a user is likely to leave and never return if they
don't receive any responses to their questions.
An internal community discussion channel is commonly set up as a Teams channel or a
Yammer group. The technology chosen should reflect where users already work, so that
the activities occur within their natural workflow.
One benefit of an internal discussion channel is that responses can come from people
that the original requester has never met before. In larger organizations, a community of
practice brings people together based on a common interest. It can offer diverse
perspectives for getting help and learning in general.
Use of an internal community discussion channel allows the Center of Excellence (COE)
to monitor the kind of questions people are asking. It's one way the COE can understand
the issues users are experiencing (commonly related to content creation, but it could
also be related to consuming content).
Monitoring the discussion channel can also reveal additional analytics experts and
potential champions who were previously unknown to the COE.
) Important
It's a best practice to continually identify emerging champions, and to engage with
them to make sure they're equipped to support their colleagues. As described in
the Community of practice article, the COE should actively monitor the discussion
channel to see who is being helpful. The COE should deliberately encourage and
support community members. When appropriate, invite them into the champions
network.
Another key benefit of a discussion channel is that it's searchable, which allows other
people to discover the information. It is, however, a change of habit for people to ask
questions in an open forum rather than private messages or email. Be sensitive to the
fact that some individuals aren't comfortable asking questions in such a public way. It
openly acknowledges what they don't know, which might be embarrassing. This
reluctance might reduce over time by promoting a friendly, encouraging, and helpful
discussion channel.
Tip
You might be tempted to create a bot to handle some of the most common,
straightforward questions from the community. A bot can work for uncomplicated
questions such as "How do I request a license?" or "How do I request a
workspace?" Before taking this approach, consider if there are enough routine and
predictable questions that would make the user experience better rather than
worse. Often, a well-created FAQ (frequently asked questions) works better, and it's
faster to develop and easier to maintain.
There are also certain technical issues that can't be fully resolved without IT involvement,
like software installation and upgrade requests when machines are IT-managed.
Busy help desk personnel are usually dedicated to supporting multiple technologies. For
this reason, the easiest types of issues to support are those which have a clear resolution
and can be documented in a knowledgebase. For instance, software installation
prerequisites or requirements to get a license.
Some organizations ask the help desk to handle only very simple break-fix issues. Other
organizations have the help desk get involved with anything that is repeatable, like new
workspace requests, managing gateway data sources, or requesting a new capacity.
) Important
Your Fabric governance decisions will directly impact the volume of help desk
requests. For example, if you choose to limit workspace creation permissions in
the tenant settings, it will result in users submitting help desk tickets. While it's a
legitimate decision to make, you must be prepared to satisfy the request very
quickly. Respond to this type of request within 1-4 hours, if possible. If you delay
too long, users will use what they already have or find a way to work around your
requirements. That might not be the ideal scenario. Promptness is critical for certain
help desk requests. Consider that automation by using Power Apps and Power
Automate can help make some processes more efficient. For more information, see
Tenant-level workspace planning.
Over time, troubleshooting and problem resolution skills become more effective as help
desk personnel expand their knowledgebase and experience with supporting Fabric. The
best help desk personnel are those who have a good grasp of what users need to
accomplish.
Tip
Purely technical issues, for example data refresh failure or the need to add a new
user to a gateway data source, usually involve straightforward responses
associated with a service-level agreement (SLA). For instance, there could be an SLA
to respond to blocking issues within one hour and resolve them within eight hours.
It's generally more difficult to define SLAs for troubleshooting issues, like data
discrepancies.
Extended support
Since the COE has deep insight into how Fabric is used throughout the organization,
they're a great option for extended support should a complex issue arise. Involving the
COE in the support process should be by an escalation path.
Managing requests as purely an escalation path from the help desk gets difficult to
enforce since COE members are often well-known to business users. To encourage the
habit of going through the proper channels, COE members should redirect users to
submit a help desk ticket. It will also improve the data quality for analyzing help desk
requests.
Microsoft support
In addition to the internal user support approaches discussed in this article, there are
valuable external support options directly available to users and Fabric administrators
that shouldn't be overlooked.
Microsoft documentation
Check the Fabric support website for high-priority issues that broadly affect all
customers. Global Microsoft 365 administrators have access to additional support issue
details within the Microsoft 365 portal.
Refer to the comprehensive Fabric documentation. It's an authoritative resource that can
aid troubleshooting and searching for information. You can prioritize results from the
documentation site. For example, enter a site-targeted search request into your web
search engine, like "power bi gateway site:learn.microsoft.com".
Power BI Pro and Premium Per User end-user support
Licensed users are eligible to log a support ticket with Microsoft.
Tip
Make it clear to your internal user community whether you prefer technical issues
to be reported to the internal help desk. If your help desk is equipped to handle the
workload, having a centralized internal area collect user issues can provide a
superior user experience versus every user trying to resolve issues on their own.
Having visibility and analyzing support issues is also helpful for the COE.
Administrator support
There are several support options available for Fabric administrators.
For customers who have a Microsoft Unified Support contract, consider granting help
desk and COE members access to the Microsoft Services Hub . One advantage of the
Microsoft Services Hub is that your help desk and COE members can be set up to
submit and view support requests.
Community documentation
The Fabric global community is vibrant. Every day, there are a great number of Fabric
blog posts, articles, webinars, and videos published. When relying on community
information for troubleshooting, watch out for:
How recent the information is. Try to verify when it was published or last updated.
Whether the situation and context of the solution found online truly fits your
circumstance.
The credibility of the information being presented. Rely on reputable blogs and
sites.
Checklist - Considerations and key actions you can take for user support follow.
" Determine help desk responsibilities: Decide what the initial scope of Fabric
support topics that the help desk will handle.
" Assess the readiness level: Determine whether your help desk is prepared to
handle Fabric support. Identify whether there are readiness gaps to be addressed.
" Arrange for additional training: Conduct knowledge transfer sessions or training
sessions to prepare the help desk staff.
" Update the help desk knowledgebase: Include known questions and answers in a
searchable knowledgebase. Ensure someone is responsible for regular updates to
the knowledgebase to reflect new and enhanced features over time.
" Set up a ticket tracking system: Ensure a good system is in place to track requests
submitted to the help desk.
" Decide whether anyone will be on-call for any issues related to Fabric: If
appropriate, ensure the expectations for 24/7 support are clear.
" Determine what SLAs will exist: When a specific service level agreement (SLA)
exists, ensure that expectations for response and resolution are clearly documented
and communicated.
" Be prepared to act quickly: Be prepared to address specific common issues
extremely quickly. Slow support response will result in users finding workarounds.
" Determine how escalated support will work: Decide what the escalation path will
be for requests the help desk cannot directly handle. Ensure that the COE (or
equivalent personnel) is prepared to step in when needed. Clearly define where
help desk responsibilities end, and where COE extended support responsibilities
begin.
" Encourage collaboration between COE and system administrators: Ensure that
COE members and Fabric administrators have a direct escalation path to reach
global administrators for Microsoft 365 and Azure. It's critical to have a
communication channel when a widespread issue arises that's beyond the scope of
Fabric.
" Create a feedback loop from the COE back to the help desk: When the COE learns
of new information, the IT knowledgebase should be updated. The goal is for the
primary help desk personnel to continually become better equipped at handling
more issues in the future.
" Create a feedback loop from the help desk to the COE: When support personnel
observe redundancies or inefficiencies, they can communicate that information to
the COE, who might choose to improve the knowledgebase or get involved
(particularly if it relates to governance or security).
Questions to ask
Who is responsible for supporting enterprise data and BI solutions? What about
self-service solutions?
How are the business impact and urgency of issues identified to effectively detect
and prioritize critical issues?
Is there a clear process for business users to report issues with data and BI
solutions? How does this differ between enterprise and self-service solutions?
What are the escalation paths?
What types of issues do content creators and consumers typically experience? For
example, do they experience data quality issues, performance issues, access issues,
and others?
Are any issues closed without them being resolved? Are there "known issues" in
data items or reports today?
Is a process in place for data asset owners to escalate issues with self-service BI
solutions to central teams like the COE?
How frequent are issues in the data and existing solutions? What proportion of
these issues are found before they impact business end users?
How long does it typically take to resolve issues? Is this timing sufficient for
business users?
What are examples of recent issues and the concrete impact on the business?
Do enterprise teams and content creators know how to report Fabric issues to
Microsoft? Can enterprise teams effectively leverage community resources to
unblock critical issues?
U Caution
When assessing user support and describing risks or issues, be careful to use
neutral language that doesn't place blame on individuals or teams. Ensure
everyone's perspective is fairly represented in an assessment. Focus on objective
facts to accurately understand and describe the context.
Maturity levels
The following maturity levels will help you assess the current state of your Fabric user
support.
100: Initial • Individual business units find effective ways of supporting each other.
However, the tactics and practices are siloed and not consistently applied.
200: Repeatable • The COE actively encourages intra-team support and growth of the
champions network.
• The internal discussion channel gains traction. It's become known as the default
place for questions and discussions.
• The help desk handles a small number of the most common technical support issues.
300: Defined • The internal discussion channel is popular and largely self-sustaining.
The COE actively monitors and manages the discussion channel to ensure that all
questions are answered quickly and correctly.
400: Capable • The help desk is fully trained and prepared to handle a broader number
of known and expected technical support issues.
• SLAs are in place to define help desk support expectations, including extended
support. The expectations are documented and communicated so they're clear to
everyone involved.
500: Efficient • Bidirectional feedback loops exist between the help desk and the COE.
• Automation is in place to allow the help desk to react faster and reduce errors
(for example, use of APIs and scripts).
Next steps
In the next article in the Microsoft Fabric adoption roadmap series, learn about system
oversight and administration activities.
Microsoft Fabric adoption roadmap:
System oversight
Article • 11/24/2023
7 Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
) Important
Your organizational data culture objectives provide direction for your governance
decisions, which in turn dictate how Fabric administration activities take place and
by whom.
System oversight is a broad and deep topic. The goal of this article is to introduce some
of the most important considerations and actions to help you become successful with
your organizational adoption objectives.
Fabric administrators
The Fabric administrator role is a defined role in Microsoft 365, which delegates a subset
of management activities. Global Microsoft 365 administrators are implicitly Fabric
administrators. Power Platform administrators are also implicitly Fabric administrators.
Tip
The best type of person to serve as a Fabric administrator is one who has enough
knowledge about the tools and workloads to understand what self-service users
need to accomplish. With this understanding, the administrator can balance user
empowerment and governance.
In addition to the Fabric administrator, there are other roles which use the term
administrator. The following table describes the roles that are commonly and regularly
used.
Fabric administrator (scope: tenant): Manages tenant settings and other settings in the
Fabric portal. All general references to administrator in this article refer to this
type of administrator.
Capacity administrator (scope: one capacity): Manages workspaces and workloads, and
monitors the health of a Fabric capacity.
Data gateway administrator (scope: one gateway): Manages gateway data source
configuration, credentials, and user assignments. Might also handle gateway software
updates (or collaborate with the infrastructure team on updates).
The Fabric ecosystem of workloads is broad and deep. There are many ways that Fabric
integrates with other systems and platforms. From time to time, it'll be necessary to
work with other administrators and IT professionals. For more information, see
Collaborate with other administrators.
The remainder of this article provides an overview of the most common activities that a
Fabric administrator does. It focuses on activities that are important to carry out
effectively when taking a strategic approach to organizational adoption.
Service management
Overseeing the tenant is a crucial aspect to ensure that all users have a good experience
with Fabric. A few of the key governance responsibilities of a Fabric administrator
include:
Tenant settings: Control which Fabric features and capabilities are enabled, and
for which users in your organization.
Domains: Group together two or more workspaces that have similar
characteristics.
Workspaces: Review and manage workspaces in the tenant.
Embed codes: Govern which reports have been published publicly on the internet.
Organizational visuals: Register and manage organizational visuals.
Azure connections: Integrate with Azure services to provide additional
functionality.
For more information, see Tenant administration.
How will users request access to new tools? Will access to licenses, data, and
training be available to help users use tools effectively?
How will content consumers view content that's been published by others?
How will content creators develop, manage, and publish content? What's your
criteria for deciding which tools and applications are appropriate for which use
cases?
How will you install and set up tools? Does that include related prerequisites and
data connectivity components?
How will you manage ongoing updates for tools and applications?
Architecture
In the context of Fabric, architecture relates to data architecture, capacity management,
and data gateway architecture and management.
Data architecture
Data architecture refers to the principles, practices, and methodologies that govern and
define what data is collected, and how it's ingested, stored, managed, integrated,
modeled, and used.
There are many data architecture decisions to make. Frequently the COE engages in
data architecture design and planning. It's common for administrators to get involved as
well, especially when they manage databases or Azure infrastructure.
) Important
It's important for administrators to become fully aware of Fabric's technical capabilities
—as well as the needs and goals of their stakeholders—before they make architectural
decisions.
Tip
Get into the good habit of completing a technical proof of concept (POC) to test
out assumptions and ideas. Some organizations also call them micro-projects when
the goal is to deliver a small unit of work. The goal of a POC is to address
unknowns and reduce risk as early as possible. A POC doesn't have to be
throwaway work, but it should be narrow in scope. Best practices reviews, as
described in the Mentoring and user enablement article, are another useful way to
help content creators with important architectural decisions.
Capacity management
Capacity includes features and capabilities to deliver analytics solutions at scale. There
are two types of Fabric organizational licenses: Premium per User (PPU) and capacity.
There are several types of capacity licenses. The type of capacity license determines
which Fabric workloads are supported.
The use of capacity can play a significant role in your strategy for creating, managing,
publishing, and distributing content. A few of the top reasons to invest in capacity
include:
The above list isn't all-inclusive. For a complete list, see Power BI Premium features.
Autoscale
Autoscale is intended to handle occasional or unexpected bursts in capacity usage
levels. Autoscale can respond to these bursts by automatically increasing CPU resources
to support the increased workload.
Automated scaling up reduces the risk of performance and user experience challenges
in exchange for a financial impact. If the capacity isn't well-managed, autoscale might
trigger more often than expected. In this case, the metrics app can help you to
determine underlying issues and do capacity planning.
Be aware that workspace administrators can also assign a workspace to PPU if they have a
PPU license. However, all other workspace users must also have a PPU license to
collaborate on, or view, Power BI content in the workspace. Other Fabric workloads can't
be included in a workspace assigned to PPU.
The limits per capacity are lower. The maximum memory size allowed for semantic
models isn't the entire P3 capacity node size that was purchased. Rather, it's the
assigned capacity size where the semantic model is hosted.
It's more likely one of the smaller capacities will need to be scaled up at some
point in time.
There are more capacities to manage in the tenant.
7 Note
Resources for Power BI Premium per Capacity are referred to as v-cores. However, a
Fabric capacity refers to them as capacity units (CUs). The scale for CUs and v-cores
is different for each SKU. For more information, see the Fabric licensing
documentation.
Tip
The decision of who can install gateway software is a governance decision. For
most organizations, use of the data gateway in standard mode, or a virtual network
data gateway, should be strongly encouraged. They're far more scalable,
manageable, and auditable than data gateways in personal mode.
Decentralized gateway management works best when it's a joint effort as follows.
Managed by IT:
User licenses
Every user needs a commercial license, which is integrated with a Microsoft Entra
identity. The user license could be Free, Pro, or Premium Per User (PPU).
7 Note
Although each user requires a license, a Pro or PPU license is only required to share
Power BI content. Users with a free license can create and share Fabric content
other than Power BI items.
Self-service purchasing
An important governance decision relates to what extent self-service purchasing will be
allowed or encouraged.
There are serious cost concerns that would make it unlikely to grant full licenses at
the end of the trial period.
Prerequisites are required for obtaining a license (such as approval, justification, or
a training requirement). It's not sufficient to meet this requirement during the trial
period.
There's a valid need, such as a regulatory requirement, to control access to the
Fabric service closely.
Tip
Don't introduce too many barriers to obtaining a Fabric license. Users who need to
get work done will find a way, and that way might involve workarounds that aren't
ideal. For instance, without a license to use Fabric, people might rely far too much
on sharing files on a file system or via email when significantly better approaches
are available.
Cost management
Managing and optimizing the cost of cloud services, like Fabric, is an important activity.
Here are several activities you can consider.
Analyze who is using—and, more to the point, not using—their allocated Fabric
licenses and make necessary adjustments. Fabric usage is analyzed using the
activity log.
Analyze the cost effectiveness of capacity versus Premium Per User. In addition to
comparing features, perform a cost/benefit analysis to determine whether capacity
licensing is more cost-effective when there's a large number of consumers.
Carefully monitor and manage Fabric capacity. Understanding usage patterns over
time will allow you to predict when to purchase more capacity. For example, you
might choose to scale up a single capacity from a P1 to P2, or scale out from one
P1 capacity to two P1 capacities.
If there are occasional spikes in the level of usage, use of autoscale with Fabric is
recommended to ensure the user experience isn't interrupted. Autoscale will scale
up capacity resources for 24 hours, then scale them back down to normal levels (if
sustained activity isn't present). Manage autoscale cost by constraining the
maximum number of v-cores, and/or with spending limits set in Azure. Due to the
pricing model, autoscale is best suited to handle occasional unplanned increases in
usage.
For Azure data sources, co-locate them in the same region as your Fabric tenant
whenever possible to avoid incurring Azure egress charges. Data egress charges are
minimal, but at scale they can add up to considerable unplanned costs.
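To make the first cost-management activity above concrete, here's a minimal sketch of scanning exported activity log events for licensed users with no recent activity. The event shape (a list of dicts with "UserId" and "CreationTime" fields) and the 90-day threshold are illustrative assumptions; adapt them to the actual export you get from the activity log.

```python
from datetime import datetime, timedelta

def find_inactive_users(licensed_users, events, as_of, days=90):
    """Return licensed users with no activity in the last `days` days.

    `events` is assumed to be a list of dicts with "UserId" and
    "CreationTime" (ISO 8601) fields, as exported from the activity log.
    """
    cutoff = as_of - timedelta(days=days)
    # Collect the set of users who produced at least one event after the cutoff.
    active = {
        e["UserId"]
        for e in events
        if datetime.fromisoformat(e["CreationTime"]) >= cutoff
    }
    # Anyone licensed but not in the active set is a candidate for adjustment.
    return sorted(set(licensed_users) - active)

# Example: one user active recently, one with no recorded activity.
users = ["alice@contoso.com", "bob@contoso.com"]
events = [{"UserId": "alice@contoso.com", "CreationTime": "2024-01-10T08:30:00"}]
inactive = find_inactive_users(users, events, as_of=datetime(2024, 1, 15))
# inactive -> ["bob@contoso.com"]
```

The result feeds the "make necessary adjustments" step: reclaim or downgrade licenses for users who show no activity over the chosen window.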
The Power BI security whitepaper is an excellent resource for understanding the breadth
of considerations, including aspects that Microsoft manages. This section will introduce
several topics that customers are responsible for managing.
User responsibilities
Some organizations ask Fabric users to accept a self-service user acknowledgment. It's a
document that explains the user's responsibilities and expectations for safeguarding
organizational data.
One way to automate its implementation is with a Microsoft Entra terms of use policy.
The user is required to view and agree to the policy before they're permitted to visit the
Fabric portal for the first time. You can also require it to be acknowledged on a recurring
basis, like an annual renewal.
Data security
In a cloud shared responsibility model, securing the data is always the responsibility of
the customer. With a self-service data platform, self-service content creators have
responsibility for properly securing the content that they share with colleagues.
The COE should provide documentation and training where relevant to assist content
creators with best practices (particularly situations for dealing with ultra-sensitive data).
Administrators can help by following best practices themselves. They can also raise
concerns when they notice issues while managing workspaces, auditing user activities,
or managing gateway credentials and users. There
are also several tenant settings that are usually restricted except for a few users (for
instance, the ability to publish to web or the ability to publish apps to the entire
organization).
External user access is controlled by tenant settings and certain Microsoft Entra ID
settings. For details of external user considerations, review the Distribute Power BI
content to external guest users using Microsoft Entra B2B whitepaper.
Data residency
For organizations with requirements to store data within a geographic region, Fabric
capacity can be set for a specific region that's different from the home region of the
Fabric tenant.
Encryption keys
Microsoft handles encryption of data at rest in Microsoft data centers with transparent
server-side encryption and auto-rotation of certificates. For customers with regulatory
requirements to manage the Premium encryption key themselves, Premium capacity can
be configured to use Azure Key Vault. Using customer-managed keys—also known as
bring-your-own-key or BYOK—is a precaution to ensure that, in the event of a human
error by a service operator, customer data can't be exposed.
Be aware that Premium Per User (PPU) only supports BYOK when it's enabled for the
entire Fabric tenant.
There are different ways to approach auditing and monitoring depending on your role
and your objectives. The following articles describe various considerations and planning
activities.
REST APIs
The Power BI REST APIs and the Fabric REST APIs provide a wealth of information about
your Fabric tenant. Retrieving data by using the REST APIs should play an important role
in managing and governing a Fabric implementation. For more information about
planning for the use of REST APIs for auditing, see Tenant-level auditing.
You can retrieve auditing data to build an auditing solution, manage content
programmatically, or increase the efficiency of routine actions. The following table
presents some actions you can perform with the REST APIs.
Audit content shared to the entire organization: REST API to check use of widely shared
links.
Manage gateway data sources: REST API to update credentials for a gateway data source.
Programmatically retrieve a query result from a semantic model: REST API to run a DAX
query against a semantic model.
Tip
There are many other Power BI REST APIs. For a complete list, see Using the Power
BI REST APIs.
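As a sketch of the last action in the table, the code below shows how a DAX query request against a semantic model could be constructed and its JSON response flattened. It assumes the Power BI "Datasets - Execute Queries" REST endpoint and a Microsoft Entra access token acquired separately; the helper names are illustrative, not part of any official SDK.

```python
# Sketch: build a "Datasets - Execute Queries" REST call and parse its result.
# Helper names are illustrative; the token acquisition (for example, via MSAL)
# and the HTTP POST itself are intentionally left out of this sketch.

def build_execute_queries_request(dataset_id: str, dax_query: str):
    """Return the URL and JSON body for the executeQueries endpoint."""
    url = (
        "https://api.powerbi.com/v1.0/myorg/"
        f"datasets/{dataset_id}/executeQueries"
    )
    body = {"queries": [{"query": dax_query}]}
    return url, body

def parse_query_rows(response_json: dict) -> list:
    """Flatten the nested results/tables structure into a list of row dicts."""
    rows = []
    for result in response_json.get("results", []):
        for table in result.get("tables", []):
            rows.extend(table.get("rows", []))
    return rows

url, body = build_execute_queries_request(
    "<dataset-id>", "EVALUATE VALUES('Date'[Year])"
)
```

An actual call would POST `body` to `url` with an `Authorization: Bearer <token>` header; the caller needs build permission on the semantic model, and the endpoint is subject to request limits.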
Planning for change
Every month, Microsoft releases new Fabric features and capabilities. To be effective, it's
crucial that everyone involved with system oversight stays current. For more
information, see Tenant-level monitoring.
) Important
Don't underestimate the importance of staying current. If you get a few months
behind on announcements, it can become difficult to properly manage Fabric and
support your users.
Checklist - Considerations and key actions you can take for system oversight follow.
" Review tenant settings: Conduct a review of all tenant settings to ensure they're
aligned with data culture objectives and governance guidelines and policies. Verify
which groups are assigned for each setting.
" Document the tenant settings: Create documentation of your tenant settings for
the internal Fabric community and post it in the centralized portal. Include which
groups a user would need to request to be able to use a feature. Use the Get Tenant
Settings REST API to make the process more efficient, and to create snapshots of
the settings on a regular basis.
" Customize the Get Help links: When user resources are established, as described in
the Mentoring and user enablement article, update the tenant setting to customize
the links under the Get Help menu option. It will direct users to your
documentation, community, and help.
" Create a consistent onboarding process: Review your process for how onboarding
of new content creators is handled. Determine if new requests for software, such as
Power BI Desktop, and user licenses (Free, Pro, or PPU) can be handled together. It
can simplify onboarding since new content creators won't always know what to ask
for.
" Handle user machine updates: Ensure an automated process is in place to install
and update software, drivers, and settings to ensure all users have the same version.
" Assess what your end-to-end data architecture looks like: Make sure you're clear
on:
How Fabric is currently used by the different business units in your organization
versus how you want Fabric to be used. Determine if there's a gap.
If there are any risks that should be addressed.
If there are any high-maintenance situations to be addressed.
What data sources are important for Fabric users, and how they're documented
and discovered.
" Review existing data gateways: Find out what gateways are being used throughout
your organization. Verify that gateway administrators and users are set correctly.
Verify who is supporting each gateway, and that there's a reliable process in place
to keep the gateway servers up to date.
" Verify use of personal gateways: Check the number of personal gateways that are
in use, and by whom. If there's significant usage, take steps to move towards use of
the standard mode gateway.
" Review the process to request a user license: Clarify what the process is, including
any prerequisites, for users to obtain a license. Determine whether there are
improvements to be made to the process.
" Determine how to handle self-service license purchasing: Clarify whether self-
service licensing purchasing is enabled. Update the settings if they don't match
your intentions for how licenses can be purchased.
" Confirm how user trials are handled: Verify user license trials are enabled or
disabled. Be aware that all user trials are Premium Per User. They apply to Free
licensed users signing up for a trial, and Pro users signing up for a Premium Per
User trial.
" Clarify exactly what the expectations are for data protection: Ensure the
expectations for data protection, such as how to use sensitivity labels, are
documented and communicated to users.
" Determine how to handle external users: Understand and document the
organizational policies around sharing Fabric content with external users. Ensure
that settings in Fabric support your policies for external users.
" Set up monitoring: Investigate the use of Microsoft Defender for Cloud Apps to
monitor user behavior and activities in Fabric.
" Plan for auditing needs: Collect and document the key business requirements for
an auditing solution. Consider your priorities for auditing and monitoring. Make key
decisions related to the type of auditing solution, permissions, technologies to be
used, and data needs. Consult with IT to clarify what auditing processes currently
exist, and what preferences or requirements exist for building a new solution.
" Consider roles and responsibilities: Identify which teams will be involved in
building an auditing solution, as well as the ongoing analysis of the auditing data.
" Extract and store user activity data: If you aren't currently extracting and storing
the raw data, begin retrieving user activity data.
" Extract and store snapshots of tenant inventory data: Begin retrieving metadata to
build a tenant inventory, which describes all workspaces and items.
" Extract and store snapshots of users and groups data: Begin retrieving metadata
about users, groups, and service principals.
" Create a curated data model: Perform data cleansing and transformations of the
raw data to create a curated data model that'll support analytical reporting for your
auditing solution.
" Analyze auditing data and act on the results: Create analytic reports to analyze the
curated auditing data. Clarify what actions are expected to be taken, by whom, and
when.
" Include additional auditing data: Over time, determine whether other auditing
data would be helpful to complement the activity log data, such as security data.
" Plan for your use of the REST APIs: Consider what data would be most useful to
retrieve from the Power BI REST APIs and the Fabric REST APIs.
" Conduct a proof of concept: Do a small proof of concept to validate data needs,
technology choices, and permissions.
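Several checklist items above involve capturing metadata snapshots on a schedule. As a minimal sketch of the storage side, the function below writes a dated JSON snapshot of tenant settings; retrieving the settings themselves requires an authenticated call to the Get Tenant Settings REST API, and the folder and file names here are assumptions.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def save_snapshot(settings, out_dir="tenant-settings"):
    """Write one timestamped JSON snapshot of tenant settings.

    `settings` is the already-parsed response of the Get Tenant
    Settings REST API; the authenticated retrieval isn't shown.
    """
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    path = Path(out_dir) / f"settings-{stamp}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(settings, indent=2))
    return path

# Illustrative payload only; the real response has more fields.
path = save_snapshot(
    {"tenantSettings": [{"settingName": "ExportToExcel",
                         "enabled": True}]})
```

Running this daily (for example, from a scheduled pipeline) builds up the regular snapshots the checklist recommends.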
Questions to ask
Are there atypical administration settings enabled or disabled? For example, is the entire organization allowed to publish to the web? (We strongly advise restricting this feature.)
Do administration settings and policies align with, or inhibit, the way users work?
Is there a process in place to critically appraise new settings and decide how to set
them? Alternatively, are only the most restrictive settings set as a precaution?
Are Microsoft Entra ID security groups used to manage who can do what?
Do central teams have visibility of effective auditing and monitoring tools?
Do monitoring solutions depict information about the data assets, user activities,
or both?
Are auditing and monitoring tools actionable? Are there clear thresholds and
actions set, or do monitoring reports simply describe what's in the data estate?
Is Azure Log Analytics used (or planned to be used) for detailed monitoring of
Fabric capacities? Are the potential benefits and cost of Azure Log Analytics clear
to decision makers?
Are sensitivity labels and data loss prevention policies used? Are the potential
benefits and cost of these clear to decision makers?
Do administrators know the current number of licenses and licensing cost? What
proportion of the total BI spend goes to Fabric capacity, and to Pro and PPU
licenses? If the organization is only using Pro licenses for Power BI content, could
the number of users and usage patterns warrant a cost-effective switch to Power BI
Premium or Fabric capacity?
Maturity levels
The following maturity levels will help you assess the current state of your Power BI
system oversight.
Level — State of system oversight

100: Initial
• Tenant settings are configured independently by one or more administrators based on their best judgment.
• Fabric activity logs are unused, or selectively used for tactical purposes.

200: Repeatable
• The tenant settings purposefully align with established governance guidelines and policies. All tenant settings are reviewed regularly.
• A well-defined process exists for users to request licenses and software. Request forms are easy for users to find. Self-service purchasing settings are specified.
• Sensitivity labels are configured in Microsoft 365. However, use of labels remains inconsistent. The advantages of data protection aren't well understood by users.

300: Defined
• The tenant settings are fully documented in the centralized portal for users to reference, including how to request access to the correct groups.
• An automated process is in place to export Fabric activity log and API data to a secure location for reporting and auditing.

400: Capable
• Administrators work closely with the COE and governance teams to provide oversight of Fabric. A balance of user empowerment and governance is successfully achieved.
• Automated policies are set up and actively monitored in Microsoft Defender for Cloud Apps for data loss prevention.
• Activity log and API data is actively analyzed to monitor and audit Fabric activities. Proactive action is taken based on the data.

500: Efficient
• The Fabric administrators work closely with the COE to actively stay current. Blog posts and release plans from the Fabric product team are reviewed frequently to plan for upcoming changes.
• Regular cost management analysis is done to ensure user needs are met in a cost-effective way.
• The Fabric REST API is used to retrieve tenant setting values on a regular basis.
• Activity log and API data is actively used to inform and improve adoption and governance efforts.
Next steps
For more information about system oversight and Fabric administration, see the
following resources.
In the next article in the Microsoft Fabric adoption roadmap series, learn about effective
change management.
Microsoft Fabric adoption roadmap:
Change management
Article • 11/14/2023
Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
When working toward improved data and business intelligence (BI) adoption, you
should plan for effective change management. In the context of data and BI, change
management includes procedures that address the impact of change for people in an
organization. These procedures safeguard against disruption and productivity loss due
to changes in solutions or processes.
Effective change management is important because it:
Helps content creators and consumers use analytics more effectively and sooner.
Limits redundancy in data, analytical tools, and solutions.
Reduces the likelihood of risk-creating behaviors that affect shared resources (like
Fabric capacity) or organizational compliance (like data security and privacy).
Mitigates resistance to change that obstructs planning and inhibits user adoption.
Mitigates the impact of change and improves user wellbeing by reducing the potential for disruption, stress, and conflict.
Consider the following types of change to manage when you plan for Fabric adoption.
Process-level changes
Process-level changes are changes that affect a broader user community or the entire
organization. These changes typically have a larger impact, and so they require more
effort to manage. This effort includes dedicated change management plans and activities.
Solution-level changes
Solution-level changes are changes that affect a single solution or set of solutions.
These changes limit their impact to the user community of those solutions and their
dependent processes. Although solution-level changes typically have a lower impact,
they also tend to occur more frequently.
Note
In the context of this article, a solution is built to address specific business needs for
users. A solution can take many forms, such as a data pipeline, a lakehouse, a
semantic model, or a report. The considerations for change management described
in this article are relevant for all types of solutions, and not only reporting projects.
How you prepare change management plans and activities will depend on the types of
change. To successfully and sustainably manage change, we recommend that you
implement incremental changes.
The following steps outline how you can incrementally address change.
1. Define what's changing: Describe the change by outlining the before and after
states. Clarify the specific parts of the process or situation that you'll change,
remove, or introduce. Justify why this change is necessary, and when it should
occur.
2. Describe the impact of the change: For each of these changes, estimate the
business impact. Identify which processes, teams, or individuals the change affects,
and how disruptive it will be for them. Also consider any downstream effects the
change has on other dependent solutions or processes. Downstream effects might
result in other changes. Additionally, consider how long the situation remained the
same before it was changed. Changes to longer-standing processes tend to have a
higher impact, as preferences and dependencies arise over time.
3. Identify priorities: Focus on the changes with the highest potential impact. For each change, outline a more detailed description of the change and how it will affect people.
4. Plan how to incrementally implement the change: Identify whether any high-
impact changes can be broken into stages or parts. For each part, describe how it
might be incrementally implemented in phases to limit its impact. Determine
whether there are any constraints or dependencies (such as when changes can be
made, or by whom).
5. Create an action plan for each phase: Plan the actions you will take to implement
and support each phase of the change. Also, plan for how you can mitigate
disruption in high-impact phases. Be sure to include a rollback plan in your action
plan, whenever possible.
Tip
Iteratively plan how you'll implement each phase of these incremental changes as
part of your quarterly tactical planning.
When you plan to mitigate the impact of changes on Fabric adoption, consider the activities described in the following sections.
What's changing: What the situation is now and what it will be after the change.
Why it's changing: The benefit and value of the change for the audience.
When it's changing: An estimation of when the change will take effect.
Further context: Where people can go for more information.
Contact information: Who people should contact to provide feedback, ask questions, or raise concerns.
Important
You should communicate change with sufficient advance notice so that people are prepared. The higher the potential impact of the change, the earlier you should communicate it. If unexpected circumstances prevent advance notice, be sure to explain why in your communication.
Here are some actions you can take to plan for training and support.
Centralize training and support by using a centralized portal. The portal can help
organize discussions, collect feedback, and distribute training materials or
documentation by topic.
Consider incentives to encourage self-sustaining support within a community.
Schedule recurring office hours to answer questions and provide mentorship.
Create and demonstrate end-to-end scenarios for people to practice a new
process.
For high-impact changes, prepare training and support plans that realistically
assess the effort and actions needed to prevent the change from causing
disruption.
Note
These training and support actions will differ depending on the scale and scope of
the change. For high-impact, large-scale changes (like transitioning from enterprise
to managed self-service approaches to data and BI), you'll likely need to plan
iterative, multi-phase plans that span multiple planning periods. In this case,
carefully consider the effort and resources needed to deliver success.
Caution
Resistance to change from the executive leadership is often a warning sign that
stronger business alignment is needed between the business and BI strategies. In
this scenario, consider specific alignment sessions and change management actions
with executive leadership.
Involve stakeholders
To effectively manage change, you can also take a bottom-up approach by engaging the
stakeholders, who are the people the change affects. When you create an action plan to
address the changes, identify and engage key stakeholders in focused, limited sessions.
In this way you can understand the impact of the change on the people whose work will
be affected by the change. Take note of their concerns and their ideas for how you
might lessen the impact of this change. Ensure that you identify any potentially
unexpected effects of the change on other people and processes.
Involve your executive sponsor: The authority, credibility, and influence of the
executive sponsor is essential to support change management and resolve
disputes.
Identify blocking issues: When change disrupts the way people work, it can prevent people from effectively completing tasks in their regular activities. For such blocking issues, identify potential workarounds that take the changes into account.
Focus on data and facts instead of opinions: Resistance to change is sometimes
due to opinions and preferences, because people are familiar with the situation
prior to the change. Understand why people have these opinions and preferences.
Perhaps it's due to convenience, because people don't want to invest time and
effort in learning new tools or processes.
Focus on business questions and processes instead of requirements: Changes
often introduce new processes to address problems and complete tasks. New
processes can lead to a resistance to change because people focus on what they
miss instead of fully understanding what's new and why.
To effectively manage change, you should identify and engage promoters early in the
process. You should involve them and inform them about the change to better utilize
and amplify their advocacy.
Tip
The promoters you identify might also be great candidates for your champions
network.
To effectively manage change, you should identify and engage detractors early in the
process. That way, you can mitigate the potential negative impact they have.
Furthermore, if you address their concerns, you might convert these detractors into
promoters, helping your adoption efforts.
Tip
A common source of detractors is content owners for solutions that are going to be
modified or replaced. The change can sometimes threaten these content owners,
who are incentivized to resist the change in the hope that their solution will remain
in use. In this case, identify these content owners early and involve them in the
change. Giving these individuals a sense of ownership of the implementation will
help them embrace, and even advocate in favor of, the change.
Questions to ask
Maturity levels
The following maturity levels will help you assess your current state of change
management, as it relates to data and BI initiatives.
100: Initial
• Change is usually reactive, and it's also poorly communicated.
• No clear teams or roles are responsible for managing change for data initiatives.

200: Repeatable
• Executive leadership and decision makers recognize the need for change management in data and BI projects and initiatives.
• Some efforts are taken to plan or communicate change, but they're inconsistent and often reactive. Resistance to change is still common. Change often disrupts existing processes and tools.

300: Defined
• Formal change management plans or roles are in place. These plans include communication tactics and training, but they're not consistently or reliably followed. Change occasionally disrupts existing processes and tools.
Next steps
In the next and final article in the Microsoft Fabric adoption roadmap series, learn about adoption-related resources that you might find valuable.
Microsoft Fabric adoption roadmap conclusion
Article • 11/14/2023
Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
This article concludes the series on Microsoft Fabric adoption. The strategic and tactical
considerations and action items presented in this series will assist you in your analytics
adoption efforts, and with creating a productive data culture in your organization.
Adoption introduction
Adoption maturity levels
Data culture
Executive sponsorship
Business alignment
Content ownership and management
Content delivery scope
Center of Excellence
Governance
Mentoring and enablement
Community of practice
User support
System oversight
Change management
The rest of this article includes suggested next actions to take. It also includes other
adoption-related resources that you might find valuable.
A few key points are implied within the previous suggestions.
Focus on the near term: Although it's important to have an eye on the big picture,
we recommend that you focus primarily on the next quarter, next semester, and
next year. It's easier to assess, plan, and act when you focus on the near term.
Progress will be incremental: Changes that happen every day, every week, and
every month add up over time. It's easy to become discouraged and sense a lack
of progress when you're working on a large adoption initiative that takes time. If
you keep track of your incremental progress, you'll be surprised at how much you
can accomplish over the course of a year.
Changes will continually happen: Be prepared to reconsider decisions that you
make, perhaps every quarter. It's easier to cope with continual change when you
expect the plan to change.
Everything correlates together: As you progress through each of the steps listed
above, it's important that everything's correlated from the high-level strategic
organizational objectives, all the way down to more detailed action items. That
way, you'll know that you're working on the right things.
Microsoft's BI transformation
Consider reading about Microsoft's journey and experience with driving a data culture.
This article describes the importance of two terms: discipline at the core and flexibility at
the edge. It also shares Microsoft's views and experience about the importance of
establishing a COE.
The Power CAT Adoption Maturity Model , published by the Power CAT team,
describes repeatable patterns for successful Power Platform adoption.
The Power Platform Center of Excellence Starter Kit is a collection of components and
tools to help you develop a strategy for adopting and supporting Microsoft Power
Platform.
The Power Platform adoption best practices includes a helpful set of documentation and
best practices to help you align business and technical strategies.
The Maturity Model for Microsoft 365 provides information and resources to use
capabilities more fully and efficiently.
Microsoft Learn has a learning path for using the Microsoft service adoption
framework to drive adoption in your enterprise.
The Microsoft Cloud Adoption Framework for Azure is a collection of
documentation, implementation guidance, best practices, and tools to accelerate
your cloud adoption journey.
A wide variety of other adoption guides for individual technologies can be found online.
Industry guidance
The Data Management Book of Knowledge (DMBOK2) is a book available for
purchase from DAMA International. It contains a wealth of information about maturing
your data management practices.
Note
The additional resources provided in this article aren't required to take advantage
of the guidance provided in this Fabric adoption series. They're reputable resources
should you wish to continue your journey.
Partner community
Experienced partners are available to help your organization succeed with adoption
initiatives. To engage a partner, visit the Power BI partner portal .
Discover data items in the OneLake data
hub
Article • 09/13/2023
The OneLake data hub makes it easy to find, explore, and use the Fabric data items in
your organization that you have access to. It provides information about the items and
entry points for working with them.
This article explains what you see on the data hub and describes how to use it.
The OneLake data hub icon and label you see may differ slightly from that shown
above, and may also differ slightly from that seen by other users. The data hub
functionality is the same, however, no matter which icon/label appears. For more
information, see Considerations and limitations.
The list has three tabs to narrow down the list of data items.
Tab — Description
• Endorsed in your org — Endorsed data items in your organization that you're allowed to find. Certified data items are listed first, followed by promoted data items.

The data items list has the following columns:

Column — Description
• Name — The data item name. Select the name to open the item's details page.
• Owner — Data item owner (listed in the All and Endorsed in your org tabs only).
• Refreshed — Last refresh time, rounded to the hour, day, month, or year. See the details section on the item's detail page for the exact time of the last refresh.
• Next refresh — The time of the next scheduled refresh (My data tab only).
• Sensitivity — Sensitivity, if set. Select the info icon to view the sensitivity label description.
Note
The Explorer pane may list workspaces that you don't have access to if the
workspace contains items that you do have access to (through explicitly granted
permissions, for example). If you select such a workspace, only the items you have
access to will be displayed in the data items list.
To display the options menu, select More options (...) on one of the items shown in the
data items list or a recommended item. In the data items list, you need to hover over the
item to reveal More options.
Next steps
Navigate to your items from Microsoft Fabric Home
Endorsement
Fabric provides two ways you can endorse your valuable, high-quality items to increase
their visibility: promotion and certification.
Promotion: Promotion is a way to highlight items you think are valuable and
worthwhile for others to use. It encourages the collaborative use and spread of
content within an organization.
Any item owner, as well as anyone with write permissions on the item, can
promote the item when they think it's good enough for sharing.
Certification: Certification means that the item meets the organization's quality
standards and can be regarded as reliable, authoritative, and ready for use across
the organization.
Only authorized reviewers (defined by the Fabric administrator) can certify items.
Item owners who wish to see their item certified and aren't authorized to certify it
themselves need to follow their organization's guidelines about getting items
certified.
Currently it's possible to endorse all Fabric items except Power BI dashboards.
This article describes how to promote items, how to certify items if you're an authorized
reviewer, and how to request certification if you're not.
Promote items
To promote an item, you must have write permissions on the item you want to promote.
Certify items
Item certification is a significant responsibility, and only authorized users can certify
items. Other users can request item certification. This section describes how to certify an
item.
1. Get write permissions on the item you want to certify. You can request these
permissions from the item owner or from anyone who has an Admin role in the workspace where the item is located.
2. Carefully review the item and determine whether it meets your organization's
certification standards.
3. If you decide to certify the item, go to the workspace where it resides, and open
the settings of the item you want to certify.
Request certification
1. Go to the workspace where the item you want to be certified is located, and then
open the settings of that item.
2. Expand the endorsement section. The Certified button is greyed out since you
aren't authorized to certify content. Select the link about how to get your item
certified.
Note
If you clicked the link above but got redirected back to this note, it means that
your Fabric admin has not made any information available. In this case,
contact the Fabric admin directly.
Next steps
Read more about endorsement
Enable content certification (Fabric admins)
Read more about semantic model discoverability
Workspaces are the central places where you collaborate with your colleagues in
Microsoft Fabric. Besides assigning workspace roles, you can also use item sharing to
grant and manage item-level permissions in scenarios where:
You want to collaborate with colleagues who don't have a role in the workspace.
You want to grant additional item level-permissions for colleagues who already
have a role in the workspace.
This document describes how to share an item and manage its permissions.
2. The Create and send link dialog opens. Select People in your organization can
view.
3. The Select permissions dialog opens. Choose the audience for the link you're
going to share.
You have the following options:
People with existing access This type of link generates a URL to the item, but
it doesn't grant any access to the item. Use this link type if you just want to
send a link to somebody who already has access.
Specific people This type of link allows specific people or groups to access
the report. If you select this option, enter the names or email addresses of the
people you wish to share with. This link type also lets you share to guest
users in your organization's Azure Active Directory (Azure AD). You can't
share to external users who aren't guests in your organization.
Note
If your admin has disabled shareable links to People in your organization, you
can only copy and share links using the People with existing access and
Specific people options.
Links that give access to People in your organization or Specific people always include at least read access. However, you can also specify whether you want the link to include additional permissions.
Note
The Additional permissions settings vary for different items. Learn more
about the item permission model.
Links for People with existing access don't have additional permission
settings because these links don't give access to the item.
Select Apply.
5. In the Create and send link dialog, you have the option to copy the sharing link,
generate an email with the link, or share it via Teams.
Copy link: This option automatically generates a shareable link. Select Copy
in the Copy link dialog that appears to copy the link to your clipboard.
by Email: This option opens the default email client app on your computer
and creates an email draft with the link in it.
by Teams: This option opens Teams and creates a new Teams draft message
with the link in it.
6. You can also choose to send the link directly to Specific people or groups
(distribution groups or security groups). Enter their name or email address,
optionally type a message, and select Send. An email with the link is sent to your
specified recipients.
When your recipients receive the email, they can access the report through the
shareable link.
This image shows the Edit link pane when the selected audience is People in your
organization can view and share.
This image shows the Edit link pane when the selected audience is Specific people
can view and share. Note that the pane enables you to modify who can use the
link.
4. For more access management capabilities, select the Advanced option in the
footer of the Manage permissions pane. On the management page that opens,
you can:
Grant and manage access directly
In some cases, you need to grant permission directly instead of sharing a link, for example, when granting permission to a service account.
4. Enter the names of people or accounts that you need to grant access to directly.
Select the permissions that you want to grant. You can also optionally notify
recipients by email.
5. Select Grant.
6. You can see all the people, groups, and accounts with access in the list on the
permission management page. You can also see their workspace roles,
permissions, and so on. By selecting the context menu, you can modify or remove
the permissions.
Note
You can't modify or remove permissions that are inherited from a workspace
role in the permission management page. Learn more about workspace roles
and the item permission model.
| Permission granted while sharing | Effect |
| --- | --- |
| Read | Recipient can discover the item in the data hub and open it. Connect to SQL endpoints of Lakehouse and Data warehouse. |
| Share | Recipient can share the item and grant permissions up to the permissions that they have. For example, if the original recipient has Share, Edit, and Read permissions, they can at most grant Share, Edit, and Read permissions to the next recipient. |
| Read All with SQL endpoint | Read Lakehouse or Data warehouse data through SQL endpoints. |
| Read all with Apache Spark | Read Lakehouse or Data warehouse data through OneLake APIs and Spark. Read Lakehouse data through Lakehouse explorer. |
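The Share rule above caps re-granted permissions at the grantor's own set. As a rough sketch in Python (the helper and permission names are illustrative, not a Fabric API):

```python
# Sketch of the re-sharing rule: a recipient with Share permission can
# grant at most the permissions they were granted themselves.
# Names are illustrative only; this is not a Fabric API.

def grantable(holder_permissions: set[str], requested: set[str]) -> set[str]:
    """Return the subset of `requested` that the holder may pass on."""
    if "Share" not in holder_permissions:
        return set()  # without Share, nothing can be re-granted
    return requested & holder_permissions

# A recipient holding Share, Edit, and Read can pass on at most those three.
holder = {"Share", "Edit", "Read"}
print(sorted(grantable(holder, {"Read", "Edit", "Share", "ReadAll.SQL"})))
# -> ['Edit', 'Read', 'Share']
```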
The Shared with me option in the Browse pane currently only displays Power BI
items that have been shared with you. It doesn't show you non-Power BI Fabric
items that have been shared with you.
Next steps
Workspace roles
Sensitivity labels from Microsoft Purview Information Protection on items can guard
your sensitive content against unauthorized data access and leakage. They're a key
component in helping your organization meet its governance and compliance
requirements. Labeling your data correctly with sensitivity labels ensures that only
authorized people can access your data. This article shows you how to apply sensitivity
labels to your Microsoft Fabric items.
Note
For information about applying sensitivity labels in Power BI Desktop, see Apply
sensitivity labels in Power BI Desktop.
Prerequisites
Requirements needed to apply sensitivity labels to Fabric items:
Note
If you can't apply a sensitivity label, or if the sensitivity label is greyed out in the
sensitivity label menu, you may not have permissions to use the label. Contact your
organization's tech support.
Apply a label
There are two common ways of applying a sensitivity label to an item: from the flyout
menu in the item header, and in the item settings.
From the flyout menu - select the sensitivity indication in the header to display the
flyout menu:
In items settings - open the item's settings, find the sensitivity section, and then
choose the desired label:
Next steps
Sensitivity label overview
In Microsoft Fabric, the Delta Lake table format is the standard for analytics. Delta Lake is an
open-source storage layer that brings ACID (Atomicity, Consistency, Isolation, Durability)
transactions to big data and analytics workloads.
All Fabric experiences generate and consume Delta Lake tables, driving interoperability and a
unified product experience. Delta Lake tables produced by one compute engine, such as
Synapse Data warehouse or Synapse Spark, can be consumed by any other engine, such as
Power BI. When you ingest data into Fabric, Fabric stores it as Delta tables by default. You can
easily integrate external data containing Delta Lake tables by using OneLake shortcuts.
Writers: Data warehouses, eventstreams, and exported Power BI semantic models into
OneLake
Readers: SQL analytics endpoint and Power BI direct lake semantic models
Writers and readers: Fabric Spark runtime, dataflows, data pipelines, and Kusto Query
Language (KQL) databases
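All of these engines read and write the same on-disk format: Parquet data files plus a JSON transaction log under `_delta_log/`. As a rough, stdlib-only illustration of what one log commit contains (the sample commit below is hand-made for demonstration, not read from a real Fabric table):

```python
import json

# A Delta table's transaction log lives in _delta_log/ as numbered JSON
# commit files; each line is one action, such as "add" (data file added),
# "remove", or "commitInfo". This commit is a hand-made sample.
sample_commit = "\n".join([
    json.dumps({"commitInfo": {"operation": "WRITE"}}),
    json.dumps({"add": {"path": "part-0001.parquet", "size": 1024}}),
    json.dumps({"add": {"path": "part-0002.parquet", "size": 2048}}),
])

def added_files(commit_text: str) -> list[str]:
    """List the data files added by a single Delta log commit."""
    paths = []
    for line in commit_text.splitlines():
        action = json.loads(line)
        if "add" in action:
            paths.append(action["add"]["path"])
    return paths

print(added_files(sample_commit))
# -> ['part-0001.parquet', 'part-0002.parquet']
```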
The following matrix shows key Delta Lake features and their support on each Fabric capability.
SQL analytics endpoint: No, Yes, N/A (not applicable), N/A (not applicable), N/A (not applicable), Yes, N/A (not applicable)
Power BI direct lake semantic models: Yes, Yes, N/A (not applicable), N/A (not applicable), N/A (not applicable), Yes, N/A (not applicable)
Export Power BI semantic models into OneLake: Yes, N/A (not applicable), Yes, No, Yes, N/A (not applicable), Reader: 2 / Writer: 5
Note

Fabric doesn't write name-based column mappings by default. The default Fabric experience generates tables that are compatible across the service. Delta Lake tables produced by third-party services may include incompatible table features.

Some Fabric experiences don't inherit table optimization and maintenance capabilities, such as bin-compaction, V-order, and clean-up of old unreferenced files. To keep Delta Lake tables optimal for analytics, follow the techniques in Use table maintenance feature to manage delta tables in Fabric for tables ingested using those experiences.
Current limitations
Currently, Fabric doesn't support these Delta Lake features:
Next steps
What is Delta Lake?
Learn more about Delta Lake tables in Fabric Lakehouse and Synapse Spark.
Learn about Direct Lake in Power BI and Microsoft Fabric.
Learn more about querying tables from the Warehouse through its published Delta Lake
Logs.
This page lists known issues for Fabric features. Before submitting a Support request,
review this list to see if the issue that you're experiencing is already known and being
addressed. Known issues are also available as an interactive Power BI report .
| Issue ID | Area | Title | Date |
| --- | --- | --- | --- |
| 563 | Data Engineering | Lakehouse doesn't recognize table names with special characters | November 22, 2023 |
| 549 | Data Warehouse | Making model changes to a semantic model might not work | November 15, 2023 |
| 536 | Administration & Management | Feature Usage and Adoption report activity missing | November 9, 2023 |
| 530 | Administration & Management | Creating or updating Fabric items is blocked | October 23, 2023 |
| 529 | Data Warehouse | Data warehouse with more than 20,000 tables fails to load | October 23, 2023 |
| 519 | Administration & Management | Capacity Metrics app shows variance between workload summary and operations | October 13, 2023 |
| 521 | Administration & Management | New throttling logic delayed for Power BI and eventstream | October 5, 2023 |
| 483 | Administration & Management | Admin monitoring dataset refresh fails and credentials expire | August 24, 2023 |
| 454 | Data Warehouse | Warehouse's object explorer doesn't support case-sensitive object names | July 10, 2023 |
| 447 | Data Warehouse | Temp tables in Data Warehouse and SQL analytics endpoint | July 5, 2023 |

| Issue ID | Area | Title | Date | Fixed date |
| --- | --- | --- | --- | --- |
| 453 | Data Warehouse | Data Warehouse only publishes Delta Lake Logs for Inserts | July 10, 2023 | November 15, 2023 |
| 446 | Data Warehouse | OneLake table folder not removed when table dropped in data warehouse | July 5, 2023 | November 15, 2023 |
| 514 | Data Engineering | Unable to start new Spark session after deleting all libraries | September 25, 2023 | October 13, 2023 |
| 507 | Administration & Management | Selecting view account link in account manager shows wrong page | September 25, 2023 | October 13, 2023 |
| 467 | Data Engineering | Notebook fails to load after workspace migration | August 3, 2023 | October 13, 2023 |
| 463 | Data Warehouse | Failure occurs when accessing a renamed Lakehouse or Warehouse | August 3, 2023 | October 13, 2023 |
| 462 | Administration & Management | Fabric users see the workspace git status column display synced for unsupported items | July 26, 2023 | October 13, 2023 |
| 458 | Data Factory | Not able to add Lookup activity output to body object of Office 365 | July 26, 2023 | October 13, 2023 |
Next steps
Go to the Power BI report version of this page
Service level outages
Get your questions answered by the Fabric community
The Lakehouse explorer doesn't correctly identify Data Warehouse table names containing spaces or special characters, such as non-Latin characters.
Status: Open
Symptoms
In the Lakehouse explorer user interface, tables whose names contain spaces or special characters appear in the "Unidentified tables" section.
Next steps
About known issues
The Microsoft Fabric Capacity Metrics app doesn't show data for OneLake transaction
usage reporting. OneLake compute doesn't appear in the Fabric Capacity Metrics app
and doesn't count against capacity limits. OneLake storage reporting doesn't have any
issues and is reported correctly.
Status: Open
Symptoms
You don't see OneLake compute usage in the Microsoft Fabric Capacity Metrics app.
Next steps
About known issues
Making model changes in a Fabric Data Warehouse's semantic model might not work.
The types of model changes include, but aren't limited to, making changes to
relationships, measures, and more. When the change doesn't work, a data warehouse
semantic model error appears.
Status: Open
Symptoms
If impacted, you experience the following error while attempting to model the Semantic
Model: "You cannot use Direct Lake mode together with other storage modes in the
same model. Composite model does not support Direct Lake mode. Please remove the
unsupported tables or switch them to Direct Lake mode. See
Next steps
About known issues
In the Feature Usage and Adoption report, you see all usage activity for workspaces on
Premium Per User (PPU) and Shared capacities filtered out. When viewing the report,
you see less than expected activity levels for the affected workspaces. For workspaces
not on PPU and Shared capacities, usage activity should be considered accurate.
Status: Open
Symptoms
In the Feature Usage and Adoption report, you notice gaps in audit log activity for
certain workspaces that are hosted on Premium Per User (PPU) and Shared capacities.
The report also shows less activity than reality for the affected workspaces.
Next steps
Monitor report usage metrics
Monitor usage metrics in the workspaces (preview)
Admin - Get Activity Events
About known issues
You can't create or update a Fabric or Power BI item because your organization's compute capacity has exceeded its limits. You don't receive an error message when the creation or update is blocked. However, when the compute capacity exceeds its limits, a notification is sent to your company's Fabric admin.
Status: Open
Symptoms
You can't load a page, create an item, or update anything in Fabric.
Next steps
About known issues
A data warehouse or SQL analytics endpoint that has more than 20,000 tables fails to
load in the portal. If connecting through any other client tools, you can load the tables.
The issue is only observed while accessing the data warehouse through the portal.
Status: Open
Symptoms
Your data warehouse or SQL analytics endpoint fails to load in the portal with the error
message "Batch was canceled," but the same connection strings are reachable using
other client tools.
Next steps
About known issues
Fabric capacities support a breakdown of the capacity usage by workload meter. The
meter usage is derived from the workload summary usage, which contains smoothed
data over multiple time periods. Due to a rounding issue with this summary usage, it
appears lower than the usage from the workload operations in the Capacity Metrics app.
Until this issue is fixed, you can't correlate your operation level usage to your Azure bill
breakdown. While the difference doesn't change the total Fabric capacity bill, the usage
attributed to Fabric workloads might be under-reported.
Status: Open
Symptoms
A customer uses the Capacity Metrics app to look at their workload usage for an item such as a data warehouse or a lakehouse. Over a 14-day period in the Capacity Metrics app, the usage appears lower than the bill for that workload meter. Note: Due to this issue, the available capacity meter shows higher values, giving the erroneous impression that the capacity is more underutilized than it actually is.
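The discrepancy is a rounding effect: rounding each smoothed summary value before summing under-reports the total relative to summing the raw per-operation usage. A toy Python illustration with invented numbers (not actual capacity data):

```python
import math

# Toy illustration of the reporting gap: flooring each smoothed summary
# value before summing under-reports relative to summing raw per-operation
# usage. The figures here are invented for demonstration only.
operations_cu = [10.5, 10.5, 10.5, 10.5]             # raw per-operation usage
summary_cu = [math.floor(x) for x in operations_cu]  # rounded summary values

print(sum(operations_cu))  # -> 42.0  (operation-level total)
print(sum(summary_cu))     # -> 40    (summary total appears lower)
```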
Next steps
About known issues
Known issue - New throttling logic
delayed for Power BI and eventstream
Article • 10/06/2023
On October 1, Fabric launched a new capacity throttling logic to reduce throttling for
intermittent usage spikes and prevent overloading capacity. All experiences use the new
throttling logic except for Power BI and eventstream.
Status: Open
Symptoms
Power BI and eventstream users don't currently see the benefits of the new throttling
logic.
Next steps
About known issues
In a limited number of cases, when you make a user-initiated request to the data
warehouse, the user identity isn't correctly reported to the Fabric capacity metrics app.
In the capacity metrics app, the User column shows as System.
Status: Open
Symptoms
In the interactive operations table on the timepoint page, you incorrectly see the value
System under the User column.
Next steps
About known issues
In the Fabric capacity metrics app, completed queries in the Data Warehouse SQL
analytics endpoint appear with the status as "InProgress" in the interactive operations
table on the timepoint page.
Status: Open
Symptoms
In the interactive operations table on the timepoint page, completed queries in the Data
Warehouse SQL analytics endpoint appear with the status InProgress
Next steps
About known issues
You might not be able to start a new Spark session in a Fabric notebook. You receive a message that the session has failed. The failure occurs when you install libraries through Workspace settings > Data engineering > Library management, and then remove all of the libraries.
Symptoms
You're unable to start a new Spark session in a Fabric notebook and receive the error
message: "SparkCoreError/PersonalizingFailed: Livy session has failed. Error code:
SparkCoreError/PersonalizingFailed. SessionInfo.State from SparkCore is Error:
Encountered an unexpected error while personalizing session. Failed step:
'LM_LibraryManagementPersonalizationStatement'. Source: SparkCoreService."
If you aren't using any libraries in your workspace, adding a library from PyPI or Conda to work around this issue might slow your session start time slightly. Instead, you can install a small custom library for faster session start times. You can download a simple JAR file from Fabric Samples/SampleCustomJar and install it.
Next steps
About known issues
When you open the Account manager on any Microsoft Fabric page, you can select the
View account link. When you select the View account link, the page redirects to the
wrong page instead of the user account information page.
Symptoms
Selecting the View account link in Account manager doesn't show your account
information.
Next steps
About known issues
In some workspaces, the credentials for the admin monitoring workspace semantic
model expire, which shouldn't happen. As a result, the semantic model refresh fails, and
the Feature Usage and Adoption report doesn't work.
Status: Open
Symptoms
In the admin monitoring workspace, you receive refresh failures. Although the semantic
model refreshed in the past, now the semantic model refresh fails with the error: Data
source error: The credentials provided for the data source are invalid.
Next steps
About known issues
If a workspace ever contained a Fabric item other than a Power BI item, even if all the
Fabric items have since been deleted, then moving that workspace to a different
capacity in a different region isn't supported. If you do move the workspace cross-
region, you can't create any Fabric items. In addition, the same behavior occurs if you
configure Spark compute settings in the Data Engineering or Data Science section of the
workspace settings.
Symptoms
After moving the workspace to a different region, you can't create a new Fabric item.
The creation fails, sometimes showing an error message of "Unknown error."
Next steps
About known issues
If you migrate a workspace that contains Reflex or Kusto items to another capacity, you may see issues loading a notebook within that workspace.
Symptoms
When you try to open your notebook, you see an error message similar to "Loading
Notebook... Failed to get content of notebook. TypeError: Failed to fetch".
Next steps
About known issues
After renaming your Lakehouse or Warehouse items in Microsoft Fabric, you may experience a failure when trying to access the SQL analytics endpoint or Warehouse item using client tools or the web user experience. The failure happens because the underlying SQL file system isn't properly updated after the rename operation, resulting in different names in the portal and the SQL file system.
Symptoms
You'll see an HTTP status code 500 failure after renaming your Lakehouse or Warehouse
when trying to access the renamed items.
Next steps
About known issues
Fabric users in the admin portal see the workspace git status column as Synced for
unsupported items.
Symptoms
Fabric users see the workspace git status column display Synced for unsupported items. To identify which items are supported, see Git integration.
Next steps
About known issues
Currently there's a bug when a user tries to add the output of a Lookup activity as dynamic content to the body object of an Office 365 activity. The Office 365 activity hangs indefinitely.
Symptoms
The Office 365 activity hangs indefinitely when the output of a Lookup activity is added as dynamic content to its body object.
Next steps
About known issues
OneLake file explorer for the Windows Desktop application doesn't contain items under
the "My workspace" folder.
Symptoms
"My workspace" appears in the list of workspaces, but the folder is empty when users view OneLake file explorer from the Windows Desktop application.
Next steps
None
The object explorer fails to display Fabric Data Warehouse objects (for example, tables and views) that share the same case-insensitive name (for example, table1 and Table1). If there are two objects with the same name, one displays in the object explorer; if there are three or more, none display. The objects appear in system views (for example, sys.tables) and can be used from there, but they aren't available in the object explorer.
Status: Open
Symptoms
If an object shares the same case-insensitive name as another object, is listed in a system view, and works as intended, but isn't listed in the object explorer, then you have encountered this known issue.
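A quick stdlib sketch of the collision the issue describes, grouping object names case-insensitively (the names are examples, not from a real warehouse):

```python
from collections import defaultdict

def case_insensitive_collisions(names: list[str]) -> dict[str, list[str]]:
    """Group object names that differ only by letter case."""
    groups: dict[str, list[str]] = defaultdict(list)
    for name in names:
        groups[name.lower()].append(name)
    # Keep only groups where two or more names collide.
    return {k: v for k, v in groups.items() if len(v) > 1}

# Example: table1/Table1 collide, so the object explorer shows at most one.
print(case_insensitive_collisions(["table1", "Table1", "sales"]))
# -> {'table1': ['table1', 'Table1']}
```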
Next steps
About known issues
Known issue - Data Warehouse only
publishes Delta Lake logs for inserts
Article • 11/16/2023
Delta tables referencing Lakehouse shortcuts that are created over Data Warehouse tables don't update when an update or delete operation is performed on the Data Warehouse table. This limitation is listed in our public documentation:
(/fabric/data-warehouse/query-delta-lake-logs#limitations)
Symptoms
The data that a customer sees when querying the Delta table, by using either a shortcut or the Delta Lake log, doesn't match the data shown when using the Data Warehouse.
Next steps
About known issues
When the pipeline is deployed via public API with the 'update app' option
(/rest/api/power-bi/pipelines/deploy-all#pipelineupdateappsettings), opening the
pipeline page gets stuck on loading.
Symptoms
The pipeline page gets stuck on loading.
Next steps
About known issues
Known issue - Temp table usage in Data
Warehouse and SQL analytics endpoint
Article • 11/15/2023
Users can create Temp tables in the Data Warehouse and in SQL analytics endpoint but
data from user tables can't be inserted into Temp tables. Temp tables can't be joined to
user tables.
Status: Open
Symptoms
Users may notice that data from their user tables can't be inserted into a Temp table.
Temp tables can't be joined to user tables.
Next steps
About known issues
When you drop a table in the Data Warehouse, it isn't removed from the folder in
OneLake.
Symptoms
After a user drops a table in the Data Warehouse using a TSQL query, the corresponding
folder in OneLake, under Tables, isn't removed automatically and can't be dropped
manually.
Next steps
About known issues
In SQL Server Management Studio (SSMS) with the COPY statement, you may see an
incorrect row count reported in the Messages tab.
Symptoms
Incorrect row count is reported when using the COPY command to ingest data from
SSMS.
Next steps
About known issues
Known issue - Moving files from outside
of OneLake to OneLake with file
explorer doesn't sync files
Article • 08/01/2023
Within OneLake file explorer, moving a folder (cut and paste or drag and drop) from
outside of OneLake into OneLake fails to sync the contents in that folder. The contents
move locally, but only the top-level folder syncs to OneLake. You must trigger a sync by
either opening the files and saving them or moving them back out of OneLake and then
copying and pasting (versus moving).
Symptoms
You continuously see the sync pending arrows for the folder and underlying files
indicating the files aren't synced to OneLake.
Next steps
About known issues
Known issue - 'Get Tenant Settings' API
returns default values instead of user
configured values
Article • 06/28/2023
When users call the admin API to retrieve tenant settings, it currently returns default
values instead of the user-configured values and security groups. This issue is limited to
the API and doesn't affect the functionality of the tenant settings page in the admin
portal.
Symptoms
This bug is currently affecting a large number of customers, resulting in the symptoms
to be observed more widely. Due to the API returning default values, the properties and
corresponding values obtained through the API may not match what users see in the
admin portal. Additionally, the API response doesn't include the security group sections,
as the default security group is always empty. Here's an example comparing the
expected response with the faulty response:
SQL
This article provides information about the official collection of icons for Microsoft
Fabric that you can use in architectural diagrams, training materials, or documentation.
Do's
Use the icons to illustrate how products can work together.
In diagrams, we recommend including a label that contains the product,
experience or item name somewhere close to the icon.
Use the icons as they appear within the product.
Don'ts
Don't crop, flip or rotate icons.
Don't distort or change icon shape in any way.
Don't use Microsoft product icons to represent your product or service.
Terms
Microsoft permits the use of these icons in architectural diagrams, training materials, or
documentation. You may copy, distribute, and display the icons only for the permitted
use unless granted explicit permission by Microsoft. Microsoft reserves all other rights.
Related content
Microsoft Power Platform icons
Azure icons
Dynamics 365 icons
This archive page is periodically updated with an archival of content from What's new in
Microsoft Fabric?
To follow the latest in Fabric news and features, see the Microsoft Fabric Blog . Also
follow the latest in Power BI at What's new in Power BI?
Community
This section summarizes previous Microsoft Fabric community opportunities for
prospective and current influencers and MVPs. To learn about the Microsoft MVP Award
and to find MVPs, see mvp.microsoft.com .
| Month | Title | Description |
| --- | --- | --- |
| May 2023 | Learn about Microsoft Fabric from MVPs | Prior to our official announcement of Microsoft Fabric at Build 2023, Microsoft MVPs had the opportunity to familiarize themselves with the product. For several months, they have been actively testing Fabric and gaining valuable insights. Now, their enthusiasm for the product is evident as they eagerly share their knowledge and thoughts about Microsoft Fabric with the community. |
| May 2023 | Introducing Data Factory in Microsoft Fabric | Data Factory enables you to develop enterprise-scale data integration solutions with next-generation dataflows and data pipelines. |
| May 2023 | Introducing Data Engineering in Microsoft Fabric | With Synapse Data Engineering, one of the core experiences of Microsoft Fabric, data engineers feel right at home, able to leverage the power of Apache Spark to transform their data at scale and build out a robust lakehouse architecture. |
| May 2023 | Introducing Synapse Data Science in Microsoft Fabric | With data science in Microsoft Fabric, you can utilize the power of machine learning features to seamlessly enrich data as part of your data and analytics workflows. |
| May 2023 | Introducing Synapse Data Warehouse in Microsoft Fabric | Synapse Data Warehouse is the next generation of data warehousing in Microsoft Fabric, and the first transactional data warehouse to natively support an open data format, Delta-Parquet. |
| May 2023 | What's New in Kusto – Build 2023! | Announcing Synapse Real-Time Analytics in Microsoft Fabric (Preview)! |
| May 2023 | Ingest, transform, and route real-time events with Microsoft Fabric event streams | You can now ingest, capture, transform, and route real-time events to various destinations in Microsoft Fabric with a no-code experience using Microsoft Fabric eventstreams. |
| May 2023 | Step-by-Step Guide to Enable Microsoft Fabric for Microsoft 365 Developer Account | This blog reviews how to enable Microsoft Fabric with a Microsoft 365 Developer Account and the Fabric Free Trial. |
| May 2023 | Microsoft 365 Data + Microsoft Fabric better together | Microsoft 365 Data Integration for Microsoft Fabric enables you to manage your Microsoft 365 data alongside your other data sources in one place with a suite of analytical experiences. |
Related content
Modernization Best Practices and Reusable Assets Blog
Azure Data Explorer Blog
Get started with Microsoft Fabric
Microsoft Training Learning Paths for Fabric
End-to-end tutorials in Microsoft Fabric
Fabric Known Issues
Microsoft Fabric Blog
Microsoft Fabric terminology
What's new in Power BI?
Next step
What's new in Microsoft Fabric?