A data dashboard is a software tool that manages information by visually tracking, displaying and analyzing data points known as key performance indicators (KPIs). These metrics assess the health of a business process, department or organization through quick, easily interpreted views, without requiring changes to the underlying systems. A layperson should be able to set up a data dashboard with readily available tools, including those that disseminate this information throughout an organization. Dashboards are an older method of accomplishing these tasks, but they still have a place in today’s businesses.

Data notebooks are a form of interactive computing in which users write code, execute it, visualize the results and share insights from those results. They contain elements of a dashboard, but provide a more detailed view of data that includes reports and other types of written context. Notebooks offer an easier way to interpret data since they provide more context than dashboards. However, they’re also more difficult to implement, as they require the expertise of programmers and data scientists. The use of notebooks to visualize data will continue to grow, although they won’t replace dashboards completely.

History: The Evolution of Data Dashboards and Notebooks

Dashboards made their first appearance at the beginning of the 21st century as key components of business intelligence models. Most organizations rapidly adopted them as their preferred tool for surfacing indicators and achieving data-driven insights into business processes. The introduction of Hadoop in 2006 was followed by additional big data technologies that dramatically changed the way organizations interpreted their data.

For example, big data tools allowed organizations to process data in parallel on a scale that was not previously practical. These changes were initially limited to data storage and processing, as changing the access of that data by end-users seemed to be unnecessary due to the perceived effectiveness of dashboards at that time.

Notebooks became the standard tool for data exploration after Project Jupyter was released in 2014. Jupyter grew out of IPython and added data exploration capabilities that IPython alone lacked, attracting data scientists with the interactivity of notebooks. The popularity of notebooks has increased steadily since their introduction, and their use grew exponentially beginning in 2018. People who need to interpret data, especially data scientists and analysts, are currently the most common users of notebooks.

Comparisons of Data Visualization 

The primary purpose of dashboards is to present data in an accessible and engaging manner. Some dashboards are designed to answer specific questions like “How many new COVID-19 cases occurred in my country during the last week?” In fairness to dashboards, they’re much more efficient at this task than simply posting a table or a link to download the desired information.

While dashboards surface findings, they don’t help users act on that data. Dashboard results lack the context necessary to trust the data or make it useful, which keeps users from applying them to a specific purpose. Even if the user obtains this context from another source, a dashboard still lacks the power and flexibility to perform the required analysis. A dashboard may prompt users to do something with the data it provides, but not necessarily something meaningful.

Organizations initially attempted to solve this problem by adding more dashboards to their workflow. However, more dashboards meant more filters, and many of those dashboards were eventually abandoned. This negative feedback loop would often cause users to mistrust their data, resulting in friction between team members. Nevertheless, dashboards have greatly benefited data management over the last two decades, even if they are not the ideal interface for collaborating on and reporting data.

The greatest advantage of data notebooks over dashboards is that notebooks keep the data and its analysis together, allowing users to perform multiple interpretive steps in a single document. Collaboration is also a major benefit for notebook users due to their process-oriented nature, as opposed to the standalone scripting that dashboards rely on to analyze data. These differences benefit both the data scientists and the end-users of data notebooks.

The fundamental capability of a notebook is to let users answer any question, provided they know the programming language it uses. Users can trust this process because they can see both the code itself and the author’s comments. This approach allows team members to collaborate on a project, present their findings and share them with others.

The popularity of data notebooks means that many of these solutions are currently available. Jupyter is the most widely used at this time, but Google also offers a data notebook environment connected to Google Drive. Count may provide the most user-friendly data notebooks, with examples including A/B Test Email Report, Customer Success Portal and Spotify Data Directory. These solutions show the variety of notebooks available, all of which offer the interactivity that traditional reports lack.


Jupyter Notebook

The name Jupyter derives from three programming languages: Julia, Python and R. Jupyter Notebook is an interactive server-client web application that allows users to visualize and share content such as code, equations and text, and to weave these materials into a cohesive story for the whole work product. It is a multi-language computing environment that supports over 40 programming languages.

Jupyter Notebook was released in 2014 as a further refinement of IPython. The data science community eagerly adopted it, and it’s now the default research environment for many organizations. Jupyter is open-source software, which means it’s generally free to use, modify and implement. While computational notebooks have generally proven highly beneficial during the few years they’ve been around, Jupyter’s popularity has exploded since 2018. Its support for multiple languages has made Jupyter the preferred choice for data scientists who want to create and share code, especially those who perform rapid prototyping and exploratory data analysis.

Many language-specific integrated development environments (IDEs) are available, including Atom and Spyder. However, Jupyter’s flexibility and interactivity are the major reasons for its appeal to data scientists looking for an IDE. Experts in digital humanities are also enthusiastically adopting Jupyter as a pedagogical tool. GitHub reports that about 200,000 public Jupyter notebooks were shared in 2015, growing to over 2.5 million by September 2018.

Jupyter Notebook combines programming code and its commentary with interactivity, regardless of its specific applications. This capability makes Jupyter especially useful for data scientists who need to streamline their entire workflows. Common applications for Jupyter may include the development of engineering concepts as well as the creation of music and other art forms.

Anaconda automatically installs Jupyter, but users can also install it manually with the Python pip command. The Jupyter installation consists of three distinct components: the kernel, the notebook web application and the notebook documents. The kernel executes and inspects user code. The notebook web application lets users create and run code interactively, and the notebook documents store all of a notebook’s contents. Each document also records which kernel runs its code.
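A notebook document is ultimately just a JSON file. The sketch below shows the rough nbformat 4 layout of a minimal .ipynb document; the cell contents are invented for illustration:

```python
import json

# A minimal Jupyter notebook document in the nbformat 4 JSON layout.
# Field names follow the nbformat schema; the cell contents are invented.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {
        # The document records which kernel runs its code cells.
        "kernelspec": {"name": "python3", "display_name": "Python 3"}
    },
    "cells": [
        {   # A markdown cell holds the narrative context.
            "cell_type": "markdown",
            "metadata": {},
            "source": ["# Sales analysis\n", "Narrative lives beside the code."],
        },
        {   # A code cell holds executable code plus its captured outputs.
            "cell_type": "code",
            "execution_count": None,
            "metadata": {},
            "outputs": [],
            "source": ["print(2 + 2)"],
        },
    ],
}

# Serialized, this is essentially what an .ipynb file contains on disk.
ipynb_text = json.dumps(notebook, indent=1)
print(len(json.loads(ipynb_text)["cells"]))  # 2
```

Because the document is plain JSON, notebooks are easy to version, share and inspect with ordinary tools.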


Google Colab

Google Colaboratory, commonly known as Colab, is a cloud-based Jupyter notebook environment that stores its notebooks on Google Drive. Google originally developed Colab as an open-source project intended to contribute its work directly upstream to Jupyter. That effort produced a Google Chrome extension called “Open in Colab,” which was never completed. However, Google continued developing Colab as an internal project.

Colab’s user interface (UI) only allows users to create notebooks with Python 2 and Python 3 kernels as of October 2019. However, it can also open existing notebooks with R (IRkernel) or Swift kernels, since these kernels are installed in Colab’s container. Colab supports the Julia programming language, as do Google’s tensor processing units (TPUs). While the basic version of Google Colab is free, users can upgrade to a premium version for $9.99 per month per account.

Colab lets users run Jupyter notebooks without installing anything on their local computer, and it improves on Jupyter in many other ways. For example, users can open any Jupyter notebook from a GitHub repository. They can also load, edit and save any .ipynb file to the Google Drive associated with their Colab login. Users who employ this strategy will typically want a separate Google account for each project, ensuring each project has its own Google Drive.

Each project folder on Google Drive can also have its own GitHub repository. Team members can access Colab from any computer with internet connectivity and a web browser. This allows team members to be geographically separated, since they connect to their project through the cloud.

Colab also allows users to provision several generations of both Google TPUs and NVIDIA graphics processing units (GPUs), as well as multi-core central processing units (CPUs). In addition, a Colab notebook may include many useful Jupyter Notebook extensions.


Count

Count is a data analysis platform made by a company of the same name. It’s built around data notebooks but goes beyond the boundaries of traditional data science, while the fundamental principles of data notebooks still apply. Count accommodates users of all experience levels, so there’s no need to teach them languages like Python or SQL: users who don’t know SQL can build queries by dragging and dropping icons.

Count provides quick visuals of data with a single click, so there’s no need to install complex visualization packages or other software. It also joins tables and query results automatically, eliminating the need for users to write complex join statements or study the database schema. However, users who know SQL can write queries in “notebook SQL” or full SQL.

Count has collaboration built in, allowing team members to share notebooks with each other or the entire organization by sending a link. They can also add call-outs and comments to a document, making it truly shared. Count’s inclusion of notebooks as a core feature provides the collaboration, power and transparency that today’s project teams need. In addition to raw data, Count gives users meaningful insights that they can share with other members of their organization.

Notebook Usage

Count’s development has included collaboration with multiple organizations to assess how team members use the data their notebooks provide. The results show that data analysts preferred notebooks to writing complex SQL scripts, especially for simple tasks like creating a few base tables for other team members to use. The data behind notebooks is viewable by anyone, making the results difficult for skeptical users to dismiss on the grounds that they don’t know where the data came from.

Many users also use notebooks to answer ad-hoc queries, either by forking existing notebooks or creating their own. They can then share their notebook data with other team members, which can guide them in making presentations or sharing that data with other parts of their organization. Team members also use notebooks to create base reports that often provide detailed guidelines on interpreting that data, including special considerations that users need to make.

Notebooks typically have a higher degree of credibility than other data sources because the data is stored in a single location and viewable by all team members. In comparison, dashboards are often created for users who won’t read them, much less trust the data they provide. Dashboards also have many filters, thousands in many cases, because they must accommodate the needs of all users. The shift from dashboards to notebooks thus has a great impact on the way teams use data.

End-User Findings

The self-service reporting of data wasn’t new when notebooks were introduced. Furthermore, notebooks haven’t changed the fact that the primary challenge in using data has always been to make the right data sources available for end-users.

This process typically follows a chain of events that begins with attaching reporting tools directly to the daily-use tools in a production environment. However, this step has an obvious disadvantage: a faulty or overly demanding query can bring the production system down. The next step in improving data usage is to make a daily copy of the production system and attach the reporting tools to that copy.

Users are typically unhappy with the prospect of using a daily copy of data since they frequently need their data to be as close to live as possible. While a certain level of user dissatisfaction may be unavoidable for the sake of data security, this practice may also prevent users from understanding the structure of the data source they’re using. This is primarily because back-end systems use relational databases that aren’t well-designed for reporting. As a result, users may need to construct complex joins to make sense of data coming from multiple sources, which often causes errors that aren’t immediately obvious. For example, a user may want to know the number of cars sold in a particular month for each region but writes a query that also includes canceled offers.
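A toy version of that mistake, sketched here with SQLite and hypothetical sales and offers tables, shows how easily a join can over-count:

```python
import sqlite3

# Hypothetical schema: one row per sale, with an offer status per sale.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (id INTEGER, region TEXT, month TEXT);
CREATE TABLE offers (sale_id INTEGER, status TEXT);
INSERT INTO sales VALUES (1, 'North', '2023-05'), (2, 'North', '2023-05');
INSERT INTO offers VALUES (1, 'completed'), (2, 'canceled');
""")

# Naive query: joins sales to offers but forgets to exclude canceled
# offers, so the monthly count silently includes sale 2.
naive = conn.execute("""
    SELECT COUNT(*) FROM sales s JOIN offers o ON o.sale_id = s.id
    WHERE s.region = 'North' AND s.month = '2023-05'
""").fetchone()[0]

# Corrected query filters on offer status before counting.
correct = conn.execute("""
    SELECT COUNT(*) FROM sales s JOIN offers o ON o.sale_id = s.id
    WHERE s.region = 'North' AND s.month = '2023-05'
      AND o.status != 'canceled'
""").fetchone()[0]

print(naive, correct)  # 2 1
```

The naive count looks perfectly plausible on a dashboard; only a visible query, as in a notebook, reveals that one of the “sales” was a canceled offer.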

Increasingly strict data protection regulations are another driving factor for the rapid adoption of data notebooks. Organizations now have a strong incentive to prevent reporting users from accessing personal data unless their duties specifically require it. The traditional solution to protecting this data is to develop data warehouse structures with online analytical processing (OLAP) capability. Another option is to create views to ensure users select the right data.
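As a minimal sketch of the view approach, assuming a hypothetical customers table in SQLite, a view can expose only the columns that reporting actually needs:

```python
import sqlite3

# Hypothetical table mixing personal data with reporting fields.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER, name TEXT, email TEXT, region TEXT, plan TEXT);
INSERT INTO customers VALUES
  (1, 'Ada', 'ada@example.com', 'North', 'pro'),
  (2, 'Bo',  'bo@example.com',  'South', 'basic');

-- Reporting users query the view, never the base table, so names
-- and email addresses stay out of reach.
CREATE VIEW customer_report AS
  SELECT region, plan FROM customers;
""")

cursor = conn.execute("SELECT * FROM customer_report")
columns = [d[0] for d in cursor.description]
print(columns)  # ['region', 'plan']
```

In a production warehouse the same idea is enforced with database grants, so reporting roles can select from the view but not from the underlying table.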

Performance is usually a critical design consideration for these applications, so it’s often necessary to consolidate these views where possible. This process is often tedious, but it can allow managers to perform self-service reporting effectively. However, they still need to trust the data source, just as they do with prepared dashboards.

These considerations exist even when users understand what they’re doing and why. Self-service reporting becomes more problematic when dashboard users aren’t data scientists, even if they have other technical qualifications. These users often apply filters to their queries and convince themselves that the results are what they were looking for.

Which Data Analytics Tool Should You Choose?

A notebook approach to data analysis may provide all the tools that users need, but the work that data scientists put into setting up the notebook can still be wasted when users begin using filters excessively. It’s still important to maintain a structured workflow, even when you want users to play with the data. For example, users should be able to construct a query that will retrieve a KPI they’re looking for without future experiments overwriting that query.

Developers have often created traditional dashboards simply because a manager wanted a KPI for a performance review. However, dashboards may also have a well-defined use case that multiple managers have agreed is an important metric. In these cases, the dashboard should remain unchanged until that group agrees on a new definition of the KPI.

Another effective use of dashboards is monitoring the service level agreements (SLAs) for contracts. This type of dashboard should be more concise and display the required information in a clear, simple manner. Some dashboards that monitor SLAs are available to external suppliers, making it especially important to prevent user experimentation.

These specific use cases favor dashboards, even though the dashboards may require more time to set up correctly. Specifically, users must have access to documentation that allows them to properly interpret the data the dashboard provides. This requirement can be particularly difficult to meet in a modern business, where dashboards change hands so fast that the data scientists who created them never learn the requirements of each user group. One solution is to provide an icon on each chart that presents appropriate documentation when the user clicks it. Users can also edit this documentation directly or issue change requests, depending on their access rights.

Dashboards must also be alive if they’re to compete with notebooks. Users must continually adjust their dashboards to reflect changes in their data, and what they want to learn from it. They should also review their dashboards often to identify new KPIs they want to monitor. These requirements mean that a dashboard must be built on a flexible platform with enough resources available to regularly implement changes. An effective dashboard must also use an iterative approach that facilitates data exploration, making it suitable for conversion to a notebook.

Dashboards are here to stay for the time being, as they still have a place in most business environments. However, they should be built on a notebook-style foundation by data scientists, rather than end-users with expertise in other areas. This approach would involve assigning some of the dashboard’s results to a view that users can then share. Dashboards may also benefit from a button that provides additional details on the view. This feature could allow read-only or write access depending on the user’s privileges.

What Does This Mean for Contact Centers?

Contact centers have traditionally used dashboards to view their data, but they may benefit from notebooks. The ability of notebooks to take data from a variety of sources, clean it and present it to users often makes them an attractive improvement on dashboards. Transitioning to notebooks in a contact center typically involves combining the existing reports and dashboards into a single document. A team of data scientists can then prepare these reports in a notebook to provide the real-time data insights that are so important in a contact center.

Contact center managers also need quick views of straightforward KPIs, a need that favors dashboards. However, contact centers often expand their operations into areas like sales or marketing, in which case they should consider a notebook. At the very least, managers in these contact centers should develop a business case for remaining with dashboards.


A data visualization dashboard gives users insight into what their data is telling them. These dashboards pull data from multiple systems and output the specific metrics of interest to a particular user. Aceyus’ data visualization dashboards also provide data in real time, with automatic refreshes occurring in seconds. We offer options for customizing your dashboard, so you can see the data you want the way you want to see it. Our dashboards can also aggregate your data to provide actionable insights in a user-friendly manner, whether your data is stored in a single repository or multiple systems.

Are you ready to see the difference real-time dashboards can make for your business? Contact us today to receive your free consultation.

Aceyus Team

