
Mastering Jupyter Notebooks: Best Practices for Data Science


Ah, Jupyter Notebooks—a data scientist’s trusty companion, as reliable as a warm croissant in a French bakery. 

If you’ve ever dipped your toes into data science, machine learning, or even scientific computing, chances are you’ve encountered a Jupyter Notebook. These powerful tools have become ubiquitous for good reasons: they blend code, narrative text, equations, and visualizations in a single document, making data storytelling a breeze. 

The beauty of a Jupyter Notebook lies in its layers—the layers of information, code, and commentary that give context to raw data and findings. 

But as we all know, beauty can quickly turn to chaos without proper care. 

This becomes particularly crucial in team environments where clarity and readability are not just courtesies, but necessities. 

This article delves into best practices for working with Jupyter Notebooks, guided by a simple daily sales analysis project from a French bakery (French bakery daily sales | Kaggle). The accompanying Notebook is accessible here: Daily Sales Analysis Best Practices Demo. Alternatively you can use the sandbox embedded at the bottom of the page. 

In this article, you’ll discover why and how to keep your Notebooks focused, the role of Markdown for readability, discipline in cell execution, the importance of modular programming, and tips for optimized data loading and memory management.  

All right, let’s get this trip started. You’ll come away from the article with newfound proficiency in Jupyter Notebooks.

🔖 Related resource: Jupyter Notebook for realistic data science interviews

1. Ensure your Notebook stays focused 

The Dilemma: One comprehensive Notebook vs. Multiple specialized Notebooks 

So you’re asked to work on daily sales data for a French bakery—croissants, baguettes, and all those delicious goodies. Would you put every analysis—customer behavior, seasonal trends, inventory levels—into one grand, all-encompassing Notebook? Or would you break it down into bite-sized notebooks, each catering to a specific question? I ask myself questions like this every time I start a new project.

Strategies for deciding between one large Notebook and multiple smaller ones 

When initiating a data science project, the scope of your analysis can significantly influence whether you opt for a single, all-encompassing notebook or multiple specialized ones. 

For projects covering various topics or analyses, it often makes sense to split the work into several notebooks. This way, each Notebook stays focused and readily understandable, making it easier to collaborate with others and to revisit the work later on.

Another crucial factor is your audience. A series of focused notebooks might be more beneficial if your audience comprises experts looking for a deep dive into the data. On the other hand, if the audience is looking for a comprehensive overview, a single notebook that brings multiple analyses together might be more effective. 

There’s no straightforward answer to this dilemma. There are pros and cons to both focused and comprehensive notebooks. Let’s take a closer look at each option. 

The value of a focused Notebook 

Having a focused notebook is similar to following a well-organized recipe. It allows your readers, or even a future version of yourself, to navigate easily through the steps without getting sidetracked by irrelevant details. 

A well-defined objective paves a straightforward path from raw data to valuable insights, enhancing your work’s clarity and readability. Each cell and section is tailored to a specific role, which minimizes clutter and adds efficiency. 

This focus doesn’t just help the author; it also simplifies sharing and collaboration. With clear objectives, team members can quickly grasp the Notebook’s purpose, making it easier to either extend the existing work or offer constructive critiques. 

The advantages of a comprehensive Notebook 

While specialized notebooks offer modularity and focus, there are scenarios where a single, comprehensive Notebook may be more appropriate. 

For instance, when your analyses are deeply interconnected or when you aim to present a unified narrative, a single Notebook allows for seamless storytelling and data exploration. This centralized approach also minimizes the risk of duplicating data preparation steps across multiple files, thereby making your analysis more efficient.  

The table below summarizes the advantages and disadvantages of each approach, which can help you decide between one large Notebook and multiple smaller ones.

|      | One Big Notebook            | Many Small Notebooks     |
|------|-----------------------------|--------------------------|
| Pros | Centralized analysis        | Enhanced clarity         |
|      | Easy to see the big picture | Better modularity        |
| Cons | Reduced clarity             | Management overhead      |
|      | Performance issues          | Redundancy in data prep  |

The need for adaptability 

It’s worth noting that the choice between one large Notebook and multiple smaller ones is rarely set in stone. As a project evolves, so too can its documentation. 

You may start with one Notebook and later find it beneficial to split the work into more specialized notebooks as the scope expands. Your Notebook’s structure should be flexible enough to cater to different project needs and audience expectations. 

Just remember the mantra: “Adaptable, Clear, and Purpose-Driven.” 

Best practices for keeping your Notebooks well organized 

Organizing your notebooks can be challenging, even with the previous explanations. To make it easier, here are some practical tips you can use: 

  1. Clearly define the objective at the start: Before you even begin coding, outline what you aim to achieve with the Notebook. A clear objective sets the stage for focused analyses and helps your audience quickly understand the Notebook’s purpose. 
  2. Limit tangential analyses: You may encounter exciting side routes as you dive into your data. While it’s tempting to go off on tangents, these can dilute the primary focus of your Notebook. If a tangential analysis starts to take on a life of its own, it may warrant a separate notebook. 
  3. Use a Table of Contents (TOC) or an index: Navigation can become a challenge in larger notebooks. Implementing a table of contents or an index can significantly improve the Notebook’s usability, helping you and your collaborators find relevant sections more efficiently.

By following these guidelines and deciding strategically between one large Notebook and multiple smaller ones, you can make your Jupyter Notebook projects more organized, focused, and collaborative.

2. Mastering Markdown usage 

Now that you have chosen the scope of your Notebook, you need to pay attention to the format. 

Imagine you’ve crafted the perfect baguette. It’s not just the ingredients that matter, but also how they’re presented—the golden crust, the soft interior. Similarly, a well-structured Jupyter Notebook isn’t solely about the code and data. It’s also about presentation and narrative, which is where Markdown shines. 

The value of structured Notebooks 

Markdown helps you create structured notebooks by allowing you to include headers, lists, images, links, and even mathematical equations. These elements contribute to the Notebook’s readability and flow, making it easier for anyone to understand your work process. 

How Markdown text improves narrative and flow 

Think of the Markdown text as the storyline that weaves through your Notebook. It guides the reader through your thought process, explains the rationale behind code snippets, and adds context to your data visualizations. 

Keeping audience and purpose in mind 

A notebook aimed at technical experts might focus more on code and algorithms, but one intended for business stakeholders may benefit from more narrative explanations and visualizations. Markdown lets you tailor your Notebook to its intended audience. 

Demonstrating Markdown capabilities 

For our purposes, the following Markdown elements will be useful in creating an eye-popping narrative for your data. You can see how they render in the Notebook in the image below.

  • Headers: The use of ## generates a subheading, Average Revenue per Transaction, which serves as a navigational landmark in the Notebook. 
  • Anchor links: The <a class="anchor" id="avg-revenue-per-transaction"></a> is an HTML tag used to create a hyperlink anchor for more straightforward navigation within the Notebook. 
  • Math equations: The formula for the KPI is displayed using LaTeX notation enclosed within dollar signs $. This allows for a clear presentation of mathematical concepts (see the example just after this list). 
  • Inline code: The backticks around Average Revenue per Transaction set this particular text apart, typically indicating that it refers to a code element or technical term.
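
For instance, written inside a Markdown cell, the KPI’s formula could look like the line below (a reconstruction of the standard definition of the metric; the exact cell lives in the companion Notebook):

$\text{Average Revenue per Transaction} = \dfrac{\text{Total Revenue}}{\text{Number of Transactions}}$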

This example offers just a snapshot of what Markdown can do. The possibilities for enriching your Notebook with Markdown are extensive, enabling you to turn a collection of code cells into a well-organized, compelling narrative. 

Making your Notebooks presentation-ready 

When we talk about a “presentation-ready” Jupyter Notebook, we’re referring to a notebook that transcends the role of a mere draft or a playground for data tinkering. 

A presentation-ready notebook is polished, well-organized, and easy to follow, even for someone who didn’t participate in its creation. It should be able to “tell the story” on its own, meaning it can be handed off to a colleague, a stakeholder, or even your future self with minimal guidance – and still make complete sense. 

A presentation-ready notebook typically displays several key characteristics that set it apart from a rough draft: 

  1. Well-organized sections and subsections: The Notebook should be logically segmented into parts that guide the reader through your thought process. Each section should flow naturally into the next, like chapters in a book. 
  2. Descriptive comments and Markdown text: A good Notebook uses the markdown cells effectively to describe what each code cell is doing. It’s not just the code that speaks; the text around it elucidates why a particular analysis is essential, what the results signify, or why a specific coding approach was taken. 
  3. Effective use of visualizations: Charts, graphs, and other visual aids should not be treated as afterthoughts. Instead, they should be integral parts of the narrative, aiding in understanding the data and the points you are trying to convey. 
  4. Significance in data storytelling: A presentation-ready notebook is particularly vital for data storytelling. In many ways, your Notebook is the story—the narrative explaining your data-driven insights. 

Next, you’ll find a screenshot that exemplifies the use of Markdown to enhance the comprehension of data visualization. 

The screenshot shows a Notebook section where Markdown text provides a description and a plot analysis. This serves as a real-world demonstration of how Markdown can make visual data more meaningful and contribute to a presentation-ready Notebook.  

By focusing on structured organization, effective Markdown usage, and meaningful visualizations, you transform your Notebook from a mere collection of code into a powerful tool for both analysis and decision-making. The Markdown text, in particular, elevates your narrative by clarifying not just the “what” but also the “why” and “how,” adding depth to your data storytelling. 

3. Cell execution order discipline 

It’s great to have a well-organized and nicely presented notebook, but having one that runs smoothly is even better! 

Sequential flow and logic

When analyzing sales data for our cozy French bakery, we could hop between different cells to explore new ideas, debug issues, or revisit past analyses. 

One of the most distinctive features of Jupyter Notebooks is the ability to execute cells out of order, and it is often a double-edged sword. 

This non-linear execution gives notebooks their interactive power, allowing for greater flexibility and exploration, but it also opens the door to confusion, errors, and logical inconsistencies if we’re not careful. 

Potential pitfalls of out-of-order cell execution 

Imagine you’re calculating the average sales of croissants for the last week. You execute a cell that deletes outliers, but then you return to a previous cell to adjust some parameters and re-run it. 

You’ve now got a Frankenstein dataset—part cleaned, part raw. This kind of scenario makes it extremely difficult to replicate your results or debug issues. 

Maintaining logical flow and data integrity in your Notebook 

Maintaining a logical progression in your Notebook is crucial for ensuring it remains a reliable tool for data analysis. 

Each cell should build upon the output of the previous one, creating a coherent flow of information and analysis. However, this comes with the risk of stale or overwritten variables. If you modify a variable in one cell but forget to update it in subsequent cells that rely on it, inconsistencies can occur, leading to misleading results. 

Let’s say you’re preparing a sales report and accidentally run the “total sales calculation” cell before updating the “monthly discount” cell. Your total sales figure ends up being incorrect, and you only realize it during a team meeting—embarrassing, right? 

A disciplined approach could have prevented this. To mitigate such risks, it is advisable to temporarily duplicate some of your data for testing purposes. This allows you to make changes and run tests without affecting the original variables, reducing the potential for errors and headaches. 
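
Here is a minimal sketch of that defensive habit, assuming your data is already loaded into a DataFrame named df (the cleaning step shown is purely illustrative):

# Experiment on a throwaway copy so the original DataFrame stays untouched
df_experiment = df.copy()

# Try a new cleaning step on the copy first (here: dropping duplicate rows)
df_experiment = df_experiment.drop_duplicates()

# Once you're happy with the result, apply the same change to the original df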

Restart and Run All: A best friend 

Make a habit of regularly restarting the kernel to clear the Notebook’s state. This helps in catching hidden dependencies or assumptions about prior cell executions. Also, use the “Run All” feature to verify that your Notebook flows logically from start to finish. Keep an eye on the execution count indicator to help track the order in which cells have been run. 

To reiterate this important message: maintaining discipline in cell execution order is not just good practice; it’s a necessity for creating reliable, shareable, and replicable notebooks. 

Whether working solo on a pet project or collaborating on a critical business analysis, disciplined execution ensures that your Notebook is as dependable as it is powerful. 

4. Optimized data loading and memory management 

Although optimizing data loading and memory management isn’t particularly relevant to our study of French bakery sales, it’s still important to be aware of the pitfalls that can arise from a lack of attention to these issues. 

Challenges of working with large datasets in Jupyter Notebooks 

Challenges abound when handling large datasets, as limited system memory and performance degradation can seriously hamper your work. 

Overlooking data size can slow your analysis, freeze your Notebook, or even crash the system. Being mindful of these factors is crucial; for example, attempting to load a 10GB dataset on a machine with only 8GB of RAM is a recipe for failure. Therefore, understanding and managing these challenges is integral to a smooth and productive workflow in Jupyter Notebooks. 

Efficient memory use techniques 

1. Data sampling: Working with representative subsets for exploratory analysis 

When initially exploring data, consider loading only a subset of rows and columns you need. This can be quickly done using the nrows and usecols parameters in pandas.read_csv() or similar functions in other data-loading libraries. 

# Example: Load only the first 1,000 rows and a handful of columns
import pandas as pd

df_sample = pd.read_csv(
    "bakery sales.csv", nrows=1000, usecols=["date", "time", "article"]
)


For preliminary analyses, sometimes working with a representative sample of your data can be as good as using the entire dataset. This also significantly reduces memory usage. 

# Example: Random sampling of 10% of the dataset
df_sample = df.sample(frac=0.1)

2. Monitoring memory usage with functions like pandas.DataFrame.info()

Using pandas.DataFrame.info() provides a detailed summary of the DataFrame, including its memory usage, which helps you keep track of computational resources. 
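
As a quick illustration with the df_sample DataFrame loaded earlier, pandas can report memory usage explicitly:

# Summary of dtypes plus an accurate ("deep") memory estimate for object columns
df_sample.info(memory_usage="deep")

# Per-column memory usage in bytes
print(df_sample.memory_usage(deep=True))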

3. Using suitable data types

One of the easiest ways to reduce memory usage is by converting data types. For example, float64 can often be safely downcast to float32, and int64 can often become int32 or even smaller data types. For instance, we convert unit_price from string to float and then downcast it to the smallest possible float type. 

# 'unit_price' is in string format with the Euro symbol, and comma as
#  a decimal separator
# remove the Euro symbol
df_copy["unit_price"] = df_copy["unit_price"].str.replace(" €", "")
# replace comma with dot
df_copy["unit_price"] = df_copy["unit_price"].str.replace(",", ".")

# Downcasting unit_price column
df_copy["unit_price"] = pd.to_numeric(df_copy["unit_price"], downcast="float")


Also, columns with a low number of unique values (low cardinality) can be converted to the category data type. 

df_copy['article'] = df_copy['article'].astype('category')

Thanks to these downcasts, we’ve freed up memory, reducing the size of the DataFrame by more than 50%, from 68.94 MB to 32.37 MB. 

4. Techniques like dimensionality reduction and aggregating data

Sometimes, you can reduce your data size by aggregating it at a higher level. For instance, if you have transaction-level data, summarizing it daily or weekly can significantly reduce data size without losing the overall trend information. 
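
Sticking with the column names from the earlier snippets, a minimal daily-aggregation sketch (not taken from the companion Notebook) could look like this:

# Parse dates, then collapse transaction-level rows into one row per day
df_copy["date"] = pd.to_datetime(df_copy["date"])

daily_sales = df_copy.groupby(pd.Grouper(key="date", freq="D")).agg(
    items_sold=("article", "count"),        # number of items sold each day
    avg_unit_price=("unit_price", "mean"),  # average unit price that day
)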

5. Deleting large variables to free up memory

If you’ve created large intermediate variables that are no longer needed, you can free up memory by deleting them with Python’s del keyword. 
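
For example, once the exploratory df_sample from earlier is no longer needed, you can drop it and, optionally, nudge the garbage collector:

import gc

# Remove the reference to the large intermediate DataFrame
del df_sample

# Optionally ask Python's garbage collector to reclaim the memory right away
gc.collect()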

☝️ Remember your best friend “Restart and Run All”? If you follow that advice, you’ll be re-running your notebooks a lot, so even a modest execution-time saving from memory optimization pays off many times over in the long run. 

Effective data and memory management in Jupyter Notebooks is not merely a good-to-have but necessary for achieving a smooth workflow. Being judicious about what data to load, optimizing code to be memory-efficient, and systematically freeing up resources make for a more robust and hiccup-free data analysis experience. 

ℹ️ Mastering memory management is a discipline in itself, and what we’ve touched upon here is just the tip of the iceberg when it comes to the techniques and practices you can employ. Check out this article on Python Memory Management for further information.

5. Refactor and create modules 

As we come to the end of this article, there is one more critical aspect to address for mastering Jupyter Notebooks: refactoring and modularity. 

Role of modular programming in data science 

In data science, modular programming serves as a cornerstone for creating efficient and streamlined workflows. 

This approach is centered around compartmentalizing your code into distinct, functional modules, much like organizing a complex machine into its fundamental, working parts. It provides a two-fold advantage: First, it elevates the readability of your Notebook by organizing code into more understandable and navigable units. Second, it enriches the reusability of your code, enabling it to serve as a set of building blocks for future projects. 

Continuing our earlier discussion on efficient data handling and memory optimization, let’s take those principles further by incorporating them into a refactored and modular code design. 

I’d like to introduce the function optimize_memory, which serves as a practical example of code refactoring. This function encapsulates all the various techniques we’ve discussed for memory optimization into a single, reusable piece of code. 

Instead of manually applying data type conversions and downcasting in multiple places throughout your Notebook, you can now make a single call to optimize_memory:

def optimize_memory(df, obj_cols_to_optimize=None, in_place=False):

    # Use None as the default to avoid a mutable default argument
    if obj_cols_to_optimize is None:
        obj_cols_to_optimize = []

    if not in_place:
        df = df.copy()

    # Downcast integer columns
    int_cols = df.select_dtypes(include=["int"]).columns
    df[int_cols] = df[int_cols].apply(pd.to_numeric, downcast="integer")

    # Downcast float columns
    float_cols = df.select_dtypes(include=["float"]).columns
    df[float_cols] = df[float_cols].apply(pd.to_numeric, downcast="float")

    # Downcast specified object columns
    for col in obj_cols_to_optimize:

        # Check if column exists and is actually of object dtype
        if col in df.columns and df[col].dtype == "object":
            # Convert to category
            df[col] = df[col].astype("category")

    if in_place:
        return None
    else:
        return df


df_memory_optimized = optimize_memory(
    df_copy, obj_cols_to_optimize=["article"], in_place=False
)


Creating and organizing custom modules for Notebooks 

While many notebooks commence as a straightforward list of code cells, the truly effective ones break free from this basic form. 

By leveraging custom Python modules, they encapsulate intricate tasks or repetitive actions. Take, for example, our optimize_memory function. This function could be an excellent candidate for exporting to a custom module. By doing so, you can effortlessly integrate it into your Notebook whenever needed, akin to slotting a new gear into a well-oiled machine. 

This level of organization keeps your main document streamlined, allowing you to concentrate on high-level logic and data interpretation rather than getting mired in the details of code syntax. 

Effortless integration with team collaboration in mind 

Incorporating a custom module, like one containing optimize_memory, is generally as simple as importing a standard Python library. 

Simple commands like import my_module or from my_module import optimize_memory can help weave these external Python assets into the fabric of your Notebook. To ensure seamless collaboration, remember to document any external dependencies in a requirements.txt file. 
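
Here is a minimal sketch of that workflow; the module name memory_utils.py is hypothetical rather than something shipped with the companion Notebook:

# Assuming optimize_memory has been moved into a file named memory_utils.py
# that sits in the same folder as the Notebook:
from memory_utils import optimize_memory

df_memory_optimized = optimize_memory(df_copy, obj_cols_to_optimize=["article"])

If the module itself relies on extra packages, list them in that requirements.txt so collaborators can recreate your environment.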

Modular notebooks are easier to decipher, more straightforward to extend, and thus ideal for team-based projects. A notebook arranged in this fashion transforms from a personal sandbox into an enterprise-ready tool that multiple individuals can comfortably use for various aspects of the project. This transformation naturally fosters better collaboration and makes the Notebook a more versatile asset in the data science toolkit. 

Refactoring and modularizing your Jupyter Notebook is like refining a good recipe. The result is something more efficient, shareable, and enjoyable, both for you, the chef, and for those lucky enough to sample your culinary (or data science) creations. 

Conclusion 

Recap of best practices 

Throughout this article, we’ve dived deep into best practices for Jupyter Notebooks, ranging from maintaining focus and clarity in your notebooks to mastering Markdown for enhanced readability. 

We’ve also discussed the importance of disciplined cell execution and how to handle data loading and memory management for smoother notebook operation. Lastly, we tackled the advantages of modular programming.  

The value of each practice in professional data science settings 

In a professional setting, these practices are not just recommendations but necessities. 

A well-organized and focused notebook ensures that your work is understandable and replicable, not just by you but by anyone on your team. Efficient memory management and modular code can dramatically speed up development time and make the maintenance of long-term projects more sustainable. 

A collaborative endeavor 

The ultimate goal of adhering to these best practices is to facilitate better teamwork and collaboration. 

In a shared environment, a disciplined approach to Notebook usage ensures that everyone can follow along, contribute, and derive value from what has been done. 

But let’s not forget that collaboration isn’t just about working well with others. It’s also about being kind to your future self. After all, future you will undoubtedly appreciate the meticulous organization and readability when revisiting your Notebook! 

As Jupyter Notebooks become increasingly central in data science and related fields, mastering these best practices is more critical than ever. Whether you’re using Jupyter Notebooks in an interview setting or simply collaborating on a project, these tips can be your secret weapon for efficient and practical analysis. 

Now that you’re equipped with these best practices, why not put them to the test? Experience the efficiency and clarity of a well-organized Jupyter Notebook during your next project or even in data science interviews. 

And there you have it! Best practices for Jupyter Notebooks that can elevate your data science projects from good to exceptional. Thank you for reading, and happy coding! 

Simon Bastide is a data scientist with a Ph.D. in Human Movement Sciences and nearly five years of experience. Specializing in human-centric data, he’s worked in both academic and corporate settings. He has also been an independent technical writer for the past two years, distilling complex data topics into accessible insights. Visit his website or connect with him on LinkedIn for collaborations and discussions.