Data Science Models Cheat Sheet



At DataCamp, we always look out for ways to help our students, who are all eager to become more data savvy, reach their objectives even faster. That’s why we recently created a series of Python cheat sheets that target people who are using it for data analysis. The ongoing series already covers some of the most important and fundamental topics in data science and are must-haves for anyone that wants to get started with Python for data science.

I hope this list of cheat sheets will be helpful to you. If you like any of them, whether it's the machine learning algorithms cheat sheet, the Scikit-Learn cheat sheet, the data visualization cheat sheet, the Keras cheat sheet, the TensorFlow cheat sheet, or any of the others, please share the list so others can use it in their machine learning and data science work. The Pandas library is built on NumPy and provides easy-to-use data structures and data analysis tools for the Python programming language. The Pandas cheat sheet will guide you through the basics of the library, going from its data structures to I/O, selection, dropping indices or columns, sorting and ranking, and retrieving basic information about the data structures you're working with. Keras, covered in its own cheat sheet, is a powerful and easy-to-use deep learning library for Theano and TensorFlow that provides a high-level neural networks API to develop and evaluate deep learning models.

And if you haven’t yet, you should consider learning this programming language. Year after year, Python’s popularity is increasing in the data science industry. The use of Python as a data science tool has been on the rise over the past few years: 54% of the respondents of the latest O'Reilly Data Science Salary Survey indicated that they used Python. The results of the 2015 survey showed that 51% of the respondents used Python.

Nobody can deny that Python has been on the rise in the data science industry and it certainly seems that it's here to stay.

So why not start now and make sure that the first steps you take count?

Get a copy of Python for data science cheat sheet and go through DataCamp’s Intro to Python for Data Science course. You’ll cover topics such as variables and data types, strings, lists, the basics of NumPy arrays, and much more. Complete your Python basics with an interactive Python List tutorial, to practice using this built-in data structure in Python for data analysis.

Afterwards, it’s time to lay the foundation for learning other data science libraries and dig deeper into (part of) the fundamentals of the Pandas and Scikit-Learn libraries: take a look at NumPy, the Python scientific computing library that is excellent for data analysis. You’ll see that this library provides you with an array data structure that is a great alternative to Python lists: it is more compact, allows faster access when you’re reading and writing items, and is more convenient and more efficient overall.


The NumPy cheat sheet will introduce you to array creation, array mathematics, selecting elements (through subsetting, slicing and indexing), array manipulation and much more!
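If you want a quick taste before opening the cheat sheet, here is a minimal sketch of those operations; the array values are made up purely for illustration.

import numpy as np

# Array creation
a = np.array([1, 2, 3, 4, 5, 6])
b = np.zeros((2, 3))            # 2x3 array of zeros
c = np.arange(0, 10, 2)         # evenly spaced values: 0, 2, 4, 6, 8

# Array mathematics (element-wise, no explicit loops)
squared = a ** 2
total = a.sum()
mean = a.mean()

# Selecting elements: subsetting, slicing and boolean indexing
first = a[0]
middle = a[1:4]
evens = a[a % 2 == 0]

# Array manipulation
reshaped = a.reshape(2, 3)      # view the same data as a 2x3 matrix
transposed = reshaped.T
stacked = np.concatenate([a, c])

print(squared, total, mean, evens, reshaped.shape, stacked)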

Make sure to use the reference sheet when you’re practicing arrays with DataCamp’s Python NumPy Tutorial or when you go through the Intro to Python for Data Science course. Undoubtedly, you’ll take your first steps with NumPy with confidence!

When you have mastered the basics, it’s time to get your hands dirty and analyze some real-life data. But you cannot start without the Pandas library: it’s all you ever need and want to use if you want to do data manipulation and analysis in Python.

But don’t go in unprepared: take DataCamp’s Pandas Foundations and Manipulating DataFrames with Pandas courses and make sure to keep the Pandas cheat sheet handy when you’re starting the Pandas DataFrame tutorial, where you can get extra practice to use this fast, flexible and expressive data structure.

Just like the tutorial, the cheat sheet not only gives basic information about the Pandas data structures and how to select values or compute basic statistics from them, but also shows you how data input and output, sorting and ranking the data in your DataFrame or Series, and data alignment work.
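As a minimal illustration of what the cheat sheet covers, here is a small sketch; the country data and the file name are made up for the example.

import pandas as pd

# A small, made-up DataFrame
df = pd.DataFrame({
    "country": ["Belgium", "India", "Brazil"],
    "capital": ["Brussels", "New Delhi", "Brasília"],
    "population": [11.3, 1303.2, 207.8],   # millions
})

# Selecting values
print(df["capital"])            # a single column (Series)
print(df.loc[1, "population"])  # label-based selection
print(df.iloc[0, 0])            # position-based selection

# Basic information and summary statistics
df.info()
print(df.describe())

# Sorting and ranking
print(df.sort_values("population", ascending=False))
print(df["population"].rank())

# Input/output (the path is hypothetical)
df.to_csv("countries.csv", index=False)
df2 = pd.read_csv("countries.csv")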

After you have already explored your data with some summary statistics on your DataFrame and manipulated your data in such a way that it’s ready for further analysis, it’s time to visualize your data!

The Bokeh library is the one you need to quickly and easily create interactive plots, dashboards, and data applications. What’s more, Bokeh enables high-performance visual presentations of large data sets in modern web browsers!

This Python visualization library is a powerful tool for your data science toolbox, so why not get started straight away?


First, get a copy of our Bokeh cheat sheet: it will make you familiar with the steps you need to go through to plot data and create statistical charts. It summarizes how you can prepare your data, create a new plot, add renderers for your data with custom visualizations, output your plot, and save or show it. Also, the creation of basic statistical charts will hold no secrets for you any longer.
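To give you a feel for that workflow, here is a minimal sketch using Bokeh's plotting interface. The data values and the output file name are made up, and depending on your Bokeh version you may prefer slightly different glyph methods.

from bokeh.plotting import figure, output_file, show

# Step 1: prepare your data (made-up values for illustration)
x = [1, 2, 3, 4, 5]
y = [6, 7, 2, 4, 5]

# Step 2: tell Bokeh where to write the output
output_file("lines.html")

# Step 3: create a new plot
p = figure(title="Simple line example", x_axis_label="x", y_axis_label="y")

# Step 4: add renderers (glyphs) for your data
p.line(x, y, legend_label="Temp.", line_width=2)
p.scatter(x, y, size=8)

# Step 5: show the result in the browser (or use save(p) to only write the file)
show(p)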


But don’t just sit around and look at the cheat sheet: take the Interactive Data Visualization with Bokeh course and get the practice you need to become a data viz wizard in no time!

After exploring your data, you’ll have even more detailed research questions. Here’s where modeling your data gets important if you want to find a solid answer for them.


Machine learning is essential to data science, and everybody who says “machine learning” and “Python” in the same sentence knows that Scikit-Learn is the way to go for machine learning in Python. This library implements a wide variety of machine learning, preprocessing, cross-validation and visualization algorithms with the help of a unified interface.

However, starting to tackle machine learning problems can be a pain: you don’t necessarily know where to start and how to go about it. That’s why the Scikit-Learn cheat sheet is a perfect companion to your first steps with Scikit-Learn: you'll not only see how to load in your data and how to preprocess it, but you’ll also see how to create your own model to which you can fit your data and predict target labels. Validation and tuning of your models to improve performance are also included in the reference sheet. Keep it handy while you’re going through our Scikit-Learn tutorial with character recognition as a topic.
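As a rough sketch of that workflow, here is what the load / preprocess / fit / predict / validate steps can look like on scikit-learn's built-in digits dataset; the choice of model and the parameter grid are just an illustration, not a recommendation.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Load the data (8x8 images of handwritten digits)
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Preprocessing + model in a single pipeline
model = make_pipeline(StandardScaler(), SVC())

# Fit the model and predict target labels
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print("Test accuracy:", accuracy_score(y_test, y_pred))

# Validation and tuning: search over a small, illustrative parameter grid
grid = GridSearchCV(model, {"svc__C": [0.1, 1, 10]}, cv=5)
grid.fit(X_train, y_train)
print("Best params:", grid.best_params_, "CV score:", grid.best_score_)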

About DataCamp

DataCamp is an online interactive education platform that focuses on building the best learning experience specifically for Data Science.

Data Science Models Cheat Sheet

Being able to make causal claims is a key business value for any data science team, no matter their size.
Quick analytics (in other words, descriptive statistics) are the bread and butter of any good data analyst working on quick cycles with their product team to understand their users. But sometimes important questions arise that need more precise answers. Business value sometimes means distinguishing true insights from incidental noise: insights that will hold up, versus temporary marketing material. In other terms, causation.

When answering these questions, absolute rigour is required. Failing to understand key mechanisms could mean missing out on important findings, rolling out the wrong version of a product, and eventually costing your business millions of dollars, or crucial opportunities.
Ron Kohavi, former director of the experimentation team at Microsoft, has a famous example: changing the place where credit card offers were displayed on amazon.com generated millions in revenue for the company.

The tech industry has picked up on this trend in the last 6 years, making Causal Inference a hot topic in data science. Netflix, Microsoft and Google all have entire teams built around some variations of causal methods. Causal analysis is also (finally!) gaining a lot of traction in pure AI fields. Having an idea of what causal inference methods can do for you and for your business is thus becoming more and more important.

The causal inference levels of evidence ladder

Hence the causal inference ladder cheat sheet! Beyond the value for data scientists themselves, I’ve also had success in the past showing this slide to internal clients to explain how we were processing the data and making conclusions.

The “ladder” classification explains the level of proof each method will give you. The higher, the easier it will be to make sure the results from your methods are true results and reproducible – the downside is that the set-up for the experiment will be more complex. For example, setting up an A/B test typically requires a dedicated framework and engineering resources.
Methods further down the ladder will require less effort on the set-up (think: observational data), but more effort on the rigour of the analysis. Making sure your analysis produces true findings and is not just commenting on noise (or worse, is plain wrong) is a process called robustness checks. It’s arguably the most important part of any causal analysis method. The further down the ladder your method is, the more robustness checks I’ll require if I’m your reviewer 🙂

I also want to stress that methods on lower rungs are not less valuable – it’s almost the contrary! They are brilliant methods that allow you to use observational data to draw conclusions, and I would not be surprised if people like Susan Athey and Guido Imbens, who have made significant contributions to these methods in the last 10 years, were awarded the Nobel prize one of these days!

Rung 1 – Scientific experiments

On the first rung of the ladder sit typical scientific experiments. The kind you were probably taught in middle or even elementary school. To explain how a scientific experiment should be conducted, my biology teacher had us take seeds from a box, divide them into two groups and plant them in two jars. The teacher insisted that we make the conditions in the two jars completely identical: same number of seeds, same moistening of the ground, etc.
The goal was to measure the effect of light on plant growth, so we put one of our jars near a window and locked the other one in a closet. Two weeks later, all our jars close to the window had nice little buds, while the ones we left in the closet barely had grown at all.
Since the exposure to light was the only difference between the two jars, the teacher explained, we were allowed to conclude that light deprivation caused the plants not to grow.

Sounds simple enough? Well, this is basically the most rigorous you can be when you want to attribute cause. The bad news is that this methodology only applies when you have a certain level of control over both your treatment group (the one that receives light) and your control group (the one in the cupboard). Enough control, at least, that all conditions are strictly identical except for the one parameter you’re experimenting with (light in this case). Obviously, this applies neither in social sciences nor in data science.

Then why do I include it in this article, you might ask? Well, basically because this is the reference method. All causal inference methods are in a way hacks designed to reproduce this simple methodology in conditions where you shouldn’t be able to draw conclusions if you strictly followed the rules explained by your middle school teacher.

Rung 2 – Statistical Experiments (aka A/B tests)

Probably the most well-known causal inference method in tech: A/B tests, a.k.a. Randomized Controlled Trials for our Biostatistics friends. The idea behind statistical experiments is to rely on randomness and sample size to mitigate the inability to put your treatment and control groups in the exact same conditions. Fundamental statistical results like the law of large numbers, the Central Limit Theorem and Bayesian inference give guarantees that this will work, and a way to deduce estimates and their precision from the data you collect.
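To give a flavour of what the analysis side looks like, here is a minimal sketch comparing conversion rates between a control and a treatment group with a two-proportion z-test. The counts are made up, and a real experiment analysis involves much more care about power, peeking, multiple metrics, and so on.

import numpy as np
from statsmodels.stats.proportion import proportions_ztest, proportion_confint

# Made-up results: conversions and sample sizes for control (A) and treatment (B)
conversions = np.array([420, 480])
visitors = np.array([10_000, 10_000])

# Two-sided test of equal conversion rates between the two groups
z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {z_stat:.2f}, p-value = {p_value:.4f}")

# 95% confidence intervals for each group's conversion rate
low, high = proportion_confint(conversions, visitors, alpha=0.05)
print("95% CIs:", list(zip(low, high)))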

Arguably, an experiments platform is one of the first projects any Data Science team should invest in (once all the foundational levels are in place, of course). The impact of setting up an experimentation culture in tech companies has been very well documented and has earned companies like Google, Amazon, Microsoft, etc. billions of dollars.

Of course, despite being pretty reliable on paper, A/B tests come with their own sets of caveats. This white paper by Ron Kohavi and other founding members of the Experiments Platform at Microsoft is very useful.

Rung 3 – Quasi-Experiments

As awesome as A/B tests (or RCTs) can be, in some situations they just can’t be performed. This might happen because of a lack of tooling (a common case in tech is when a specific framework lacks the proper tools to set up an experiment quickly, so the test becomes counter-productive), ethical concerns, or simply because you want to study some data ex-post. Fortunately for you, if you’re in one of those situations, some methods still exist to get causal estimates of a factor. In rung 3 we talk about the fascinating world of quasi-experiments (also called natural experiments).

A quasi-experiment is the situation when your treatment and control group are divided by a natural process that is not truly random but can be considered close enough to compute estimates. In practice, this means that you will have different methods that will correspond to different assumptions about how “close” you are to the A/B test situation. Among famous examples of natural experiments: using the Vietnam war draft lottery to estimate the impact of being a veteran on your earnings, or the border between New Jersey and Pennsylvania to study the effect of minimum wages on the economy.

Now let me give you a fair warning: when you start looking for quasi-experiments, you can quickly become obsessed and start thinking about clever data collection in improbable places… Now you can’t say you haven’t been warned 😜 I have more than a few friends who were lured into a career in econometrics by the sheer love of natural experiments.

The most popular methods in the world of quasi-experiments are: differences-in-differences (the most common one, according to Scott Cunningham, author of Causal Inference: The Mixtape), Regression Discontinuity Design, Matching, and Instrumental Variables (which is an absolutely brilliant construct, but rarely useful in practice). If you’re able to observe (i.e. gather data on) all the factors that explain how treatment and control are separated, then a simple linear regression including all those factors will give good results.
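As an illustration of the most common of these, here is a minimal difference-in-differences sketch based on an OLS regression. The simulated data and the effect size are made up, and a real analysis would also need parallel-trends checks and other robustness checks.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a simple setting: two groups observed before and after a change
rng = np.random.default_rng(0)
n = 2_000
treated = rng.integers(0, 2, n)   # 1 = treatment group, 0 = control group
post = rng.integers(0, 2, n)      # 1 = after the change, 0 = before
true_effect = 1.5                 # made-up treatment effect
y = 10 + 2 * treated + 1 * post + true_effect * treated * post + rng.normal(0, 1, n)

df = pd.DataFrame({"y": y, "treated": treated, "post": post})

# The coefficient on the treated:post interaction is the diff-in-diff estimate
did = smf.ols("y ~ treated * post", data=df).fit()
print("DiD estimate:", did.params["treated:post"], "+/-", did.bse["treated:post"])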

Rung 4 – The world of counterfactuals


Finally, you will sometimes want to try to detect causal factors from data that is purely observational. A classic example in tech is estimating the effect of a new feature when no A/B test was done and you don’t have any group that isn’t receiving the feature that you could use as a control.

Maybe right now you’re thinking: wait… are you saying we can simply look at the data before and after and be allowed to draw conclusions? Well, the trick is that it often isn’t that simple to do a rigorous analysis, or even compute an estimate. The idea here is to create a model that allows you to compute a counterfactual control group. Counterfactual means “what would have happened had this feature not existed”. If you have a model of your number of users that you trust enough to make robust predictions, then you basically have everything you need.
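To make the idea concrete, here is a deliberately simple sketch: fit a model on the pre-launch period only, use it to predict the counterfactual for the post-launch period, and read the feature’s effect as the gap between what actually happened and what the model predicted. The data, trend and effect are all made up, and a real analysis needs far more careful modelling and robustness checks.

import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up daily active users: a linear trend, noise, and a bump after the launch
rng = np.random.default_rng(42)
days = np.arange(120)
launch_day = 90
users = 1_000 + 5 * days + rng.normal(0, 20, days.size)
users[launch_day:] += 80          # hypothetical effect of the new feature

# Fit a simple model on the pre-launch period only
pre = days < launch_day
model = LinearRegression().fit(days[pre].reshape(-1, 1), users[pre])

# Counterfactual: what the model says would have happened without the feature
counterfactual = model.predict(days[~pre].reshape(-1, 1))
estimated_effect = (users[~pre] - counterfactual).mean()
print(f"Estimated effect: ~{estimated_effect:.0f} daily users")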


There is a catch though. When using counterfactual methods, the quality of your prediction is key. Without getting too much into the technical details, this means that your model not only has to be accurate enough, but also needs to “understand” what underlying factors are driving what you currently observe. If a confounding factor that is independent from your newest rollout varies (economic climate for example), you do not want to attribute this change to your feature. Your model needs to understand this as well if you want to be able to make causal claims.


This is why robustness checks are so important when using counterfactuals. Some cool Causal Inference libraries like Microsoft’s doWhy do these checks automagically for you 😲 Sensitivity methods like the one implemented in the R package tipr can be also very useful to check some assumptions. Finally, how could I write a full article on causal inference without mentioning DAGs? They are a widely used tool to state your assumptions, especially in the case of rung 4 methods.
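For a flavour of what those automated checks look like, here is a hedged sketch with DoWhy. The column names and data are made up, and you should check the DoWhy documentation for the exact API of the version you install.

import numpy as np
import pandas as pd
from dowhy import CausalModel

# Made-up observational data with one confounder w
rng = np.random.default_rng(0)
n = 5_000
w = rng.normal(size=n)
treatment = (w + rng.normal(size=n) > 0).astype(int)
outcome = 2.0 * treatment + 1.5 * w + rng.normal(size=n)
df = pd.DataFrame({"w": w, "treatment": treatment, "outcome": outcome})

# State your assumptions explicitly (here via common causes; a DAG works too)
model = CausalModel(data=df, treatment="treatment", outcome="outcome",
                    common_causes=["w"])

# Identify and estimate the effect
estimand = model.identify_effect()
estimate = model.estimate_effect(estimand,
                                 method_name="backdoor.linear_regression")
print("Estimated effect:", estimate.value)

# Robustness check: add a random common cause and see if the estimate moves
refutation = model.refute_estimate(estimand, estimate,
                                   method_name="random_common_cause")
print(refutation)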

(Quick side note: right now with the unprecedented Covid-19 crisis, it’s likely that most prediction models used in various applications are way off. Obviously, those cannot be used for counterfactual causal analysis)


Technically speaking, rung 4 methods look very much like methods from rung 3, with some small tweaks. For example, synthetic diff-in-diff is a combination of diff-in-diff and matching. For time series data, CausalImpact is a very cool and well-known R package. causalTree is another interesting approach worth looking at. More generally, models carefully crafted with domain expertise and rigorously tested are the best tools to do Causal Inference with only counterfactual control groups.


Hope this cheat sheet will help you find the right method for your causal analyses and be impactful for your business! Let us know about your best #causalwins on our Twitter, or in the comments!