dirty_cat: machine learning on dirty categories

dirty_cat helps with machine learning on non-curated categorical data. It provides encoders that are robust to morphological variants, such as typos, in the category strings.

The SuperVectorizer is a high-level transformer that automatically turns a heterogeneous data table, such as a pandas dataframe, into a numerical array suitable for machine learning.
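To illustrate the idea, here is a minimal pure-Python sketch of what such a vectorizer automates, not dirty_cat's actual implementation: the `vectorize_table` function and the tiny table are made up for illustration.

```python
# Sketch of automatic table vectorization (illustrative, not dirty_cat's code):
# pass numerical columns through unchanged, one-hot encode string columns,
# and concatenate everything into one numerical array.

def vectorize_table(columns):
    """columns: dict mapping column name -> list of values."""
    blocks = []  # one 2D block of features per column
    for values in columns.values():
        if all(isinstance(v, (int, float)) for v in values):
            # Numerical column: keep as a single feature
            blocks.append([[float(v)] for v in values])
        else:
            # String column: one-hot encode over the observed categories
            categories = sorted(set(values))
            blocks.append(
                [[1.0 if v == c else 0.0 for c in categories] for v in values]
            )
    n_rows = len(next(iter(columns.values())))
    # Concatenate the per-column blocks row by row
    return [sum((block[i] for block in blocks), []) for i in range(n_rows)]

table = {"age": [25, 32], "city": ["Paris", "London"]}
X = vectorize_table(table)
# Each row of X is now fully numeric
```

The SuperVectorizer additionally picks encoders suited to each column (for instance a dirty-category encoder for high-cardinality string columns), which this sketch does not attempt.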

Dirty category encoders: scikit-learn's OneHotEncoder can be replaced by:

  • GapEncoder, scalable and interpretable, where each encoding dimension corresponds to a topic that summarizes the substrings it captures.

  • SimilarityEncoder, a simple modification of one-hot encoding that accounts for similarities between strings.

  • MinHashEncoder, which is very scalable.

For a detailed description of the problem of encoding dirty categorical data, see Similarity encoding for learning with dirty categorical variables [1] and Encoding high-cardinality string categorical variables [2].


$ pip install --user dirty_cat

Recent changes

Requires Python 3

Using dirty_cat

API documentation

Encoders / Vectorizers


GapEncoder: this encoder can be understood as a continuous encoding on a set of latent categories estimated from the data.


MinHashEncoder: encodes string categorical features as a numeric array by applying the minhash method to the ngram decomposition of the strings.
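The minhash idea can be sketched in a few lines of plain Python. This is an illustration only, not dirty_cat's implementation: the MD5-based hashing scheme and the component count are arbitrary choices.

```python
# Minhash sketch: hash every ngram of a string with several seeded hash
# functions and keep the minimum per seed. Strings that share many ngrams
# agree in many coordinates of the resulting vector.
import hashlib

def ngrams(s, n=3):
    s = f" {s} "  # pad so short strings still yield ngrams
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def minhash_encode(s, n_components=32):
    """Encode a string as a vector of minimum hashes over its 3-grams."""
    vector = []
    for seed in range(n_components):
        vector.append(min(
            int(hashlib.md5(f"{seed}:{g}".encode()).hexdigest(), 16)
            for g in ngrams(s)
        ))
    return vector

def matches(u, v):
    """Number of coordinates on which two minhash vectors agree."""
    return sum(x == y for x, y in zip(u, v))

a = minhash_encode("London")
b = minhash_encode("Londn")   # typo: shares most ngrams with "London"
c = minhash_encode("Paris")
# matches(a, b) is typically much larger than matches(a, c)
```

Because encoding one string needs no vocabulary fit, this approach scales to very high-cardinality columns, which is what makes the MinHashEncoder attractive.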


SimilarityEncoder: encodes string categorical features as a numeric array.


TargetEncoder: encodes categorical features as a numeric array given a target vector.


SuperVectorizer: easily transforms a heterogeneous data table (such as a dataframe) into a numerical array for machine learning.

Data download


  • Fetches the employee_salaries dataset.

  • Fetches the medical charge dataset.

  • Fetches the midwest survey dataset.

  • Fetches the open payments dataset.

  • Fetches the road safety dataset.

  • Fetches the traffic violations dataset.

  • Returns the directories in which dirty_cat looks for data.


dirty_cat is, for now, a repository for developing ideas with high-quality implementations, a form of research project: little is yet known about the problems of dirty categories. We hope that tradeoffs will emerge in the long run, and that they will enable us to build better software. We really need feedback on successes and failures with the different techniques, and pointers to open datasets on which we can do more empirical work. We also welcome contributions within the scope of dirty categories.

See also

Many classic categorical encoding schemes are available here: http://contrib.scikit-learn.org/categorical-encoding/

Similarity encoding is also available in Spark ML: https://github.com/rakutentech/spark-dirty-cat


[1] Patricio Cerda, Gaël Varoquaux, Balázs Kégl. Similarity encoding for learning with dirty categorical variables. Machine Learning journal, Springer, 2018.


[2] Patricio Cerda, Gaël Varoquaux. Encoding high-cardinality string categorical variables. IEEE Transactions on Knowledge and Data Engineering, 2020.