| Topic | Description |
|-------|-------------|
| +.mldr | Generates a new mldr object joining the rows of the two mldr objects given as input |
| ==.mldr | Checks if two mldr objects have the same structure |
| accuracy | Multi-label averaged evaluation metrics |
| Averaged metrics | Multi-label averaged evaluation metrics |
| average_precision | Multi-label ranking-based evaluation metrics |
| Basic metrics | Multi-label evaluation metrics |
| birds | Sample multilabel dataset: bird sound recordings labeled with the species present |
| concurrenceReport | Generates a label concurrence report |
| coverage | Multi-label ranking-based evaluation metrics |
| emotions | Sample multilabel dataset: music clips labeled with the emotions they evoke |
| example_auc | Multi-label ranking-based evaluation metrics |
| fmeasure | Multi-label averaged evaluation metrics |
| genbase | Sample multilabel dataset: protein sequences labeled with functional classes |
| hamming_loss | Multi-label evaluation metrics |
| labelInteractions | Provides data about interactions between labels |
| macro_auc | Multi-label ranking-based evaluation metrics |
| macro_fmeasure | Multi-label averaged evaluation metrics |
| macro_precision | Multi-label averaged evaluation metrics |
| macro_recall | Multi-label averaged evaluation metrics |
| micro_auc | Multi-label ranking-based evaluation metrics |
| micro_fmeasure | Multi-label averaged evaluation metrics |
| micro_precision | Multi-label averaged evaluation metrics |
| micro_recall | Multi-label averaged evaluation metrics |
| mldr | Creates an object representing a multilabel dataset |
| mldrGUI | Launches the web-based GUI for mldr |
| mldr_evaluate | Evaluates predictions made by a multilabel classifier |
| mldr_from_dataframe | Generates an mldr object from a data.frame and a vector with label indices |
| mldr_to_labels | Obtains the label matrix of a multilabel dataset |
| mldr_transform | Transforms an mldr object into binary or multiclass datasets |
| one_error | Multi-label ranking-based evaluation metrics |
| plot.mldr | Generates graphic representations of an mldr object |
| precision | Multi-label averaged evaluation metrics |
| print.mldr | Prints the content of an mldr object |
| Ranking-based metrics | Multi-label ranking-based evaluation metrics |
| ranking_loss | Multi-label ranking-based evaluation metrics |
| read.arff | Reads an ARFF file |
| recall | Multi-label averaged evaluation metrics |
| remedial | Decouples highly imbalanced labels |
| roc | ROC curve |
| roc.mldr | ROC curve |
| subset_accuracy | Multi-label evaluation metrics |
| summary.mldr | Provides a summary of characterization measures for an mldr object |
| write_arff | Writes an 'mldr' object to a file |
| [.mldr | Filters rows in an 'mldr', returning a new 'mldr' |
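
For orientation, here is a minimal sketch tying several of these functions together (summary.mldr, mldr_from_dataframe, mldr_evaluate, plot.mldr). It assumes mldr is installed from CRAN; the random data.frame and the prediction matrix are illustrative stand-ins, not real data.

```r
library(mldr)

## Inspect one of the bundled sample datasets
summary(emotions)   # characterization measures (summary.mldr)
emotions$labels     # per-label frequencies and imbalance levels

## Build an mldr object from a plain data.frame whose last two
## columns act as labels (random, illustrative data)
df <- data.frame(matrix(rnorm(500), ncol = 10))
df$Label1 <- sample(c(0, 1), 50, replace = TRUE)
df$Label2 <- sample(c(0, 1), 50, replace = TRUE)
mymldr <- mldr_from_dataframe(df, labelIndices = c(11, 12), name = "testMLDR")
summary(mymldr)

## Evaluate a prediction matrix against the true labels; here the
## true label matrix itself is reused as a perfect "prediction"
predictions <- as.matrix(emotions$dataset[, emotions$labels$index])
res <- mldr_evaluate(emotions, predictions)
str(res)            # the evaluation metrics listed in the table above

## Graphic representations (plot.mldr), e.g. the label concurrence plot
plot(emotions, type = "LC")
```

In practice the prediction matrix would come from a multilabel classifier; any matrix of scores with one column per label and one row per instance can be passed to mldr_evaluate.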