| add_julia_processes | Add additional Julia worker processes to parallelize workloads |
| all_treatment_combinations | Return a dataframe containing all treatment combinations of one or more treatment vectors, ready for use as treatment candidates in 'fit_predict' or 'predict' |
| apply | Return the leaf index in a tree model into which each point in the features falls |
| apply_nodes | Return the indices of the points in the features that fall into each node of a trained tree model |
| as.mixeddata | Convert a vector of values to IAI mixed data format |
| autoplot.grid_search | Construct a 'ggplot2::ggplot' object plotting grid search results for Optimal Feature Selection learners |
| autoplot.roc_curve | Construct a 'ggplot2::ggplot' object plotting the ROC curve |
| autoplot.similarity_comparison | Construct a 'ggplot2::ggplot' object plotting the results of the similarity comparison |
| autoplot.stability_analysis | Construct a 'ggplot2::ggplot' object plotting the results of the stability analysis |
| categorical_classification_reward_estimator | Learner for conducting reward estimation with categorical treatments and classification outcomes |
| categorical_regression_reward_estimator | Learner for conducting reward estimation with categorical treatments and regression outcomes |
| categorical_reward_estimator | Learner for conducting reward estimation with categorical treatments |
| categorical_survival_reward_estimator | Learner for conducting reward estimation with categorical treatments and survival outcomes |
| cleanup_installation | Remove all traces of automatic Julia/IAI installation |
| clone | Return an unfitted copy of a learner with the same parameters |
| convert_treatments_to_numeric | Convert 'treatments' from symbol/string format into numeric values |
| copy_splits_and_refit_leaves | Copy the tree split structure from one learner into another and refit the models in each leaf of the tree using the supplied data |
| decision_path | Return a matrix where entry '(i, j)' is true if the 'i'th point in the features passes through the 'j'th node in a trained tree model |
| delete_rich_output_param | Delete a global rich output parameter |
| equal_propensity_estimator | Learner that estimates equal propensity for all treatments |
| fit | Fit a model to the training data |
| fit_and_expand | Fit an imputation learner with training features and create adaptive indicator features to encode the missing pattern |
| fit_cv | Fit a grid search to the training data with cross-validation |
| fit_predict | Fit a reward estimation model on features, treatments and outcomes and return predicted counterfactual rewards for each observation, as well as the score of the internal estimators |
| fit_transform | Fit an imputation model using the given features and impute the missing values in these features |
| fit_transform_cv | Train a grid using cross-validation with features and impute all missing values in these features |
| get_best_params | Return the best parameter combination from a grid |
| get_classification_label | Return the predicted label at a node of a tree |
| get_classification_proba | Return the predicted probabilities of class membership at a node of a tree |
| get_cluster_assignments | Return the indices of the trees assigned to each cluster, under the clustering of a given number of trees |
| get_cluster_details | Return the centroid information for each cluster, under the clustering of a given number of trees |
| get_cluster_distances | Return the distances between the centroids of each pair of clusters, under the clustering of a given number of trees |
| get_depth | Get the depth of a node of a tree |
| get_estimation_densities | Return the total kernel density surrounding each treatment candidate for the propensity/outcome estimation problems in a fitted learner |
| get_features_used | Return the names of the features used by the learner |
| get_grid_results | Return a summary of the results from the grid search |
| get_grid_result_details | Return a vector of lists detailing the results of the grid search |
| get_grid_result_summary | Return a summary of the results from the grid search |
| get_learner | Return the fitted learner using the best parameter combination from a grid |
| get_lower_child | Get the index of the lower child at a split node of a tree |
| get_machine_id | Return the machine ID for the current computer |
| get_num_fits | Return the number of fits along the path in the trained learner |
| get_num_nodes | Return the number of nodes in a trained learner |
| get_num_samples | Get the number of training points contained in a node of a tree |
| get_params | Return the value of all parameters on a learner |
| get_parent | Get the index of the parent node at a node of a tree |
| get_policy_treatment_outcome | Return the quality of the treatments at a node of a tree |
| get_policy_treatment_rank | Return the treatments ordered from most effective to least effective at a node of a tree |
| get_prediction_constant | Return the constant term in the prediction in the trained learner |
| get_prediction_weights | Return the weights for numeric and categoric features used for prediction in the trained learner |
| get_prescription_treatment_rank | Return the treatments ordered from most effective to least effective at a node of a tree |
| get_regression_constant | Return the constant term in the regression prediction at a node of a tree |
| get_regression_weights | Return the weights for each feature in the regression prediction at a node of a tree |
| get_rich_output_params | Return the current global rich output parameter settings |
| get_roc_curve_data | Extract the underlying data from an ROC curve (as returned by 'roc_curve') |
| get_split_categories | Return the categoric/ordinal information used in the split at a node of a tree |
| get_split_feature | Return the feature used in the split at a node of a tree |
| get_split_threshold | Return the threshold used in the split at a node of a tree |
| get_split_weights | Return the weights for numeric and categoric features used in the hyperplane split at a node of a tree |
| get_stability_results | Return the trained trees in order of increasing objective value, along with their variable importance scores for each feature |
| get_survival_curve | Return the survival curve at a node of a tree |
| get_survival_curve_data | Extract the underlying data from a survival curve (as returned by 'predict' or 'get_survival_curve') |
| get_survival_expected_time | Return the predicted expected survival time at a node of a tree |
| get_survival_hazard | Return the predicted hazard ratio at a node of a tree |
| get_train_errors | Extract the training objective value for each candidate tree in the comparison, where a lower value indicates a better solution |
| get_tree | Return a copy of the learner that uses a specific tree rather than the tree with the best training objective |
| get_upper_child | Get the index of the upper child at a split node of a tree |
| glmnetcv_classifier | Learner for training GLMNet models for classification problems with cross-validation |
| glmnetcv_regressor | Learner for training GLMNet models for regression problems with cross-validation |
| glmnetcv_survival_learner | Learner for training GLMNet models for survival problems with cross-validation |
| grid_search | Controls grid search over parameter combinations |
| iai_setup | Initialize Julia and the IAI package |
| imputation_learner | Generic learner for imputing missing values |
| impute | Impute missing values using either a specified method or through validation |
| impute_cv | Impute missing values using cross validation |
| install_julia | Download and install Julia automatically |
| install_system_image | Download and install the IAI system image automatically |
| is_categoric_split | Check if a node of a tree applies a categoric split |
| is_hyperplane_split | Check if a node of a tree applies a hyperplane split |
| is_leaf | Check if a node of a tree is a leaf |
| is_mixed_ordinal_split | Check if a node of a tree applies a mixed ordinal/categoric split |
| is_mixed_parallel_split | Check if a node of a tree applies a mixed parallel/categoric split |
| is_ordinal_split | Check if a node of a tree applies an ordinal split |
| is_parallel_split | Check if a node of a tree applies a parallel split |
| mean_imputation_learner | Learner for conducting mean imputation |
| missing_goes_lower | Check if points with missing values go to the lower child at a split node of a tree |
| multi_questionnaire | Generic function for constructing an interactive questionnaire using multiple tree learners |
| multi_questionnaire.default | Construct an interactive questionnaire using multiple tree learners as specified by questions |
| multi_questionnaire.grid_search | Construct an interactive tree questionnaire using multiple tree learners from the results of a grid search |
| multi_tree_plot | Generic function for constructing an interactive tree visualization of multiple tree learners |
| multi_tree_plot.default | Construct an interactive tree visualization of multiple tree learners as specified by questions |
| multi_tree_plot.grid_search | Construct an interactive tree visualization of multiple tree learners from the results of a grid search |
| numeric_classification_reward_estimator | Learner for conducting reward estimation with numeric treatments and classification outcomes |
| numeric_regression_reward_estimator | Learner for conducting reward estimation with numeric treatments and regression outcomes |
| numeric_reward_estimator | Learner for conducting reward estimation with numeric treatments |
| numeric_survival_reward_estimator | Learner for conducting reward estimation with numeric treatments and survival outcomes |
| optimal_feature_selection_classifier | Learner for conducting Optimal Feature Selection on classification problems |
| optimal_feature_selection_regressor | Learner for conducting Optimal Feature Selection on regression problems |
| optimal_tree_classifier | Learner for training Optimal Classification Trees |
| optimal_tree_policy_maximizer | Learner for training Optimal Policy Trees where the policy should aim to maximize outcomes |
| optimal_tree_policy_minimizer | Learner for training Optimal Policy Trees where the policy should aim to minimize outcomes |
| optimal_tree_prescription_maximizer | Learner for training Optimal Prescriptive Trees where the prescriptions should aim to maximize outcomes |
| optimal_tree_prescription_minimizer | Learner for training Optimal Prescriptive Trees where the prescriptions should aim to minimize outcomes |
| optimal_tree_regressor | Learner for training Optimal Regression Trees |
| optimal_tree_survival_learner | Learner for training Optimal Survival Trees |
| optimal_tree_survivor | Learner for training Optimal Survival Trees |
| opt_knn_imputation_learner | Learner for conducting optimal k-NN imputation |
| opt_svm_imputation_learner | Learner for conducting optimal SVM imputation |
| opt_tree_imputation_learner | Learner for conducting optimal tree-based imputation |
| plot.grid_search | Plot grid search results for Optimal Feature Selection learners |
| plot.roc_curve | Plot an ROC curve |
| plot.similarity_comparison | Plot a similarity comparison |
| plot.stability_analysis | Plot a stability analysis |
| predict | Return the predictions made by the model for each point in the features |
| predict_expected_survival_time | Return the expected survival time estimate made by a model for each point in the features |
| predict_hazard | Return the fitted hazard coefficient estimate made by a model for each point in the features |
| predict_outcomes | Return the predicted outcome for each treatment made by a model for each point in the features |
| predict_proba | Return the probabilities of class membership predicted by a model for each point in the features |
| predict_reward | Return counterfactual rewards estimated using learner parameters for each observation in the supplied data and predictions |
| predict_shap | Calculate SHAP values for all points in the features using the learner |
| predict_treatment_outcome | Return the estimated quality of each treatment in the trained model of the learner for each point in the features |
| predict_treatment_rank | Return the treatments in ranked order of effectiveness for each point in the features |
| print_path | Print the decision path through the learner for each sample in the features |
| prune_trees | Use the trained trees in a learner along with the supplied validation data to determine the best value for the 'cp' parameter and then prune the trees according to this value |
| questionnaire | Specify an interactive questionnaire of a tree learner |
| random_forest_classifier | Learner for training random forests for classification problems |
| random_forest_regressor | Learner for training random forests for regression problems |
| random_forest_survival_learner | Learner for training random forests for survival problems |
| rand_imputation_learner | Learner for conducting random imputation |
| read_json | Read in a learner or grid saved in JSON format |
| refit_leaves | Refit the models in the leaves of a trained learner using the supplied data |
| reset_display_label | Reset the predicted probability displayed to be that of the predicted label when visualizing a learner |
| reward_estimator | Learner for conducting reward estimation with categorical treatments |
| roc_curve | Generic function for constructing an ROC curve |
| roc_curve.default | Construct an ROC curve from predicted probabilities and true labels |
| roc_curve.learner | Construct an ROC curve using a trained model on the given data |
| score | Generic function for calculating scores |
| score.default | Calculate the score for a set of predictions on the given data |
| score.learner | Calculate the score for a model on the given data |
| set_display_label | Show the probability of a specified label when visualizing a learner |
| set_julia_seed | Set the random seed in Julia |
| set_params | Set all supplied parameters on a learner |
| set_reward_kernel_bandwidth | Save a new reward kernel bandwidth inside a learner, and return new reward predictions generated using this bandwidth for the original data used to train the learner |
| set_rich_output_param | Set a global rich output parameter |
| set_threshold | For a binary classification problem, update the predicted labels in the leaves of the learner to predict a label only if the predicted probability is at least the specified threshold |
| show_in_browser | Show interactive visualization of an object (such as a learner or curve) in the default browser |
| show_questionnaire | Show an interactive questionnaire based on a learner in the default browser |
| similarity_comparison | Conduct a similarity comparison between the final tree in a learner and all trees in a new learner to consider the tradeoff between training performance and similarity to the original tree |
| single_knn_imputation_learner | Learner for conducting heuristic k-NN imputation |
| split_data | Split the data into training and test datasets |
| stability_analysis | Conduct a stability analysis of the trees in a tree learner |
| transform | Impute missing values in a dataframe using a fitted imputation model |
| transform_and_expand | Transform features with a trained imputation learner and create adaptive indicator features to encode the missing pattern |
| tree_plot | Specify an interactive tree visualization of a tree learner |
| tune_reward_kernel_bandwidth | Conduct the reward kernel bandwidth tuning procedure for a range of starting bandwidths and return the final tuned values |
| variable_importance | Generate a ranking of the variables in the learner according to their importance during training, normalized so that they sum to one |
| variable_importance_similarity | Calculate the similarity between the final tree in a tree learner and all trees in a new tree learner using variable importance scores |
| write_booster | Write the internal booster saved in the learner to file |
| write_dot | Output a learner in .dot format |
| write_html | Output a learner as an interactive browser visualization in HTML format |
| write_json | Output a learner or grid in JSON format |
| write_pdf | Output a learner as a PDF image |
| write_png | Output a learner as a PNG image |
| write_questionnaire | Output a learner as an interactive questionnaire in HTML format |
| write_svg | Output a learner as an SVG image |
| xgboost_classifier | Learner for training XGBoost models for classification problems |
| xgboost_regressor | Learner for training XGBoost models for regression problems |
| xgboost_survival_learner | Learner for training XGBoost models for survival problems |
| zero_imputation_learner | Learner for conducting zero imputation |
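
The entries above describe each function in isolation; the sketches below show how several of them are typically combined. They are minimal, hedged examples rather than authoritative usage: argument names such as `seed`, `random_seed` and `max_depth`, the `$train`/`$test` structure returned by `split_data`, and the filename-first argument order of `write_html` are assumptions based on common usage of the package, and the `iris` data is used purely for illustration. Consult the individual help pages before relying on any of them.

```r
library(iai)

# Example data; any data frame of features plus a label vector will do
X <- iris[, 1:4]
y <- iris$Species

# Split into training and test sets
# (argument names and the $train/$test return structure are assumed)
split <- iai::split_data("classification", X, y, seed = 1)
train_X <- split$train$X
train_y <- split$train$y
test_X  <- split$test$X
test_y  <- split$test$y

# Grid search over Optimal Classification Tree depth
grid <- iai::grid_search(
  iai::optimal_tree_classifier(random_seed = 1),
  max_depth = 1:3
)
iai::fit(grid, train_X, train_y)

# Out-of-sample predictions, class probabilities, and score
preds <- iai::predict(grid, test_X)
probs <- iai::predict_proba(grid, test_X)
acc   <- iai::score(grid, test_X, test_y)

# Inspect and export the best learner found by the grid search
lnr <- iai::get_learner(grid)
iai::variable_importance(lnr)
iai::write_html("learner.html", lnr)  # filename-first argument order assumed
```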
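The imputation entries ('impute', 'imputation_learner', 'fit_transform', 'transform') generally follow a fit-then-transform pattern, sketched below under the assumption that `imputation_learner` accepts a `method` argument matching the 'opt_knn_imputation_learner' entry above and a `random_seed` parameter.

```r
library(iai)

# A small data frame with missing entries
X <- data.frame(a = c(1.0, NA, 3.0, 4.0), b = c(2.0, 5.0, NA, 8.0))

# One-shot convenience: impute using a specified method or one chosen by validation
X_imputed <- iai::impute(X)

# Or fit an imputation learner explicitly so it can be reused on new data
# (the `method` value mirrors opt_knn_imputation_learner; `random_seed` is assumed)
lnr <- iai::imputation_learner(method = "opt_knn", random_seed = 1)
X_train_imputed <- iai::fit_transform(lnr, X)
X_new_imputed   <- iai::transform(lnr, X)  # reuse the fitted learner on new data
```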
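The reward estimation and policy tree entries are usually chained: a reward estimator converts observational data (features, treatments, outcomes) into counterfactual rewards, and an Optimal Policy Tree is then trained on those rewards. In the sketch below, the `propensity_estimator`, `outcome_estimator` and `reward_estimator` argument names, the `"doubly_robust"` option, and the `$predictions$reward` field of the `fit_predict` result are all assumptions; the data is synthetic.

```r
library(iai)

# Synthetic observational data: features, categorical treatments, numeric outcomes
set.seed(1)
n <- 200
X <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
treatments <- sample(c("A", "B"), n, replace = TRUE)
outcomes <- rnorm(n) + ifelse(treatments == "A", X$x1, X$x2)

# Estimate counterfactual rewards with internal propensity/outcome estimators
# (estimator argument names and the "doubly_robust" option are assumptions)
reward_lnr <- iai::categorical_regression_reward_estimator(
  propensity_estimator = iai::random_forest_classifier(),
  outcome_estimator    = iai::random_forest_regressor(),
  reward_estimator     = "doubly_robust",
  random_seed          = 1
)
rewards <- iai::fit_predict(reward_lnr, X, treatments, outcomes)

# Train an Optimal Policy Tree on the estimated rewards
# (the $predictions$reward field of the fit_predict result is assumed)
grid <- iai::grid_search(
  iai::optimal_tree_policy_maximizer(random_seed = 1),
  max_depth = 1:3
)
iai::fit(grid, X, rewards$predictions$reward)

iai::predict(grid, X)                 # best treatment for each observation
iai::predict_treatment_rank(grid, X)  # treatments ranked from most to least effective
```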