Package com.oracle.bmc.ailanguage.model

Class TextClassificationModelMetrics.Builder

java.lang.Object
    com.oracle.bmc.ailanguage.model.TextClassificationModelMetrics.Builder

Enclosing class:
TextClassificationModelMetrics

public static class TextClassificationModelMetrics.Builder
extends Object
Constructor Summary

Constructors:
Builder()
Method Summary

All methods are instance, concrete methods.

- TextClassificationModelMetrics.Builder accuracy(Float accuracy)
  The fraction of the labels that were correctly recognised.
- TextClassificationModelMetrics build()
- TextClassificationModelMetrics.Builder copy(TextClassificationModelMetrics model)
- TextClassificationModelMetrics.Builder macroF1(Float macroF1)
  F1-score is a measure of a model's accuracy on a dataset.
- TextClassificationModelMetrics.Builder macroPrecision(Float macroPrecision)
  Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives).
- TextClassificationModelMetrics.Builder macroRecall(Float macroRecall)
  Measures the model's ability to predict actual positive classes.
- TextClassificationModelMetrics.Builder microF1(Float microF1)
  F1-score is a measure of a model's accuracy on a dataset.
- TextClassificationModelMetrics.Builder microPrecision(Float microPrecision)
  Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives).
- TextClassificationModelMetrics.Builder microRecall(Float microRecall)
  Measures the model's ability to predict actual positive classes.
- TextClassificationModelMetrics.Builder weightedF1(Float weightedF1)
  F1-score is a measure of a model's accuracy on a dataset.
- TextClassificationModelMetrics.Builder weightedPrecision(Float weightedPrecision)
  Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives).
- TextClassificationModelMetrics.Builder weightedRecall(Float weightedRecall)
  Measures the model's ability to predict actual positive classes.
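The micro, macro, and weighted variants above differ only in how per-class counts are combined into a single score. A self-contained sketch of the conventional formulas, using made-up counts for a hypothetical two-class classifier (this is illustrative code, not part of the SDK):

```java
// Illustrative sketch (NOT part of the OCI SDK): how micro-, macro- and
// weighted-averaged F1 scores are conventionally derived from per-class
// true-positive (tp), false-positive (fp) and false-negative (fn) counts.
public class MetricsSketch {

    // precision = tp / (tp + fp): fraction of positive predictions that are correct.
    static double precision(long tp, long fp) { return tp / (double) (tp + fp); }

    // recall = tp / (tp + fn): fraction of actual positives that were found.
    static double recall(long tp, long fn) { return tp / (double) (tp + fn); }

    // F1 is the harmonic mean of precision and recall.
    static double f1(double p, double r) { return 2 * p * r / (p + r); }

    public static void main(String[] args) {
        // Hypothetical confusion counts for two classes.
        long[] tp = {40, 10};
        long[] fp = {10, 0};
        long[] fn = {5, 20};

        // Micro averaging pools the raw counts over all classes first.
        long tpSum = tp[0] + tp[1], fpSum = fp[0] + fp[1], fnSum = fn[0] + fn[1];
        double microF1 = f1(precision(tpSum, fpSum), recall(tpSum, fnSum));

        // Macro averaging takes an unweighted mean of per-class scores;
        // weighted averaging weights each class by its support (tp + fn).
        double macroF1 = 0, weightedF1 = 0;
        long total = tpSum + fnSum;
        for (int c = 0; c < tp.length; c++) {
            double classF1 = f1(precision(tp[c], fp[c]), recall(tp[c], fn[c]));
            macroF1 += classF1 / tp.length;
            weightedF1 += classF1 * (tp[c] + fn[c]) / (double) total;
        }
        // prints: microF1=0.7407 macroF1=0.6711 weightedF1=0.7053
        System.out.printf("microF1=%.4f macroF1=%.4f weightedF1=%.4f%n",
                microF1, macroF1, weightedF1);
    }
}
```

Micro averaging favours frequent classes (every prediction counts equally), while macro averaging treats every class equally regardless of size; weighted averaging sits in between by scaling each class's score by its support.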
 
Method Detail

accuracy

public TextClassificationModelMetrics.Builder accuracy(Float accuracy)

The fraction of the labels that were correctly recognised.

Parameters:
accuracy - the value to set
Returns:
this builder
 
microF1

public TextClassificationModelMetrics.Builder microF1(Float microF1)

F1-score is a measure of a model's accuracy on a dataset.

Parameters:
microF1 - the value to set
Returns:
this builder
 
microPrecision

public TextClassificationModelMetrics.Builder microPrecision(Float microPrecision)

Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives).

Parameters:
microPrecision - the value to set
Returns:
this builder
 
microRecall

public TextClassificationModelMetrics.Builder microRecall(Float microRecall)

Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly identified.

Parameters:
microRecall - the value to set
Returns:
this builder
 
macroF1

public TextClassificationModelMetrics.Builder macroF1(Float macroF1)

F1-score is a measure of a model's accuracy on a dataset.

Parameters:
macroF1 - the value to set
Returns:
this builder
 
macroPrecision

public TextClassificationModelMetrics.Builder macroPrecision(Float macroPrecision)

Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives).

Parameters:
macroPrecision - the value to set
Returns:
this builder
 
macroRecall

public TextClassificationModelMetrics.Builder macroRecall(Float macroRecall)

Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly identified.

Parameters:
macroRecall - the value to set
Returns:
this builder
 
weightedF1

public TextClassificationModelMetrics.Builder weightedF1(Float weightedF1)

F1-score is a measure of a model's accuracy on a dataset.

Parameters:
weightedF1 - the value to set
Returns:
this builder
 
weightedPrecision

public TextClassificationModelMetrics.Builder weightedPrecision(Float weightedPrecision)

Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives).

Parameters:
weightedPrecision - the value to set
Returns:
this builder
 
weightedRecall

public TextClassificationModelMetrics.Builder weightedRecall(Float weightedRecall)

Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes were correctly identified.

Parameters:
weightedRecall - the value to set
Returns:
this builder
 
build

public TextClassificationModelMetrics build()

copy

public TextClassificationModelMetrics.Builder copy(TextClassificationModelMetrics model)
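Every setter returns the builder itself, so an instance is typically assembled fluently and finalized with build(). A minimal usage sketch, assuming the static builder() factory that OCI SDK model classes conventionally expose (the metric values here are illustrative, not real results):

```java
// Hypothetical values for illustration only.
TextClassificationModelMetrics metrics =
        TextClassificationModelMetrics.builder()
                .accuracy(0.91f)
                .microF1(0.89f)
                .microPrecision(0.90f)
                .microRecall(0.88f)
                .macroF1(0.85f)
                .weightedF1(0.88f)
                .build();

// copy(...) seeds a new builder from an existing instance, so individual
// fields can be overridden without restating the rest.
TextClassificationModelMetrics adjusted =
        TextClassificationModelMetrics.builder()
                .copy(metrics)
                .accuracy(0.92f)
                .build();
```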
 