libZSservicesZSamazonka-comprehendZSamazonka-comprehend
Copyright: (c) 2013-2021 Brendan Hay
License: Mozilla Public License, v. 2.0.
Maintainer: Brendan Hay <brendan.g.hay+amazonka@gmail.com>
Stability: auto-generated
Portability: non-portable (GHC extensions)
Safe Haskell: None

Amazonka.Comprehend.Types.EntityRecognizerEvaluationMetrics

Description

 
Synopsis

Documentation

data EntityRecognizerEvaluationMetrics Source #

Detailed information about the accuracy of an entity recognizer.

See: newEntityRecognizerEvaluationMetrics smart constructor.

Constructors

EntityRecognizerEvaluationMetrics' 

Fields

  • recall :: Maybe Double

    A measure of how complete the recognizer results are for the test data. High recall means that the recognizer returned most of the relevant results.

  • precision :: Maybe Double

    A measure of the usefulness of the recognizer results in the test data. High precision means that the recognizer returned substantially more relevant results than irrelevant ones.

  • f1Score :: Maybe Double

    A measure of how accurate the recognizer results are for the test data. It is derived from the Precision and Recall values. The F1Score is the harmonic average of the two scores. The highest score is 1, and the worst score is 0.
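
The f1Score field is, as noted above, the harmonic mean of precision and recall. The helper below is a minimal illustrative sketch of that relationship; it is not part of this library.

-- Harmonic mean of precision and recall, guarding against division by zero.
f1FromPrecisionRecall :: Double -> Double -> Double
f1FromPrecisionRecall p r
  | p + r == 0 = 0
  | otherwise  = 2 * p * r / (p + r)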

Instances

Eq EntityRecognizerEvaluationMetrics Source # 
Instance details

Defined in Amazonka.Comprehend.Types.EntityRecognizerEvaluationMetrics

Read EntityRecognizerEvaluationMetrics Source # 
Instance details

Defined in Amazonka.Comprehend.Types.EntityRecognizerEvaluationMetrics

Show EntityRecognizerEvaluationMetrics Source # 
Instance details

Defined in Amazonka.Comprehend.Types.EntityRecognizerEvaluationMetrics

Generic EntityRecognizerEvaluationMetrics Source # 
Instance details

Defined in Amazonka.Comprehend.Types.EntityRecognizerEvaluationMetrics

Associated Types

type Rep EntityRecognizerEvaluationMetrics :: Type -> Type #

NFData EntityRecognizerEvaluationMetrics Source # 
Instance details

Defined in Amazonka.Comprehend.Types.EntityRecognizerEvaluationMetrics

Hashable EntityRecognizerEvaluationMetrics Source # 
Instance details

Defined in Amazonka.Comprehend.Types.EntityRecognizerEvaluationMetrics

FromJSON EntityRecognizerEvaluationMetrics Source # 
Instance details

Defined in Amazonka.Comprehend.Types.EntityRecognizerEvaluationMetrics

type Rep EntityRecognizerEvaluationMetrics Source # 
Instance details

Defined in Amazonka.Comprehend.Types.EntityRecognizerEvaluationMetrics

type Rep EntityRecognizerEvaluationMetrics = D1 ('MetaData "EntityRecognizerEvaluationMetrics" "Amazonka.Comprehend.Types.EntityRecognizerEvaluationMetrics" "libZSservicesZSamazonka-comprehendZSamazonka-comprehend" 'False) (C1 ('MetaCons "EntityRecognizerEvaluationMetrics'" 'PrefixI 'True) (S1 ('MetaSel ('Just "recall") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: (S1 ('MetaSel ('Just "precision") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: S1 ('MetaSel ('Just "f1Score") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)))))
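
Given the FromJSON instance listed above, a value can be decoded directly from a JSON fragment with aeson. The sketch below assumes the Amazon Comprehend wire field names are "Precision", "Recall" and "F1Score"; those names are an assumption and are not documented on this page.

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Aeson as Aeson
import Amazonka.Comprehend.Types.EntityRecognizerEvaluationMetrics

-- Decode a response fragment using the FromJSON instance above.
-- The field names used here are assumed, based on the Amazon Comprehend API.
decodedMetrics :: Maybe EntityRecognizerEvaluationMetrics
decodedMetrics =
  Aeson.decode "{\"Precision\": 0.92, \"Recall\": 0.88, \"F1Score\": 0.899}"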

newEntityRecognizerEvaluationMetrics :: EntityRecognizerEvaluationMetrics Source #

Create a value of EntityRecognizerEvaluationMetrics with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:recall:EntityRecognizerEvaluationMetrics', entityRecognizerEvaluationMetrics_recall - A measure of how complete the recognizer results are for the test data. High recall means that the recognizer returned most of the relevant results.

$sel:precision:EntityRecognizerEvaluationMetrics', entityRecognizerEvaluationMetrics_precision - A measure of the usefulness of the recognizer results in the test data. High precision means that the recognizer returned substantially more relevant results than irrelevant ones.

$sel:f1Score:EntityRecognizerEvaluationMetrics', entityRecognizerEvaluationMetrics_f1Score - A measure of how accurate the recognizer results are for the test data. It is derived from the Precision and Recall values. The F1Score is the harmonic average of the two scores. The highest score is 1, and the worst score is 0.
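
As a sketch of typical usage, a value can be built from the smart constructor and populated with the lenses documented below. The (&) and (?~) operators are assumed to come from the lens package (microlens works equally well), and the numeric values are made up.

import Amazonka.Comprehend.Types.EntityRecognizerEvaluationMetrics
import Control.Lens ((&), (?~))

-- Start from the smart constructor (all fields Nothing) and fill in
-- the metrics with the lenses provided by this module.
exampleMetrics :: EntityRecognizerEvaluationMetrics
exampleMetrics =
  newEntityRecognizerEvaluationMetrics
    & entityRecognizerEvaluationMetrics_precision ?~ 0.92
    & entityRecognizerEvaluationMetrics_recall ?~ 0.88
    & entityRecognizerEvaluationMetrics_f1Score ?~ 0.899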

entityRecognizerEvaluationMetrics_recall :: Lens' EntityRecognizerEvaluationMetrics (Maybe Double) Source #

A measure of how complete the recognizer results are for the test data. High recall means that the recognizer returned most of the relevant results.

entityRecognizerEvaluationMetrics_precision :: Lens' EntityRecognizerEvaluationMetrics (Maybe Double) Source #

A measure of the usefulness of the recognizer results in the test data. High precision means that the recognizer returned substantially more relevant results than irrelevant ones.

entityRecognizerEvaluationMetrics_f1Score :: Lens' EntityRecognizerEvaluationMetrics (Maybe Double) Source #

A measure of how accurate the recognizer results are for the test data. It is derived from the Precision and Recall values. The F1Score is the harmonic average of the two scores. The highest score is 1, and the worst score is 0.
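
Reading a metric back out works the same way; each field is a Maybe Double because the service may omit any of the three values. A small sketch, again assuming the (^.) operator from lens or microlens:

import Amazonka.Comprehend.Types.EntityRecognizerEvaluationMetrics
import Control.Lens ((^.))

-- Render the recall metric, falling back to a message when it is absent.
describeRecall :: EntityRecognizerEvaluationMetrics -> String
describeRecall m =
  case m ^. entityRecognizerEvaluationMetrics_recall of
    Nothing -> "recall not reported"
    Just r  -> "recall = " ++ show r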