amazonka-comprehend
Copyright    (c) 2013-2021 Brendan Hay
License      Mozilla Public License, v. 2.0.
Maintainer   Brendan Hay <brendan.g.hay+amazonka@gmail.com>
Stability    auto-generated
Portability  non-portable (GHC extensions)
Safe Haskell None

Amazonka.Comprehend.Types.EntityTypesEvaluationMetrics

Description

 
Synopsis

Documentation

data EntityTypesEvaluationMetrics Source #

Detailed information about the accuracy of an entity recognizer for a specific entity type.

See: newEntityTypesEvaluationMetrics smart constructor.

Constructors

EntityTypesEvaluationMetrics' 

Fields

  • recall :: Maybe Double

    A measure of how complete the recognizer results are for a specific entity type in the test data. High recall means that the recognizer returned most of the relevant results.

  • precision :: Maybe Double

    A measure of the usefulness of the recognizer results for a specific entity type in the test data. High precision means that the recognizer returned substantially more relevant results than irrelevant ones.

  • f1Score :: Maybe Double

    A measure of how accurate the recognizer results are for a specific entity type in the test data. It is derived from the Precision and Recall values. The F1Score is the harmonic average of the two scores. The highest score is 1, and the worst score is 0.
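The F1 score is simply the harmonic mean of the precision and recall above. A minimal sketch of that relationship in plain Haskell (the helper f1 is illustrative only and is not part of this module):

    -- Harmonic mean of precision and recall. The mean is undefined when both
    -- inputs are 0, so return Nothing in that case.
    -- Illustrative helper; not exported by Amazonka.Comprehend.
    f1 :: Double -> Double -> Maybe Double
    f1 precision recall
      | precision + recall == 0 = Nothing
      | otherwise               = Just (2 * precision * recall / (precision + recall))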

Instances

Eq EntityTypesEvaluationMetrics Source #
  Defined in Amazonka.Comprehend.Types.EntityTypesEvaluationMetrics

Read EntityTypesEvaluationMetrics Source #
  Defined in Amazonka.Comprehend.Types.EntityTypesEvaluationMetrics

Show EntityTypesEvaluationMetrics Source #
  Defined in Amazonka.Comprehend.Types.EntityTypesEvaluationMetrics

Generic EntityTypesEvaluationMetrics Source #
  Defined in Amazonka.Comprehend.Types.EntityTypesEvaluationMetrics
  Associated Types
    type Rep EntityTypesEvaluationMetrics :: Type -> Type #

NFData EntityTypesEvaluationMetrics Source #
  Defined in Amazonka.Comprehend.Types.EntityTypesEvaluationMetrics

Hashable EntityTypesEvaluationMetrics Source #
  Defined in Amazonka.Comprehend.Types.EntityTypesEvaluationMetrics

FromJSON EntityTypesEvaluationMetrics Source #
  Defined in Amazonka.Comprehend.Types.EntityTypesEvaluationMetrics

type Rep EntityTypesEvaluationMetrics Source #
  Defined in Amazonka.Comprehend.Types.EntityTypesEvaluationMetrics

type Rep EntityTypesEvaluationMetrics = D1 ('MetaData "EntityTypesEvaluationMetrics" "Amazonka.Comprehend.Types.EntityTypesEvaluationMetrics" "libZSservicesZSamazonka-comprehendZSamazonka-comprehend" 'False) (C1 ('MetaCons "EntityTypesEvaluationMetrics'" 'PrefixI 'True) (S1 ('MetaSel ('Just "recall") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: (S1 ('MetaSel ('Just "precision") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: S1 ('MetaSel ('Just "f1Score") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)))))

newEntityTypesEvaluationMetrics :: EntityTypesEvaluationMetrics Source #

Create a value of EntityTypesEvaluationMetrics with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:recall:EntityTypesEvaluationMetrics', entityTypesEvaluationMetrics_recall - A measure of how complete the recognizer results are for a specific entity type in the test data. High recall means that the recognizer returned most of the relevant results.

$sel:precision:EntityTypesEvaluationMetrics', entityTypesEvaluationMetrics_precision - A measure of the usefulness of the recognizer results for a specific entity type in the test data. High precision means that the recognizer returned substantially more relevant results than irrelevant ones.

$sel:f1Score:EntityTypesEvaluationMetrics', entityTypesEvaluationMetrics_f1Score - A measure of how accurate the recognizer results are for a specific entity type in the test data. It is derived from the Precision and Recall values. The F1Score is the harmonic average of the two scores. The highest score is 1, and the worst score is 0.
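As a usage sketch, a fully populated value can be built from the smart constructor and the lenses documented below. This assumes the (&) and (?~) operators from Control.Lens; generic-lens or optics would work equally well, and the sample numbers are made up:

    import Amazonka.Comprehend.Types.EntityTypesEvaluationMetrics
    import Control.Lens ((&), (?~))

    -- A sample value with all three optional metrics set (values are arbitrary).
    sampleMetrics :: EntityTypesEvaluationMetrics
    sampleMetrics =
      newEntityTypesEvaluationMetrics
        & entityTypesEvaluationMetrics_precision ?~ 0.95
        & entityTypesEvaluationMetrics_recall    ?~ 0.90
        & entityTypesEvaluationMetrics_f1Score   ?~ 0.924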

entityTypesEvaluationMetrics_recall :: Lens' EntityTypesEvaluationMetrics (Maybe Double) Source #

A measure of how complete the recognizer results are for a specific entity type in the test data. High recall means that the recognizer returned most of the relevant results.

entityTypesEvaluationMetrics_precision :: Lens' EntityTypesEvaluationMetrics (Maybe Double) Source #

A measure of the usefulness of the recognizer results for a specific entity type in the test data. High precision means that the recognizer returned substantially more relevant results than irrelevant ones.

entityTypesEvaluationMetrics_f1Score :: Lens' EntityTypesEvaluationMetrics (Maybe Double) Source #

A measure of how accurate the recognizer results are for a specific entity type in the test data. It is derived from the Precision and Recall values. The F1Score is the harmonic average of the two scores. The highest score is 1, and the worst score is 0.
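Reading a metric back goes through the same lenses, here with (^.) from Control.Lens. Again only a sketch: any van Laarhoven lens library will do, and meetsF1Threshold is a hypothetical helper, not part of this module.

    import Amazonka.Comprehend.Types.EntityTypesEvaluationMetrics
    import Control.Lens ((^.))

    -- True when an F1 score is present and at least the given threshold.
    meetsF1Threshold :: Double -> EntityTypesEvaluationMetrics -> Bool
    meetsF1Threshold threshold metrics =
      maybe False (>= threshold) (metrics ^. entityTypesEvaluationMetrics_f1Score)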