amazonka-kinesis
Copyright     (c) 2013-2021 Brendan Hay
License       Mozilla Public License, v. 2.0.
Maintainer    Brendan Hay <brendan.g.hay+amazonka@gmail.com>
Stability     auto-generated
Portability   non-portable (GHC extensions)
Safe Haskell  None

Amazonka.Kinesis.Types.Record

Description

 
Synopsis

Documentation

data Record Source #

The unit of data of the Kinesis data stream, which is composed of a sequence number, a partition key, and a data blob.

See: newRecord smart constructor.

Constructors

Record' 

Fields

  • encryptionType :: Maybe EncryptionType

    The encryption type used on the record. This parameter can be one of the following values:

    • NONE: Do not encrypt the records in the stream.
    • KMS: Use server-side encryption on the records in the stream using a customer-managed AWS KMS key.
  • approximateArrivalTimestamp :: Maybe POSIX

    The approximate time that the record was inserted into the stream.

  • sequenceNumber :: Text

    The unique identifier of the record within its shard.

  • data' :: Base64

    The data blob. The data in the blob is both opaque and immutable to Kinesis Data Streams, which does not inspect, interpret, or change the data in the blob in any way. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MiB).

  • partitionKey :: Text

    Identifies which shard in the stream the data record is assigned to.
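
For orientation, here is a minimal sketch of reading these fields from a received Record using the plain record selectors listed above. It assumes the Record' constructor and its fields are importable from Amazonka.Kinesis.Types.Record, as this page documents; the function name is illustrative only.

    import Amazonka.Kinesis.Types.Record (Record (..))
    import Data.Text (Text)

    -- Read the record's identity: the partition key that routed it to a
    -- shard, and its unique sequence number within that shard.
    recordIdentity :: Record -> (Text, Text)
    recordIdentity r = (partitionKey r, sequenceNumber r)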

Instances

Eq Record Source # 
Instance details

Defined in Amazonka.Kinesis.Types.Record

Methods

(==) :: Record -> Record -> Bool #

(/=) :: Record -> Record -> Bool #

Read Record Source # 
Instance details

Defined in Amazonka.Kinesis.Types.Record

Show Record Source # 
Instance details

Defined in Amazonka.Kinesis.Types.Record

Generic Record Source # 
Instance details

Defined in Amazonka.Kinesis.Types.Record

Associated Types

type Rep Record :: Type -> Type #

Methods

from :: Record -> Rep Record x #

to :: Rep Record x -> Record #

NFData Record Source # 
Instance details

Defined in Amazonka.Kinesis.Types.Record

Methods

rnf :: Record -> () #

Hashable Record Source # 
Instance details

Defined in Amazonka.Kinesis.Types.Record

Methods

hashWithSalt :: Int -> Record -> Int #

hash :: Record -> Int #

FromJSON Record Source # 
Instance details

Defined in Amazonka.Kinesis.Types.Record

type Rep Record Source # 
Instance details

Defined in Amazonka.Kinesis.Types.Record

type Rep Record = D1 ('MetaData "Record" "Amazonka.Kinesis.Types.Record" "libZSservicesZSamazonka-kinesisZSamazonka-kinesis" 'False) (C1 ('MetaCons "Record'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "encryptionType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe EncryptionType)) :*: S1 ('MetaSel ('Just "approximateArrivalTimestamp") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX))) :*: (S1 ('MetaSel ('Just "sequenceNumber") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text) :*: (S1 ('MetaSel ('Just "data'") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Base64) :*: S1 ('MetaSel ('Just "partitionKey") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text)))))

newRecord Source #

Create a value of Record with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:encryptionType:Record', record_encryptionType - The encryption type used on the record. This parameter can be one of the following values:

  • NONE: Do not encrypt the records in the stream.
  • KMS: Use server-side encryption on the records in the stream using a customer-managed AWS KMS key.

$sel:approximateArrivalTimestamp:Record', record_approximateArrivalTimestamp - The approximate time that the record was inserted into the stream.

$sel:sequenceNumber:Record', record_sequenceNumber - The unique identifier of the record within its shard.

$sel:data':Record', record_data - The data blob. The data in the blob is both opaque and immutable to Kinesis Data Streams, which does not inspect, interpret, or change the data in the blob in any way. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MiB).

Note: This Lens automatically encodes and decodes Base64 data. The underlying isomorphism will encode to Base64 representation during serialisation, and decode from Base64 representation during deserialisation. This Lens accepts and returns only raw unencoded data.

$sel:partitionKey:Record', record_partitionKey - Identifies which shard in the stream the data record is assigned to.
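
As a usage sketch: the following assumes newRecord takes the three required fields in the order listed above (sequenceNumber, the raw data payload as a ByteString, then partitionKey), that the setter operators come from the lens package, and that EncryptionType_KMS is the KMS pattern of EncryptionType re-exported from Amazonka.Kinesis. All literal values are placeholders.

    {-# LANGUAGE OverloadedStrings #-}

    import qualified Amazonka.Kinesis as Kinesis
    import Control.Lens ((&), (?~))

    -- Build a Record with only the required fields, then fill in the
    -- optional encryptionType via its lens.  Argument order and the
    -- EncryptionType_KMS pattern name are assumptions, not taken from
    -- this page.
    sampleRecord :: Kinesis.Record
    sampleRecord =
      Kinesis.newRecord "sequence-number" "payload bytes" "partition-key"
        & Kinesis.record_encryptionType ?~ Kinesis.EncryptionType_KMS

generic-lens or optics setters work the same way; only the operator imports change.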

record_encryptionType :: Lens' Record (Maybe EncryptionType) Source #

The encryption type used on the record. This parameter can be one of the following values:

  • NONE: Do not encrypt the records in the stream.
  • KMS: Use server-side encryption on the records in the stream using a customer-managed AWS KMS key.

record_approximateArrivalTimestamp :: Lens' Record (Maybe UTCTime) Source #

The approximate time that the record was inserted into the stream.
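
A small sketch of using this lens (assuming Control.Lens for the view operator): the stored POSIX value surfaces as a UTCTime, so the record's approximate age is a diffUTCTime away. The helper name is illustrative only.

    import qualified Amazonka.Kinesis as Kinesis
    import Control.Lens ((^.))
    import Data.Time (NominalDiffTime, UTCTime, diffUTCTime)

    -- Approximate age of the record at 'now'; Nothing when the service
    -- did not include an arrival timestamp.
    recordAge :: UTCTime -> Kinesis.Record -> Maybe NominalDiffTime
    recordAge now r =
      diffUTCTime now <$> (r ^. Kinesis.record_approximateArrivalTimestamp)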

record_sequenceNumber :: Lens' Record Text Source #

The unique identifier of the record within its shard.

record_data :: Lens' Record ByteString Source #

The data blob. The data in the blob is both opaque and immutable to Kinesis Data Streams, which does not inspect, interpret, or change the data in the blob in any way. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MiB).

Note: This Lens automatically encodes and decodes Base64 data. The underlying isomorphism will encode to Base64 representation during serialisation, and decode from Base64 representation during deserialisation. This Lens accepts and returns only raw unencoded data.
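
Because the lens already applies the Base64 isomorphism, viewing it yields raw bytes. A minimal sketch (assuming Control.Lens and bytestring are available; the function name is illustrative only):

    import qualified Amazonka.Kinesis as Kinesis
    import Control.Lens ((^.))
    import qualified Data.ByteString as BS

    -- The Base64 wrapper is hidden by the lens: this is the decoded
    -- payload, so its length is the pre-encoding size that counts
    -- toward the 1 MiB record limit (together with the partition key).
    payloadSize :: Kinesis.Record -> Int
    payloadSize r = BS.length (r ^. Kinesis.record_data)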

record_partitionKey :: Lens' Record Text Source #

Identifies which shard in the stream the data record is assigned to.