Note

This is the documentation for the current state of the development branch of Qiskit Experiments. The documentation or APIs here can change prior to being released.

Data Processing (qiskit_experiments.data_processing)

Data processing is the act of taking the data returned by the backend and converting it into a format that can be analyzed. It is implemented as a chain of data processing steps that transform various input data, e.g., IQ data, into the desired format, e.g., a population.

These data transformations may consist of multiple steps, such as kerneling and discrimination. Each step is implemented by a DataAction, also called a node.

The data processor implements the __call__() method. Once initialized, it can thus be used as a standard Python function:

processor = DataProcessor(input_key="memory", data_actions=[Node1(), Node2(), ...])
out_data = processor(in_data)

The data input to the processor is a sequence of dictionaries, each representing the result of a single circuit. The output of the processor is a numpy array whose shape and data type depend on the combination of the nodes in the data processor.
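
For example, a processor that converts the counts of each circuit into the probability of measuring "1" could be set up and called as in the sketch below (the counts are made-up values):

from qiskit_experiments.data_processing import DataProcessor
from qiskit_experiments.data_processing.nodes import Probability

# Each dictionary mimics the result of a single circuit; here the raw data are counts.
in_data = [
    {"counts": {"0": 920, "1": 104}},
    {"counts": {"0": 511, "1": 513}},
]

# Extract the "counts" entry of each result and convert it to the probability of "1".
processor = DataProcessor("counts", [Probability("1")])

out_data = processor(in_data)  # one array entry per input circuit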

Uncertainties that arise from quantum measurements or finite sampling can be taken into account in the nodes: a standard error can be generated in a node and propagated through the subsequent nodes in the data processor. Correlations between computed values are also taken into account.
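
Continuing the sketch above, the entries of the output array carry these uncertainties, so nominal values and standard errors can be separated with the unumpy module of the uncertainties package:

from uncertainties import unumpy as unp

values = unp.nominal_values(out_data)  # nominal probabilities
errors = unp.std_devs(out_data)        # propagated standard errors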

Classes

DataProcessor(input_key[, data_actions])

A DataProcessor defines a sequence of operations to perform on experimental data.

DataAction([validate])

Abstract action done on measured data to process it.

TrainableDataAction([validate])

A base class for data actions that need training.
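
As a rough sketch of how a custom processing step could look, the hypothetical ScaleData node below assumes that a concrete subclass only needs to implement the abstract _process() method:

import numpy as np

from qiskit_experiments.data_processing import DataAction


class ScaleData(DataAction):
    """A hypothetical node that multiplies the processed array by a fixed factor."""

    def __init__(self, factor: float, validate: bool = True):
        """Scale the data by the given factor."""
        super().__init__(validate)
        self._factor = factor

    def _process(self, data: np.ndarray) -> np.ndarray:
        # Return the scaled data; the shape of the array is preserved.
        return data * self._factor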

Data Processing Nodes

Probability(outcome[, alpha_prior, validate])

Compute the mean probability of a single measurement outcome from counts.

MarginalizeCounts(qubits_to_keep[, validate])

A data action to marginalize count dictionaries.

ToImag([scale, validate])

Extract the imaginary part of IQ data.

ToReal([scale, validate])

Extract the real part of IQ data.

SVD([validate])

Singular Value Decomposition of averaged IQ data.

DiscriminatorNode(discriminators[, validate])

A class to discriminate kerneled data, e.g., IQ data, to produce counts.

MemoryToCounts([validate])

A data action that takes discriminated data and transforms it into a counts dict.

AverageData(axis[, validate])

A node to average data representable as numpy arrays.

BasisExpectationValue([validate])

Compute expectation value of measured basis from probability.

MinMaxNormalize([validate])

Normalize the data to the interval [0, 1].

ShotOrder(value[, names, module, qualname, ...])

Shot order allowed values.

RestlessNode([validate, memory_allocation])

An abstract base class for restless data processing nodes.

RestlessToCounts(num_qubits[, validate])

Post-process restless data and convert restless memory to counts.

RestlessToIQ([validate])

Post-process restless data and convert restless memory to IQ data.
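
To illustrate how these nodes compose, the sketch below feeds made-up single-shot IQ memory, shaped as (circuits, shots, qubits, 2), through a chain that averages over shots, keeps the in-phase component, and normalizes the result; both the numbers and the particular choice of nodes are illustrative:

from qiskit_experiments.data_processing import DataProcessor
from qiskit_experiments.data_processing.nodes import AverageData, MinMaxNormalize, ToReal

# Two circuits, four shots each, one qubit; every point is an (I, Q) pair.
iq_data = [
    {"memory": [[[0.10, -1.0]], [[0.20, -1.1]], [[0.15, -0.9]], [[0.05, -1.2]]]},
    {"memory": [[[1.00, 0.40]], [[1.10, 0.50]], [[0.90, 0.45]], [[1.05, 0.35]]]},
]

processor = DataProcessor(
    "memory",
    [
        AverageData(axis=1),  # average over shots; axis 0 indexes the circuits
        ToReal(),             # keep the in-phase (real) component of each IQ point
        MinMaxNormalize(),    # map the averaged values onto the interval [0, 1]
    ],
)

populations = processor(iq_data)  # one value per circuit and qubit, with standard errors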

Discriminators

BaseDiscriminator()

An abstract base class for serializable discriminators used in the DiscriminatorNode data action.

SkLDA(lda)

A wrapper for the scikit-learn linear discriminant analysis.

SkQDA(qda)

A wrapper for the scikit-learn quadratic discriminant analysis.
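
A discriminator is typically trained outside the data processor and then wrapped so that it can be passed to a DiscriminatorNode. The sketch below uses scikit-learn with made-up calibration points, and assumes that a single discriminator passed to DiscriminatorNode is applied to all qubits:

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

from qiskit_experiments.data_processing import DataProcessor, SkLDA
from qiskit_experiments.data_processing.nodes import DiscriminatorNode, MemoryToCounts, Probability

# Fit an LDA on labeled single-qubit calibration IQ points (made-up values).
lda = LinearDiscriminantAnalysis()
lda.fit([[0.1, -1.0], [0.2, -1.1], [1.0, 0.4], [1.1, 0.5]], ["0", "0", "1", "1"])

# Wrap the trained model and use it to turn single-shot IQ memory into counts,
# then convert the counts into the probability of measuring "1".
processor = DataProcessor(
    "memory",
    [DiscriminatorNode(SkLDA(lda)), MemoryToCounts(), Probability("1")],
)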