Parallel workflows in Qiskit Patterns

In this document, we will learn how to run distributed workflows inside a pattern. In this case, we will compute the quasi-probability distribution in parallel for a list of quantum circuits.

Let’s take a look at the pattern file ./source_files/pattern_with_parallel_workflow.py.

# source_files/pattern_with_parallel_workflow.py

from quantum_serverless import get_arguments, save_result, distribute_task, get

from qiskit import QuantumCircuit
from qiskit.primitives import Sampler


@distribute_task()
def distributed_sample(circuit: QuantumCircuit):
    """Distributed task that returns quasi distribution for given circuit."""
    return Sampler().run(circuit).result().quasi_dists[0]


arguments = get_arguments()
circuits = arguments.get("circuits")


# run distributed tasks as async functions;
# each call returns a task reference, not a result
sample_task_references = [
    distributed_sample(circuit)
    for circuit in circuits
]

# now we need to collect results from task references
results = get(sample_task_references)

save_result({
    "results": results
})

There are a lot of new concepts introduced in this pattern, so let’s go over them in more detail:

The distribute_task decorator converts a function into a distributed task. This means that the function will be executed on compute resources asynchronously and in parallel to the main context of the pattern.

When you call a converted function, it will return a reference to the function execution instead of the result. In order to get the result back, you need to call the get function on the function reference. get will wait until the function is finished and return the result of the function execution.
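Conceptually, this is the same future/promise pattern used by Python's standard `concurrent.futures` module. The sketch below is an analogy, not Qiskit Serverless code: submitting work returns a reference object immediately, and collecting the result blocks until the work finishes, just as `get` does for a distributed task reference.

```python
from concurrent.futures import ThreadPoolExecutor


def sample(x):
    # stand-in for a distributed task
    return x * x


with ThreadPoolExecutor() as executor:
    # submit() returns Future objects (references), not results
    futures = [executor.submit(sample, x) for x in range(4)]
    # like get(), .result() blocks until each task has finished
    results = [f.result() for f in futures]

print(results)  # [0, 1, 4, 9]
```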

In the pattern above, we have applied the distribute_task decorator to the distributed_sample function. This function takes a QuantumCircuit as input and returns the quasi-probability distribution for that circuit.

After we have defined the distributed_sample function, we read the circuits from the pattern arguments using the get_arguments function. We then call the distributed_sample function for each of the circuits, which creates a reference to each of the function executions.

These function executions will run in parallel on compute resources, and we get task references as the return type. We store these task references in the sample_task_references list.

After we have created the task references for each of the function executions, we need to collect the results from these tasks. We do this by calling the get function on the list of task references, which waits until all the tasks have completed and returns the results.

Once we have the results, we can save them using the save_result function.

Essentially, this pattern reads the circuits from the pattern arguments, executes the distributed_sample function on each circuit in parallel, collects the results from the function executions, and saves the results.

⚠ This provider is set up with default credentials to a test cluster intended to run on your machine. For information on setting up infrastructure on your local machine, check out the guide on local infrastructure setup.

[1]:
from quantum_serverless import ServerlessClient
import os

serverless = ServerlessClient(
    token=os.environ.get("GATEWAY_TOKEN", "awesome_token"),
    host=os.environ.get("GATEWAY_HOST", "http://localhost:8000"),
)

serverless
[1]:
<ServerlessProvider: gateway-provider>

Let’s create a list of random circuits that we will pass as arguments to the pattern.

[2]:
from qiskit.circuit.random import random_circuit

circuits = [random_circuit(2, 2) for _ in range(3)]
for circuit in circuits:
    circuit.measure_all()
circuits
[2]:
[<qiskit.circuit.quantumcircuit.QuantumCircuit at 0x7fa03049b850>,
 <qiskit.circuit.quantumcircuit.QuantumCircuit at 0x7fa03049b880>,
 <qiskit.circuit.quantumcircuit.QuantumCircuit at 0x7fa03049b3a0>]

Run the pattern as usual, but pass the circuits in as a keyword argument, circuits.

[3]:
from quantum_serverless import QiskitFunction

function = QiskitFunction(
    title="pattern-with-parallel-workflow",
    entrypoint="pattern_with_parallel_workflow.py",
    working_dir="./source_files/",
)

serverless.upload(function)
[3]:
'pattern-with-parallel-workflow'
[4]:
job = serverless.run("pattern-with-parallel-workflow", arguments={"circuits": circuits})
job
[4]:
<Job | 721339ce-1409-4326-8263-e0fa6b0bb6a8>
[5]:
job.status()
[5]:
'QUEUED'
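Note that `job.result()` below blocks until the job completes. If you prefer to poll explicitly, a helper like the following works; it assumes only that the job object exposes a `.status()` method returning strings such as `'QUEUED'`, `'RUNNING'`, or `'DONE'` (the exact set of terminal states used here is an assumption for illustration):

```python
import time

# Assumed terminal states for illustration; adjust to match your deployment.
TERMINAL_STATES = {"DONE", "ERROR", "CANCELED"}


def wait_for_terminal(job, poll_seconds=2.0, timeout=300.0):
    """Poll job.status() until the job reaches a terminal state."""
    deadline = time.monotonic() + timeout
    status = job.status()
    while status not in TERMINAL_STATES:
        if time.monotonic() > deadline:
            raise TimeoutError(f"job still in state {status!r} after {timeout}s")
        time.sleep(poll_seconds)
        status = job.status()
    return status
```

For example, `wait_for_terminal(job)` returns `'DONE'` once the pattern finishes, after which `job.result()` is safe to call without waiting.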
[6]:
job.result()
[6]:
{'results': [{'0': 0.5474338956985804, '3': 0.4525661043014198},
  {'0': 0.9999999999999998},
  {'0': 0.9572706116932064, '3': 0.0427293883067935}]}