
Haiqu.block_vector_loading(data: Sequence[Number] | Sequence[Sequence[Number]], num_blocks: int | Sequence[int] | None = None, target_num_qubits: int | None = None, num_layers: int = 2, truncation_cutoff: Real = 1e-06, fine_tuning_iterations: int = 20, name: str | None = None) → DataLoadingJobModel

Generate a block-wise quantum circuit that prepares an arbitrary vector or matrix. Given a vector or matrix of real or complex data, this method creates a Data Loading job that runs in the Haiqu cloud. The result of the job is a circuit that can be used to supply the data to a quantum algorithm for processing.

Unlike vector_loading(), which uses the fewest qubits possible to encode the data, the block-wise strategy in block_vector_loading() trades circuit depth for width. If additional qubits are available, they can be exploited to split the problem into several blocks, each of which is simpler. This reduces the overall depth of the circuit, making it more amenable to execution on noisy devices.

Exactly one of num_blocks and target_num_qubits must be specified; this determines how the vector or matrix is decomposed into blocks. The complexity and quality of the generated circuit can be controlled by the num_layers, truncation_cutoff, and fine_tuning_iterations parameters.
  • Parameters:
    • data (Sequence[Number] | Sequence[Sequence[Number]]) — The vector or matrix with data to encode.
    • num_blocks (int | Sequence[int] | None) — The number of blocks into which to split the data. It must be a single number in one dimension and a pair of numbers (rows and columns) in two dimensions. If None (default), the number of blocks is inferred from target_num_qubits, which must be specified.
    • target_num_qubits (int | None) — The qubit budget to assume when automatically determining the number of blocks. If None (default), the number of qubits depends on num_blocks, which must be specified.
    • num_layers (int) — The number of layers in the generated circuit. More layers can improve the quality of the circuit blocks at the cost of a deeper circuit. Defaults to 2.
    • truncation_cutoff (Real) — The entanglement cutoff for later layers. Increasing this threshold may result in a smaller (but more approximate) circuit. Defaults to 1e-6.
    • fine_tuning_iterations (int) — The maximum number of fine-tuning iterations to perform after each layer is added. Increasing this limit may improve the quality of the circuit by using more classical resources. Defaults to 20.
    • name (str | None) — The name for the job and the produced circuit. If None (default), a name will be automatically generated.
  • Returns: The Data Loading job that will generate the block-wise circuit for the data.
  • Return type: DataLoadingJobModel
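
The width/depth tradeoff described above can be sketched with a small helper. Note that block_qubit_count is a hypothetical function, not part of the Haiqu SDK: the allocation rule below (each block gets its own register of ceil(log2(block_size)) qubits) is inferred from the documented example, and the service's actual qubit layout may differ.

```python
import math

def block_qubit_count(num_elements: int, num_blocks: int) -> int:
    """Hypothetical estimate of total qubits used by block-wise loading,
    assuming each block occupies its own register of
    ceil(log2(block_size)) qubits. Inferred from the documented example
    (8 elements, 2 blocks -> 4 qubits); the real allocation may differ."""
    block_size = math.ceil(num_elements / num_blocks)
    return num_blocks * math.ceil(math.log2(block_size))

# Dense loading of 8 elements would need ceil(log2(8)) = 3 qubits.
print(block_qubit_count(8, 1))  # 3
# Splitting into 2 blocks of 4 elements uses 2 * 2 = 4 qubits,
# matching the 4-qubit circuit in the example below.
print(block_qubit_count(8, 2))  # 4
```

This makes the tradeoff concrete: more blocks consume more qubits in total, but each block encodes a shorter vector and therefore needs a shallower sub-circuit.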

Examples

>>> vector = [0.5, 0.2, 1, 14, 0.3, 5, 0.2, 0.6]  # 8 elements (will split into 2 two-qubit blocks)
>>> job = haiqu.block_vector_loading(data=vector, num_blocks=2, name="Block Vector Loading")
>>> bvl_gate, fidelity = job.result()  # bvl_gate is a Qiskit-compatible gate
>>> print(f"Vector was loaded with fidelity {fidelity:.6f}")
Vector was loaded with fidelity 1.000000
>>> print(f"Block vector loading used {job.num_qubits} qubits")
Block vector loading used 4 qubits
>>> import qiskit
>>> circuit = qiskit.QuantumCircuit(job.num_qubits)
>>> circuit.append(bvl_gate, range(job.num_qubits))
>>> circuit.draw()
     ┌────────────────────────────────────────────────────────────┐
q_0: ┤0                                                           ├
     │                                                            │
q_1: ┤1                                                           ├
     │  Haiqucircuit(circ-12345678-1234-5678-1234-567812345678,4) │
q_2: ┤2                                                           ├
     │                                                            │
q_3: ┤3                                                           ├
     └────────────────────────────────────────────────────────────┘
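
As a hedged sketch of the alternative calling convention (not recorded output from the service), a two-dimensional input can be loaded by specifying a qubit budget instead of an explicit block count; the matrix values and the budget of 4 qubits below are illustrative assumptions:

>>> matrix = [[0.1, 0.4], [0.7, 0.2]]  # hypothetical 2x2 real matrix
>>> job = haiqu.block_vector_loading(data=matrix, target_num_qubits=4)
>>> bvl_gate, fidelity = job.result()

Because exactly one of num_blocks and target_num_qubits may be given, passing both (or neither) should be expected to raise an error.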