A key challenge in quantum computing applications such as quantum machine learning, finance, and optimization is efficiently encoding classical data into quantum states. Loading data into the amplitudes of a quantum state packs 2^n data points into n qubits, but the standard circuits that prepare such states require an exponentially large number of operations.

Loading the data vector

Here we introduce another loading procedure available in the Haiqu SDK: vector loading, the process of embedding a classical data vector directly into the amplitudes of a quantum state, resulting in compact circuits of linear depth.
Unlike the fixed, probability-based distribution loading introduced earlier, vector loading gives precise control over the quantum state and is particularly useful for quantum simulation, machine learning, and quantum kernel methods.
The Haiqu SDK allows you to prepare custom quantum states from classical vectors by calling haiqu.vector_loading(...).
job = haiqu.vector_loading(
    name="Data Loading Circuit",  # unique job name
    data=data_vector,             # 1D data vector to encode
)
The data vector can be any real- or complex-valued one-dimensional vector whose values are encoded in the amplitudes. The function automatically uses the minimal number of qubits needed for the given vector. The currently supported maximum vector size is 2^20 = 1,048,576.
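To build intuition for how the qubit count is determined, here is an illustrative sketch in plain NumPy (not SDK internals): amplitude encoding pads the vector to the next power of two and L2-normalizes it, so a vector of length N needs ceil(log2(N)) qubits.

```python
import math
import numpy as np

def amplitudes_and_qubits(data):
    """Pad a 1D vector to the next power of two and L2-normalize it,
    returning the amplitude vector and the number of qubits needed."""
    data = np.asarray(data, dtype=complex)
    num_qubits = max(1, math.ceil(math.log2(len(data))))
    padded = np.zeros(2**num_qubits, dtype=complex)
    padded[: len(data)] = data
    return padded / np.linalg.norm(padded), num_qubits

amps, n = amplitudes_and_qubits([0.5, 1.0, 2.0])
print(n)         # 2 qubits cover up to 4 amplitudes
print(len(amps)) # 4
```

The helper name here is hypothetical; the SDK performs the equivalent padding and normalization for you inside haiqu.vector_loading(...).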
For more details, explore the practical notebook VectorLoadingBasics.ipynb.

Loading of utility-scale data vectors

Vector encoding of a dataset of size N requires log2(N) qubits and often leads to deep circuits when high state-preparation fidelity is required. Splitting the data vector into smaller blocks means each block contains less data and is easier to encode. This approach introduces a practical trade-off between qubit count and fidelity:
More blocks → more qubits → shallower circuits → higher fidelity.
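A toy calculation of this trade-off (my own illustration, not SDK internals): splitting N amplitudes into B equal blocks of size N/B means each block needs only about log2(N/B) qubits, while the total qubit count across blocks grows to roughly B * log2(N/B).

```python
import math

def block_tradeoff(num_amplitudes, num_blocks):
    """Qubits per block and total qubits when a vector of
    `num_amplitudes` entries is split into `num_blocks` equal blocks."""
    block_size = math.ceil(num_amplitudes / num_blocks)
    qubits_per_block = max(1, math.ceil(math.log2(block_size)))
    return qubits_per_block, num_blocks * qubits_per_block

for blocks in (1, 4, 16):
    per_block, total = block_tradeoff(2**20, blocks)
    print(f"{blocks:>2} blocks: {per_block} qubits/block, {total} qubits total")
```

With 16 blocks, each block drops from 20 to 16 qubits (and correspondingly shallower circuits), at the cost of 256 qubits in total.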
The Haiqu SDK allows you to prepare custom quantum states from classical vectors by calling haiqu.block_vector_loading(...). For example,
import numpy as np

rng = np.random.default_rng()
random_vector_1d = rng.random(200)

job = haiqu.block_vector_loading(
    name="Example with target_num_qubits",
    data=random_vector_1d,   # example 1D vector
    target_num_qubits=100,   # target total number of qubits
    num_layers=1,
)

gate, fidelity = job.result()
will load a random vector of size 200 into a shallow, single-layer gate circuit with fidelity close to 1. For more details, explore the practical notebook BlockVectorLoadingBasics.ipynb.
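If you want to sanity-check a returned fidelity yourself, you can compare the ideal amplitude-encoded state against a statevector obtained from your own simulator. A minimal NumPy sketch (the `noisy` statevector below is an assumed example, not SDK output):

```python
import numpy as np

def state_fidelity(target, prepared):
    """|<target|prepared>|^2 for two statevectors (normalized internally)."""
    target = np.asarray(target, dtype=complex)
    prepared = np.asarray(prepared, dtype=complex)
    target = target / np.linalg.norm(target)
    prepared = prepared / np.linalg.norm(prepared)
    return abs(np.vdot(target, prepared)) ** 2

ideal = np.array([0.5, 0.5, 0.5, 0.5])       # ideal amplitudes
noisy = np.array([0.51, 0.49, 0.5, 0.5])     # slightly perturbed state
print(state_fidelity(ideal, noisy))          # close to 1
```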

Vector Loading specifications

| Parameter | Details |
| --- | --- |
| Number of qubits | Up to 20 qubits |
| Input data | 1D vector |
| Data type | Real and complex values |
| Data size | Up to ~1M features in the vector |
| Runtime | 0.5–2 minutes |
| Runtime scaling | Linear in the number of qubits |
| Circuit size (gate count) | O(n), n = number of qubits |
| Circuit depth | O(n/2), n = number of qubits |
| Circuit connectivity | Linear |
| Other circuit properties | No mid-circuit measurements; only CNOT and single-qubit rotation gates; no ancilla qubits; no post-selection required in state preparation |
| Returned metrics | Quantum state fidelity for the ideal state prepared by the circuit |

Block Vector Loading specifications

| Parameter | Details |
| --- | --- |
| Number of qubits | 1000+ qubits; no more than 20 qubits per block |
| Input data | 1D vector or 2D matrix |
| Data type | Real and complex values |
| Data size | Any, with no more than ~1M features per block |
| Runtime | 0.5–2 minutes per block |
| Runtime scaling | Linear in the number of qubits |
| Circuit size (gate count) | O(n), n = number of qubits |
| Circuit depth | O(m/2), m = number of qubits in each block |
| Circuit connectivity | Linear within each block |
| Other circuit properties | No mid-circuit measurements; only CNOT and single-qubit rotation gates; no ancilla qubits; no post-selection required in state preparation |
| Returned metrics | Quantum state fidelity for the ideal state prepared by the circuit |