# PyVISA Backend
pyvisa-galois registers as the `@galois` PyVISA backend. Existing scripts switch over by changing one line — `pyvisa.ResourceManager()` → `pyvisa.ResourceManager("@galois")` — and immediately gain remote access to every instrument the daemon can see, without the script needing tailnet credentials, NI-VISA, or any direct USB/GPIB drivers.
## Installation
pyvisa-galois is a pure-Python package depending only on `pyvisa` and `httpx`. It is currently distributed from the Galois internal index — see your team admin for the install URL. Once a public PyPI release ships, the install reduces to:

```shell
pip install pyvisa pyvisa-galois
```

Until then, install from the Galois index with `pip install --index-url <url> pyvisa-galois`. Python 3.10+ is required.
## Drop-in usage
```python
import pyvisa

rm = pyvisa.ResourceManager("@galois")
print(rm.list_resources())
# ('GPIB0::24::INSTR', 'USB0::0x2A8D::0x0101::MY54505555::INSTR', 'TCPIP::192.168.1.42::INSTR')

scope = rm.open_resource("TCPIP::192.168.1.42::INSTR")
print(scope.query("*IDN?"))
scope.write("CHAN1:SCAL 0.1")
scope.close()
```

If you’ve used PyVISA before, that’s it — `query`, `write`, `read`, `read_raw`, timeouts, and locking all work as PyVISA scripts expect.
## How it routes
The backend reads two environment variables on import:
| Variable | Required | Default | Description |
|---|---|---|---|
| `GALOIS_BACKEND_URL` | yes (when remote) | `http://localhost:8000` | Base URL of the Galois Cloud backend. |
| `GALOIS_AUTH_TOKEN` | yes (when remote) | empty | Short-lived JWT for the kernel/team. |
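Because these are read at import time, set them before the `@galois` resource manager is created — for example at the top of a script. The URL and token below are placeholders, not real endpoints or credentials:

```python
import os

# Placeholder values — substitute your team's backend URL and short-lived JWT.
# These must be in the environment before pyvisa.ResourceManager("@galois")
# is constructed, because pyvisa-galois reads them when it is imported.
os.environ["GALOIS_BACKEND_URL"] = "https://galois-cloud.example.com"
os.environ["GALOIS_AUTH_TOKEN"] = "<short-lived JWT>"
```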
Every operation is an HTTP call to the backend, which proxies it over gRPC to the chosen edge daemon. A request flows:
```
your script ─→ pyvisa-galois ─→ HTTPS ─→ Galois Cloud ─→ gRPC over tailnet ─→ edge daemon ─→ instrument
```

The backend handles edge selection, instrument locking, and timeouts. Your script never needs to know where the instrument physically lives.
## Status code mapping
pyvisa-galois translates HTTP status codes into PyVISA `StatusCode` values, so error handling looks identical to NI-VISA:
| HTTP | PyVISA StatusCode |
|---|---|
| 200 / 201 / 204 | success |
| 400 | error_invalid_object |
| 404 | error_resource_not_found |
| 408 | error_timeout |
| 501 | error_nonsupported_operation |
| 502 | error_system_error |
| 503 | error_resource_busy |
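The table above amounts to a plain lookup. As a sketch (the names mirror `pyvisa.constants.StatusCode` members, but this dict and its fallback choice are illustrative, not the backend's actual source):

```python
# Illustrative mapping from HTTP status codes to PyVISA StatusCode names.
HTTP_TO_STATUS = {
    200: "success",
    201: "success",
    204: "success",
    400: "error_invalid_object",
    404: "error_resource_not_found",
    408: "error_timeout",
    501: "error_nonsupported_operation",
    502: "error_system_error",
    503: "error_resource_busy",
}

def to_status(http_code: int) -> str:
    # Treat anything unmapped as a generic system error in this sketch.
    return HTTP_TO_STATUS.get(http_code, "error_system_error")
```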
Catching the standard PyVISA exceptions just works:
```python
import pyvisa
from pyvisa.errors import VisaIOError

rm = pyvisa.ResourceManager("@galois")
try:
    inst = rm.open_resource("USB0::0xDEAD::0xBEEF::SN::INSTR")
except VisaIOError as e:
    print("not connected:", e)
```

## Profile-aware typed proxies
When you `open_resource()` an instrument that has a YAML profile, the backend returns the matched profile’s capabilities and pyvisa-galois attaches a typed proxy directly to the PyVISA resource object. Profile commands surface as methods on the resource alongside the standard PyVISA API:
```python
smu = rm.open_resource("GPIB0::24::INSTR")

# Standard PyVISA — always available
smu.query("*IDN?")

# Profile commands — present when a profile matched
smu.set_voltage(voltage=1.5)      # keyword-only args
current = smu.measure_current()
smu.output_state(value="ON")
```

Proxy method parameters are keyword-only — they enforce the parameter names from the profile so a typo at call-time fails fast rather than silently mapping to the wrong SCPI argument. Instruments without a matching profile are returned unchanged; calling a profile-only method on them raises `AttributeError`.
PyVISA’s own methods always win on name collision: if a profile defines a command named `query` or `write`, `smu.query(...)` still resolves to PyVISA’s. The profile version is reachable via the proxy stored on `smu._galois_proxy`. The legacy `rm.visalib.get_proxy(smu.session)` lookup is preserved for backward compatibility.
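The keyword-only enforcement can be sketched with a small `__getattr__`-based proxy. The class and its attribute names are illustrative, not pyvisa-galois internals:

```python
class ProfileProxy:
    """Minimal sketch: profile commands become keyword-only callables."""

    def __init__(self, commands):
        # commands: {method_name: [parameter names]} taken from the YAML profile
        self._commands = commands

    def __getattr__(self, name):
        if name not in self._commands:
            # No matching profile command -> behaves like a plain attribute miss
            raise AttributeError(name)
        allowed = set(self._commands[name])

        def call(**kwargs):  # keyword-only: positional args are rejected outright
            unknown = set(kwargs) - allowed
            if unknown:
                raise TypeError(f"unknown parameter(s): {sorted(unknown)}")
            return (name, kwargs)  # a real proxy would format and send SCPI here

        return call
```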
## Vendor SDK fallback (RPyC-style)
For instruments that ship with a Python vendor SDK rather than SCPI — Quantum Design PPMS via MultiPyVu, Zurich Instruments MFLI, etc. — pyvisa-galois exposes a remote-object helper:
```python
from pyvisa_galois import remote_connect

ppms = remote_connect(
    "MultiPyVu",           # module name on the edge
    edge_id="edge_8a7b…",  # which edge owns the instrument
)
ppms.set_temperature(100.0, 5.0)  # 100 K at 5 K/min
print(ppms.get_temperature())
```

The signature is `remote_connect(module, edge_id=None, backend_url=None, auth_token=None)`. The module name doubles as the import target — the daemon importlib-loads it on the edge and the call is dispatched as a `ProxySDKCall` gRPC. `edge_id` defaults to `GALOIS_EDGE_ID` from the environment, mirroring the cloud kernel injection pattern.
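Edge-side, that dispatch reduces to a dynamic import plus an attribute lookup. This is a sketch of the idea, not the daemon's actual code, and `dispatch_sdk_call` is a hypothetical name:

```python
import importlib

def dispatch_sdk_call(module_name, method, args=(), kwargs=None):
    # Import the vendor SDK module by name, as the daemon does when handling
    # a ProxySDKCall, then invoke the requested attribute with the payload.
    module = importlib.import_module(module_name)
    fn = getattr(module, method)
    return fn(*args, **(kwargs or {}))
```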
## Migration from @py / NI-VISA
You almost certainly need to change exactly one line:
```python
# Before
rm = pyvisa.ResourceManager()        # NI-VISA
# or
rm = pyvisa.ResourceManager("@py")   # pyvisa-py

# After
rm = pyvisa.ResourceManager("@galois")
```

Things that just work:
- `list_resources()`, `open_resource()`, `close()`
- `query()`, `write()`, `read()`, `read_raw()`, `read_bytes()`
- Timeouts via the `timeout` attribute
- Resource locking through `access_mode`
- `query_ascii_values`, `query_binary_values`
Things that don’t apply (and which the backend silently no-ops or maps to `error_nonsupported_operation`):
- Direct GPIB IFC / REN / serial-poll calls
- USB raw control transfers (use the SDK proxy if you need this)
- Resource-level interrupt handlers
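One migration-friendly pattern is to pick the backend string from the environment so the same script runs locally or through Galois. `pick_backend` and its decision rule are illustrative, not part of pyvisa-galois:

```python
import os

def pick_backend() -> str:
    # Route through Galois whenever a backend URL is configured; otherwise
    # fall back to pyvisa's default VISA library ("" selects the default).
    if os.environ.get("GALOIS_BACKEND_URL"):
        return "@galois"
    return ""

# rm = pyvisa.ResourceManager(pick_backend())
```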
## Debugging
Get diagnostic info from the running backend:
```python
import pyvisa

rm = pyvisa.ResourceManager("@galois")
print(rm.visalib.get_debug_info())
```

```
Galois Cloud PyVISA Backend v0.2.0
Transport: HTTP -> Backend -> gRPC -> Edge
Backend: <whatever GALOIS_BACKEND_URL was set to>
```

The backend uses Python’s standard `logging` module. Configure it the usual way to see HTTP-level chatter:
```python
import logging

logging.basicConfig(level=logging.DEBUG)
```

## What’s next
- Python from a notebook — when you want raw gRPC alongside `@galois`
- gRPC & WebSocket APIs — bypass PyVISA entirely for high-throughput streaming
- Connecting Instruments — make sure the daemon is finding your hardware