Analytics Engineer
Risk Labs
📋 Job Description
Risk Labs is the foundation and core team behind UMA and Across. The Risk Labs team operates as one cohesive culture focused on two core protocols: UMA and Across, decentralised protocols governed by community members across the globe through DAOs and supported by the Risk Labs Foundation. UMA’s optimistic oracle (OO) can record any verifiable truth or data onto a blockchain. Across is leading the future of interoperability with its frontier intents-based architecture.
We are a remote-first, globally distributed team focused on building infrastructure that pushes crypto forward.
WHY THIS ROLE EXISTS
This is the first analytics engineering role at Risk Labs. The transformation layer has reached a level of complexity that demands a dedicated owner, and right now that work is distributed across people hired to do other things. You’ll sit within Data Engineering, reporting to the Platform Engineering Lead, and work most closely with our Data Analytics Lead and Product team as your primary stakeholders.
We are serious about building a truly agentic data platform, and this hire is a prerequisite for that. Agentic systems are only as good as the data they run on. Without deterministic, well-governed, and consistently defined data served from a single authoritative source, any AI initiative we pursue is built on unstable ground. This role is the foundation that makes all of it possible.
WHAT YOU’LL OWN
1. The Transformation Layer
You are the DRI for everything between raw ingestion and the clean data layer. You own the modelling strategy and are trusted to push back when a request would compromise what we’ve built. You work with the Analytics Lead to align on priorities and with Platform on infrastructure constraints.
2. Refactor and Legacy Migration
We have inherited complexity: undocumented logic, redundant models, and systems built for speed rather than longevity. You’ll audit what we have, cut what we don’t need, and rebuild the rest into something clean, traceable, and maintainable. You decide what gets retired versus migrated, and you own the sequencing.
3. Data Quality and Testing
You’ll design and own our approach to data quality: what we test, how we test it, and what happens when something breaks. We want proactive alerting and self-healing pipelines where possible. You’ll work with the Analytics Lead to codify business logic tests and implement column-level lineage across the transformation layer.
4. BigQuery Cost Optimisation
You’ll own the efficiency of our query and storage footprint, refactoring models and materialisation strategies to reduce unnecessary spend, and keeping a close eye on cost as agentic data usage scales.
5. Event Data and Product Observability
Working closely with Product and the Analytics Lead, you’ll build a robust event data model that gives us meaningful observability across our full product suite. You’ll bring experience with event data and tooling like Amplitude to help us design a scalable in-house approach to product analytics, built with intent rather than assembled reactively.
WHAT SUCCESS LOOKS LIKE
– The transformation layer has a clear, documented owner. Questions about where a metric comes from have fast, traceable answers.
– BigQuery costs are meaningfully lower within the first few months, without degradation in the data we’re serving.
– New product launches ship with data instrumentation built in from day one.
– You have materially freed up the Analytics Lead to focus on analysis and strategic insight, not data preparation.
– Automated data quality tests are running in production and catching issues before they reach stakeholders.
– When something breaks, the root cause is understood and resolved by you, not escalated.
– AI agents and tooling at Risk Labs are pulling data from a governed, deterministic, well-documented data layer, and you built the foundation that made that possible.
– You manage your own priorities, communicate proactively when things shift, and rarely need to be told what to do next.
SKILLS AND EXPERIENCE
Required
– Deep, demonstrable expertise in data modelling across multiple time horizons, dimensions, and levels of granularity
– Advanced SQL: performant, readable, and warehouse-aware
– Experience owning a transformation layer in production, including a meaningful refactor or migration
– Hands-on experience designing and implementing data quality frameworks: testing, alerting, and lineage
– Experience with event data and product analytics tooling (Amplitude, Segment, or similar)
– Experience with crypto data, or data environments characterised by high normalisation, irregular schemas, and significant inherited complexity
– Strong cross-functional communication; able to work closely with non-technical stakeholders without losing precision
– Comfortable with ambiguity and able to manage a shifting backlog without losing momentum
Nice to Have
– Experience wit