pipeline.running

Hi, I'm Gabriel Ronny

Data Engineer

I build pipelines that turn raw data into reliable decisions.

I orchestrate, model and deliver data like someone designing a living system.

// about

Data engineering with the soul of a distributed system.

I'm a Data Engineer passionate about turning the chaos of heterogeneous sources into predictable, observable flows.

I work with orchestration, dimensional modeling, data lakes and real-time pipelines — always looking for the point where engineering, business and creativity meet.

Well-orchestrated data is the biggest competitive edge a company can have today.

// dag.career

My path, drawn as a DAG.

Each node is a task — completed, running or scheduled. Hover to see the details.

completed
running
scheduled
  1. task.btg · running

    BTG Pactual

    Data Engineer · Feb/2025 – present

    Data engineering at one of Latin America's largest investment banks — evolving pipelines, modeling and infrastructure to support critical decisions at scale.

  2. task.actDigital · completed

    ACT Digital

    Data Engineer · Dec/2023 – Jan/2025

    ETL/ELT pipelines in Airflow and processing with PySpark + EMR. Reconciliation of banking products (PIX, TED, boletos, cards, vehicle debts, utilities), Parquet/CSV storage on S3 and DataClearing for downstream teams.

  3. task.bancoPan · completed

    Banco PAN

    Data Engineer · Jan/2023 – Aug/2023

    Data modeling on AWS big-data services and ETL/ELT pipelines orchestrated with Airflow + PySpark. Analytics over payroll loans, FGTS, credit cards and vehicle financing, with strategic dashboards in Tableau.

  4. task.b3 · completed

    B3

    Data Analytics · Feb/2021 – Jan/2023

    Data analysis over stock market, vehicle and real-estate financing datasets with Python, SQL and PySpark. Modeling for new-product prototypes on big data (AWS Athena/EMR, Hadoop, Hive) and relationship management with the exchange's external clients.
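
The DAG metaphor above can be made literal with a few lines of plain Python: each node is a task, each edge a dependency, and a topological sort guarantees upstream roles resolve before downstream ones. This is a standalone sketch using the standard library (no Airflow needed); the node names simply mirror the timeline above.

```python
# Minimal DAG of the career nodes above: each task depends on the previous role.
# Pure-Python sketch using the standard library's graphlib (Python 3.9+).
from graphlib import TopologicalSorter

# Mapping: node -> set of upstream dependencies (edges of the DAG).
career_dag = {
    "task.b3": set(),                       # Feb/2021 – Jan/2023
    "task.bancoPan": {"task.b3"},           # Jan/2023 – Aug/2023
    "task.actDigital": {"task.bancoPan"},   # Dec/2023 – Jan/2025
    "task.btg": {"task.actDigital"},        # Feb/2025 – present
}

# static_order() yields nodes so every dependency comes before its dependents.
order = list(TopologicalSorter(career_dag).static_order())
print(order)  # ['task.b3', 'task.bancoPan', 'task.actDigital', 'task.btg']
```

Because the graph is a linear chain, the order is deterministic; with branching dependencies, `TopologicalSorter` would still guarantee a valid ordering.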

// stack

The tools I orchestrate with.

What I use to design reliable pipelines and scalable platforms.

Orchestration

  • Apache Airflow
  • dbt
  • Dagster
  • Prefect

Languages

  • Python
  • C#
  • SQL
  • TypeScript
  • Bash

Storage & Warehouse

  • PostgreSQL
  • SQL Server
  • Snowflake
  • S3 / Data Lake

Cloud & Infra

  • AWS
  • Docker
  • Terraform

Quality & Observability

  • Great Expectations
  • OpenLineage
  • Soda
  • Grafana

// what.i.do

I deliver data as a product — not as a forgotten report.

Data pipelines

Ingestion, orchestration and transformation with Airflow, dbt and Python.
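
In miniature, that ingestion–orchestration–transformation shape is three small, composable steps. This sketch uses plain Python with illustrative data and an in-memory sink standing in for S3 or a warehouse table; it shows the shape, not a production stack.

```python
# Minimal extract -> transform -> load sketch. The source rows and the
# in-memory sink are illustrative stand-ins for real systems.

def extract() -> list[dict]:
    # Stand-in for reading from an API, file drop or database.
    return [
        {"id": 1, "amount": "10.50"},
        {"id": 2, "amount": "3.20"},
    ]

def transform(rows: list[dict]) -> list[dict]:
    # Normalize types so downstream consumers get a predictable schema.
    return [{"id": r["id"], "amount": float(r["amount"])} for r in rows]

def load(rows: list[dict], sink: list) -> None:
    # Stand-in for writing Parquet to S3 or inserting into a warehouse.
    sink.extend(rows)

sink: list[dict] = []
load(transform(extract()), sink)
print(sink)  # [{'id': 1, 'amount': 10.5}, {'id': 2, 'amount': 3.2}]
```

Keeping each step a pure function is what lets an orchestrator like Airflow retry, backfill or parallelize them independently.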

Data platform

Data lake, warehouse and modeling layers designed to scale teams.

Quality & Observability

Contracts, tests, lineage and metrics that let you sleep at night.
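
A data contract at its core is just an agreed schema enforced before data reaches consumers. Here is a minimal sketch in plain Python (the contract fields and sample rows are illustrative; dedicated tools like Great Expectations or Soda do this at scale):

```python
# Tiny data-contract check: validate each row against an expected schema
# before it flows downstream. Fields and rows are illustrative.

CONTRACT = {"id": int, "amount": float, "currency": str}

def violations(row: dict) -> list[str]:
    """Return the contract violations for one row (empty list = valid)."""
    problems = []
    for field, expected_type in CONTRACT.items():
        if field not in row:
            problems.append(f"missing field: {field}")
        elif not isinstance(row[field], expected_type):
            problems.append(f"{field}: expected {expected_type.__name__}")
    return problems

good = {"id": 1, "amount": 9.9, "currency": "BRL"}
bad = {"id": "1", "amount": 9.9}

print(violations(good))  # []
print(violations(bad))   # ['id: expected int', 'missing field: currency']
```

Failing fast on a contract violation is what turns a silent bad load into an alert you can act on, which is the whole point of sleeping at night.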

Automation

Flows that kill manual work and give time back to what matters.

// contact

Let's talk.

I prefer straight conversations — reach me on LinkedIn or drop an email.