BRINTA is a cutting-edge tax automation platform built for LATAM. It streamlines tax calculations, reporting, filing, and remittance, making the process seamless and automated.
Senior Data Engineer
We seek an experienced Senior Data Engineer to join our team. This is a full-time, fully remote position open to candidates based in LATAM. You'll be at the heart of our mission, helping shape a scalable, reliable, and secure data infrastructure from the ground up. If you have a passion for building data systems that can handle massive volumes of information while being intuitive, maintainable, and efficient, we want to hear from you!
This role is perfect for someone who thrives in a collaborative, fast-paced, high-impact environment. You’ll work closely with other engineers, product managers, and tax experts to develop and optimize data processing systems that power our core product.
Key Responsibilities:
- End-to-end platform architecture: Design, implement, and maintain scalable and reliable data systems that power our tax automation platform, ensuring seamless flow from raw client data ingestion to form file generation and transmission.
- Data pipeline design and implementation: Design and implement complex ETL processes, integrating multiple data sources and orchestrating the resulting workflows.
- Data processing & automation: Develop and optimize data pipelines for high efficiency and low latency. Automate processes to ensure data is clean and reliable.
- Infrastructure management: Own the infrastructure around data storage and processing, ensuring it’s well-architected, secure, and performant. This includes managing RDS, EMR, S3 buckets, and other related AWS services.
- Mentorship and collaboration: Share your knowledge and best practices with the team, mentor other engineers, and serve as a "data champion" in cross-functional meetings with product and tax teams to integrate data pipelines with our broader product and backend infrastructure.
- Scalable systems: Build systems that can scale with our growing user base, ensuring performance, reliability, and security across the board.
Qualifications:
- 5+ years of experience in data engineering or a related field, with a strong focus on building scalable data pipelines and systems.
- Strong experience coding data workflows and automation tasks with Python.
- Strong experience in ETL design, distributed data processing, and orchestration using Spark and Airflow.
- Solid experience working with semi-structured and unstructured data in a variety of formats (JSON, XML, XLSX, CSV, etc.), ensuring it's appropriately ingested and processed for further use.
- Deep understanding of relational databases (schema modeling, data querying, optimization, ORMs, troubleshooting).
- Solid experience working with cloud infrastructure, ideally in AWS.
- Comfortable working in an agile, cross-functional team environment, with the ability to communicate clearly and work collaboratively.