Data Engineer
Company Overview
Blue Orange Digital is a boutique data & AI consultancy that delivers enterprise-grade results. We design and build modern data platforms, analytics, and ML/AI agent solutions for mid-market and enterprise clients across Private Equity, Financial Services, Healthcare, and Retail.
Our teams work with technologies like Databricks, Snowflake, dbt, and the broader Microsoft ecosystem to turn messy, real-world data into trustworthy, actionable insight.
We foster a builder-led, client-first culture that prizes ownership, clear communication, and shipping high-impact work.
Note: Please submit your resume in English; all application materials must be in English to be reviewed and considered.
Overview
Engagement Type: Staff Augmentation (3–6 months)
Type: Contractor
Experience Level: Senior to Mid-Senior
Location: This role is based in our New York City office and requires onsite presence approximately three days per week. Candidates must reside in or be able to commute from the NY Tri-State Area (NY, NJ, CT).
Relocation: No relocation assistance is provided for this position.
The client is seeking an experienced Data Engineer with a Financial Services background for a 3–6 month staff augmentation engagement, with the possibility of extension, to support existing data workflows, ensure continuity of delivery, and advance the client’s Snowflake and dbt infrastructure.
The ideal candidate can contribute at the architectural “plan of attack” level and also hands-on, writing performant Python (pandas, numpy) and SQL and delivering efficient, clean, working code that fits the client’s existing systems. Experience with API and data source ingestion, data cleansing and transformation, time series analysis, and data modeling is a critical success factor.
This role is ideal for a hands-on, senior-level Data Engineer with deep expertise in Snowflake, Python-based data pipelines (pandas and numpy), and complex SQL transformation logic. The engineer will collaborate with internal teams during a one-month knowledge transfer period and then independently own defined deliverables.
The engagement combines data pipeline engineering, SQL transformation management, and dashboard workflow support within an AWS-centric financial services environment.
Core Responsibilities
1. Snowflake Engineering & Optimization
Manage and enhance the client’s Snowflake data warehouse, focusing on structure, performance, and cost optimization.
Design and implement complex transformations in SQL and in Python (pandas, polars, numpy) across multiple schemas and data domains.
Analyze the existing Snowflake setup and recommend improvements in storage structure, clustering, and performance tuning (illustrated in the sketch after this list).
Implement best practices around data partitioning, security, role-based access control, and query performance.
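By way of illustration, the sketch below shows one shape this optimization work can take: surfacing long-running queries that scan most of their micro-partitions (weak pruning) from Snowflake's ACCOUNT_USAGE.QUERY_HISTORY view as candidates for clustering or rewrite. It assumes the snowflake-connector-python client; the credentials, seven-day lookback, and 0.8 pruning threshold are placeholders, not the client's actual configuration.

```python
# Illustrative sketch: list long-running, poorly pruned queries as
# clustering/rewrite candidates. All thresholds are placeholders.
import os
import snowflake.connector

QUERY = """
    SELECT query_id,
           warehouse_name,
           total_elapsed_time / 1000 AS elapsed_s,
           partitions_scanned,
           partitions_total
    FROM snowflake.account_usage.query_history
    WHERE start_time >= DATEADD(day, -7, CURRENT_TIMESTAMP())
      AND partitions_total > 0
      AND partitions_scanned / partitions_total > 0.8  -- weak pruning
    ORDER BY total_elapsed_time DESC
    LIMIT 20
"""

conn = snowflake.connector.connect(
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    account=os.environ["SNOWFLAKE_ACCOUNT"],
)
try:
    for row in conn.cursor().execute(QUERY):
        print(row)
finally:
    conn.close()
```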
2. Python Data Pipeline Development
Develop and maintain Python-based data pipelines integrated with AWS and Snowflake.
Work within a large, existing production codebase, contributing clean, tested, modular code.
Leverage AWS components (e.g., S3, Lambda, Glue, Step Functions) to orchestrate end-to-end workflows.
Write Python scripts and utilities (pandas, polars, numpy) to automate data validation, ingestion, and reporting processes.
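As a concrete, purely illustrative example of such a utility, the sketch below validates a daily financial time series with pandas; the column name and the business-day calendar are assumptions, not the client's schema.

```python
# Illustrative sketch of a data-validation utility. Assumes daily
# financial data keyed by a date column; names are hypothetical.
import pandas as pd

def validate_daily_series(df: pd.DataFrame, ts_col: str = "as_of_date") -> list[str]:
    """Return a list of human-readable data-quality issues."""
    issues: list[str] = []
    ts = pd.to_datetime(df[ts_col])
    if ts.duplicated().any():
        issues.append(f"{int(ts.duplicated().sum())} duplicate timestamps in {ts_col!r}")
    # Gap check against a business-day calendar (assumption: no holiday handling)
    expected = pd.bdate_range(ts.min(), ts.max())
    missing = expected.difference(ts)
    if len(missing):
        issues.append(f"{len(missing)} missing business days, first: {missing[0].date()}")
    null_counts = df.isna().sum()
    for col, n in null_counts[null_counts > 0].items():
        issues.append(f"column {col!r} has {n} null values")
    return issues
```

In practice a utility like this would run inside the pipeline before loading to Snowflake, with failures routed to alerting rather than printed.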
3. dbt & Transformation Layer
Implement and refine dbt models to transform raw data into analytics-ready structures.
Define and enforce data testing, lineage tracking, and documentation through dbt best practices.
Integrate dbt with CI/CD pipelines for automated testing and deployment (sketched after this list).
Collaborate with analysts to expose dbt-modeled data directly to Tableau and other BI tools.
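dbt models themselves are SQL; for the CI/CD integration, one minimal shape (an illustrative sketch, not the client's actual pipeline) is a wrapper step that builds and tests the project before deployment. The project path and target name below are placeholders.

```python
# Illustrative CI step: install dbt packages, then build and test the
# project against a CI target before deploy. Paths/targets are placeholders.
import subprocess
import sys

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    result = subprocess.run(cmd, cwd="dbt_project")  # hypothetical project path
    if result.returncode != 0:
        sys.exit(result.returncode)

run(["dbt", "deps"])                      # install package dependencies
run(["dbt", "build", "--target", "ci"])   # run models, tests, seeds, snapshots
run(["dbt", "docs", "generate"])          # refresh lineage/documentation artifacts
```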
4. Analytics & Dashboard Workflow Support
Support Tableau dashboard workflows, maintaining and optimizing data sources and extracts.
Work cross-functionally with analysts and business users to ensure data freshness and performance.
Enhance self-service analytics by maintaining governed, well-modeled data sources.
5. Cross-Team Collaboration & Knowledge Transfer
Participate in knowledge transfer (approximately one month) with existing team members to understand architecture, pipelines, and operational processes.
Work collaboratively with internal engineering, data science, and business analytics teams.
Deliver thorough documentation and ensure smooth handoff post-engagement.
Technical Environment
Data Warehouse: Snowflake (core platform; advanced optimization expected)
Programming / Scripting: Python (data pipelines, automation, ETL orchestration); libraries: pandas, polars, numpy
Data Transformation: dbt (data build tool) for model development, testing, and documentation
Visualization: Tableau (dashboard and data source support)
Cloud Infrastructure: AWS stack (S3, Glue, Lambda, Step Functions, IAM)
Orchestration / CI/CD: Airflow, dbt Cloud, GitHub Actions, or AWS-native scheduling
Version Control: Git / GitHub / GitLab
Data Quality & Governance: dbt tests; Great Expectations (nice to have); documentation best practices
Ideal Profile
5–8+ years of professional experience as a Data Engineer, Analytics Engineer, or Senior Data Developer, including 1–3+ years in Financial Services.
Strong expertise with Snowflake, including schema design, query optimization, and performance tuning.
Advanced SQL skills, including complex joins, CTEs, window functions, and efficient transformations.
Proficient in Python for data manipulation, pipeline development, and integration with Snowflake and AWS, particularly using pandas, polars, and numpy (see the sketch after this list).
Experience using dbt for transformation and documentation.
Experience with Prefect for job scheduling and workflow management.
Familiarity with Tableau for analytics support and data delivery.
Experience with Microsoft SQL Server.
Comfortable integrating into a mature, existing codebase and collaborating in a fast-paced, cross-functional environment.
Excellent communicator and problem-solver, able to operate independently with clearly defined outcomes.
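To make the expected skill level concrete, the hypothetical sketch below computes a per-ticker rolling z-score in pandas, the same windowed logic a SQL window function (AVG(...) OVER (PARTITION BY ... ORDER BY ...)) would express; all column names are illustrative.

```python
# Illustrative only: windowed time series logic in pandas, mirroring a
# SQL window function. Columns ("dt", "ticker", "close") are hypothetical.
import numpy as np
import pandas as pd

def rolling_zscore(df: pd.DataFrame, window: int = 20) -> pd.DataFrame:
    """Per-ticker rolling z-score of closes: the pandas analogue of
    AVG/STDDEV OVER (PARTITION BY ticker ORDER BY dt ROWS 19 PRECEDING)."""
    g = df.sort_values("dt").groupby("ticker")["close"]
    mean = g.transform(lambda s: s.rolling(window).mean())
    std = g.transform(lambda s: s.rolling(window).std())
    # Series arithmetic aligns on index, so the original row order is preserved
    return df.assign(zscore=(df["close"] - mean) / std.replace(0, np.nan))
```

Whether logic like this lives in the dbt/SQL layer or the Python layer depends on where the client's pipeline draws that boundary.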
Engagement Details
Ramp Period: Approximately one month of knowledge transfer with an existing team member
Delivery Model: Staff Augmentation (embedded with the internal data engineering team)
Reporting To: Data Engineering Lead / Director of Data Operations
Success Criteria: Clear, outcome-driven deliverables with strong communication and documentation
Key Deliverables
Maintain and optimize existing Snowflake-based pipelines for performance and cost efficiency.
Deliver new Python data workflows integrated with AWS and Snowflake.
Implement or enhance the dbt transformation layer for analytics-ready datasets.
Provide dashboard data source stability and support for Tableau users.
Complete and document all work to ensure a smooth handoff at project conclusion.
Benefits:
Flexible Schedule
Unlimited Paid Time Off (PTO)
Paid parental/bereavement leave
Work with globally recognized clients and build skills for an excellent resume
Top-notch team to learn and grow with
Salary: $11,458.33 - $16,041.67 (monthly range, depending on experience)
Background checks may be required for certain positions/projects.
Blue Orange Digital is an equal-opportunity employer.
Department: Engineering
Role: Data Engineer
Locations: New York, Elizabeth, Jersey City, Newark, Paterson
Monthly salary: $11,458.33 - $16,041.67
About Blue Orange Digital
Blue Orange Digital is a data and AI consulting firm that helps companies turn complex data into real business outcomes. We partner with organizations across industries to design and deploy scalable data infrastructure, advanced analytics, and AI-powered solutions. Our team is fully remote, globally distributed, and driven by curiosity, impact, and innovation.