Join a venture-backed Series A start-up in hyper-growth mode, disrupting the massive $4.8T global shipping and cross-border e-commerce space. Our mission is to make every brand a global brand by making international shipping and returns easy, affordable, and friction-free. We power high-growth DTC brands and 3PLs worldwide. Work with a stellar team that raises the bar on innovation and is committed to our customers' and partners' success, in a fast-paced, inclusive, and collaborative environment with plenty of opportunities to learn and grow.
We are looking for a Data Engineer who can work with a variety of tools on tasks that span analytics to machine learning. The ideal candidate works well with a team but is also a self-starter on individual tasks, able to navigate issues independently while keeping the team informed of progress. This role sits on a team with other Data Engineers and Data Scientists.
Responsibilities:
- Build and manage data, dashboards, and monitoring of application and system logs using the ELK stack
- Manage the performance and reliability of data in our Redshift Data Warehouse
- Manage data models and schemas across all of our data
- Manage ELT pipelines in AWS Glue and dbt
- Work closely with cross-functional teams to ensure the consistency, security, and performance of data
- Contribute to the design of information and operational support systems
Requirements:
- Expert in the ELK stack
- Expert in Redshift
- Expert in ETL patterns
- Experience in Python and Node.js
- Experience in AWS data tools (Glue, Athena, DynamoDB, RDS, Redshift, Lambda)
- Knowledge of pipeline technologies such as SageMaker or Kubeflow
- Knowledge of reporting tools such as Tableau and Kibana
- Knowledge of Spark
Nice to Have:
- Experience with machine learning projects