Hire Big Data Engineers within a week
Looking to hire Big Data engineers?
With swift recruitment and a dedication to your success, we’re here to transform your vision into reality.
Hire Top Remote Software Dev Wizards!
Manikanta K
Big Data Engineer
Exp: 5.5 Years
$30/hr
Data Engineer with 5+ years of experience in BI development using Big Data and cloud services.
Key Skills
- Python
- Big Data
Additional Skills
- MS SQL Server
- Azure SQL
- TFS
- VSTS
- Azure Data Lake
- Data Factory
- SSIS
Detailed Experience
- Extensive experience on the Azure cloud, delivering solutions built on services such as Data Lake, VMs, ADF, Azure Functions, and Databricks.
- 2 years of experience on the AWS cloud, delivering solutions built on services such as S3, EC2, Glue, Lambda, and Athena.
- Capable of writing complex SQL queries and tuning their performance.
- Design and development of big data applications in Apache Spark and Azure (see the sketch after this list).
- Experienced with MS SQL, Azure SQL, and Redshift.
- Excellent verbal and written communication skills; a proven team player.
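As a rough illustration of the Spark-on-Azure work described above, here is a minimal PySpark sketch that aggregates raw CSV data from Azure Data Lake Storage and writes the result back as Parquet. The storage account, paths, and column names are hypothetical placeholders, and the cluster is assumed to already be configured with ADLS credentials.

from pyspark.sql import SparkSession, functions as F

# Hypothetical ADLS Gen2 locations; the storage account, containers,
# and columns are placeholders, not details from the profile above.
SOURCE = "abfss://raw@examplelake.dfs.core.windows.net/sales/*.csv"
TARGET = "abfss://curated@examplelake.dfs.core.windows.net/sales_daily"

spark = SparkSession.builder.appName("sales-daily-rollup").getOrCreate()

# Read raw CSVs, cast the amount column, and roll up totals per day and region.
(spark.read.option("header", True).csv(SOURCE)
    .withColumn("amount", F.col("amount").cast("double"))
    .groupBy("sale_date", "region")
    .agg(F.sum("amount").alias("total_amount"))
    .write.mode("overwrite")
    .partitionBy("sale_date")
    .parquet(TARGET))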
Shashank
Big Data Engineer
Exp: 5 Years
$30/hr
Data Engineer with 5 years of experience in Python, Big Data, and cloud services.
Key Skills
- Python
- SQL
- AWS
- Big Data
Additional Skills
- Oracle
- MySQL
- SQL Server
- PostgreSQL
- Apache Spark
- PySpark
- DMS
- RDS
- Glue
- Lambda
- DynamoDB
- CloudWatch
Detailed Experience
- Proficient with AWS cloud services, developing cost-effective, accurate data pipelines and optimizing them.
- Capable of handling multiple data sources such as DynamoDB, RDS, JSON, text, and CSV.
- Developed PySpark scripts in Databricks to transform data and load it into data tables (see the sketch after this list).
- Good experience creating pipelines for loan audits and risk analysis for RBI compliance.
- Automated the generation of PMS reports using PySpark.
- Involved in data migration activities and post-migration data validation.
- Expert in developing PySpark scripts to transform data into new data models.
- Created a data pipeline for a client to price their products and an ETL pipeline to compare that pricing with their direct competitors'.
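Here is a minimal sketch of the kind of Databricks transform-and-load job mentioned above. The table names, columns, and audit rule are hypothetical placeholders invented for illustration, not details from the engagement itself.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("loan-audit-load").getOrCreate()

# Hypothetical source table and columns, used purely for illustration.
loans = spark.table("raw.loans")

# Flag long-overdue loans and stamp each row with the audit run time.
audited = (loans
    .filter(F.col("principal") > 0)
    .withColumn("is_overdue", F.col("days_past_due") > 90)
    .withColumn("audit_ts", F.current_timestamp()))

# Overwrite the curated table consumed by downstream audit reports.
audited.write.mode("overwrite").saveAsTable("curated.loan_audit")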
Vivekanand C
Big Data Engineer
Exp: 4+ Years
$25/hr
Data Engineer with 4+ years of experience in ETL development and crafting robust Data Warehouse solutions.
Key Skills
- AWS services
- Python
- SQL
- Big Data
Additional Skills
- Airflow
- GitHub
- JIRA
- Oracle SQL
- Jupyter
- VS Code
Detailed Experience
- Capable of leveraging a suite of technologies, including Python, SQL, PySpark, and AWS services such as EMR, Glue, Redshift, Athena, EC2, and S3, to transform raw data into actionable insights.
- Development and implementation of ETL solutions using Python, PySpark, SQL, and AWS services, particularly AWS Glue and AWS EMR.
- Proficient in orchestrating ETL data pipelines with Apache Airflow, integrating S3 as a data lake, Glue for data transformation, and Redshift for data warehousing to create end-to-end ETL pipelines (see the DAG sketch after this list).
- Testing and data validation with Athena to ensure data accuracy and reliability after transformation.
- Successful implementation of robust data warehousing solutions with Redshift to streamline downstream data consumption.
- Building data pipelines, data lakes, and data warehouses with strong knowledge of normalization, Slowly Changing Dimension (SCD) handling, and fact and dimension tables.
- Extensive familiarity with a range of AWS services, including EMR, Glue, Redshift, S3, Athena, Lambda, EC2, and IAM, enabling comprehensive data engineering solutions.
- Expertise in Oracle Database, adept at crafting complex SQL queries for data retrieval and manipulation.
- Sound understanding of SQL concepts such as views, subqueries, joins, and string, window, and date functions.
- Proficient in PySpark concepts, including advanced joins, Spark architecture, performance optimization, RDDs, and DataFrames.
- Skilled in performance tuning and optimization of Spark jobs using tools like the Spark Web UI, the Spark History Server, and cluster logs.
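To make the S3-to-Glue-to-Redshift orchestration concrete, here is a minimal Airflow DAG sketch using the Amazon provider operators. It assumes Airflow 2.4+ with a recent apache-airflow-providers-amazon package installed, plus a pre-existing Glue job; the job, bucket, schema, table, and connection names are all hypothetical.

from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator
from airflow.providers.amazon.aws.transfers.s3_to_redshift import S3ToRedshiftOperator

# All names below are placeholders for illustration, not real resources.
with DAG(
    dag_id="s3_glue_redshift_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Run an existing Glue job that transforms raw S3 data into curated Parquet.
    transform = GlueJobOperator(
        task_id="transform_raw_data",
        job_name="example-glue-transform",
    )

    # COPY the curated Parquet output from S3 into a Redshift table.
    load = S3ToRedshiftOperator(
        task_id="load_to_redshift",
        s3_bucket="example-curated-bucket",
        s3_key="daily/",
        schema="analytics",
        table="daily_facts",
        copy_options=["FORMAT AS PARQUET"],
        redshift_conn_id="redshift_default",
    )

    transform >> load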
Rohit M
Big Data Engineer
Exp: 4 Years
$25/hr
Data Engineer with 3+ years of relevant experience with Big Data platforms and AWS services.
Key Skills
- Python
- PySpark
- AWS
Additional Skills
- Flask
- Django
- REST APIs
- MySQL
- MongoDB
- PostgreSQL
- Git
- Docker
- Bamboo
- Bitbucket
- Spark Streaming
Detailed Experience
- Experience building data pipelines using AWS services such as EC2, ECS, Glue, and Lambda.
- Involved in writing Spark SQL scripts for data processing per business requirements.
- Exception handling and performance optimization of Python scripts using Spark DataFrames.
- Expertise in developing business logic in Python and PySpark.
- Good experience writing SQL queries.
- Proficient in data storage and retrieval with AWS S3, integrating it with Spark and PySpark for efficient data processing.
- Development of ETL workflows using PySpark and Glue to transform, validate, and load large amounts of data from various sources into the AWS data lake.
- Expertise in designing and implementing scalable data architectures in AWS, including data modeling and database design using technologies like Redshift and RDS.
- Strong experience with tools like Git, Docker, and JIRA.
- Proficient with IDEs such as Eclipse, PyCharm, and VS Code.
- Hands-on experience in Spark Streaming (see the sketch after this list).
- Use of Databricks for a variety of big data use cases, such as data preparation, ETL, data exploration and visualization, machine learning, and real-time analytics.
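As a rough sketch of the streaming work mentioned above, here is a minimal Spark Structured Streaming job (the current Spark streaming API) that counts events from Kafka in five-minute windows. The broker, topic, JSON field, and S3 paths are hypothetical, and it assumes the spark-sql-kafka connector package is on the classpath.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("event-counts-stream").getOrCreate()

# Hypothetical Kafka broker and topic; requires the spark-sql-kafka package.
events = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load())

# Pull the event type out of the JSON payload; keep the Kafka message timestamp.
parsed = events.select(
    F.get_json_object(F.col("value").cast("string"), "$.type").alias("event_type"),
    F.col("timestamp"),
)

# Count events per type in 5-minute windows, tolerating 10 minutes of lateness.
counts = (parsed
    .withWatermark("timestamp", "10 minutes")
    .groupBy(F.window("timestamp", "5 minutes"), "event_type")
    .count())

# Write windowed counts to S3 as Parquet; checkpointing makes the job restartable.
query = (counts.writeStream
    .outputMode("append")
    .format("parquet")
    .option("path", "s3a://example-bucket/event_counts/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/event_counts/")
    .start())

query.awaitTermination()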