
Data Engineer
- Hortolândia - SP
- Permanent
- Full-time
- Design and develop data warehouse and data solutions for one of the largest US clients.
- Collaborate with specialists and client stakeholders to understand business objectives and translate them into effective data solutions.
- Apply critical thinking and a structured methodology to drive data transformation initiatives with a blend of domain expertise, consulting skills, and software engineering.
- Take ownership of assigned tasks, including effort estimation and time management, ensuring timely and high-quality delivery.
- Develop and implement data solutions that meet both business and technical requirements with a strong attention to detail.
- Work closely with cross-functional teams to gather, structure, and organize data for optimal use.
- Foster a collaborative team environment, supporting and learning from peers to achieve shared goals.
- Continuously seek opportunities to improve team workflows and development practices.
- Provide third-level support for data solutions, addressing complex issues and ensuring system reliability.
- Contribute to a culture of innovation and growth by transforming data into actionable insights.
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are

You're good at what you do and possess the required experience to prove it. Equally important, you have a growth mindset and are keen to drive your own personal and professional development. You are customer-focused, someone who prioritizes customer success in their work. And finally, you're open and borderless, naturally inclusive in how you work with others.

Required Skills and Experience

Cloud & Data Platforms
- Proficiency with AWS Glue (ETL, Data Catalog, Crawlers)
- Hands-on experience with AWS Redshift, Lambda, and S3
- Experience working with Databricks
- Familiarity with Confluent Schema Registry
- Experience using orchestration tools such as Airflow or Control-M
- Experience migrating ETL processes from Talend to AWS Glue
- Strong development skills using PySpark or Scala for ETL (a minimal PySpark sketch follows this list)
- Knowledge of Kafka-based streaming and batch ingestion
- Ability to design real-time and batch data pipelines
- Experience with data profiling and governance tools such as Ataccama and Voyager
- Understanding of data lineage tracking and metadata management
- Familiarity with PII detection and data masking techniques (a masking sketch follows this list)
- Experience with schema validation and data contracts
- Proficiency in Python scripting
- Experience using Pytest for unit testing (a Pytest sketch follows this list)
- Ability to implement automated data profiling and anomaly detection
- Knowledge of Data Lake architecture using Parquet or Delta formats
- Experience in feature engineering for AI/ML applications
- Ability to curate datasets optimized for AI/ML readiness
- Experience integrating and optimizing Power BI dashboards
- Ability to build real-time analytics dashboards
- Knowledge of incremental refresh and row-level security in reporting tools
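
To give candidates a concrete sense of the day-to-day work, the sketch below shows a minimal PySpark batch ETL job of the kind this role describes: extract raw files from S3, apply typed transformations, and load partitioned Parquet into the data lake. All bucket names, paths, and column names are hypothetical.

    # Illustrative only: a minimal PySpark batch ETL job.
    # Bucket names, paths, and columns are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-batch-etl").getOrCreate()

    # Extract: read raw CSV landed in S3 (hypothetical bucket/prefix).
    raw = (
        spark.read.option("header", True)
        .csv("s3://example-raw-zone/orders/")
    )

    # Transform: type the amount column, drop malformed rows, add a load date.
    clean = (
        raw.withColumn("amount", F.col("amount").cast("double"))
        .filter(F.col("amount").isNotNull())
        .withColumn("load_date", F.current_date())
    )

    # Load: write partitioned Parquet to the curated zone of the data lake.
    (
        clean.write.mode("overwrite")
        .partitionBy("load_date")
        .parquet("s3://example-curated-zone/orders/")
    )

In AWS Glue the same logic would typically run through a GlueContext rather than a bare SparkSession, but the extract-transform-load shape is the same.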
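A second sketch, again with hypothetical table and column names, illustrates one common data-masking technique for PII: hashing an identifier so it remains joinable without being readable, and redacting all but the last digits of a phone number.

    # Illustrative only: masking PII columns before data leaves the
    # governed zone. Table and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("pii-masking").getOrCreate()

    customers = spark.read.parquet("s3://example-curated-zone/customers/")

    masked = (
        customers
        # Replace the raw email with a SHA-256 digest so joins still work
        # but the original value cannot be read back.
        .withColumn("email", F.sha2(F.col("email"), 256))
        # Keep only the last four digits of the phone number.
        .withColumn("phone", F.regexp_replace(F.col("phone"), r"\d(?=\d{4})", "*"))
    )

    masked.write.mode("overwrite").parquet("s3://example-analytics-zone/customers/")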
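Finally, a sketch of a Pytest unit test for a small, pure transformation function, kept separate from Spark so it runs fast in CI. The function and its validation rules are invented for illustration.

    # Illustrative only: the function under test and its rules are hypothetical.
    import pytest


    def normalize_country_code(value: str) -> str:
        """Trim, upper-case, and validate a two-letter country code."""
        code = value.strip().upper()
        if len(code) != 2 or not code.isalpha():
            raise ValueError(f"invalid country code: {value!r}")
        return code


    @pytest.mark.parametrize(
        "raw, expected",
        [("br", "BR"), (" us ", "US"), ("De", "DE")],
    )
    def test_normalize_country_code(raw, expected):
        assert normalize_country_code(raw) == expected


    def test_normalize_country_code_rejects_bad_input():
        with pytest.raises(ValueError):
            normalize_country_code("123")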