Job Posting Organization: The International Rescue Committee (IRC) is a global humanitarian organization that provides assistance to people affected by conflict and disaster. Established in 1933, the IRC operates in over 40 countries and has a workforce of more than 14,000 employees. The organization’s mission is to help people survive, recover, and rebuild their lives after crises. The Technology and Operations department plays a crucial role in supporting the IRC’s work by delivering reliable and scalable technological solutions that enhance the organization’s operations worldwide.
Job Overview: The Data Engineer position at IRC is pivotal in supporting the implementation, configuration, and maintenance of data systems and pipelines within the organization’s data environment. This role is integral to building and operating ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes, as well as data integrations and cloud-based data platforms such as Azure Databricks, Synapse, and Fabric. The successful candidate will be responsible for maintaining Lakehouse data environments, which involves monitoring pipeline execution, supporting data loads, and assisting in data modeling tasks under the guidance of senior team members. This is a hands-on technical role that requires foundational knowledge in data engineering, a willingness to learn, and strong collaboration skills. The Data Engineer will work closely with senior engineers and architects, although they will not be responsible for deputizing for the Data Architect or owning critical security responsibilities.
Duties and Responsibilities:
Design, build, and maintain reliable ETL/ELT data pipelines for batch and near-real-time processing from both internal and external sources using tools such as Azure Data Factory or Databricks workflows.
Implement data validation, testing, and reconciliation checks, including dbt tests where applicable.
Monitor pipeline health, performance, and reliability, identifying issues and collaborating with senior engineers to resolve them.
Write SQL queries and Python scripts for data extraction and transformation.
Support the documentation of processes, standards, and improvements.
Assist in solution design by preparing data samples, documentation, or prototype queries.
Collaborate with the Data Team and business and departmental stakeholders to align data strategies with organizational goals.
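As an illustration of the validation and reconciliation duties above, the following is a minimal sketch of a post-load row-count check of the kind a pipeline might run after a data load. The table names (`staging_orders`, `fact_orders`) and the use of an in-memory SQLite database are hypothetical stand-ins, not details from the posting:

```python
# Minimal sketch of a post-load reconciliation check. An in-memory
# SQLite database stands in for the source and target stores; the
# table names are illustrative, not from the posting.
import sqlite3


def reconcile_row_counts(conn, source_table, target_table):
    """Compare row counts between a source and target table.

    Returns (source_count, target_count, counts_match).
    """
    cur = conn.cursor()
    src = cur.execute(f"SELECT COUNT(*) FROM {source_table}").fetchone()[0]
    tgt = cur.execute(f"SELECT COUNT(*) FROM {target_table}").fetchone()[0]
    return src, tgt, src == tgt


conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE staging_orders (id INTEGER);
    CREATE TABLE fact_orders (id INTEGER);
    INSERT INTO staging_orders VALUES (1), (2), (3);
    INSERT INTO fact_orders SELECT id FROM staging_orders;
""")
src, tgt, ok = reconcile_row_counts(conn, "staging_orders", "fact_orders")
print(src, tgt, ok)  # 3 3 True
```

In practice a check like this would typically be expressed as a dbt test or a pipeline task rather than a standalone script, but the logic is the same: compare counts (or checksums) across stages and fail the run on a mismatch.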
Required Qualifications: The ideal candidate should possess a strong foundation in data engineering principles and practices. They should have advanced SQL skills, including proficiency in joins, window functions, common table expressions (CTEs), and performance tuning. Additionally, they should be skilled in Python for data processing, APIs, and automation, with a basic understanding of PySpark. Familiarity with data modeling concepts, including star and snowflake schemas, as well as fact and dimension tables, is essential. Experience in building, monitoring, and optimizing ETL/ELT pipelines is required, along with knowledge of dbt Core or dbt Cloud for developing, scheduling, and maintaining models. Familiarity with version control using Git and with CI/CD practices is also necessary. Strong problem-solving skills, attention to detail, and good communication and teamwork abilities are critical for success in this role.
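To make the SQL expectations concrete, here is a small example combining a CTE with a window function, two of the skills the posting names. It is run through SQLite (window functions require SQLite 3.25+); the `donations` table and its columns are purely illustrative:

```python
# Illustrative CTE + window function query, executed against an
# in-memory SQLite database. Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE donations (donor TEXT, amount REAL);
    INSERT INTO donations VALUES
        ('a', 10), ('a', 30), ('b', 50), ('b', 20), ('c', 40);
""")

query = """
WITH donor_totals AS (                -- CTE: aggregate per donor
    SELECT donor, SUM(amount) AS total
    FROM donations
    GROUP BY donor
)
SELECT donor,
       total,
       RANK() OVER (ORDER BY total DESC) AS donor_rank  -- window function
FROM donor_totals
ORDER BY donor_rank;
"""
rows = conn.execute(query).fetchall()
print(rows)  # donor 'b' (70.0) ranks first; 'a' and 'c' tie at rank 2
```

The same pattern, staging an aggregate in a CTE and then ranking or partitioning over it, appears constantly in dimensional reporting work against fact tables.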
Educational Background: While the specific educational background is not explicitly stated, a degree in Computer Science, Information Technology, Data Science, or a related field is typically expected for a Data Engineer position. Relevant certifications in data engineering or cloud platforms may also be beneficial.
Experience: Candidates should have 2 to 4 years of hands-on experience in data engineering, data processing, or software engineering. This experience should include practical work with data systems, data pipelines, and cloud-based data platforms, demonstrating a solid understanding of data engineering concepts and practices.
Languages: English is the mandatory language for this position, as effective communication is essential for collaboration within the team and with other departments. Additional languages may be considered a plus, depending on the specific needs of the organization and its operations in various regions.
Additional Notes: This position is remote, allowing for flexibility in work location. The role is designed for individuals who are looking to grow their skills in data engineering within a supportive and collaborative environment. The IRC values diversity and encourages applications from individuals of all backgrounds, including those from underrepresented groups. Compensation and benefits details are typically provided during the interview process.
Job Posting Disclaimer
This job posting is provided for informational purposes only. The accuracy of the job description, qualifications, and other details mentioned is the sole responsibility of the employer or the organization listing the job. We do not guarantee the validity or legitimacy of this job posting. Candidates are advised to conduct their own due diligence and verify the details directly with the employer before applying.
We are not liable for any decisions or actions taken by applicants in response to this job listing. By applying, you agree that all application processes, interviews, and potential job offers are managed exclusively by the listed employer or organization.
Beware of fraudulent job offers. Do not provide sensitive personal information or make any payments to secure a job.